

Blade Server and VMWARE

Posted on 2013-05-29
Medium Priority
Last Modified: 2016-11-23
We currently run a VMware environment installed across several Dell 2950 and R710 host servers. These hosts run our own application, web, and SQL workloads. We now have a requirement for another 30+ VMs and are planning to upgrade and buy new host machines.

My question: buying host machines every time is a very expensive process. Is there an alternative so that we don't need to spend so much on hardware in this growing environment? Are blade servers a good choice for VMs? We need 6 NICs per server. We also use NetApp SAN storage.
Question by:sumod_jacob
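Before pricing either blades or rack servers, it helps to estimate how many hosts the 30+ new VMs actually require. A minimal sizing sketch follows; every figure in it (per-VM vCPU/RAM averages, host specs, consolidation ratio) is a placeholder assumption to be replaced with real numbers from the environment:

```python
import math

# Hypothetical sizing inputs -- substitute your real per-VM averages
# and the specs of whatever host model you are quoting.
NEW_VMS = 30
VCPU_PER_VM = 2            # assumed average vCPUs per VM
RAM_GB_PER_VM = 8          # assumed average RAM per VM

HOST_CORES = 16            # e.g. dual 8-core sockets (assumed)
HOST_RAM_GB = 192          # assumed RAM per host
VCPU_TO_CORE_RATIO = 4     # a common consolidation ratio for mixed workloads

def hosts_needed(vms, vcpu_per_vm, ram_gb_per_vm):
    """Size host count by CPU and by RAM, take the worse, add one for HA (N+1)."""
    by_cpu = math.ceil(vms * vcpu_per_vm / (HOST_CORES * VCPU_TO_CORE_RATIO))
    by_ram = math.ceil(vms * ram_gb_per_vm / HOST_RAM_GB)
    return max(by_cpu, by_ram) + 1  # +1 host of headroom for failover

print(hosts_needed(NEW_VMS, VCPU_PER_VM, RAM_GB_PER_VM))  # prints 3
```

With these example figures RAM, not CPU, is the constraint (240 GB needed vs. 192 GB per host), which is typical for virtualized web/SQL workloads.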
LVL 124
ID: 39205080
If you purchase blade servers, make sure they will eventually be fully populated, or consider using the cloud!

Otherwise you will end up with an expensive, half-empty chassis.
LVL 42

Expert Comment

by:Paul Solovyovsky
ID: 39205098
I still prefer rack servers over blades for VMware. The blade chassis (unless there are two) is still a single point of failure, and you can get better port density in individual servers than in blades.

Are you using FC, NFS, or iSCSI for the VMs off the NetApp?
LVL 56

Assisted Solution

andyalder earned 664 total points
ID: 39205200
One advantage of blade servers over traditional servers for VMware is that the vMotion network never leaves the enclosure's internal switches; the other disadvantages still apply, though. You may well be able to get away with just 2 × 10Gb NICs and partition them into 8 logical NICs in hardware.
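The partitioning suggestion above works out as follows. This is a minimal sketch of the arithmetic only; the role names and the per-port bandwidth split are illustrative assumptions, not a vendor configuration:

```python
# Carving two 10GbE ports into logical NICs in hardware.
# All split sizes below are hypothetical examples.
PORT_GBPS = 10
PORTS = 2
PARTITIONS_PER_PORT = 4          # 2 ports x 4 partitions = 8 logical NICs

# Example per-port bandwidth split (must not exceed the 10Gb line rate)
allocation_gbps = {"management": 1, "vmotion": 4, "iscsi": 3, "vm_traffic": 2}

assert sum(allocation_gbps.values()) <= PORT_GBPS, "port oversubscribed"
print(f"{PORTS * PARTITIONS_PER_PORT} logical NICs from {PORTS} physical ports")
for role, gbps in allocation_gbps.items():
    print(f"  {role}: {gbps} Gb/s")
```

Eight logical NICs from two physical ports would cover the stated 6-NIC-per-server requirement with room to spare, while keeping cabling to two links per host.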


Author Comment

ID: 39205434
We use iSCSI between the VMs and the NetApp.
LVL 42

Assisted Solution

by:Paul Solovyovsky
Paul Solovyovsky earned 668 total points
ID: 39205873
You may also want to look at NFS, as the datastores are much easier to manage, and with SnapDrive 6.4.2 and up you can do space reclamation on the VMDKs without downtime.

At the end of the day both will work; it just depends on your preference and what type of infrastructure you would like to have. Blades give you better control and management, while rack servers give you more redundancy (multiple chassis) and port density. When I priced it out, the blades were more expensive due to the chassis components; the servers we buy now are bare-bones with lots of memory, so the cost is lower than adding something like Virtual Connect modules, blade switches, etc.

Accepted Solution

costa73 earned 668 total points
ID: 39207369
I fully agree with paulsolov's and andyalder's comments.

In the short term it comes down to how much you will have to spend on servers and NICs/HBAs versus network and storage infrastructure such as FC or Ethernet switches. In the long run, your mileage may vary depending on the manufacturer you select (hint: check their roadmap and track record). Here's my experience so far...

Assuming you don't have to spend on FC or Ethernet switches for this upgrade (you are either replacing hosts or have the port capacity to add a few more), going with rack servers will be less expensive, and you may achieve greater VM density (more RAM/CPU/HBA/NIC per host for less money).

If you have to add FC and/or Ethernet switches (and I'm assuming we're talking about 8Gb FC and 10Gb Ethernet), blades become a valid option: less cabling, easier management, and a more compact footprint in the datacenter. However, be aware that with blades you're buying into a single supplier; you'll have to rip and replace EVERYTHING if you decide to switch brands... That's also why you'll be able to get VERY deep discounts on the chassis.
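The cost trade-off described in this thread is essentially a break-even calculation: the chassis is a fixed cost that only pays off once enough slots are filled. A rough sketch, with every price a made-up placeholder to be replaced by real quotes:

```python
# Break-even sketch: blade chassis cost amortized across blades
# versus standalone rack servers. All prices are placeholder assumptions.
CHASSIS_COST = 8_000       # enclosure + interconnect modules (assumed)
BLADE_COST = 6_000         # per blade server (assumed)
RACK_SERVER_COST = 7_500   # comparable rack server incl. NICs (assumed)

def blade_total(n_hosts):
    """Total cost of n blade hosts, including the shared chassis."""
    return CHASSIS_COST + n_hosts * BLADE_COST

def rack_total(n_hosts):
    """Total cost of n standalone rack server hosts."""
    return n_hosts * RACK_SERVER_COST

# Find the host count at which blades become the cheaper option
n = 1
while blade_total(n) > rack_total(n):
    n += 1
print(n)  # prints 6: below this count the chassis overhead dominates
```

This is why a half-populated enclosure is the worst case, as vmwarun noted above: below the break-even count you pay the full chassis cost over too few blades.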

We're presently upgrading our VMware cluster based on blade servers from IBM. For us, the biggest advantages of this path are flexibility, rack density, and ease of management, both for the blades and for the Ethernet/storage side (we already had blades in a BladeCenter E, and everything is externally co-located). Blades are a very space- and power-efficient solution for mixing virtualized and non-virtualized servers, as we do. We're staying with IBM because it has the broadest choice of networking and storage options and the most resilient chassis (redundant everything). There are fewer blade models to choose from (e.g. compared to HP), but the range is good enough; if memory/CPU density is your priority, you can always go for e.g. dual HX5 blades. As for port count, we're going with Virtual Fabric and CNAs, effectively splitting 2 or 4 10Gb adapters into multiple Ethernet/iSCSI/FCoE ports, as andyalder suggested. Keep in mind that high port counts also create a lot of management overhead in traditional 1Gb rack switching.

However, that choice is partly because we already had an investment in blade servers; the industry path now seems to be PureFlex/HP BladeSystem/etc., i.e. special-purpose, not fully redundant hardware geared towards manageability and virtualized workloads. That is not good enough for my mix of virtual and physical servers running business-critical workloads.

Right now, if we were replacing everything, I'd probably go for VMware HCL-certified rack equipment (if rack space and power requirements permit) with lots of memory and 10Gb rack switches, and, if costs permitted, stick with Virtual Fabric / CNAs to drop FC in favour of iSCSI while keeping management easy.
