• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 638

Blade Servers and VMware

Currently we are in a VMware environment, installed on several Dell 2950 and R710 host servers. All of these run our own application, web, and SQL workloads. We now have a new requirement for another 30+ VMs and are planning to upgrade and buy new host machines.

My question is that buying new host machines every time is a very expensive process. Is there an alternative solution so we don't need to spend as much on hardware in this growing environment? Would blade servers be a good choice for the VMs? We need 6 NICs for each server, and we are also on NetApp SAN storage.
sumod_jacob Asked:
3 Solutions
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
If you purchase blade servers, make sure they will eventually be fully populated, or consider using the cloud!

Otherwise you will have an expensive empty chassis.
 
Paul Solovyovsky Commented:
I still like servers versus blades for VMware. The blade chassis (unless there are two) is still a single point of failure (SPOF), and you can get better port density in individual servers than in blades.

Are you using FC, NFS, or iSCSI for the VMs off the NetApp?
 
andyalder, Saggar makers bottom knocker, Commented:
One advantage of blade servers over traditional servers for VMware is that the vMotion network never goes out of the enclosure's internal switches; but the other disadvantages still apply. You may well be able to get away with just 2 * 10Gb NICs though and partition them into 8 logical NICs in hardware.
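If it helps to sanity-check that approach, here's a minimal pyVmomi sketch (Python) that simply lists each host's physical adapters and link speeds, so you can confirm the hardware-partitioned 10Gb ports actually show up as separate vmnics on the ESXi side. The vCenter address and credentials are placeholders, and it assumes pyVmomi is installed.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders: vCenter address and credentials. Unverified SSL is for lab use only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        print(host.name)
        # Each hardware-partitioned function should appear as its own vmnic.
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0
            print("  %s  %d Mb/s" % (pnic.device, speed))
finally:
    Disconnect(si)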
 
sumod_jacob, Author, Commented:
We use iSCSI between the VMs and the NetApp.
 
Paul Solovyovsky Commented:
You may want to look at using NFS as well, as the datastores are much easier to manage, and with SnapDrive 6.4.2 and up you can do space reclamation on the VMDKs without downtime.
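If you do test NFS, mounting an export from the filer as a datastore is easy to script. Here's a minimal pyVmomi sketch (Python); the ESXi address, credentials, filer IP, export path, and datastore name are all placeholders, and it assumes the NFS export is already configured on the NetApp and reachable from the host.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders throughout: ESXi host, credentials, filer IP, export, datastore name.
ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="esxi01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.10.50",       # NetApp NFS data interface (placeholder)
        remotePath="/vol/vmware_nfs01",   # exported volume (placeholder)
        localPath="netapp_nfs01",         # datastore name as vSphere will show it
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
finally:
    Disconnect(si)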

At the end of the day both will work; it just depends on your preference and what type of infrastructure you would like to have. Blades give you better control and management, while servers give you more redundancy (multiple chassis) and port density. When I priced it out, the blades were more expensive due to the chassis components; the servers we get now are bare bones with lots of memory, so the cost stays down compared with paying for things like Virtual Connects, blade switches, etc.
 
costa73 Commented:
I fully agree with paulsolov's and andyalder's comments.

In the short term it comes down to how much you will have to spend on servers and NICs/HBAs versus network and storage infrastructure like FC or Ethernet switches. In the long run, YMMV depending on the manufacturer you select (hint: check their roadmap and track record). Here's my experience so far...

Assuming you don't have to spend on FC or Ethernet switches for this upgrade (you either are replacing hosts or have port capacity to add a few more), going for rack servers will be less expensive, and you may achieve greater VM density (more RAM/CPU/HBA/NIC per host for less money).

If you have to add FC and/or Ethernet switches (and I'm assuming we're talking about 8Gb FC and 10Gb Ethernet), you may consider blades a valid option for less cabling, easier management, and a more compact solution in the datacenter. However, if you choose blades, be aware that you're buying into a supplier: you'll have to rip and replace EVERYTHING if you decide to switch brands... That's why you'll be able to get VERY deep discounts on the chassis.

We're presently doing an upgrade to our VMware cluster based on blade servers from IBM. For us, the biggest advantage of going this path is flexibility, rack density, and ease of management, both for the blades and for Ethernet/storage (we already had blades in a BladeCenter E, and everything is externally co-located). Blades are a very space- and power-effective solution for running virtualized and non-virtualized servers mixed together, as we do. We're keeping IBM because it has the broadest choice of networking and storage options, and the most resilient solution, in the chassis (redundant everything). There are not as many choices of blade servers (e.g. compared to HP), but it's good enough; if memory/CPU density is your priority, you can always go for e.g. dual HX5 blades. As for port count, we're going for Virtual Fabric and CNAs, effectively splitting 2 or 4 10Gb adapters into multiple Ethernet/iSCSI/FCoE ports, like andyalder suggested. Keep in mind that high port counts also create a lot of management overhead in traditional 1Gb rack switching.

However, this is because we already had an investment in blade servers; the path forward now seems to be PureFlex/HP BladeSystem/etc., i.e. special-purpose, not fully redundant hardware geared towards manageability and virtualized workloads. That is not good enough for my mixed virtual/physical servers with business-critical workloads.

Right now, if we were completely replacing everything, I'd probably go for VMware HCL-certified rack equipment (if rack space and power requirements permit it) with lots of memory and 10Gb rack switches, and if cost permitted I'd stick to Virtual Fabric / CNAs to drop FC for iSCSI while keeping management easy.
