Solved

Blade Server and VMware

Posted on 2013-05-29
6
633 Views
Last Modified: 2016-11-23
Currently we are in a VMware environment, installed on different Dell 2950 and R710 host servers. All of these run our own application, web, and SQL workloads. We now have a new requirement for 30+ additional VMs, and we are planning to upgrade and buy new host machines.

My question is: buying host machines every time is a very expensive process. Is there an alternative solution so that we don't need to spend so much on hardware in this growing environment? Are blade servers a good choice for VMs? We need to have 6 NICs for each server. We are also on NetApp SAN storage.
0
Comment
Question by:sumod_jacob
6 Comments
 
LVL 121
ID: 39205080
If you purchase blade servers, make sure they will eventually be fully populated, or consider using the cloud!

Otherwise you will have an expensive empty chassis.
0
 
LVL 42

Expert Comment

by:paulsolov
ID: 39205098
I still like servers versus blades for VMware.  The blade chassis (unless there are two) is still a SPOF and you can get better port density in the individual servers than the blades.

Are you using FC, NFS, or iSCSI for the VMs off the NetApp?
0
 
LVL 55

Assisted Solution

by:andyalder
andyalder earned 166 total points
ID: 39205200
One advantage of blade servers over traditional servers for VMware is that the vMotion network never goes out of the enclosure's internal switches; but the other disadvantages still apply. You may well be able to get away with just 2 * 10Gb NICs though and partition them into 8 logical NICs in hardware.
0
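[Editor's note: a hypothetical illustration, not part of the original thread. The partitioning andyalder describes (e.g. Dell/Broadcom NPAR or IBM Virtual Fabric) is configured in the adapter firmware, not in ESXi; from the hypervisor's side, each partition simply appears as another vmnic. A sketch of checking what the host sees:]

```shell
# List the NICs ESXi sees; a 2 x 10Gb CNA partitioned four ways per
# port shows up here as eight vmnics (vmnic0..vmnic7).
esxcli network nic list

# Inspect one partition; NPAR splits the link bandwidth, so a
# partition may report a lower speed than the physical 10Gb port.
esxcli network nic get -n vmnic0
```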

Author Comment

by:sumod_jacob
ID: 39205434
We use iSCSI between the VMs and the NetApp.
0
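[Editor's note: a hypothetical sketch of the author's setup, added for illustration. The target IP, port, and adapter name below are placeholders, not from the thread; a software iSCSI initiator on an ESXi host pointed at a NetApp target would look roughly like:]

```shell
# Enable the software iSCSI initiator on the host.
esxcli iscsi software set --enabled=true

# Add the NetApp filer as a dynamic discovery (SendTargets) address.
# 192.168.10.20 and vmhba33 are placeholder values.
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.10.20:3260

# Rescan so the NetApp LUNs show up as candidate VMFS datastores.
esxcli storage core adapter rescan --adapter vmhba33
```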
 
LVL 42

Assisted Solution

by:paulsolov
paulsolov earned 167 total points
ID: 39205873
You may want to look at using NFS as well: the datastores are much easier to manage, and with SnapDrive 6.4.2 and up you can do space reclamation on the VMDKs without downtime.

At the end of the day both will work; it just depends on your preference and what type of infrastructure you would like to have.  Blades give you better control and management, while servers give you more redundancy (multiple chassis) and port density.  When I priced it out, the blades were more expensive due to the chassis components (virtual connects, blade switches, etc.); the servers we get now are bare-bones with lots of memory, so the cost stays down.
0
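[Editor's note: a hypothetical sketch for illustration. One reason NFS datastores are easier to manage is that mounting one is a single operation with no LUN/VMFS layer; the server IP, export path, and volume name below are placeholders, not from the thread:]

```shell
# Mount a NetApp NFS export as a datastore on an ESXi host.
# --host, --share, and --volume-name are placeholder values.
esxcli storage nfs add --host=192.168.10.20 --share=/vol/vm_datastore --volume-name=netapp_nfs01

# Confirm the datastore is mounted.
esxcli storage nfs list
```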
 
LVL 3

Accepted Solution

by:
costa73 earned 167 total points
ID: 39207369
I fully agree with paulsolov's and andyalder's comments.

In the short term it comes down to how much you will have to spend on servers and NICs/HBAs versus network and storage infrastructure like FC or Ethernet switches. In the long run YMMV depending on the manufacturer you select (hint: check their roadmap and track record). Here's my experience so far...

Assuming you don't have to spend on FC or Ethernet switches for this upgrade (you either are replacing hosts or have port capacity to add a few more), going for rack servers will be less expensive, and you may achieve greater VM density (more RAM/CPU/HBA/NIC per host for less money).

If you have to add FC and/or Ethernet switches (and I'm considering that we're talking about 8Gb FC and 10Gb Ethernet), you may consider blades as a valid option to have less cabling, easier management, and a more compact solution in the datacenter. However, if you choose blades, be aware that you're buying into a supplier, you'll have to rip and replace EVERYTHING if you decide to switch brands... That's why you'll be able to get VERY deep discounts on the chassis.

We're presently upgrading our VMware cluster based on blade servers from IBM. For us, the biggest advantages of going this path are flexibility, rack density, and ease of management, both for the blades and for the Ethernet/storage side (we already had blades in a BladeCenter E, and we have everything externally co-located). Blades are a very space- and power-effective solution for mixing virtualized and non-virtualized servers as we do. We're keeping IBM because it has the broadest choice of networking and storage options, and the most resilient chassis (redundant everything). There are not as many blade server choices (e.g. compared to HP), but it's good enough; if memory/CPU density is your priority, you can always go for, e.g., dual HX5 blades. As for port count, we're going for Virtual Fabric and CNAs, effectively splitting 2 or 4 10Gb adapters into multiple Ethernet/iSCSI/FCoE ports like andyalder suggested. Keep in mind that high port counts also create a lot of management overhead in traditional 1Gb rack switching.

However, this is only because we already had an investment in blade servers; the industry path now seems to be PureFlex/HP BladeSystem/etc., i.e. special-purpose, not fully redundant hardware geared towards manageability and virtualized workloads. That is not good enough for our mixed virtual/physical, business-critical workloads.

Right now, if we were replacing completely, I'd probably go for VMware HCL-certified rack equipment (if rack space and power requirements permit it) with lots of memory and 10Gb rack switches, and, if costs permitted, stick with Virtual Fabric / CNAs to drop FC for iSCSI while keeping management easy.
0
