HP BladeSystem c7000 or HP DL380 G6 for virtual consolidation of a data center?


We are at the beginning stages of a virtual consolidation project. Right now we are trying to determine what hardware to go with for our ESX hosts. We plan on consolidating about 20 servers initially. The servers we are consolidating currently run our document management system and our time/billing applications. The current hardware is a mix of HP DL380s and DL360s (G3 and G4). We're going to stay with HP hardware and are also going to incorporate a LeftHand Networks SAN appliance. Right now we are considering the HP BladeSystem c7000 or a handful of HP DL380 G6s. Any insight would be appreciated. Thanks in advance.

oswaldofarith Commented:
I agree with paulsolov; you should do your own cost analysis based on your needs. Maybe you can start with a basic comparison like the one in the attached file (it's just an example; the prices are not real). It's not in English, but I think it's fully understandable.
What is your virtualization ratio, 5:1, 7:1? It depends on the number of new servers you are expecting to deploy. Above 6 physical servers, I'd recommend going with blades.

davismisbehavis Commented:
We use DL580 G5s for our ESX hosts because of their sheer scalability. However, we use DL380 G4s and G5s for our development and UAT virtual environments.

Licensing costs should be taken into consideration, especially in light of the changes to VMware licensing. You may be better off going for fewer sockets and more cores.

I haven't got a lot of experience with blades, but they make a lot of sense for VMware. I suppose it depends on your budget, doesn't it?

chaz21Author Commented:
As of right now, we are building this out assuming an 8:1 virtualization ratio. As of today we have a total of 52 servers in our data center, and the majority of them are very inefficient in terms of resource utilization. So, assuming an 8:1 virtualization ratio (and we may be able to go higher than that), we would need 7 ESX hosts max if we were to virtualize the entire data center. For this particular project I can't see us needing more than 3 ESX hosts.
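As a sanity check, the host counts implied by a consolidation ratio can be computed directly. This is a minimal sketch: the 52-server, 20-server, and 8:1 figures come from the thread, while the optional headroom factor (spare capacity for HA failover) is an assumption.

```python
import math

def hosts_needed(physical_servers, ratio, headroom=1.0):
    """Minimum number of ESX hosts for a given consolidation ratio.

    headroom > 1.0 reserves spare capacity, e.g. for HA failover.
    """
    return math.ceil(physical_servers * headroom / ratio)

# 52 physical servers at 8:1 -> 7 hosts for the whole data center
print(hosts_needed(52, 8))   # 7
# ~20 servers in this first project at 8:1 -> 3 hosts
print(hosts_needed(20, 8))   # 3
```

With 20% headroom for failover, the full data center would need `hosts_needed(52, 8, headroom=1.2)`, i.e. 8 hosts rather than 7.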

chaz21Author Commented:

You mentioned that licensing costs might be a factor. Can you elaborate on the changes in VMware licensing (I'm assuming it changes with the release of vSphere)?
Paul Solovyovsky, Senior IT Advisor, Commented:
I would do a cost analysis to see how much it would cost per VM on the DL380 G6s versus the blade servers; both options are good. The new G6 models allow up to 144GB of RAM, and with 8 cores per server you should be able to load these up with the workloads from your non-intensive G3 and G4 boxes. With vSphere on the horizon, the performance will be even better.

For a cluster, I believe the maximum number of hosts you can have with DRS is around 14; I don't remember exactly from training. You could also configure multiple clusters in vCenter to optimize the hardware and run a higher ratio on a cluster that holds lightly used servers.

My $.02
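The cost-per-VM analysis suggested above can be sketched as a simple calculation. All of the prices and VM counts below are hypothetical placeholders for illustration, not real quotes; the rack scenario assumes three standalone DL380-class servers, the blade scenario an enclosure plus four blades.

```python
def cost_per_vm(hardware_cost, license_cost, vms_hosted):
    """Total capital cost spread across the VMs a platform will host."""
    return (hardware_cost + license_cost) / vms_hosted

# Hypothetical figures for illustration only
rack = cost_per_vm(hardware_cost=3 * 12000,           # 3 rack servers
                   license_cost=3 * 6000,             # per-host licensing
                   vms_hosted=24)                     # 8:1 on 3 hosts
blade = cost_per_vm(hardware_cost=20000 + 4 * 8000,   # enclosure + 4 blades
                    license_cost=4 * 6000,
                    vms_hosted=32)                    # 8:1 on 4 hosts
print(f"rack: ${rack:.0f}/VM, blade: ${blade:.0f}/VM")
# rack: $2250/VM, blade: $2375/VM
```

The enclosure's fixed cost is what makes blades less attractive at small scale; the more blades you add, the further that fixed cost is amortized.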
65td Commented:
We are using the DL580 G5 with 4 quad-core CPUs; the reason we went with the 580 is the number of I/O slots!
We are using one BL480 blade with Virtual Connect, and I really like the ease of configuring the network and SAN with the blade.
The nice thing about the 580s is that they're not all in the same rack/chassis or on the same PDU.
With 7 servers I'd say you're better off using blades; the BL460c G6 has 12 DIMM slots, the BL490c G6 has 18, and the BL495c G5 has 16. All three have dual Flex-10 onboard, which you can split into 8 NICs using a pair of Flex-10 Virtual Connect I/O modules. With 8 logical onboard NICs you probably won't have to add any I/O mezzanines.

Here's some blurb on Virtual Connect Flex-10: http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3794423&prodTypeId=329290&objectID=c01602755
vmwarun - Arun Commented:
With respect to vSphere 4.0 Licensing, this page should help you out - http://www.vmware.com/products/vsphere/upgrade-center/licensing.html
markzz Commented:
Blades or servers?
Here are a few questions:
Are you space limited in your data centre?
Are you power limited in your data centre?
Are you cooling/conditioning limited in your data centre?
Are you comms limited in your data centre?
If you answer no to all of these, I would think DL-series servers would be your choice.
The last time I costed a blade infrastructure versus DL380s, the DLs were marginally cheaper but considerably more flexible.
If, however, you are limited in any three of these areas, then a blade infrastructure is possibly your best option.
Oh, and consider things like physical separation. You can split your physical hosts over 2 racks to ensure any disaster that affects one corner of the data centre doesn't affect the other; unfortunately you'll need 2x c7000 enclosures to achieve this.
As far as the licensing changes:
It's no great leap; nothing significant has changed.
Currently you buy a 2-socket licence with a limit of 4 cores per socket; in the new licensing model you will buy a single-socket licence which is also limited to 4 cores. You can license more cores, but you need an add-on licence for that. Well, at least that's what I have read, or perhaps better stated, how I understood it.
As for us, our primary farm comprises 7 DL585s, ranging from G1s to G5s. The downside of using 585s is the upfront cost and, of course, the expense of adding just one more.
But like many, we couldn't get the I/O capacity from smaller form-factor servers; e.g. we need 18 NIC ports.
What you choose will depend on more factors than you can explain in a public forum.
My preference, due to simple flexibility, is to use rack servers.
18 NIC ports sounds excessive; then again, the BL685c G6 has the equivalent of 16 NIC ports onboard, so you'd only have to add a single dual-channel NIC.
18 NIC ports sounds excessive
What can I say! I don't set corporate policy, I just adhere to it... Security, security, security, and then come the audits.
I'm sure you understand this dilemma.
Oh, and that's 18 for the VM guests; there are 22 in all.
chaz21Author Commented:
Wow, thanks for all the great responses! Our initial configuration for the 380s was 32GB of memory as well, but I'm thinking of increasing that to at least 48GB. Assuming a ratio of 15:1, I don't see how a c7000 would be necessary for a data center our size; even at a 10:1 ratio the c7000 is capable of hosting well over 100 VMs, while our data center currently consists of 50 physical servers. I am really leaning towards the DL380s. Can anyone provide any references or opinions on the advantages of going with blades as opposed to rack servers?
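A back-of-the-envelope check on that capacity estimate: a c7000 enclosure has 16 half-height blade bays, so a full enclosure far exceeds what a 50-server shop needs. The 10:1 ratio and 50-server figure are from this thread; the per-blade VM count is an assumption.

```python
import math

c7000_bays = 16       # half-height blade bays in one c7000 enclosure
vms_per_host = 10     # assumed consolidation ratio from the thread

full_enclosure_vms = c7000_bays * vms_per_host  # 160 VMs with a full enclosure
blades_needed = math.ceil(50 / vms_per_host)    # 5 blades cover all 50 servers
print(full_enclosure_vms, blades_needed)        # 160 5
```

Five blades is well under the half-full threshold mentioned below for when an enclosure starts to pay off, which supports leaning towards rack servers at this scale.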
andyalder Commented:
If you are going to half-fill the enclosure or more, go with blades; if not, go with rack servers. That's a pretty good rule of thumb. You can always get a c3000 rather than a c7000, but that compromises redundancy, as both onboard NICs go to the same LAN switch.