craigueh

asked on

HP NC522SFP or NC550SFP to Flex-10 advice

We are shortly going to be upgrading our iSCSI storage for one of our HP c7000 blade enclosures to 10GbE. All 16 blades are G1s with dual quad-cores and 64GB of RAM, running VMware.

Currently everything is dual port 1Gb in the blades to dual port 1Gb in the SANs.

We will be upgrading the blades with additional Flex-10 NICs, probably NC522m adapters.
We already purchased the two Flex-10 switches, HP 455880-B21.

The iSCSI SAN is also to have its NIC upgraded, and we were considering NC522SFP adapters.

Our questions are as follows.

What is the difference between NC522SFP and NC550SFP? Is there a compelling reason to choose one over the other?

Is the NC522m the best choice for the blades? I know the G1 blades can't use some of the more recent HP 10GbE adapters.

What cables, transceivers etc. are required to connect the NC522SFP or NC550SFP adapters to the Flex-10 switches?

I see some HP SFP cables online that appear to have the transceiver incorporated into the end of the cable. Will these work with the NC522SFP, and can they plug into the Flex-10 switches, or am I required to purchase all the parts (cables, transceivers, etc.) separately?

Does anyone have any advice or recommendations beyond the above for this upgrade?

Thanks.
Member_2_231077

http://www.emulex.com/artifacts/d75fb9f4-d2d6-4617-8dd8-4d6b83dbc421/elx_faq_all_hpbranded_oneconnect_hp.pdf lists the main differences: the NC550m is based on the older Emulex BE2 chip rather than the BE3, and it also draws less power.

I didn't think either card was supported in G1 blades though.

Do you have virtual connect modules in I/O bays 1 and 2? You can't have switches in bays 1&2 and then VC in 3&4 since VC has to be at the top.

ASKER

I have VC in 5&6 as there are already switches in 1, 2, 3 & 4.
http://h18004.www1.hp.com/products/quickspecs/13127_div/13127_div.pdf has a table of all the possible interconnect bay layouts; the main rule is that bay 1 (and 2 if used) has to have a Virtual Connect Ethernet* module in it, because that module stores the configuration.

*Includes such things as FlexFabric, but not switches such as Cisco ones.
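
If it helps to sanity-check a proposed layout against that rule, here is a rough Python sketch of it; the module names are just placeholders for illustration, not HP part numbers.

# Rough sketch of the interconnect bay rule described above: bay 1 (and
# bay 2, if populated) must hold a Virtual Connect Ethernet module
# (Flex-10/FlexFabric count, plain switches do not), because that module
# stores the VC domain configuration. Module names are placeholders only.

VC_ETHERNET = {"VC Flex-10", "VC FlexFabric", "VC 1/10Gb Ethernet"}

def check_layout(bays):
    """bays: dict of interconnect bay number -> module name."""
    if bays.get(1) not in VC_ETHERNET:
        return False, "Bay 1 must hold a VC Ethernet module (it stores the config)"
    if 2 in bays and bays[2] not in VC_ETHERNET:
        return False, "If bay 2 is used, it must also hold a VC Ethernet module"
    return True, "Layout is consistent with the bay 1/2 rule"

# The layout discussed above: switches in bays 1-4, VC Flex-10 in 5 & 6.
proposed = {1: "switch", 2: "switch", 3: "switch", 4: "switch",
            5: "VC Flex-10", 6: "VC Flex-10"}
print(check_layout(proposed))  # -> (False, "Bay 1 must hold a VC Ethernet module ...")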
Oh, that is going to be a problem if that's the case.

I can't understand how HP could design Flex-10 like that, seeing as it was available from the get-go with c-Class blade enclosures, but the first generation of blades came with onboard gigabit NICs that have to use bays 1/2, so any effort to install 10GbE NICs would always require the VC modules to go in bays 3/4 or 5/6.

Am I missing something here?
Virtual Connect for c-Class was available from day 1 of the c-Class introduction, as far as I remember. Admittedly it wasn't 10Gb to the blades in those days, but you aren't required to have 10Gb modules in bays 1&2, just VC Ethernet modules.

Just cleaned my old car out since getting a company car on Monday and threw out the original VC course notes, dated 2008.
I thought all VC modules were 10GbE.

So is there a specific module I should have in bays 1 & 2 if I'm going to put the Flex-10 modules in bays 5 & 6? Do you know the part numbers of what I need?

Thanks.
http://h18004.www1.hp.com/products/blades/components/ethernet/vc/index.html, but you would need them in bays 1, 2, 3 and 4 as far as I know. Then again, you could shuffle the mezzanine cards about, put your Flex-10s in 3&4 and move the switches to bays 5&6.

Have you checked that any of the mezzanines are supported in your G1 blades?

Quite frankly, I would think about replacing the 16 G1 blades with half a dozen Gen8 ones; DDR3 RAM is so much cheaper that you could easily have 256GB in each one, you'd have similar overall performance on your VMware cluster, and you could use the Flex-10s in bays 1&2.
Thanks; unfortunately, due to budget constraints, that's not an option just yet.

In the meantime, though, we have installed NC532m Flex-10 NICs in each of the blades and connected them to the two Flex-10 modules and back to the SANs.

Everything seems to be working OK, but we are not seeing the speeds we would expect from the NICs.

Running iperf tests between the blades, we can't seem to exceed 6.2Gbps in any direction, whereas iperf on the loopback adapter produces results of 13.1Gbps, so I know memory or CPU constraints are not the problem. All uplinks and server profiles are set to 10Gb in VC Manager. Even round-robin multipathing between two 10GbE NICs tops out at 6.3Gbps with IOPS set to 1 or 3.
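
For reference, this is roughly how we're driving the iperf runs between blade pairs so the numbers are comparable; a quick sketch only, the hostnames are placeholders and it assumes iperf2 is installed and already running in server mode ("iperf -s") on the target blades.

# Rough sketch of the blade-to-blade throughput test (hostnames are
# placeholders; assumes iperf2 is installed and "iperf -s" is already
# running on each target blade's test VM/host).
import re
import subprocess

TARGETS = ["blade02", "blade03", "blade04"]  # placeholder hostnames

def iperf_mbits(server_host, seconds=30):
    """Run one iperf2 client test against server_host and return Mbits/sec."""
    out = subprocess.run(
        ["iperf", "-c", server_host, "-t", str(seconds), "-f", "m"],
        capture_output=True, text=True, check=True).stdout
    match = re.search(r"([\d.]+)\s+Mbits/sec", out)  # final summary line
    return float(match.group(1)) if match else None

for host in TARGETS:
    print(f"this blade -> {host}: {iperf_mbits(host)} Mbits/sec")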

Does anyone know which mezzanine slot the NC532m adapters are supposed to go in on BL460c G1 blades? We have them installed in mezzanine 2, as there is already an Intel dual-port NIC in mezzanine 1.
The reason I ask is that these blades have two mezzanine slots, but one is x4 and one is x8; I wonder whether I have the NC532m NICs in the x4 mezzanine and that is what's limiting me to 6.2Gbps of bandwidth.
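
For what it's worth, here are my rough back-of-the-envelope numbers for why the x4 slot is my prime suspect; this assumes the mezzanine link runs at PCIe 1.x rates and a rough 20% protocol overhead, neither of which I have confirmed.

# Back-of-the-envelope check: could a PCIe 1.x x4 mezzanine cap a 10GbE
# port at roughly the 6.2Gbps we're seeing? (Assumes PCIe 1.x signalling
# and a rough 20% allowance for TLP/DLLP protocol overhead.)

lanes = 4
gt_per_lane = 2.5          # PCIe 1.x: 2.5 GT/s per lane
encoding = 8 / 10          # 8b/10b line encoding
raw_gbps = lanes * gt_per_lane * encoding          # 8.0 Gbps payload signalling
protocol_overhead = 0.20   # rough allowance for packet/link-layer overhead
usable_gbps = raw_gbps * (1 - protocol_overhead)   # ~6.4 Gbps

print(f"PCIe 1.x x{lanes}: ~{raw_gbps:.1f} Gbps raw, ~{usable_gbps:.1f} Gbps usable")
# ~6.4 Gbps usable is suspiciously close to the 6.2-6.3 Gbps iperf ceiling,
# whereas an x8 link (~12.8 Gbps usable) would not bottleneck a single 10GbE port.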
Does anyone have any input on the above question?

Thanks.
Sorry to keep bumping this, but does anyone know the answer to the following?

"Does anyone know which mezzanine slot the NC532m adapters are supposed to go in on BL460c G1 blades? We have them installed in mezzanine 2, as there is already an Intel dual-port NIC in mezzanine 1. The reason I ask is that these blades have two mezzanine slots, but one is x4 and one is x8; I wonder whether I have the NC532m NICs in the x4 mezzanine and that is what's limiting me to 6.2Gbps of bandwidth."

Thanks.
ASKER CERTIFIED SOLUTION
craigueh