
Connecting more than four HPE 1950 switches

rkblakely asked
I want to connect together six of HPE 1950-48G-2SFP+-2XGT switches. Looks like the maximum number that can be stacked together with the IRF stacking system is four – so there’s too many switches to stack.

I don’t want to buy the fiber modules.
I want to connect each switch via the 10Gb connections, with redundancy so the other switches keep working if one fails.

I’m thinking of connecting them in a ring like this –

Switch 1 10G port A to Switch 2 10G port B
Switch 2 10G port A to Switch 3 10G port B
Switch 3 10G port A to Switch 4 10G port B
Switch 4 10G port A to Switch 5 10G port B
Switch 5 10G port A to Switch 6 10G port B
Switch 6 10G port A to Switch 1 10G port B

Would that work?

Any other configuration changes I need to make, eg enable MSTP to deal with the loop, LLDP etc?


That would work in 2 scenarios.

1. You do not stack the switches.
2. You have more than 1 stack which is not looped.

If you choose option 1, then you will lose the benefit of combined management.

If you choose 2, then you run the risk of at least 1 stack ending up with a split-brain scenario, where a link between 2 switches fails and you end up with 2 stacks.

Ideally, you would have 2 stacks configured in loops. These 2 stacks would then be linked together using alternate links.
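For reference, IRF fabric setup on a Comware-based switch generally looks something like the sketch below. This is only an illustration: the member IDs and port numbers are assumptions, and on the 1950 much of this is normally done through the web interface rather than the CLI, so check the model's IRF configuration guide for the exact steps.

```
# Illustrative Comware-style IRF sketch (run on each member switch).
# Member IDs, priority, and port numbers are assumptions.
irf member 1 priority 32                 # highest priority becomes IRF master

interface ten-gigabitethernet 1/0/51
 shutdown                                # port must be down before binding to an IRF port

irf-port 1/1
 port group interface ten-gigabitethernet 1/0/51

interface ten-gigabitethernet 1/0/51
 undo shutdown

irf-port-configuration active            # activate the IRF port bindings
```

Repeat with the appropriate member ID and IRF port (1/1 or 1/2) on each switch so the stack members form a ring.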

Do you know you need 10Gbps links between the switches? In a real world scenario, where we have a converged FCoE and Ethernet network in a combined office & datacentre, we are not seeing 10Gbps throughput. This is with systems supporting in excess of 500 users on the LAN.

If you need/want more than 1Gbps throughput between the multiple stacks, you can easily link the 2 stacks with Etherchannel/link aggregation. A standards-compliant implementation of link aggregation (802.3ad/LACP) supports up to 8 active and 8 standby links in a single Etherchannel. Traffic across the Etherchannel is load balanced; you'll need to check the documentation to see what hashing method is used.
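As a rough sketch, that inter-stack link aggregation would look something like the following on a Comware-style CLI. The interface names and group number are assumptions for illustration; on the 1950 this is mainly configured through the web interface under link aggregation.

```
# Illustrative Comware-style LACP sketch -- names and numbers are assumptions
interface bridge-aggregation 1
 link-aggregation mode dynamic           # dynamic = LACP (802.3ad), not static
quit

interface ten-gigabitethernet 1/0/51
 port link-aggregation group 1           # add first member link
quit
interface ten-gigabitethernet 1/0/52
 port link-aggregation group 1           # add second member link
```

Configure the matching aggregation on the other stack, with the member links spread across different physical switches in each stack so the bundle survives a single switch failure.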