Eric
asked on
Dell M6220 to Dell 6224 Port channel conundrum
Good day to one and all
I have a stack of four M6220s in an M1000e enclosure and I want to get maximum bandwidth out to the rack-mounted Dell PowerConnect 6224 stack. However, a port channel allows a maximum of 8 member links, so 2 ports from each M6220 = 8. This means I cannot make use of the other 2 ports per blade switch.
Can anyone give me an idea or configuration help on how I might achieve the full use of the switch ports and associated bandwidth?
Thanks
E.
ASKER CERTIFIED SOLUTION
ASKER
Thanks for your ideas .
Not sure why the "C" grade. By your diagram, my answer was correct. Without 10Gb links, you're not going to magically get more than the allowed ports into a port-channel.
Your setup is 1x1Gb links, dual to the "core" from each switch on the stack. This is a good topology for resilience.
But, if your stacking cables are greater than 1Gb bandwidth, then you might use 4x1Gb from Sw1 to CoreA, 4x1Gb from Sw2 to CoreB.
This allows the most bandwidth with only one level of resiliency (can only lose one switch).
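If the stacking links can carry the cross-member traffic, the two 4x1Gb bundles described above can be built as LACP port channels. A minimal sketch, assuming PowerConnect 62xx-style CLI; the port ranges and channel-group numbers are placeholders for your own cabling:

```
! On the blade-switch stack: four ports from stack unit 1 toward CoreA
console# configure
console(config)# interface range ethernet 1/g17-1/g20
console(config-if)# channel-group 1 mode auto
console(config-if)# exit
! Four ports from stack unit 2 toward CoreB
console(config)# interface range ethernet 2/g17-2/g20
console(config-if)# channel-group 2 mode auto
console(config-if)# exit
console(config)# exit
! Verify membership and link state after both ends are configured
console# show interfaces port-channel 1
```

On the 62xx CLI, `mode auto` negotiates via LACP, whereas `mode on` forces a static LAG; both ends of each bundle must agree on membership and mode, and spanning tree will then treat each bundle as a single logical link.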
As for the MPLS...it's only 1Gb, so there's no advantage to adding bandwidth between the core and the edge switches for that traffic.
What's running at the core that needs more bandwidth?
Also note that you can't get more than 1Gb on any 1 link from the edge to the core. They are only 1Gb links.
Having 2x 4Gb links might help with congestion issues.
You can always ask a moderator for more help. They can ping more experts who might have more expertise with your particular equipment, or have a different/better solution.
I'd rather you have the best solution possible, within the limits of your environment.
ASKER
Hi - I used the guide on the close form; it wasn't a slight on you. It wasn't a complete solution and I didn't get a response after I posted the diagram. I have since acquired 2 more switches, which allowed me to look at the whole thing differently, so I was closing the request down. Again, thank you for your assistance.
ASKER
I am trying to maximise the bandwidth to the rack switches and MPLS, hence trying to use all available 1Gb ports.
The M6220 x 4 are stacked and the 6224 x 2 are stacked - all routes are via the 6224 Stack.
I am unable to upgrade to 10Gb as I have no budget, so I am using the existing 1Gb equipment.
Thanks
E.