Configuring trunk ports on a Cisco Catalyst 9300

I have four Cisco 9300 switches that are stacked and serving as access layer switches. Each switch in the stack has a 1Gb network module that I planned to use to create a link-aggregated trunk to our two Dell Force 10 core switches. The core switches are in a master/slave arrangement, so everything physically connected to the master is also physically connected to the slave for fault tolerance.

I wanted to make sure that I have this configured correctly in my lab environment. This stack is replacing a Cisco 4506 chassis that currently has two trunk ports that are just regular 1Gb ports on two switch blades in the chassis.

On the 9300 stack, I have the following to define the port channel, and then I assign the channel to the interfaces serving as trunks:

interface Port-channel30
 description *** TRUNK TO DELL FORCE 10 CORE SWITCHES ***
 switchport mode trunk


Then on the interfaces (_1 is the master and _2 is the slave core switch):

interface GigabitEthernet1/1/1
 description *** TRUNK TO FORCE 10 4820_1 ***
 switchport mode trunk
 channel-group 30 mode active

interface GigabitEthernet2/1/1
 description *** TRUNK TO FORCE 10 4820_2 ***
 switchport mode trunk
 channel-group 30 mode active

interface GigabitEthernet3/1/1
 description *** TRUNK TO FORCE 10 4820_1 ***
 switchport mode trunk
 channel-group 30 mode active

interface GigabitEthernet4/1/1
 description *** TRUNK TO FORCE 10 4820_2 ***
 switchport mode trunk
 channel-group 30 mode active
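
For reference, these are the standard IOS-XE show commands I would use to verify the bundle and trunk once the links are up (command names are standard; output will obviously vary by platform and release):

show etherchannel 30 summary
show interfaces port-channel 30 trunk
show lacp neighbor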


My question is: is this the correct way to do this, and am I missing anything? Would I be better off creating two different port channels, assigning one to the trunk ports connecting to the master and another to those connecting to the slave core switch? What I am trying to achieve is a 2Gb trunk to each of the core switch master and slave for fault tolerance.
Asked by Steve Bantz (IT Manager)

atlas_shuddered (Sr. Network Engineer) commented:
The trunks and port-channel look good.

Here are a couple of things that you are going to run into.

A port-channel is more about redundancy than throughput. You are bundling a group of independent physical links into one virtual connection. This gives the path fault tolerance: if one link goes down, the remaining links continue to pass traffic. You do get the advantage of additional bandwidth, but the algorithm makes the final determination of which traffic goes on which link. The way this works in reality is that all traffic is sent down link 1 until it is quite nearly full or the next data stream would saturate it (generally around 80%). Once that point is reached, traffic begins to be passed down the next remaining physical link. Wash, rinse, repeat until all links are being used. The short of it is, bandwidth utilization on the links is asymmetric.

On the matter of creating a second port-channel: it can be done, but it really doesn't buy you anything except a redundant port-channel that sits there unused until the other port-channel goes offline. This is due to how spanning tree will view the topology, specifically for loop avoidance. Assuming you stick with only four net uplinks to the core, you would be cutting the usable bandwidth of any one port-channel in half. To really get any benefit from it, I would break the Dell switches apart and have them run independently, then build a port-channel down to each switch, as sketched below. Both port-channels would then be up at whatever aggregate bandwidth you build into the independent port-channels.
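
For illustration, a rough sketch of that two-channel layout on the 9300 side (Po31 is a hypothetical second channel number; adjust descriptions and VLANs to your own scheme):

interface Port-channel30
 description *** TRUNK TO FORCE 10 4820_1 ***
 switchport mode trunk
!
interface Port-channel31
 description *** TRUNK TO FORCE 10 4820_2 ***
 switchport mode trunk
!
interface range GigabitEthernet1/1/1, GigabitEthernet3/1/1
 switchport mode trunk
 channel-group 30 mode active
!
interface range GigabitEthernet2/1/1, GigabitEthernet4/1/1
 switchport mode trunk
 channel-group 31 mode active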
JustInCase commented:
> Each switch in the stack has a 1Gb network module that I planned to use to create a link-aggregated trunk to our two Dell Force 10 core switches. My question is: is this the correct way to do this, and am I missing anything?
In this case it is not the correct configuration (for the case where there are two independent devices upstream/downstream).
> Would I be better off creating two different port channels, assigning one to the trunk ports connecting to the master and another to those connecting to the slave core switch?
If you connect an etherchannel to two independent switches, you need to create two etherchannels instead of one (one to each switch).
> What I am trying to achieve is a 2Gb trunk to each of the core switch master and slave for fault tolerance.
Make sure that you pick a load-balancing algorithm that is suitable for your environment (it may need to be changed from the default for better utilization of the links).
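On a Catalyst 9300 the algorithm is a global setting. A minimal example, assuming src-dst-ip suits your traffic mix (check the current setting first and pick what matches your flows):

Switch# show etherchannel load-balance
Switch(config)# port-channel load-balance src-dst-ip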
> The way this works in reality is that all traffic is sent down link 1 until it is quite nearly full or the next data stream would saturate it (generally around 80%). Once that point is reached, traffic begins to be passed down the next remaining physical link. Wash, rinse, repeat until all links are being used. The short of it is, bandwidth utilization on the links is asymmetric.
That's not how LAG on switches operates. The described behavior is VMware's Load-Based Teaming (if I remember correctly where in VMware's technologies that description fits).
atlas_shuddered (Sr. Network Engineer) commented:
What am I missing? Why the extra channels? The 9300s are a stack; why use an extra channel?

Load on the port channel will be asymmetric with or without VMware involvement.
JustInCase commented:
The way I understand the configuration, the stack is connected to two core devices that are not in a stack (it is not mentioned that those two devices are one logical device - at least I did not understand it that way, but I may be wrong :) ). In that case, the connection to each device needs its own etherchannel. Of course, if I misunderstood the topology and the core devices are one logical device, then only one LAG is needed.

Load on a switch LAG is actually calculated according to the load-balancing algorithm configured on the device: the interface used for specific traffic depends on a hash computed from the fields the chosen algorithm uses (IP addresses, ports, and so on). In some cases it can happen that one link is heavily utilized while another link is never utilized. It is just a different mechanism for assigning traffic to an interface in the LAG (not the one described in the post above). That's why there are different load-balancing algorithms to choose from, where possible, so you can avoid the situation where traffic uses just one link or the load balancing is very unequal. You can even test which link a given flow would hash to, as shown below.
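
On the Catalyst side there is a test command for this (the port-channel number matches your config; the IP addresses here are placeholders):

Switch# test etherchannel load-balance interface port-channel 30 ip 10.1.1.10 10.2.2.20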
Steve Bantz (IT Manager, Author) commented:
Thanks for the comments. It is true that the two core switches are not stacked, at least I don't think so. They ARE connected with a 40Gb VLT connection, but it is my understanding that one is a master and one is a standby for fault tolerance. I have to connect to each separately via SSH to configure them, so that leads me to believe they are not seen as one logical device. Every other switch or device connected to the master also has a physical connection to the standby. I will look more closely at the core switches to see how they are configured; I inherited that part of the network infrastructure.
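
If the cores are running Dell Networking OS (FTOS), the VLT status commands should tell me whether the pair is acting as a VLT domain (command names taken from Dell's documentation; I'll verify against our version):

show vlt brief
show vlt detail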
atlas_shuddered (Sr. Network Engineer) commented:
Okay.  With them being independent brains, I agree with JIC's notes.  Sorry for the confusion on that point.  Cheers
JustInCase commented:
In the case of Nexus vPC, you would SSH to two different devices (that are not one logical device) and would still be able to create one MC-LAG to both devices. I am not familiar with Dell HA configurations, so I googled a little - it looks like Dell has two technologies similar to vPC: one is MLAG and the other is VLT. Since you wrote that VLT is implemented, you can actually configure one LAG from the 9300 stack to both Dell Force 10 switches, provided the ports on the core switches are properly configured for multi-chassis LAG (see Dell's Force 10 VLT configuration manual).
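
A very rough sketch of what the Dell side might look like, based on Dell's VLT documentation (port-channel 30 just mirrors your Cisco side; confirm the exact syntax against your FTOS release). The same LAG is defined on both VLT peers, each pointing at its twin with vlt-peer-lag:

! On each Force 10 4820 (FTOS) - assumes the VLT domain itself is already up
interface Port-channel 30
 description *** VLT LAG TO 9300 STACK ***
 portmode hybrid
 switchport
 vlt-peer-lag port-channel 30
 no shutdown
! VLAN tagging in FTOS is done under the VLAN interfaces,
! e.g. "interface Vlan 10" then "tagged Port-channel 30"

With that in place, your original single Port-channel30 across all four uplinks on the 9300 stack should come up as one multi-chassis LAG spanning both cores.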