I have four Cisco 9300 switches that are stacked and serving as access-layer switches. Each switch in the stack has a 1Gb network module that I had planned on using to create a link-aggregated trunk to our two Dell Force 10 core switches. The core switches are in a master/slave arrangement, so everything physically connected to the master is also physically connected to the slave for fault tolerance.
I wanted to make sure that I have this configured correctly in my lab environment. This stack is replacing a Cisco 4506 chassis that currently has two trunk ports, which are just regular 1Gb ports on two switch blades in the chassis.
On the 9300 stack, I have the following configuration to define the port channel, and I then assign the channel to the interfaces serving as trunks.
interface Port-channel30
 description *** TRUNK TO DELL FORCE 10 CORE SWITCHES ***
 switchport mode trunk
Then, on the member interfaces (_1 is the master core switch and _2 is the slave):
interface GigabitEthernet1/1/1
 description *** TRUNK TO FORCE 10 4820_1 ***
 switchport mode trunk
 channel-group 30 mode active
!
interface GigabitEthernet2/1/1
 description *** TRUNK TO FORCE 10 4820_2 ***
 switchport mode trunk
 channel-group 30 mode active
!
interface GigabitEthernet3/1/1
 description *** TRUNK TO FORCE 10 4820_1 ***
 switchport mode trunk
 channel-group 30 mode active
!
interface GigabitEthernet4/1/1
 description *** TRUNK TO FORCE 10 4820_2 ***
 switchport mode trunk
 channel-group 30 mode active
My question is: is this the correct way to do this, and am I missing anything? Would I be better off creating two different port channels, assigning one to the trunk ports connecting to the master and the other to the ports connecting to the slave core switch? What I am trying to achieve is a 2Gb trunk to both the core master and the slave for fault tolerance.
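For verification in the lab I was planning to lean on the standard IOS status commands, something like this (channel group 30 matches my config above):

show etherchannel 30 summary
show interfaces trunk
show lacp neighbor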
Here are a couple of things that you are going to run into.
A port-channel is more about redundancy than throughput. You are bundling a group of independent physical links into one virtual connection. This gives the path fault tolerance: if one link goes down, the remaining links continue to pass traffic. You do get the advantage of additional bandwidth, but the load-balancing algorithm makes the final determination of which traffic goes on which link. The way this works in reality is that the switch hashes fields from each frame (source/destination MAC, IP, or port, depending on the configured method) to pick a member link, so any single flow is pinned to one physical link and can never run faster than that one link, and how evenly the flows spread depends entirely on your traffic mix. The short of it is, bandwidth utilization across the links is asymmetric.
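If the flows end up landing unevenly, the hash inputs can be tuned globally on the 9300 side; a minimal sketch (src-dst-ip is just one of the available methods, check what your platform supports):

port-channel load-balance src-dst-ip
show etherchannel load-balance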
On the matter of creating a second port-channel: that can be done; however, it really doesn't buy you anything but a redundant port-channel that sits idle until the other port-channel goes offline. This is due to how spanning tree views the topology, specifically for loop avoidance: with both Dell switches presenting one logical core, the second channel is a parallel path and gets blocked. Assuming that you are going to stick with only four total uplinks to the core, you would be cutting the usable bandwidth of any one port-channel in half. To really get any benefit from it, I would break the Dell switches apart and have them run independently, then build a port-channel down to each switch. Both port-channels would then be up at whatever aggregate bandwidth you build into the independent port-channels.
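A rough sketch of that split design, assuming the Dell switches are separated; the channel numbers 31/32 and the interface pairing are hypothetical:

interface Port-channel31
 description *** TRUNK TO FORCE 10 4820_1 (STANDALONE) ***
 switchport mode trunk
!
interface range GigabitEthernet1/1/1, GigabitEthernet3/1/1
 switchport mode trunk
 channel-group 31 mode active
!
interface Port-channel32
 description *** TRUNK TO FORCE 10 4820_2 (STANDALONE) ***
 switchport mode trunk
!
interface range GigabitEthernet2/1/1, GigabitEthernet4/1/1
 switchport mode trunk
 channel-group 32 mode active

Keep in mind that if the two Dell switches still interconnect with each other, spanning tree will pick a forwarding path per VLAN, so how the load actually splits depends on your VLAN and root-bridge placement.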