This article is about the Cisco QSFP-4SFP10G-CU1M cables, which are designed to break out a 40Gb QSFP+ port into four 10Gb SFP+ ports. I recently ran into these and found very little documentation on how they are supposed to be configured.
So you've just purchased a Cisco Nexus-line switch with fancy-looking 40Gb ports. Great, high speed! Well, the questions I had were not that easy to find answers for.
How do these work? What cables do I need? What is the port configuration?
You're wondering: these SFP ports look pretty big. That's true; they are in fact a lot bigger than a regular 1/10Gb SFP+ port. They are QSFP+ ports, not new in the Cisco range, but they are advancing pretty quickly, with the ability to bring a higher level of network redundancy when paired with the correct cables.
I recently stumbled across a specific design requirement that called for uplinking a Cisco Nexus 3172TQ to an existing Catalyst 3850-E switch stack, and found there was little to no documentation on how this is supposed to be configured.
So first off, the design. All network designs are obviously different; however, port configurations and setups follow a pretty basic standard. As mentioned above, this setup required uplinking a new Nexus switch to an existing stack. The business requirement was to utilize Cisco's QSFP-4SFP10G-CU1M cables, which connect one 40Gb QSFP+ port (on the Nexus) to four 10Gb SFP+ ports on the 3850. So basically one 40Gb QSFP+ end, fanning out to four SFP+ ends on the same cable.
Great. Now we are all on the same page. Easy, right? Trunk these ports, create a port channel, plug it in... job's done!
No, not quite - but these cables are super cool.
Before I forget: if you're reading this and considering upping your network throughput, Cisco has specific makes/models of cables depending on the type of switches you are connecting, so be sure to stop by the Cisco website first and confirm compatibility.
Okay - so on to the port configuration.
To enable the interface and get the trunk link working, you actually have to break the 40Gb QSFP+ port on the Nexus switch out into four virtual ports. You will then have four virtual 10Gb ports. The switch treats these virtual ports like any other physical interface; the idea is similar to a VLAN interface in that it's a logical construct, except each one maps onto a physical lane of the cable.
To break out the port interface, first you have to figure out which module the port is on. If you are using a Nexus 3172TQ and you're lucky enough to stumble upon this post, it's easy: it's module 1, as the switch only has one module for QSFP+ ports. If you have more modules, use the "show module" or "show inventory" command from the Cisco CLI.
Once identified, it's straightforward from here.
Enter configuration mode and run:

interface breakout slot 1 port 49 map 10g-4x

This will break the single 40Gb port interface into four virtual 10Gb ports.
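Once the breakout is applied, the four new interfaces show up sub-numbered under the original port (port 49 here just follows the example above; your port number may differ). You can sanity-check them with:

show interface brief

You should see entries along the lines of Ethernet1/49/1 through Ethernet1/49/4 in place of the single 40Gb interface.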
You may then need to power-cycle the module:

poweroff module 1
no poweroff module 1

or perform a "shut" / "no shut" on the ports to get them to come back up.
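As a rough sketch, bouncing the new virtual ports looks something like this (interface numbers assume the port 49 example above):

interface ethernet 1/49/1-4
  shutdown
  no shutdown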
You can also remove the configuration and return to a single 40Gb port with no virtual ports by using this command:

no interface breakout slot 1 port 49 map 10g-4x
Off the back of this simple configuration, the options for moving forward with your network redundancy are pretty much up to you.
- You could create a port channel on both switches and have 40Gb of aggregate throughput across the switch ports.
- You could uplink the 4x 10Gb SFP+ ends to four separate switches, creating that aggregation point or redundancy.
The options are endless, to some extent...
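As a minimal sketch of the port-channel option on the Nexus side - assuming LACP, trunk ports, and port-channel number 10 purely for illustration:

feature lacp
interface ethernet 1/49/1-4
  switchport
  switchport mode trunk
  channel-group 10 mode active
interface port-channel 10
  switchport mode trunk

The 3850 side would then need a matching port channel across its four SFP+ ports, with the same trunk and LACP settings.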
One cool feature on the Nexus switches when setting up this configuration is that the 40Gb QSFP+ ports have four mini status lights, which let you easily see the status of each virtual port on that interface - simple, but a nice quick visual tool.
I hope this article is of some help to users placed in the same situation I was. If you've never looked into the QSFP-4SFP cables, take a look - they're worth it.