glenmos (Russian Federation) asked:

Cisco UCS Rack C-series servers networking using 10 Gb

Hello Experts,

We plan to refresh our VMware setup with 3 new Cisco UCS C240 M3 rack servers, each with a VIC 1225 dual-port 10 Gb SFP+ adapter, and implement VMware Virtual SAN.

What is the most reasonable way to connect them together at 10 Gbit? We have used a different server vendor so far and have little experience with UCS. Should we consider Nexus switches, fabric interconnects, or just any other switches that support 10 Gb? The idea is to keep it simple, but Cisco UCS management looks like an interesting option.

The rest of the network is 1 Gb, based on Cisco 3750G switches, which do not support 10 Gb.
Topics: Networking Hardware, Server Hardware, VMware

James H

If you understand how to manage NX-OS, then go with Nexus switches uplinked to a Fabric Interconnect.

NX-OS is not the same as Cisco IOS and requires understanding of how to use and implement it. UCS is very good; however, configuration and setup of UCS is complicated and requires training.

This isn't the most "reasonable" option price-wise, but it keeps everything aligned with one vendor.

We would like to avoid learning NX-OS, as it would overcomplicate the setup without obvious benefits.

After further reading I understand we have 2 options:

Option 1: Connect the servers to a pair of 6248 fabric interconnects so that the servers have 10 Gbit connectivity, and use aggregated 1 Gb uplinks to the existing distribution switches. We would also be able to use UCS management via the FIs.

Option 2: Connect the servers to a pair of new switches that support 10 Gb, then uplink those to the distribution switches.

The second solution would be easier to install and, I guess, cheaper. Server management would be through the individual CIMC ports.

What switches could we use for the second option, with enough SFP+ ports?
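Whichever option you pick, on the ESXi side the VIC 1225 ports simply appear as vmnics, so the host-side setup is the same. A minimal esxcli sketch of dedicating a standard vSwitch to the 10 Gb ports and creating a VSAN VMkernel interface; the names here (vSwitch1, vmnic4/vmnic5, vmk2, VLAN 20, the IP address) are hypothetical, substitute your own:

```shell
# Create a vSwitch dedicated to 10Gb traffic and attach both VIC 1225 uplinks
# (vSwitch1, vmnic4 and vmnic5 are assumed names -- check "esxcli network nic list"):
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch1

# Port group and VMkernel interface for VSAN traffic (VLAN 20 and the
# 10.0.20.x addressing are illustrative only):
esxcli network vswitch standard portgroup add --portgroup-name=VSAN --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=VSAN --vlan-id=20
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VSAN
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.20.11 --netmask=255.255.255.0 --type=static
```

Repeat on each of the 3 hosts with a unique IP per host.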

10 Gb connectivity is required for the VSAN traffic, which will flow exclusively between the 3 ESXi servers and will not hit the uplinks, i.e. it is east/west traffic.

All servers are virtual machines running on these 3 ESXi hosts, and the uplinks will carry only the traffic generated by users accessing those servers. No VDI, nothing heavy.
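For the east/west VSAN traffic to flow, each host's VMkernel interface still has to be tagged for Virtual SAN. A sketch of the ESXi 5.5-era esxcli syntax, assuming a hypothetical vmk2 interface already exists:

```shell
# Tag the (assumed) VMkernel interface vmk2 for Virtual SAN traffic;
# run once on each of the 3 hosts:
esxcli vsan network ipv4 add -i vmk2

# Confirm the interface is now listed as a VSAN network:
esxcli vsan network list
```

The same tagging can also be done from the vSphere Web Client when editing the VMkernel adapter.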

Could you suggest a switch model? The 3 servers will need only 3 ports on each switch.