glenmos (Russian Federation) asked:

Cisco UCS Rack C-series servers networking using 10 Gb

Hello Experts,

We plan to refresh our VMware setup with 3 new Cisco UCS C240 M3 rack servers with VIC 1225 Dual Port 10Gb SFP+ and implement VMware Virtual SAN.

What is the most reasonable way to connect them together at 10 Gbit? Until now we have used a different server vendor, so we have little experience with UCS. Should we consider Nexus switches, Fabric Interconnects, or just any other switches that support 10 Gb? The idea is to keep it simple, but Cisco UCS management looks like an interesting option.

The rest of the network is 1 Gb, based on Cisco 3750G switches, which do not support 10 Gb.
James H (United States of America) replied:

If you understand how to manage NX-OS, then go with Nexus switches uplinked to a Fabric Interconnect.

NX-OS is not the same as Cisco IOS and requires an understanding of how to use and implement it. UCS is very good; however, configuration and setup of UCS is complicated and requires training and understanding.

This isn't the most "reasonable" option when it comes to price, but it keeps everything aligned with one vendor.
glenmos (Asker) replied:

We would like to avoid learning NX-OS, as it would overcomplicate the setup without obvious benefits.

After further reading, I understand we have 2 options:

Option 1: Connect the servers to a pair of 6248 Fabric Interconnects so that the servers have 10 Gbit connectivity, and use aggregated 1 Gb uplinks to the existing distribution switches. We would also be able to use UCS management via the FIs.

Option 2: Connect the servers to a pair of new switches that support 10 Gb, and then use uplinks to the distribution switches.

The second option would be easier to install and, I would guess, cheaper. Server management would be through the individual CIMC ports.

Which switches could we use for the second option? They would need enough SFP+ ports ...
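To illustrate Option 2, the relevant parts of the switch configuration might look like the sketch below (Cisco IOS-style syntax; all interface numbers, VLAN IDs, and the port-channel number are hypothetical, and the exact CLI depends on the switch model chosen):

```
! 10 Gb SFP+ port facing one VIC 1225 port on an ESXi host (repeat per host/port)
interface TenGigabitEthernet1/1
 description ESXi-1 VIC1225 port 0
 switchport mode trunk
 switchport trunk allowed vlan 10,20   ! assumed VM-traffic and VSAN VLANs
 mtu 9198                              ! jumbo frames, commonly recommended for VSAN
!
! Aggregated 1 Gb uplink back to the 3750G distribution switches (LACP)
interface Port-channel1
 description Uplink to 3750G stack
 switchport mode trunk
 switchport trunk allowed vlan 10      ! keep the VSAN VLAN off the 1 Gb uplinks
!
interface range GigabitEthernet1/45 - 48
 channel-group 1 mode active           ! LACP
```

Keeping the VSAN VLAN off the 1 Gb uplinks matches the intent that storage traffic stays East/West between the hosts on the 10 Gb ports.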
ASKER CERTIFIED SOLUTION from James H (solution content is only available to Experts Exchange members).
glenmos (Asker) replied:

The 10 Gb connectivity is required for the VSAN traffic, which will flow exclusively between the 3 ESXi servers (East/West) and will not hit the uplinks.
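Once the 10 Gb links are in place, tagging a vmkernel interface for VSAN traffic on each host can be sketched roughly as follows (a hedged example using esxcli from the ESXi 5.5 era; the vSwitch and portgroup names, vmnic and vmk numbers, VLAN ID, and IP addressing are all assumptions for illustration):

```shell
# Create a standard vSwitch backed by one 10 Gb port of the VIC 1225
# (vmnic2 is an assumed uplink name; verify with: esxcli network nic list)
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2

# Port group for VSAN traffic on an assumed VLAN 20
esxcli network vswitch standard portgroup add -v vSwitch1 -p VSAN
esxcli network vswitch standard portgroup set -p VSAN --vlan-id 20

# vmkernel interface with an assumed static address (unique per host)
esxcli network ip interface add -i vmk1 -p VSAN
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.20.11 -N 255.255.255.0

# Tag the vmkernel interface for VSAN traffic
esxcli vsan network ipv4 add -i vmk1
```

Each of the 3 hosts would get its own address on the VSAN subnet; the same result can also be achieved through the vSphere Web Client.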

All of our servers are virtual machines running on these 3 ESXi hosts, and the uplinks will carry only the traffic generated by users accessing those servers. No VDI, nothing heavy.

Could you suggest a switch model? The 3 servers will require only 3 ports on each switch.