10 Gb networking for Cisco UCS C-Series rack servers

Hello Experts,

We plan to refresh our VMware setup with 3 new Cisco UCS C240 M3 rack servers with VIC 1225 dual-port 10 Gb SFP+ adapters and implement VMware Virtual SAN.

What is the most reasonable way to connect them together over 10 Gbit? So far we have used a different server vendor and have little experience with UCS. Should we consider Nexus switches, Fabric Interconnects, or just any other switches that support 10 Gb? The idea is to keep it simple, but Cisco UCS management looks like an interesting option.

The rest of the network is 1 Gb, based on Cisco 3750G switches, which do not support 10 Gb.

James (HIT Director) commented:
If you understand how to manage NX-OS, then go with Nexus switches uplinked to Fabric Interconnects.

NX-OS is not the same as Cisco IOS and requires understanding of how to use and implement it. UCS is very good; however, configuration and setup of UCS is complicated and requires training and understanding.

This isn't the most "reasonable" option when it comes to price, but it keeps everything aligned with one vendor.
glenmos (Author) commented:
Would like to avoid the NX-OS learning curve, as it would overcomplicate the setup without obvious benefits.

After further reading, I understand we have 2 options:

Option 1: Connect the servers to a pair of 6248 Fabric Interconnects so that the servers have 10 Gbit connectivity. Use aggregated 1 Gb uplinks to the existing distribution switches. As a bonus, we would get UCS management via the FIs.

Option 2: Connect the servers to a pair of new switches that support 10 Gb, and then use uplinks to the distribution switches.

The second solution would be easier to install and, I guess, cheaper. Server management would go through the individual CIMC ports.

What switches can we use for the second option? They would need enough SFP+ ports...
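For the uplink side of option 2, the aggregated 1 Gb links up to the distribution switches would typically be bundled with LACP. A minimal sketch in Cisco IOS syntax, assuming four 1 Gb members on a trunk (interface numbers, VLAN handling, and descriptions are placeholders, not from the actual setup):

```
! Hypothetical example: LACP port-channel from a new 10 Gb access
! switch up to the existing 3750G distribution switch.
interface Port-channel1
 description Uplink to 3750G distribution
 switchport mode trunk
!
interface range GigabitEthernet1/0/1 - 4
 description Po1 member links
 switchport mode trunk
 channel-group 1 mode active
!
end
```

The 3750G side would need a matching port-channel; `channel-group 1 mode active` runs LACP, so the bundle negotiates automatically on both ends.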
James (HIT Director) commented:
Technically you can use any switch: Brocade, HP, Extreme, etc. It doesn't "have" to be Cisco.
What you need to be careful with is licensing and what your requirements are (Layer 3, etc.); these are NOT standard across vendors.

With option 1, I would be concerned about a bottleneck at the uplinks. How many servers, and what is your anticipated NORTH/SOUTH traffic?

glenmos (Author) commented:
10 Gb connectivity is required for the VSAN traffic, which will flow exclusively between the 3 ESXi servers and will not hit the uplinks, i.e. east/west.

All servers are virtual, running on these 3 ESXi hosts, and the uplinks will carry only the traffic generated by users accessing those servers. No VDI, nothing heavy.

Could you suggest a switch model? The 3 servers will only need 3 ports on each switch.
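As a side note on the VSAN east/west traffic: it rides on a VMkernel port on each host that has been tagged for Virtual SAN. A minimal sketch using the ESXi 5.5-era esxcli syntax (vmk2 is a placeholder for whichever VMkernel interface sits on the 10 Gb VIC uplinks):

```
# Hypothetical example: enable Virtual SAN traffic on a VMkernel
# interface, run on each of the three ESXi hosts.
esxcli vsan network ipv4 add -i vmk2

# Confirm which VMkernel interfaces now carry VSAN traffic
esxcli vsan network list
```

This is per-host configuration; the same step can also be done from the vSphere Web Client when enabling VSAN on the cluster.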
Question has a verified solution.
