Cisco Catalyst Switching Network Design

dee_nz asked:
Is this design correct?
The Catalyst switches allow me to have 4x1G or 2x10G uplinks. I'm going to go with 10G uplinks because I think we'll need the bandwidth between the stacks.
In the core stack, I will connect one NIC of each server's dual-NIC pair to each switch, making the core redundant for those servers: if one core switch fails, I can still reach the servers.
Each access-stack switch will have an EtherChannel link to each core switch, providing a fast uplink between the stacks and some redundancy: if one access switch fails, users on the other switch will still be able to reach the servers.
[Attached diagram: Core-Access-Switch-5.PNG]
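For reference, here's a rough sketch of what I have in mind for the access-stack uplink. The interface and channel-group numbers are placeholders, not the actual ports: one 10G port per stack member, bundled into a single cross-stack EtherChannel toward the core.

! Sketch only: one 10G uplink from each access-stack member,
! bundled into one cross-stack EtherChannel (LACP) toward the core.
interface range TenGigabitEthernet1/0/1 , TenGigabitEthernet2/0/1
 description Uplink to core stack
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 description EtherChannel uplink to core stack
 switchport mode trunk
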
Instructor (Top Expert 2015) commented:
Looks good to me.

How far apart are the 3750s from the 2960s?

Author commented:
Thanks for your comment. Is there anything else I could improve in this design?
The switches are in the same rack. I'm not sure which 10GE modules to get; the compatibility matrix is here:
http://www.cisco.com/en/US/docs/interfaces_modules/transceiver_modules/compatibility/matrix/OL_6974.html#wp48759
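Once the modules arrive, I assume a quick CLI sanity check would confirm the switches recognize them:

show inventory
show interfaces transceiver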

Author commented:
These ones? Seeing as the stacks are close together, I don't need fiber uplinks...
SFP-H10GB-CU1M
Robert Sutton Jr, Senior Network Manager, commented:
Even if they're stacked, use fiber if the option to do so is available. Don't shortcut on cabling when you're using those modules.
Don Johnston, Instructor (Top Expert 2015), commented:
If all the switches are in the same rack, I'd skip the 2960s and just get three of the 48-port 3750s. There won't be much of a difference in cost, and you'll have the same redundancy, 32G of StackWise bandwidth between all the switches, and a single management point.
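If you go that route, pinning the stack master is a one-time bit of config. A small sketch with illustrative priority values (1-15, highest wins the master election):

! Illustrative values only: keep the master role predictable
! across reloads by staggering member priorities.
switch 1 priority 15
switch 2 priority 14
switch 3 priority 13

"show switch" will then confirm member numbers, roles, and stack state.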

Author commented:
I thought having two separate stacks (core and access) would mean that traffic on the core stack wouldn't interfere with traffic on the access stack, e.g. moving VMs between ESX hosts wouldn't affect users connected to the access stack. Wouldn't two separate stacks therefore be better?
Don Johnston, Instructor (Top Expert 2015), commented:
> moving VMs between ESX hosts wouldn't affect users connected to the access stack

Well, that's true. Are you going to be moving VMs between ESX servers? Using vMotion? How much traffic will these moves generate? How many hosts will be connected to these switches? At what speed?

Author commented:
Yes, we will be moving VMs between ESX servers, and we also have some users who open files off the server and process/render images, which generates a lot of traffic. The plan is to connect these users directly to the core stack so all that traffic stays within it. Does that make sense?
Don Johnston, Instructor (Top Expert 2015), commented:
And how will the ESX servers be connected? If your server-to-switch connections are 1Gb, then each server will never generate more than 1Gb of traffic. If you're using multiple NICs (say four), then it will never generate more than 4Gb. Now, if you're going to be moving multiple VMs on multiple ESX servers, you could create a bottleneck, since the stack backplane is 32Gb.
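To illustrate the multi-NIC case (port numbers are hypothetical): a four-NIC ESX host split across both core-stack members could be bundled as a static cross-stack EtherChannel. Note that the standard vSwitch's "route based on IP hash" teaming wants a static channel (mode on), not LACP:

! Hypothetical ports: two NICs to stack member 1, two to member 2.
interface range GigabitEthernet1/0/10 - 11 , GigabitEthernet2/0/10 - 11
 description ESX host vmnic0-3
 switchport mode trunk
 channel-group 10 mode on
!
port-channel load-balance src-dst-ip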

As for inter-server traffic not interfering with access-stack traffic: will the access traffic be limited to the access stack? I thought the servers were all on the core stack, so the access traffic is going to end up on the core stack anyway.

But then again, it's like I always say: In design, there are rarely "right" or "wrong" designs. Just different levels of "good"... Assuming it works.

Author commented:
I think the design is OK, but I need to do some more monitoring to understand what is currently happening on the network before I know whether the new design will be effective. Anyway, I appreciate your comments/feedback. Cheers.
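As a starting point for that monitoring, I'm planning to lean on the built-in counters before reaching for SNMP tooling (interface names are placeholders):

! Config: shorten the averaging window from the default 5 minutes.
interface TenGigabitEthernet1/0/1
 load-interval 30

And from exec mode, spot-check utilization and channel health:

show interfaces TenGigabitEthernet1/0/1 | include rate
show etherchannel summary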
