dee_nz

asked:

Cisco Catalyst Switching Network Design

Is this design correct?
The Catalyst switches allow me to have 4 x 1G or 2 x 10G uplinks.
I'm going to go for 10G uplinks because I think we will need the bandwidth between the stacks.
In the core stack I will connect one of the dual NICs from each server into each switch, making the core redundant for those servers: if one core switch fails, I can still access the servers.
Each access stack switch will have an EtherChannel link to each core switch, providing a fast uplink between the stacks and some redundancy: if one access switch fails, users on the other switch will still be able to reach the servers.
[Attached diagram: Core-Access-Switch-5.PNG]
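For context, the access-stack side of that uplink bundle could look roughly like the sketch below. It assumes a cross-stack EtherChannel with one 10G port on each access stack member; the interface names, port-channel number, and trunking are placeholders for illustration, not taken from the diagram.

    ! Sketch: access-stack side of a cross-stack EtherChannel uplink to the core stack.
    ! Interface names and the port-channel number are placeholders.
    interface TenGigabitEthernet1/1/1
     description Uplink to core stack (access switch 1)
     switchport mode trunk
     channel-group 1 mode active
    !
    interface TenGigabitEthernet2/1/1
     description Uplink to core stack (access switch 2)
     switchport mode trunk
     channel-group 1 mode active
    !
    ! LACP ("mode active") is generally supported for cross-stack bundles on these
    ! platforms; check the IOS release notes for your exact models before relying on it.
    interface Port-channel1
     description 2 x 10G uplink to core stack
     switchport mode trunk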
ASKER CERTIFIED SOLUTION
Don Johnston
dee_nz

ASKER

Thanks for your comment. Is there anything else I could improve in this design?
The switches are in the same rack. I'm not sure which 10GE modules to get:
http://www.cisco.com/en/US/docs/interfaces_modules/transceiver_modules/compatibility/matrix/OL_6974.html#wp48759
dee_nz

ASKER

These ones? Seeing as the stacks are close together, I don't need fiber uplinks...
SFP-H10GB-CU1M
Even if they're stacked, use fiber if the option to do so is available. Don't shortcut on cabling when you're using those modules.
If all the switches are in the same rack, I'd skip the 2960s and just get three of the 48-port 3750s. There won't be much of a difference in cost, and you'll have the same redundancy, 32G StackWise bandwidth between all the switches, and a single management point.
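If you go the single-stack route, a quick sanity check of the master election and stack health after cabling the StackWise ports might look like this; the switch numbers and priority values are only an example (the highest priority wins the master election):

    ! Global config: give the preferred stack master the highest priority (1-15).
    switch 1 priority 15
    switch 2 priority 10
    switch 3 priority 5
    !
    ! Exec commands to verify stack membership, stack-port state and ring speed.
    show switch
    show switch stack-ports
    show switch stack-ring speed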
dee_nz

ASKER

I thought having two separate stacks (core and access) would mean that traffic on the core stack wouldn't interfere with traffic on the access stack, e.g. moving VMs between ESX hosts wouldn't affect users connected to the access stack. Doesn't that make two separate stacks better?
>moving VMs between ESX hosts wouldn't affect users connected to the access stack

Well, that's true. Are you going to be moving VMs between ESX servers? Using vMotion? How much traffic will these moves generate? How many hosts will be connected to these switches? At what speed?

dee_nz

ASKER

Yes, we will be moving VMs between ESX servers, and we also have some other users who open files off the server and process/render images, which generates lots of traffic. The plan is to connect these users directly to the core stack so all that traffic stays within the core stack. Does that make sense?
And how will the ESX servers be connected? If your server-switch connections are 1 Gb, then they'll never generate more than 1 Gb of traffic. If you're using multiple NICs (say 4), then they'll never generate more than 4 Gb. Now, if you're going to be moving multiple VMs on multiple ESX servers, then you could create a bottleneck, since the stack/backplane is 32 Gb.

As for inter-server traffic not interfering with access stack traffic, will the access traffic be limited to the access stack? I thought the servers are all on the core stack, so the access traffic is going to end up on the core stack anyway.

But then again, it's like I always say: In design, there are rarely "right" or "wrong" designs. Just different levels of "good"... Assuming it works.
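To make the multiple-NIC point concrete, here is one way a dual-NIC ESX host could be bundled into the core stack with a static cross-stack EtherChannel (one member port on each core stack switch). The port numbers and VLAN are assumptions for illustration; the static "on" mode is what a standard vSwitch using IP-hash teaming expects.

    ! Core stack: one server NIC per stack member, bundled into a static EtherChannel.
    ! Port numbers and VLAN are placeholders.
    interface GigabitEthernet1/0/10
     description ESX host 1 - vmnic0
     switchport mode access
     switchport access vlan 10
     channel-group 10 mode on
    !
    interface GigabitEthernet2/0/10
     description ESX host 1 - vmnic1
     switchport mode access
     switchport access vlan 10
     channel-group 10 mode on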

dee_nz

ASKER

I think the design is OK, but I need to do some more monitoring to understand what is currently happening on the network before I know whether or not the new design will be effective. Anyway, I appreciate your comments/feedback. Cheers.
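For that baseline monitoring, a few standard IOS commands can show how loaded the existing links actually are; the interface name below is only an example.

    ! Config: shorten the load-averaging window on the links being watched.
    interface GigabitEthernet1/0/1
     load-interval 30
    !
    ! Exec: check tx/rx load, errors and per-port counters.
    show interfaces GigabitEthernet1/0/1
    show interfaces counters
    show etherchannel summary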