donemore2003
asked on
Campus LAN Design
I have a campus LAN with 8 distribution blocks. The 7th and 8th blocks are the WAN Edge block and the Server Farm block; the other 6 are user access distribution blocks. The way I have it set up is as follows.
2 WAN Edge Routers
2 4506 L3 Core switches
The 2 WAN routers connect to the Core via a Cisco 3560. They connect to the 3560 over 100 Mbps access links. Each router has 2 Gigabit ports, and all 4 router interfaces connect to this 3560.
The 3560 has 2 trunk uplinks to the Cores: one 1 Gbps link to the primary and one to the secondary. The 2 Cores run HSRP between them. I run EIGRP between the WAN Edge and the Core, and between the Core and the user access distribution blocks. The user access distribution blocks are connected via routed links.
The Server Farm block consists of 4 Cisco 4948 switches. 2 of them connect to the 3560 mentioned above by 2 x 1 Gbps trunk uplinks. The other 2 connect through these 2 4948s.
The reason I have them connected this way is that there is too much distance between the closet where the 2 Cores are located and the closet where the server farm and WAN gear are located.
I have no problem with the setup between the user access distribution blocks and the Core.
That said, my goal is to redesign the connectivity between the Core, the Server Farm block, and the WAN block. I understand that the lone Cisco 3560 switch is a single point of failure.
Can someone explain the design and performance problems caused by having all traffic between the user access blocks, the server farm, and the WAN cross this one access switch? I have no means of measuring this, and I need to explain to stakeholders why all distribution blocks should connect to the Core directly rather than via an access switch.
How can I justify the investment in a fiber infrastructure to provide direct connectivity, given that the distances involved are well beyond the ~100 m (330 ft) copper limit?
What theoretical and practical performance issues are posed by the setup I have?
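For what it's worth, here is a rough back-of-the-envelope sketch (Python) of the kind of oversubscription figure you could show stakeholders, using only the link speeds described above and assuming that, with HSRP/spanning tree, only one of the 3560's two core uplinks forwards at a time. The traffic figures are capacities, not measurements; substitute real numbers from your interface counters if you can collect them.

```python
# Back-of-the-envelope oversubscription estimate for the lone 3560.
# Link speeds are taken from the topology described above; the
# "offered load" is simply the sum of capacities that can push
# traffic toward the core, not a measured figure.

GBPS = 1000  # work in Mbps for readability

# Capacity terminating on the 3560:
wan_router_links   = 4 * 100        # 4 router interfaces at 100 Mbps
server_farm_trunks = 2 * GBPS       # 2 x 1 Gbps trunks from the 4948s
core_uplinks       = 2 * GBPS       # 2 x 1 Gbps trunks to the 4506 Cores

# Assumption: with HSRP/STP only one core uplink is forwarding for a
# given VLAN, so the worst-case usable path to the core is 1 Gbps.
usable_core_uplink = 1 * GBPS

offered_toward_core = wan_router_links + server_farm_trunks
oversub = offered_toward_core / usable_core_uplink

print(f"Offered load toward core: {offered_toward_core / GBPS:.1f} Gbps")
print(f"Usable core uplink:       {usable_core_uplink / GBPS:.1f} Gbps")
print(f"Worst-case oversubscription at the 3560: {oversub:.1f}:1")
```

Beyond the ratio itself, every user-to-server and user-to-WAN flow has to cross that single 3560, so one switch failure isolates both the server farm and the WAN at once. On the fiber question: 1000BASE-T copper is limited to 100 m, whereas multimode 1000BASE-SX optics reach roughly 220-550 m and single-mode 1000BASE-LX reaches on the order of 10 km, so fiber is effectively the only standards-based way to run gigabit directly between those two closets.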
Thanks in advance