cfan73

asked on

Cisco ASA and data center design

A customer is going to be implementing a Cisco Catalyst 6506 VSS pair for their network core, which has fiber uplinks to all of their IDF closets, and connectivity to their 10-Gb data center fabric.  The data center portion is using Cisco Nexus technology, and is organized like this (simplified):

        Cat6K ----- Cat6K
          |  \     /  |
          |     X     |
          |  /     \  |
        Nex5K       Nex5K
          |  \     /  |
          |     X     |
          |  /     \  |
        Nex2K       Nex2K
      |||||||||||||||||||||
             servers

All of the ports on the Cat 6K's are 10-Gb, the 5K ports are 1/10-Gb, and the 2K fabric extenders provide 1-GE downlinks to the data center servers.  (These are essentially remote line cards for the Nexus 5K boxes to provide higher server density.)

The customer also has several other peripheral appliances, such as dual ASA firewalls, dual CSS load balancers, a wireless LAN controller, etc., all of which have 1-GE ports.  We need to determine the best way to integrate these into the design, since they cannot connect to the 10-Gb ports on the 6506's directly.  The two options I see are:

1) add an additional 1-GE line card to the Cat6K's (such as a WS-X6748-GE-TX), or

2) connect these appliances to the Nexus 2K fabric in the data center, along with the data center servers

Let's focus on the ASA firewalls, which form the barrier between the campus and the Internet. I believe the best design would be to add the line cards, and connect the ASA's directly to the core.  The "problem" is that they only need 7-8 GE ports, and the 6748 line card lists for $15,000.  (Cisco doesn't make an 8, 16 or even 24-port GE copper line card for these switches.)  

There is plenty of available port density on the Nexus 2K's, so connecting the ASA's to them wouldn't cost a dime, but it SEEMS to me like a questionable design to have the firewalls positioned in the data center fabric, and having all in/outbound Internet traffic traverse the data center layer 2 network.

So, what I'm asking for are specific design reasons why positioning the ASA firewalls directly off the data center fabric would be a BAD idea, and thus support purchasing the new (although expensive) line cards.  Or, justify why this wouldn't really be a problem, and maybe my concerns are unfounded.

Thank you!

John Meggers

I'm not sure I have a real answer for you other than to say I've done a few data center designs, and my approach would be to use the ASA 5580, which does support 10-Gb interfaces.  But if the customer is balking at $15k for a line card, they'd probably have a heart attack over what a 5580 costs, so my guess is that's not the ASA platform you have to work with.  Any idea what kind of real-world throughput they're expecting over those links?  The ASA doesn't support EtherChannel load balancing, so you won't be able to combine interfaces for more throughput.  The 5550 only supports a little over a Gig of throughput.

I tend to agree with you in principle that I would keep the ASAs out of the Nexus layer.  If you look at Cisco's validated designs for data center security, they all place the ASA higher up in the architecture, at the aggregation layer.  See http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/dc_sec_design.html.  This may be your best support for your argument.

cfan73

ASKER

Thanks for the input!  Thing is, the ASA 5510's are already in place - this is an upgrade for their campus core and data center infrastructure.  Plus, the ASA's are serving as firewalls to/from the Internet, not protecting the data center directly.  So, in a "typical" Cisco validated design, these would be out in some kind of Enterprise Edge block - or next best, directly connected to the core (I believe).

I need ammunition, though - hopefully specific reasons why hanging the Internet edge off of ASA's connected to the data center fabric could cause security concerns or other types of problems.

Hopefully that helps - all input is required.
cfan73

ASKER

All input is appreciated, I meant.  :)
Les Moore
How about a pair of 2960S's - 24-port Gig, with 10G uplinks?  The two switches can stack together, and you can have redundant 10G connections to the core VSS pair.  This pair should support all of your peripheral devices with full redundancy.
It doesn't sound like a single point of failure is going to be acceptable, so a pair of switches is best.  3750X's are certainly a great option, but cost 2x more than the 2960's.
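Just to sketch what I mean (interface and channel numbers here are placeholders, not a final config), each stack member would contribute one 10G uplink, and you could bundle them into a single cross-stack EtherChannel that the VSS pair terminates as a Multichassis EtherChannel:

    ! on the 2960S stack - one 10G uplink from each stack member
    interface TenGigabitEthernet1/0/1
     description uplink to VSS chassis 1
     switchport mode trunk
     channel-group 10 mode active
    !
    interface TenGigabitEthernet2/0/1
     description uplink to VSS chassis 2
     switchport mode trunk
     channel-group 10 mode active
    !
    interface Port-channel10
     switchport mode trunk

That way either 10G link - or either stack member - can fail without isolating the peripheral devices.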
cfan73

ASKER

Thanks, lrmoore - that would certainly be an option vs. the 48-port 67xx line cards.  Good idea...

For ammunition, though - can you help me identify potential problems or security/performance risks that might go along with connecting them directly to the L2 data center fabric?
I don't know of any downside
Pros:
* redundant connections
* 10G connection to core, 1G connection to the ASA's, with ?? bandwidth to the Internet? The ASA 5510 will be the bottleneck if there is one.
* ability to set up an L3 Internet zone - no broadcasts will ever hit the firewalls
* $$ vs. redundant blades on the core switches
* Nexus 5K does not support Layer 3 to allow creation of an Internet transit network

According to Cisco best practices for network design, the "Internet zone" should be separated from the core services.
You could have one 10G link as an L2 trunk to the switch stack to support things like wireless, guest access, etc.
You could have one 10G link as an L3 routed interface on the core as the Internet zone's L3 boundary.
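Something roughly like this on the core - just a sketch, with made-up interface numbers, VLANs and addressing:

    ! 10G L2 trunk down to the switch stack (wireless, guest access, etc.)
    interface TenGigabitEthernet1/2/1
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 50,60
    !
    ! 10G L3 routed interface - the Internet zone boundary
    interface TenGigabitEthernet2/2/1
     no switchport
     description Internet transit toward the ASA pair
     ip address 192.168.255.1 255.255.255.252

With the routed interface, the Internet zone sits in its own small L3 transit segment, and none of the core or data center VLANs ever extend out to the firewalls.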
cfan73

ASKER

Thanks again, lrmoore - I think I'm almost there.  When mentioning "Pros" of connecting the devices through the data center L2 switch fabric, you mention the ability to set up an L3 Internet zone, but then later mention that the 5K does not support L3 for this purpose (as it is a pure L2 switch).  This sounds like a contradiction, so could you please clarify?

Given the above, it would seem that the only way to prevent broadcasts (from servers in the data center, for example) from hitting the ASA firewalls would be to put their ports in a different L2 VLAN, so that the core VSS pair would provide the routing and be the L3 boundary - is this what you were suggesting?
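(Just so I'm picturing it right - something along these lines, with made-up VLAN and addressing, and regardless of whether the ASA ports physically land on the Nexus 2K's or on a separate switch stack?)

    ! access switch side (NX-OS shown, e.g. a 2K port) - dedicated VLAN for the ASA's
    vlan 200
      name INTERNET-EDGE
    interface Ethernet100/1/10
      description ASA-1
      switchport access vlan 200
    !
    ! Cat6K VSS side - the SVI is the only L3 interface for that VLAN
    interface Vlan200
     description Internet edge transit
     ip address 192.168.200.1 255.255.255.0

...with VLAN 200 allowed on the trunks up to the core, but not extended anywhere else.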

Lastly, regarding the "Cisco best practice" of separating the Internet zone and core services, could you clarify your last two bullets?  Again, hooking the ASA's directly in the L2 data center switch fabric seems to be the opposite of this recommendation.

Thanks again - sorry if this is taking longer to sink in than it should.
ASKER CERTIFIED SOLUTION
Les Moore

cfan73

ASKER

Good deal - I think that's sufficient for now.  I appreciate your help and patience!