QoS on ASA - Latency

Hi all,

Our network has a couple of remote locations, each connected by an IPSEC VPN.  We're running into problems with latency at the remote sites when users at the main site are using the Internet heavily (or when there is a traffic burst by a user at the main site).  It seems like this is a good fit for traffic shaping or prioritization, but I'm not sure we have implemented it correctly, and I'm not sure what results I should expect once it's in place.  Any advice would be appreciated.

Here is an example of typical latency, and the subsequent spike once an Internet download is started at the main site (this is a ping across the tunnel, from one site to the other):

Reply from bytes=32 time=30ms TTL=255
Reply from bytes=32 time=31ms TTL=255
Reply from bytes=32 time=30ms TTL=255
Reply from bytes=32 time=42ms TTL=255
Reply from bytes=32 time=249ms TTL=255 (large file download begins)
Reply from bytes=32 time=335ms TTL=255
Reply from bytes=32 time=330ms TTL=255
Reply from bytes=32 time=334ms TTL=255
Reply from bytes=32 time=365ms TTL=255

I ran a couple of show commands to verify that we are applying the class maps properly (the policy that classifies traffic destined for the remote subnet is cleverly named floyd-tunnel-policy):

Result of the command: "show running-config policy-map"
policy-map type inspect dns preset_dns_map
  message-length maximum 512
policy-map global_policy
 class inspection_default
  inspect dns preset_dns_map
  inspect ftp
  inspect h323 h225
  inspect h323 ras
  inspect netbios
  inspect rsh
  inspect rtsp
  inspect esmtp
  inspect sqlnet
  inspect sunrpc
  inspect tftp
  inspect xdmcp
  inspect skinny  
  inspect sip  
policy-map outside-policy
 description QoS and traffic shaping service policy
 class floyd-tunnel-policy

Result of the command: "show service-policy interface outside"
Interface outside:
  Service-policy: outside-policy
    Class-map: floyd-tunnel-policy
        Interface outside: aggregate drop 0, aggregate transmit 0
    Class-map: class-default
      Default Queueing

From the running-config, I have the following class-map defined:

class-map inspection_default
 match default-inspection-traffic
class-map floyd-tunnel-policy
 match tunnel-group xx.xx.xx.xx (Peer IP address of the floyd tunnel)
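For comparison, note that the policy-map output above shows class floyd-tunnel-policy with no action configured under it. Per Cisco's ASA QoS documentation, low-latency queueing needs three pieces: a priority action under the class, the priority queue enabled on the egress interface, and the service-policy attached to that interface (the last one is already present, per the show service-policy output). A sketch of the missing pieces, using the names from this thread:

```
! Enable the priority queue on the egress interface;
! without this, the "priority" action has no effect
priority-queue outside

! Give tunnel traffic strict priority in the output policy
policy-map outside-policy
 class floyd-tunnel-policy
  priority
```

With these in place, "show service-policy interface outside" should show priority-queue transmit/drop counters incrementing for the tunnel class instead of "aggregate drop 0, aggregate transmit 0".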

Does anyone have suggestions?  Is it reasonable to expect latency to remain lower than 300ms if traffic is prioritized?  Am I missing something important (which is likely) in the configuration of the tunnel?

Many thanks in advance.
By default, the ASA's global policy matches all default application inspection traffic in order to apply certain inspections on all interfaces. I see you are trying to add priority to the tunnel traffic. The only problem is that this is a VPN, and I assume you are routing across the Internet, where no priority is assigned to your traffic. Also, what about the return traffic? To be meaningful, QoS must be applied end to end; otherwise you can give priority in one direction on your own infrastructure, but beyond that it is best effort. You may benefit from looking at this doc;


harbor235 ;}
gwermterAuthor Commented:
Actually, this specific connection is an MPLS circuit, so traffic prioritization through the ASA should provide some legitimate QoS.  Regardless, though, I would expect the latency to have some consistency if QoS were configured properly -- the jump from 30 ms to 300 ms specifically when non-tunneled traffic is sent across the same circuit seems to indicate that I have something configured wrong.

The article you mentioned is one of the articles I was using as a reference for creating the initial policy and class-maps.

MPLS does not really change things: classifying and enforcing a policy at the edge of your network provides no CoS/QoS guarantees through the MPLS cloud, only priority through that one device.

You will almost certainly get better application performance by purchasing a better CoS/QoS package from your service provider. Who is your provider, and what CoS/QoS package did they sell you? Hopefully your SP analyzed your applications and designed an appropriate package.

If you bought a default CoS/QoS package, then all your traffic is treated the same as soon as it leaves your device and crosses the cloud. Priority traffic, from your SP's perspective, is enforced through the cloud and on the return trip as well.

harbor235 ;}

gwermterAuthor Commented:
I understand your thoughts about the traffic lacking QoS once it leaves our edge network.  More information about this network is probably in order to explain why I'm approaching it from this direction.  We're trying to balance a number of different loads and are trying to make the best use of bandwidth.  All of these sites are very rural (and mountainous), so connectivity options are actually quite limited (and expensive), so making the best use of what we have is important here.

Main Site:
ASA 5505 is connected to a Cisco 1841 router.  This 1841 has three external interfaces: an MPLS T-1 to Site A, a point-to-point T-1 to Site C, and an Ethernet handoff to an Internet T-1.  My earlier post about this tunnel running over the MPLS circuit was incorrect; the tunnel we are trying to prioritize traffic to runs across the Internet connection to Site B.  I believe all of the interface slots in the 1841 are full, though I am not at that location to verify.

Site B: ASA 5505 connected to 3.0 Mbps Ethernet handoff from local ISP, connection to general Internet with a site-to-site VPN to the main site.

Our users at Site B report latency whenever the Internet is used heavily at the main site (Windows updates, downloads, pretty much anything).  Because the VPN is Internet-based, it seems our best bet would be to (a) prioritize tunnel traffic or (b) deprioritize other Internet traffic.  Since bandwidth is a limited (and expensive) commodity, doing everything we can to setup QoS seems to be the best option.
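For option (b), the ASA can also cap the non-tunnel Internet traffic with a policer so a burst can't saturate the link. A sketch under stated assumptions: the class-map name, ACL, and the ~1 Mbps rate are all illustrative, not from this thread, and the tunnel class would need to appear above this class in the policy so tunnel traffic is matched first and escapes the policer:

```
! Hypothetical ACL and class for general (non-tunnel) traffic
access-list internet-traffic extended permit ip any any
class-map internet-class
 match access-list internet-traffic

policy-map outside-policy
 class internet-class
  police output 1000000 31250
```

The police rate is in bits per second with a burst in bytes; leaving headroom below the 3.0 Mbps handoff is what keeps the tunnel's latency down during downloads.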

With that clarification, any thoughts on the implementation of QoS on the ASA?

I believe I get the picture now. It looks to me like the best place to set up QoS or congestion avoidance may be the 1841. I assume all site-to-site traffic and Internet traffic traverses the 1841? Have you looked into CBWFQ?

Here is some info from Cisco:

Here are some general factors you should consider in determining whether you need to configure CBWFQ:

•Bandwidth allocation. CBWFQ allows you to specify the exact amount of bandwidth to be allocated for a specific class of traffic. Taking into account available bandwidth on the interface, you can configure up to 64 classes and control distribution among them, which is not the case with flow-based WFQ. Flow-based WFQ applies weights to traffic to classify it into conversations and determine how much bandwidth each conversation is allowed relative to other conversations. For flow-based WFQ, these weights, and traffic classification, are dependent on and limited to the seven IP Precedence levels.

•Coarser granularity and scalability. CBWFQ allows you to define what constitutes a class based on criteria that exceed the confines of flow. CBWFQ allows you to use ACLs and protocols or input interface names to define how traffic will be classified, thereby providing coarser granularity. You need not maintain traffic classification on a flow basis. Moreover, you can configure up to 64 discrete classes in a service policy.

You have the granularity to define classes of traffic and how they are treated. It makes sense to do this at the 1841 because it is the hub of all traffic flowing to the Internet and the remote sites. You could mark all traffic leaving each site using DiffServ and use WRED, but that could get more complicated. I would try CBWFQ and see how it improves things. Remember, though, that you have limited resources: giving priority to particular traffic types during congestion will make other traffic latent. But at least you can control it.
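A minimal CBWFQ sketch in IOS terms. The ACL number, class and policy names, interface, and the 512 kbps figure are all hypothetical placeholders; xx.xx.xx.xx stands in for the Site B peer address, as elsewhere in the thread:

```
! Hypothetical ACL matching traffic headed to the Site B tunnel peer
access-list 101 permit ip any host xx.xx.xx.xx

class-map match-all TUNNEL-TO-SITE-B
 match access-group 101

policy-map WAN-EDGE
 class TUNNEL-TO-SITE-B
  bandwidth 512          ! guarantee 512 kbps during congestion
 class class-default
  fair-queue             ! flow-based WFQ for everything else

interface FastEthernet0/0
 service-policy output WAN-EDGE
```

The bandwidth guarantee only kicks in under congestion; when the link is idle, any class can use the full rate.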

Have you thought about configuring a caching proxy for HTTP, HTTPS, and FTP, such as Squid, at each site? It's open source and you can install it on many *NIX variants. The caching web proxy will cache frequently requested web pages, which can reduce your overall usage depending on your Internet usage profile. Just a thought.
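For a sense of scale, a minimal illustrative squid.conf might look like the following; the cache size, path, and the 192.168.0.0/16 source range are assumptions to adjust for your sites:

```
# Listen on the standard proxy port
http_port 3128
# 1 GB on-disk cache (size in MB, then L1/L2 directory counts)
cache_dir ufs /var/spool/squid 1024 16 256
# Allow only clients on the local networks (illustrative range)
acl localnet src 192.168.0.0/16
http_access allow localnet
http_access deny all
```

Clients would then be pointed at the proxy on port 3128; repeat downloads (Windows updates are a common win) get served from the local cache instead of the T-1.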


harbor235 ;}

gwermterAuthor Commented:
I guess I'm wondering why there is a difference between configuring this on the ASA and on router?

The tunnels are terminated onto the ASA, so it seems like it would be a better fit to do this on the ASA rather than on the router.
Because all traffic has to pass through the router: not just tunnel traffic, but also Internet traffic and other non-tunnel site-to-site traffic. Not to mention that you do not want to concentrate all functions on a single device.

harbor235 ;}
gwermterAuthor Commented:
But all traffic passes through the ASA as well.  In fact, it is the place where the tunnels are created.

"Not to mention the fact the you do not want to concentrate all functions on a single device."  I don't understand this -- wouldn't we be simply concentrating functions on the router rather than the ASA?

Please don't get me wrong, I appreciate the advice, but I'm puzzled why what I'm asking is perceived as poor architecture or a strange request.  I do understand that I could use the router for QoS, but I'm wondering why the ASA won't do it (or is not supposed to be used for it.)

(The router's job in this network is specifically to route traffic to the proper circuit -- S0 for traffic to one site, S1 for traffic to another site, or E0 for default traffic.  Why is it a better choice for prioritizing traffic than the ASA, which actually examines the traffic through its inspections, and which also creates/terminates the tunnels?)

How does the ASA prioritize control-plane traffic generated by the 1841 (routing updates, MPLS CE-to-PE traffic, Internet peering routing updates, etc.)?

Can't the traffic coming in from the internet be routed to the 1841 and then to one of the other sites without going through the ASA?

This is my reasoning; however, you can most certainly go either way. From my perspective the 1841 is the best choice. I hope this clears it up; if not, please include a diagram of your network, as it is hard to evaluate a design without detailed knowledge.

harbor235 ;}
gwermterAuthor Commented:
I've attached a network diagram to help.

Do you have any thoughts as to why QoS is not working the way it should on the ASA at the main site?  I can understand your reasoning about the 1841 -- though I disagree with it -- but I still don't understand why the ASA isn't working as expected.
