eureeka
asked on
Cisco MLPPP latency problem
Hi all,
I've got two T1s bonded together via MLPPP on a PA-MC-T3 card in a 7204VXR for our backbone connection. While running extended ping tests on the remote interface across the link, I get decent results between 3-6 ms, but every 30-45 seconds it jumps to over 300 ms for a couple of packets. I'm not getting any input, CRC, or frame errors on either of the T1 serial interfaces, and the multilink stats look fine too. Does anyone know of anything else I can check on my side before I open a trouble ticket with the telco?
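For reference, the error counters and multilink stats I mentioned come from commands like these (the slot/port numbers below are just placeholders, not my actual interfaces):

```
! Bundle health: member links, fragment/reorder/lost-fragment counters
show ppp multilink

! Per-member-link input/CRC/frame errors and queue drops
show interfaces serial 1/0/0:0
show interfaces serial 1/0/1:0

! Controller-level alarms and line violations on the T3/T1s
show controllers t3 1/0

! Periodic CPU spikes can also delay process-switched pings
show processes cpu history
```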
Any help would greatly be appreciated.
Thanks in advance,
eureeka
ASKER
Hmmm...well, my card is currently configured in channelized mode so that I can bond two of the T1s together with MLPPP, and this happens at all times of the day, even in the middle of the night when there's next to no traffic. When I do an extended ping to my side of the MLPPP link I don't get the jumps to 300+ ms, but I do when I extended ping the telco's remote interface on the other side.
Any other ideas?
Thanks again,
eureeka
What kind of router do you have at the other end? Does it have enough horsepower to run MLPPP efficiently?
ASKER
I've got a:
cisco 7204VXR (NPE200) processor (revision B) with 114688K/16384K bytes of memory.
Processor board ID 16070654
R5000 CPU at 200Mhz, Implementation 35, Rev 2.1, 512KB L2 Cache
4 slot VXR midplane, Version 2.0
and I believe the telco has a 7206 on the other side with the same PA-MC-T3 card. Below is an extended traceroute between the two interfaces where you can see the ms jump up every once in a while:
BigDog#traceroute
Protocol [ip]:
Target IP address: 66.242.239.81
Source address: 66.242.239.82
Numeric display [n]:
Timeout in seconds [3]:
Probe count [3]: 200
Minimum Time to Live [1]:
Maximum Time to Live [30]:
Port Number [33434]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Type escape sequence to abort.
Tracing the route to 66-242-239-81.arpa.kmcmail.net (66.242.239.81)
1 66-242-239-81.arpa.kmcmail.net (66.242.239.81) 4 msec * 4 msec * 4 msec *
4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 136 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 12 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 8 msec * 12 msec * 16 msec * 4 msec * 8 msec * 4 msec * 212 msec * 4 msec * 4 msec * 8 msec * 4 msec * 20 msec * 4 msec * 4 msec * 4 msec * 16 msec * 24 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 8 msec * 12 msec * 4 msec * 16 msec * 92 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 16 msec * 4 msec * 8 msec * 4 msec * 4 msec * 4 msec * 8 msec * 4 msec * 12 msec * 152 msec * 4 msec * 4 msec * 4 msec * 4 msec * 4 msec * 12 msec * 4 msec * 12 msec * 4 msec * 4 msec * 12 msec * 4 msec * 12 msec * 4 msec * 4 msec * 4 msec * 8 msec * 4 msec * 16 msec * 4 msec * 264 msec * 4 msec * 40 msec * 12 msec * 4 msec *
I'd definitely call the telco. If you're only pinging the other side of the link and seeing 200+ msec times, then something is wrong.
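If you want to catch the spikes from the router itself before calling them, you can run the extended ping as a one-liner instead of walking through the interactive prompts (addresses taken from your trace above):

```
! 1000 pings across the bundle, sourced from the local interface
! address; watch the min/avg/max summary for the outliers
ping 66.242.239.81 repeat 1000 source 66.242.239.82
```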
This is what I get from my end. Notice I also get spikes of high latency, then back to normal, so it's not localized to just you:
14 Addr:66.242.239.81, RTT: 49ms, TTL: 241
15 Addr:66.242.239.81, RTT: 50ms, TTL: 241
16 Addr:66.242.239.81, RTT: 243ms, TTL: 241 <==
17 Addr:66.242.239.81, RTT: 245ms, TTL: 241 <==
18 Addr:66.242.239.81, RTT: 175ms, TTL: 241 <==
19 Addr:66.242.239.81, RTT: 69ms, TTL: 241
20 Addr:66.242.239.81, RTT: 48ms, TTL: 241
ASKER
Yeah. It's kind of hard to troubleshoot, too, because for 5 of the hops before it gets to the remote side of my link (the telco's interface), the routers do not respond. You are right, though; it jumps up in ms before it even comes across my link. I'm going to open a trouble ticket, but do you also know of a way to get those routers to respond so I can figure out exactly where the culprit is?
eureeka
ASKER CERTIFIED SOLUTION
Symptoms: Buffer leakage could occur when a high load of traffic is sent to an interface that has a service policy enabled. This could result in ping failures or very long packet delay.
Conditions: The problem is observed with an MC-T3+ interface that is configured in unchannelized mode, and the traffic consists only of small packets, such as 64-byte packets.
Workaround: Manually configure the tx-ring-limit command to lower the number of packets that can be placed on the transmission ring.
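A sketch of that workaround (the interface name and ring value here are illustrative; the right value depends on the link speed and should be tuned, not copied):

```
interface Serial1/0
 ! Lower the hardware transmit ring so fewer packets queue ahead of
 ! the software queues; smaller values reduce queuing-induced delay
 ! but increase CPU interrupt load
 tx-ring-limit 3
```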