• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1502

TBF traffic shaping

Greets.

I'm having some trouble shaping my outgoing traffic to 14Mbit. I've been searching posts here but nothing seems to have helped. I've tried the CBQ approach:
tc qdisc add dev eth0 root handle 10: cbq bandwidth 100Mbit avpkt 1000 cell 8
tc class add dev eth0 parent 10:0 classid 10:1 cbq allot 1514 cell 8 maxburst 80 avpkt 1000 prio 1 bandwidth 14Mbit rate 14Mbit weight 5Kbit bounded
tc qdisc add dev eth0 parent 10:1 sfq quantum 14Mbit

but that did nothing. I poked around a bit, found wondershaper 1.1a, configured it and ran it; everything went smoothly, but traffic wasn't shaped. I looked through the manual and found TBF; a quick line from the manual:
tc qdisc add dev eth0 root tbf rate 14mbit latency 50ms burst 220kbit

That limited the burst somewhat, but throughput always fell back to 16kB/s.

I'm trying to limit just outgoing connections to 14mbit; iptables doesn't look too friendly for accomplishing that. I'm running a 2.6 kernel on x86_64.
jamationAsked:
1 Solution
 
ravenplCommented:
Personally I use HTB. In your case:

TC=/sbin/tc
DEV="dev eth0"

#flush qdisc
$TC qdisc del $DEV root

#init qdisc, set default class for traffic
$TC qdisc add $DEV root handle 1:0 htb default 2

#root class, full DEV speed
$TC class add $DEV parent 1:0 classid 1:1 htb rate 100mbit ceil 100mbit burst 64k

#the default class
$TC class add $DEV parent 1:1 classid 1:2 htb rate 14mbit ceil 14mbit burst 32k
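
#optional sanity check (standard tc statistics commands, added here for
#illustration): non-zero "Sent" counters on class 1:2 confirm the default
#class is actually seeing the traffic
$TC -s qdisc show $DEV
$TC -s class show $DEV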
 
jamationAuthor Commented:
Yeah, I've tried that too (I searched EE), but that doesn't seem to do anything for me. It neither limits burst nor limits traffic to 14mbit; mrtg is still showing traffic at 17mbps:

 eth0  /  traffic statistics

                             rx       |       tx
--------------------------------------+----------------------------------------
  bytes                      8.97 MB  |      71.20 MB
--------------------------------------+----------------------------------------
          max            170.55 kB/s  |     1.35 MB/s
      average            129.41 kB/s  |     1.00 MB/s
          min            102.33 kB/s  |   536.36 kB/s
--------------------------------------+----------------------------------------
  packets                      61035  |         52879
--------------------------------------+----------------------------------------
          max               1044 p/s  |       908 p/s
      average                859 p/s  |       744 p/s
          min                666 p/s  |       538 p/s
--------------------------------------+----------------------------------------
  time                  1.18 minutes

Any other ideas?
 
jamationAuthor Commented:
P.S. Yes, I've corrected the parent to 1:0 on:
#the default class
$TC class add $DEV parent 1:1 classid 1:2 htb rate 14mbit ceil 14mbit burst 32k

but no luck.
 
NopiusCommented:
> mrtg is still showing traffic at 17mbps

Be aware that 14mbit/s is the same as 1835008 bytes/s or the same as 1.75 megabytes/s


MRTG (interface) statistics are not the same as IP (packet) statistics. The number of bytes passed over the interface is always more than the bytes in the IP packets. So I guess that 17mbps is quite close to 14mbps and you are close to what you should get. To get better results you may try specifying more 'tc' options, or even recompile the kernel with CONFIG_HZ_1000 (see man tc-tbf for reference; HZ is the number of timer ticks per second for the kernel scheduler).
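
As a back-of-the-envelope illustration of why HZ matters (not from the original thread): on a kernel without high-resolution timers the shaper can only dequeue once per timer tick, so the burst has to cover at least rate/HZ bytes:

# bytes that must be sent per tick at 14mbit (tc's binary units) with HZ=100
echo $((14 * 1024 * 1024 / 8 / 100))    # 18350, so burst should be >= ~18kB
# with CONFIG_HZ_1000 the same rate needs only ~1.8kB per tick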

For bandwidth limiting I'd recommend simple TBF (it's also closer to exact limiting), not CBQ, HTB or SFQ.

So this command (mentioned before) does exactly what you need:
tc qdisc add dev eth0 root tbf rate 14mbit mpu 64 latency 20ms burst 200kb

If you want to get 1.4MBytes/s, you should use:
tc qdisc add dev eth0 root tbf rate 12.2mbit mpu 64 latency 20ms burst 200kb
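
(A side note beyond the thread: once a root qdisc is installed, its parameters can be adjusted in place with 'tc qdisc change', which avoids the del/add cycle:)

tc qdisc change dev eth0 root tbf rate 12.2mbit mpu 64 latency 20ms burst 200kb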

 
ravenplCommented:
> P.S. Yes, I've corrected the parent to 1:0 on:
> $TC class add $DEV parent 1:1 classid 1:2 htb rate 14mbit ceil 14mbit burst 32k
No, the parent should be 1:1; at least that's how I use it.
1:0 is the root qdisc, 1:1 is the root class.
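
For reference, the handle hierarchy in the script above looks like this (illustrative sketch):

1:0  root qdisc (htb, default 2)
└── 1:1  root class (rate/ceil 100mbit)
    └── 1:2  default class (rate/ceil 14mbit)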
 
jamationAuthor Commented:
TBF seems to be the only one that has some effect; however, I'm having a problem with burst. No matter what value I put, it limits to somewhere around 20kB/s (give or take).
 
NopiusCommented:
> No matter what value I put, it limits to somewhere around 20kB/s (give or take)

Value of what? The 'burst' parameter in the 'tbf' queue is only the size of the 'queue' in bytes. It should be big enough for high traffic (say 200kb for 14mbit traffic).

What do you measure, and how do you measure it, when you say you have problems?
 
jamationAuthor Commented:
I was downloading a file from the server; it was limited to 20kB/s (I assumed that was the burst you set). I'm using the mrtg 5-minute graph and vnstat to check current usage.
 
NopiusCommented:
That looks strange, because I've just tested the configuration.
Can you repeat the test:
1) remove all qdiscs from eth0:
tc qdisc del dev eth0 root

2) ensure you are using eth0 as an outbound interface and create a shaper:
tc qdisc add dev eth0 root tbf rate 12.2mbit mpu 64 latency 20ms burst 200kb

3) create a file of exactly 10MBytes:
dd if=/dev/zero of=/tmp/10MB_file bs=1024 count=10240

4) get this file either by http or by ftp (but not scp or sftp) and measure the time:
time wget ftp://user:password@server:/tmp/10MB_file

I got the file (via FTP) in 7.13 seconds at an average speed of 1.4MB/s;
you shouldn't see much deviation from that value.
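
(One extra check worth running during the transfer, using standard tc statistics: the tbf qdisc reports sent bytes, drops and overlimits, which shows whether the shaper itself is doing the throttling:)

tc -s qdisc show dev eth0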
 
jamationAuthor Commented:
As soon as I add the rule, connection speed drops immediately:
# wget 212.200.52.17/10mb
--08:44:11--  http://212.200.52.17/10mb
           => `10mb'
Connecting to 212.200.52.17:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10,485,760 (10M) [text/plain]

 0% [                                     ] 66,893         4.78K/s    ETA 39:17

and without the rule:
# wget 212.200.52.17/10mb
--08:45:57--  http://212.200.52.17/10mb
           => `10mb.1'
Connecting to 212.200.52.17:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10,485,760 (10M) [text/plain]

 1% [                                     ] 138,433       31.51K/s    ETA 05:12

The server is well under 12.2Mbit (currently at ~7Mbit); that's why I assumed it's related to burst. Also, out of curiosity, what does 'latency' do in that rule?
 
NopiusCommented:
It seems that your test is not clean and your server is under load (someone else is downloading from it).


> Also, out of curiosity, what does 'latency' do in that rule?

man tc-tbf

The algorithm works like this: we fill a bucket with 'tokens' (bytes) through a small pipe at the specified rate (14mbit/s). Once the bucket is full, we empty it through the NIC at maximum speed, and we keep doing this: fill, then empty. The volume of the bucket is the 'burst' or 'buffer' value; the length of that small pipe is derived from the 'latency' parameter, or is given directly by the 'limit' parameter (they are mutually exclusive).
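
Putting rough numbers on that (a back-of-the-envelope illustration, not from the thread): with rate 12.2mbit and burst 200kb, an empty bucket refills in burst/rate seconds, which bounds how often a full line-rate burst can occur:

# time to refill a 200kB bucket at 12.2mbit/s (tc's binary units)
echo "scale=3; 200 * 1024 / (12.2 * 1024 * 1024 / 8)" | bc    # ~0.128 s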

 
jamationAuthor Commented:
Well, it's under load, but not near the limit, so why the speed degradation?
 
NopiusCommented:
> but not near the limit, so why the speed degradation?

If someone else is downloading from your server at the same time you are testing, that's the reason your single download session degrades. The total interface speed should be near 14Mbit; just wait for some time and check your MRTG graphs.

If you want to limit to 14Mbit/s per user IP, that's a slightly different problem (see the sketch below).
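
For completeness, a minimal sketch of that per-IP variant (hypothetical client address 10.0.0.5, reusing the HTB hierarchy from ravenpl's script above):

# steer one client's traffic into the 14mbit class with a u32 filter
tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
    match ip dst 10.0.0.5/32 flowid 1:2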
 
jamationAuthor Commented:
I don't want it to interfere with download speeds unless it hits 14Mbit; I assumed that was the logic. Why is it messing with the speed when it's far from the limit?
 
NopiusCommented:
> I assumed that was the logic. Why is it messing with the speed when it's far from the limit?

I guess it _is_ near the limit. But you don't see it in your test because your test is not clean and someone else is using that 'invisible' bandwidth.

Stop all other transfers (say, disconnect your machine from the LAN and connect it directly to the client with a crossover cable), then perform the test. It should be 14Mb/s.

If it isn't, either your tc config is not 'tc qdisc add dev eth0 root tbf rate 12.2mbit mpu 64 latency 20ms burst 200kb', or there are some other traffic consumers.

TBF has a precise limiter and mature code, so that's not a bug.


 
jamationAuthor Commented:
Thanks. A bit of weird behavior; I'll look into it further.
