TBF traffic shaping
asked by jamation
Greets.
I'm having some trouble shaping my outgoing traffic to 14Mbit. I've been searching posts here but nothing seems to have helped. I've tried a CBQ approach:
tc qdisc add dev eth0 root handle 10: cbq bandwidth 100Mbit avpkt 1000 cell 8
tc class add dev eth0 parent 10:0 classid 10:1 cbq allot 1514 cell 8 maxburst 80 avpkt 1000 prio 1 bandwidth 14Mbit rate 14Mbit weight 5Kbit bounded
tc qdisc add dev eth0 parent 10:1 sfq quantum 14Mbit
but that did nothing. I poked around a bit, found wondershaper 1.1a, configured it and ran it; everything went smoothly but traffic wasn't limited. I looked through the manual and found TBF; a quick line from the manual:
tc qdisc add dev eth0 root tbf rate 14mbit latency 50ms burst 220kbit
It limited the burst somewhat, but throughput always fell back to 16Kb/s.
I'm trying to limit just outgoing connections to 14mbit; iptables doesn't look too friendly to me for accomplishing that. I'm running a 2.6 kernel, x86_64.
ASKER
Yeah, I've tried that too (I searched EE), but that doesn't seem to do anything for me. It neither limits the burst nor caps traffic at 14mbit; mrtg is still showing traffic at 17mbps:
eth0 / traffic statistics

                      rx        |       tx
--------------------------------+----------------
  bytes           8.97 MB       |   71.20 MB
--------------------------------+----------------
  max         170.55 kB/s       |    1.35 MB/s
  average     129.41 kB/s       |    1.00 MB/s
  min         102.33 kB/s       |  536.36 kB/s
--------------------------------+----------------
  packets           61035       |      52879
--------------------------------+----------------
  max            1044 p/s       |    908 p/s
  average         859 p/s       |    744 p/s
  min             666 p/s       |    538 p/s
--------------------------------+----------------
  time       1.18 minutes
Any other ideas?
ASKER
P.S. Yes I've corrected parent to 1:0 on:
#the default class
$TC class add $DEV parent 1:1 classid 1:2 htb rate 14mbit ceil 14mbit burst 32k
but no luck.
> P.S. Yes I've corrected parent to 1:0 on:
> $TC class add $DEV parent 1:1 classid 1:2 htb rate 14mbit ceil 14mbit burst 32k
No, the parent should be 1:1, at least I use it that way.
1:0 is the root qdisc, 1:1 is the root class
ASKER
TBF seems to be the only one that has any effect; however, I'm having a problem with burst. No matter what value I put, it limits traffic to about 20kB/s (give or take).
> No matter what value I put, it limits traffic to about 20kB/s (give or take)
Value of what? The 'burst' parameter in a 'tbf' queue is only the size of the bucket in bytes. It should be big enough for high traffic (say 200kb for 14mbit traffic).
What do you measure, and how do you measure it, when you say you have problems?
ASKER
I was downloading a file from the server, and it was limited to 20kB/s (I assumed that was the burst you set). I'm using the mrtg 5-minute graph and vnstat to check current usage.
That looks strange, because I just tested the configuration.
Can you repeat the test:
1) remove all qdisc from eth0:
tc qdisc del dev eth0 root
2) ensure you are using eth0 as an outbound interface and create a shaper:
tc qdisc add dev eth0 root tbf rate 12.2mbit mpu 64 latency 20ms burst 200kb
3) create a file exactly 10Mbytes:
dd if=/dev/zero of=/tmp/10MB_file bs=1024 count=10240
4) get this file either by http or by ftp (but not scp or sftp) and measure the time:
time wget ftp://user:password@server:/tmp/10MB_file
I got the file (via FTP) in 7.13 seconds with an average speed of 1.4MB/s;
you shouldn't see much deviation from that value.
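As a sanity check on the numbers above, the expected wall time for the 10 MB test file at 12.2 Mbit/s can be computed directly. This is only a back-of-the-envelope sketch that ignores TCP slow start and protocol overhead, which is why the measured 7.13 s comes in slightly above it:

```shell
# Expected transfer time for the 10 MB test file at the shaped rate.
# Ignores TCP slow start and header overhead, so real runs land a
# little above this figure.
file_bits=$((10 * 1024 * 1024 * 8))   # 10 MB in bits = 83886080
rate_bits=12200000                    # 12.2 Mbit/s

expected=$(awk -v b="$file_bits" -v r="$rate_bits" 'BEGIN { printf "%.1f", b/r }')
echo "expected: ${expected}s"
```

A measured time well above ~7 s would point at the shaper (or competing traffic) rather than the test itself.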
ASKER
As soon as I add the rule, connection speed drops immediately:
# wget 212.200.52.17/10mb
--08:44:11-- http://212.200.52.17/10mb
=> `10mb'
Connecting to 212.200.52.17:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10,485,760 (10M) [text/plain]
0% [ ] 66,893 4.78K/s ETA 39:17
and without the rule:
# wget 212.200.52.17/10mb
--08:45:57-- http://212.200.52.17/10mb
=> `10mb.1'
Connecting to 212.200.52.17:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10,485,760 (10M) [text/plain]
1% [ ] 138,433 31.51K/s ETA 05:12
The server is well under 12.2Mb (currently at ~7Mb); that's why I assumed it's related to burst. Also, out of curiosity, what does 'latency' do in that rule?
It seems your test is not clean and your server is under load (someone else is downloading from it).
> Also, out of curiosity, what does 'latency' do in that rule?
man tc-tbf
The algorithm works like filling a bucket with 'tokens' (bytes) through a small pipe at the specified rate (14mbit/s). Once the bucket is full, we empty it through the NIC at maximum speed, and we keep doing this all the time: fill and empty. The volume of the bucket is the 'burst' (or 'buffer') value; the length of that small pipe is calculated from the 'latency' parameter or given directly by 'limit' (they are mutually exclusive).
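To make that concrete, here is the rough arithmetic relating rate, latency and burst for the earlier 'rate 14mbit latency 50ms burst 220kbit' line. This is only a sketch of the relationship; tc's internal rounding to its timer resolution will differ slightly:

```shell
# Approximate queue limit implied by 'tbf rate 14mbit latency 50ms burst 220kbit'.
# limit ≈ rate * latency + burst, all converted to bytes; this is roughly
# what tc derives when 'latency' is given instead of 'limit'.
rate_bits=14000000    # 14mbit
latency_s=0.05        # 50ms
burst_bytes=27500     # 220kbit / 8

limit_bytes=$(awk -v r="$rate_bits" -v l="$latency_s" -v b="$burst_bytes" \
    'BEGIN { printf "%d", r / 8 * l + b }')
echo "limit: ${limit_bytes} bytes"
```

So with those parameters the qdisc can hold on the order of 115 kB before it starts dropping; packets beyond that are discarded rather than delayed further.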
ASKER
Well, it's under load, but not near the limit, so why the speed degradation?
> but not near the limit, so why the speed degradation?
If someone else downloads from your server at the same time you are testing, that's the reason your single download session degrades. The total interface speed should be near 14Mbit; just wait for some time and check your MRTG graphs.
If you want to limit to 14Mbit/s per user IP, that's a slightly different problem.
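For reference, per-IP limiting would look roughly like this with HTB plus a u32 filter. This is a hedged sketch, not from this thread: the classids and the 192.0.2.10 client address are illustrative placeholders, and the commands need root and a real interface:

```shell
#!/bin/sh
# Sketch: cap one client IP at 14mbit while other traffic stays unshaped.
# Classids and the client address are illustrative placeholders.
TC=/sbin/tc
DEV="dev eth0"

$TC qdisc add $DEV root handle 1: htb default 10
$TC class add $DEV parent 1: classid 1:10 htb rate 100mbit             # everyone else
$TC class add $DEV parent 1: classid 1:20 htb rate 14mbit ceil 14mbit  # the capped client
$TC filter add $DEV parent 1: protocol ip u32 \
    match ip dst 192.0.2.10/32 flowid 1:20
```

Unmatched traffic falls into the default class 1:10; only packets to the matched address are funneled through the 14mbit class.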
ASKER
I don't want it to interfere with download speeds unless traffic hits 14Mbit; I assumed that was the logic. Why is it affecting speed when it's far from the limit?
> I assumed that was the logic. Why is it affecting speed when it's far from the limit?
I guess it _is_ near the limit, but you don't see it in your test because the test is not clean and someone else is using that 'invisible' bandwidth.
Stop all other transfers (say, disconnect your machine from the LAN and connect it directly to the client with a crossover cable), then perform the test. It should be 14Mb/s.
If it isn't, either your tc config is not 'tc qdisc add dev eth0 root tbf rate 12.2mbit mpu 64 latency 20ms burst 200kb', or there are some other traffic consumers.
TBF has a precise limiter and mature code, so this isn't a bug.
ASKER
Thanks, a bit of weird behavior; I'll look into it further.
TC=/sbin/tc
DEV="dev eth0"
#flush qdisc
$TC qdisc del $DEV root
#init qdisc, set default class for traffic
$TC qdisc add $DEV root handle 1:0 htb default 2
#root class, full DEV speed
$TC class add $DEV parent 1:0 classid 1:1 htb rate 100mbit ceil 100mbit burst 64k
#the default class
$TC class add $DEV parent 1:1 classid 1:2 htb rate 14mbit ceil 14mbit burst 32k
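One thing the script above leaves out: the HTB classes have no leaf qdisc attached, so each class falls back to a plain pfifo. Attaching sfq under the default class shares the 14mbit cap fairly between concurrent flows. A continuation sketch (the handle 20: is an arbitrary choice, and like the script itself this needs root):

```shell
# Continuation sketch for the script above (handle 20: is arbitrary).
# Without a leaf qdisc, class 1:2 uses a plain pfifo; sfq instead gives
# per-flow fair sharing inside the 14mbit class.
TC=/sbin/tc
DEV="dev eth0"
$TC qdisc add $DEV parent 1:2 handle 20: sfq perturb 10
```

'perturb 10' rehashes the flow buckets every 10 seconds so long-lived flows can't permanently collide into one bucket.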