[lxc-devel] limit the network traffic of container from the host

lsmushroom lsmushroom at 126.com
Wed Jul 3 06:23:35 UTC 2013


Yes, we do not need to add restrictions on both ends. The peer option only replaces the target device specified by the keyword "dev" with its peer end.
For example, if the veth pair name is veth-vps1 and its peer end is eth0:
tc qdisc add dev veth-vps1 root tbf rate 0.05mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540   // This adds the qdisc to veth-vps1
tc qdisc add dev peer veth-vps1 root tbf rate 0.05mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540   // This adds the qdisc to eth0 (the peer)
The same also applies to the configuration of classes and filters.
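As a side note on the tbf parameters themselves: the burst has to be large enough to cover at least one kernel timer tick's worth of traffic at the configured rate, and minburst should cover one full-sized packet when peakrate is used (hence 1540, just above a 1500-byte MTU plus the Ethernet header). A quick sanity check, as a sketch assuming a HZ=250 kernel (the HZ value is an assumption, not taken from this thread):

```shell
# Rough sanity check of the tbf numbers used above.
RATE_BITS=50000                  # 0.05mbit = 50,000 bits per second
RATE_BYTES=$((RATE_BITS / 8))    # 6250 bytes per second
HZ=250                           # assumed kernel scheduler tick rate
# tbf's bucket must hold at least one timer tick's worth of traffic,
# so burst should be >= rate / HZ
MIN_BURST=$((RATE_BYTES / HZ))   # 25 bytes -- far below the 5kb burst above
echo "rate=${RATE_BYTES} bytes/s, minimum burst=${MIN_BURST} bytes"
```

So at this slow a rate the 5kb burst is generous; the burst/HZ constraint only starts to bite at much higher rates.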


At 2013-07-03 04:13:16, "Serge Hallyn" <serge.hallyn at ubuntu.com> wrote:
>Quoting lsmushroom (lsmushroom at 126.com):
>> Sorry, I am really confused. Can you show me your testing procedure?
>> Below is my testing result; the veth pair name is veth-vps1:
>> 
>> 
>> 1) Add qdisc on veth-vps1
>> tc qdisc add dev veth-vps1 root tbf rate 0.05mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540
>> 
>> 
>> 2) Send packet from the container, run iperf server on the host end, and iperf client inside the container: 
>> [root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1
>> ------------------------------------------------------------
>> Server listening on TCP port 10086
>> TCP window size: 85.3 KByte (default)
>> ------------------------------------------------------------
>> [  4] local 172.16.10.5 port 10086 connected with 172.16.10.125 port 38580
>> [  4]  0.0- 1.0 sec  1.76 MBytes  14.7 Mbits/sec
>> [  4]  1.0- 2.0 sec  1.03 MBytes  8.63 Mbits/sec
>> [  4]  2.0- 3.0 sec  1.18 MBytes  9.86 Mbits/sec
>> [  4]  3.0- 4.0 sec  1.17 MBytes  9.81 Mbits/sec
>> [  4]  4.0- 5.0 sec  1.26 MBytes  10.6 Mbits/sec
>> [  4]  5.0- 6.0 sec  1.13 MBytes  9.45 Mbits/sec
>> [  4]  6.0- 7.0 sec  1.16 MBytes  9.71 Mbits/sec
>> [  4]  7.0- 8.0 sec  1.20 MBytes  10.0 Mbits/sec
>> [  4]  8.0- 9.0 sec  1.20 MBytes  10.1 Mbits/sec
>> [  4]  0.0- 9.6 sec  11.7 MBytes  10.3 Mbits/sec
>> 
>> 
>> 3) Send packet to the container, run iperf client on the host end, and iperf server inside the container: 
>> [root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1
>> ------------------------------------------------------------
>> Server listening on TCP port 10086
>> TCP window size: 85.3 KByte (default)
>> ------------------------------------------------------------
>> [  4] local 172.16.10.125 port 10086 connected with 172.16.10.5 port 34648
>> [  4]  0.0- 1.0 sec  5.66 KBytes  46.3 Kbits/sec
>> [  4]  1.0- 2.0 sec  7.07 KBytes  57.9 Kbits/sec
>
>Thanks.
>
>So yeah, this clearly answers my question, which was:
>
>> >Are you saying it will limit only traffic to, but not from, the
>> >container?
>
>(I don't doubt your test, won't reproduce it right now :)
>
>Now what you've proposed would solve this in a more flexible way,
>allowing traffic both into and out of the container to be slowed down
>independently.  But on the other hand, especially given the scant amount
>of information on tc out there, I think many people would end up
>misconfiguring.
>
>Do you think it would be an ok idea to have 'qdisc add dev <some-veth>'
>add the restriction on both of the tunnel end-points?
>
>(The answer may well be no, but I do prefer the simpler configuration)
>
>-serge
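For reference, the "both endpoints" behavior Serge asks about can already be approximated by hand: shape the host-side veth device for traffic into the container, and run tc inside the container's namespace for traffic out of it. A rough configuration sketch (the container name vps1 and in-container device eth0 are assumptions; both commands need root):

```shell
# Host side: the qdisc on veth-vps1 limits traffic going INTO the
# container (as the iperf test above showed)
tc qdisc add dev veth-vps1 root tbf rate 0.05mbit burst 5kb latency 70ms
# Container side: limit traffic going OUT of the container by running tc
# inside the container's network namespace
lxc-attach -n vps1 -- tc qdisc add dev eth0 root tbf rate 0.05mbit burst 5kb latency 70ms
```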