[lxc-devel] limit the network traffic of container from the host

lsmushroom lsmushroom at 126.com
Tue Jul 2 08:44:08 UTC 2013


Sorry, I am really confused. Can you show me your testing procedure?
Below are my test results; the veth pair name is veth-vps1:


1) Add a qdisc on veth-vps1:
tc qdisc add dev veth-vps1 root tbf rate 0.05mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540
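
(As a sanity check, the qdisc and its drop/overlimit counters can be inspected while testing with the standard tc statistics command:)

tc -s qdisc show dev veth-vps1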


2) Send packets from the container (iperf server on the host end, iperf client inside the container):
[root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1
------------------------------------------------------------
Server listening on TCP port 10086
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.16.10.5 port 10086 connected with 172.16.10.125 port 38580
[  4]  0.0- 1.0 sec  1.76 MBytes  14.7 Mbits/sec
[  4]  1.0- 2.0 sec  1.03 MBytes  8.63 Mbits/sec
[  4]  2.0- 3.0 sec  1.18 MBytes  9.86 Mbits/sec
[  4]  3.0- 4.0 sec  1.17 MBytes  9.81 Mbits/sec
[  4]  4.0- 5.0 sec  1.26 MBytes  10.6 Mbits/sec
[  4]  5.0- 6.0 sec  1.13 MBytes  9.45 Mbits/sec
[  4]  6.0- 7.0 sec  1.16 MBytes  9.71 Mbits/sec
[  4]  7.0- 8.0 sec  1.20 MBytes  10.0 Mbits/sec
[  4]  8.0- 9.0 sec  1.20 MBytes  10.1 Mbits/sec
[  4]  0.0- 9.6 sec  11.7 MBytes  10.3 Mbits/sec
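
(For reference, the client side of this test runs inside the container and points at the host. The exact invocation is not shown above; roughly, assuming the host address 172.16.10.5 seen in the output and iperf's default 10-second run, it would be:)

./iperf -c 172.16.10.5 -p 10086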


3) Send packets to the container (iperf client on the host end, iperf server inside the container):
[root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1
------------------------------------------------------------
Server listening on TCP port 10086
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.16.10.125 port 10086 connected with 172.16.10.5 port 34648
[  4]  0.0- 1.0 sec  5.66 KBytes  46.3 Kbits/sec
[  4]  1.0- 2.0 sec  7.07 KBytes  57.9 Kbits/sec
[  4]  2.0- 3.0 sec  7.07 KBytes  57.9 Kbits/sec
[  4]  3.0- 4.0 sec  7.07 KBytes  57.9 Kbits/sec
[  4]  4.0- 5.0 sec  5.66 KBytes  46.3 Kbits/sec
[  4]  5.0- 6.0 sec  2.83 KBytes  23.2 Kbits/sec
[  4]  6.0- 7.0 sec  2.83 KBytes  23.2 Kbits/sec
[  4]  7.0- 8.0 sec  2.83 KBytes  23.2 Kbits/sec
[  4]  8.0- 9.0 sec  2.83 KBytes  23.2 Kbits/sec
[  4]  9.0-10.0 sec  2.83 KBytes  23.2 Kbits/sec
[  4] 10.0-11.0 sec  2.83 KBytes  23.2 Kbits/sec
[  4] 11.0-12.0 sec  1.41 KBytes  11.6 Kbits/sec
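
(Correspondingly, the client here runs on the host and points at the container; roughly, assuming the container address 172.16.10.125 from the output above:)

./iperf -c 172.16.10.125 -p 10086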


4) Delete the qdisc and send packets to the container again:
[root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1
------------------------------------------------------------
Server listening on TCP port 10086
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.16.10.125 port 10086 connected with 172.16.10.5 port 34653
[  4]  0.0- 1.0 sec    112 MBytes    940 Mbits/sec
[  4]  1.0- 2.0 sec    112 MBytes    941 Mbits/sec
[  4]  2.0- 3.0 sec    112 MBytes    941 Mbits/sec
[  4]  3.0- 4.0 sec    112 MBytes    938 Mbits/sec
[  4]  4.0- 5.0 sec    112 MBytes    940 Mbits/sec
[  4]  5.0- 6.0 sec    112 MBytes    940 Mbits/sec
[  4]  6.0- 7.0 sec    112 MBytes    941 Mbits/sec
[  4]  7.0- 8.0 sec    112 MBytes    941 Mbits/sec
[  4]  0.0- 8.1 sec    912 MBytes    940 Mbits/sec
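
(Deleting the root qdisc in step 4 is the standard tc operation, i.e. something like:)

tc qdisc del dev veth-vps1 root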


Obviously the receiving speed in case 3 obeys the policy we added on veth-vps1. Case 2 also slows down, but the point is that the rate limiting still takes effect on packets being sent to the container (in case 2 it is presumably the returning ACKs that get shaped, which slows the upload indirectly). The simplest way to prove this is to hook the tbf_enqueue function, ping from the container to the host, and look at the stack dump of tbf_enqueue to see when it is called.
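
One way to observe tbf_enqueue without writing any kernel code is the kprobe tracer in ftrace. This is a generic tracing recipe, assuming debugfs is mounted at /sys/kernel/debug, kprobe events are enabled in the kernel, and the symbol tbf_enqueue is visible in /proc/kallsyms:

cd /sys/kernel/debug/tracing
echo 'p:tbf_hit tbf_enqueue' > kprobe_events   # place a kprobe on every call to tbf_enqueue
echo stacktrace > trace_options                # record a stack trace at each hit
echo 1 > events/kprobes/tbf_hit/enable
# ... ping from the container to the host, then inspect the trace:
head -n 50 trace
echo 0 > events/kprobes/tbf_hit/enable
echo > kprobe_events                           # remove the probe when done

If the probe fires during the ping, the stack traces show the shaping being applied on the path into the container (the echo replies), which is exactly the point above.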


At 2013-07-01 22:29:16,"Serge Hallyn" <serge.hallyn at ubuntu.com> wrote:
>Quoting lsmushroom (lsmushroom at 126.com):
>> Sorry for the late response. For your question: you could not limit
>> the network traffic in that way, because TC will only limit the
>
>But it worked for me...
>
>Are you saying it will limit only traffic to, but not from, the
>container?
>
>> traffic sent out from the target device. For a device of the veth
>> type, the device on the host end will “send out” traffic to the
>> container, and it will “receive” traffic coming from the
>> container. Thus, you would have to go into the container to run your
>> command, and that is not what we want. So I have added a new option,
>> “peer”, to support running the command on the host. Your command
>> would then run like this (on the host):
>> 
>> sudo tc qdisc add dev peer xxx root tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540
>> 
>> This will take effect on the peer end of xxx. In this way, we can control the container's network traffic from the host end.
>> 
>> At 2013-06-15 05:27:01,"Serge Hallyn" <serge.hallyn at ubuntu.com> wrote:
>> >Quoting lsmushroom (lsmushroom at 126.com):
>> >> Hi All,
>> >>       Recently, we have been trying to find a suitable way to
>> >>       limit the network traffic generated by processes running in the
>> >>       container. The network type we use for our container is veth.
>> >>       We have tried TC combined with the cgroup net_cls subsystem,
>> >>       which successfully fulfilled our goal. However, it requires
>> >>       adding the configuration inside the container. We will be
>> >>       providing the container as a service, and it is obviously
>> >>       unacceptable to allow the end user to modify the bandwidth
>> >>       allocation.
>> >
>> >If I just set the veth pair name to xxx and issue:
>> >
>> >sudo tc qdisc add dev xxx root tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540
>> >
>> >on the host, the container's network is rate limited.
>> >
>> >Do you want something different?