<div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial"><div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial"><div><div style="color: rgb(0, 0, 0); font-family: arial; font-size: 14px; line-height: 1.7;"><span style="line-height: 1.7;">Sorry, I really get confused. Can you show me your testing procedure ?</span></div><div style="color: rgb(0, 0, 0); font-family: arial; font-size: 14px; line-height: 1.7;"><span style="line-height: 1.7;">Below is my testing result , the </span><span style="line-height: 1.7; white-space: pre-wrap;">veth pair name is veth-vps1</span><span style="line-height: 1.7;">:</span></div><div><div><div style="color: rgb(0, 0, 0); font-family: arial; font-size: 14px; line-height: 1.7;"><br></div><div><div>1) Add qdisc on veth-vps1</div><div>tc qdisc add dev veth-vps1 root tbf rate 0.05mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540</div><div><br></div><div>2) Send packet from the container, run iperf server on the host end, and iperf client inside the container: </div><div>[root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1</div><div>------------------------------------------------------------</div><div>Server listening on TCP port 10086</div><div>TCP window size: 85.3 KByte (default)</div><div>------------------------------------------------------------</div><div>[ 4] local 172.16.10.5 port 10086 connected with 172.16.10.125 port 38580</div><div>[ 4] 0.0- 1.0 sec 1.76 MBytes 14.7 Mbits/sec</div><div>[ 4] 1.0- 2.0 sec 1.03 MBytes 8.63 Mbits/sec</div><div>[ 4] 2.0- 3.0 sec 1.18 MBytes 9.86 Mbits/sec</div><div>[ 4] 3.0- 4.0 sec 1.17 MBytes 9.81 Mbits/sec</div><div>[ 4] 4.0- 5.0 sec 1.26 MBytes 10.6 Mbits/sec</div><div>[ 4] 5.0- 6.0 sec 1.13 MBytes 9.45 Mbits/sec</div><div>[ 4] 6.0- 7.0 sec 1.16 MBytes 9.71 Mbits/sec</div><div>[ 4] 7.0- 8.0 sec 1.20 MBytes 10.0 Mbits/sec</div><div>[ 4] 8.0- 9.0 sec 1.20 MBytes 10.1 Mbits/sec</div><div>[ 4] 0.0- 9.6 sec 11.7 MBytes 10.3 Mbits/sec</div><div><br></div><div>3) Send packet to the container, run iperf client on the host end, and iperf server inside the container: </div><div>[root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1</div><div>------------------------------------------------------------</div><div>Server listening on TCP port 10086</div><div>TCP window size: 85.3 KByte (default)</div><div>------------------------------------------------------------</div><div>[ 4] local 172.16.10.125 port 10086 connected with 172.16.10.5 port 34648</div><div>[ 4] 0.0- 1.0 sec 5.66 KBytes 46.3 Kbits/sec</div><div>[ 4] 1.0- 2.0 sec 7.07 KBytes 57.9 Kbits/sec</div><div>[ 4] 2.0- 3.0 sec 7.07 KBytes 57.9 Kbits/sec</div><div>[ 4] 3.0- 4.0 sec 7.07 KBytes 57.9 Kbits/sec</div><div>[ 4] 4.0- 5.0 sec 5.66 KBytes 46.3 Kbits/sec</div><div>[ 4] 5.0- 6.0 sec 2.83 KBytes 23.2 Kbits/sec</div><div>[ 4] 6.0- 7.0 sec 2.83 KBytes 23.2 Kbits/sec</div><div>[ 4] 7.0- 8.0 sec 2.83 KBytes 23.2 Kbits/sec</div><div>[ 4] 8.0- 9.0 sec 2.83 KBytes 23.2 Kbits/sec</div><div>[ 4] 9.0-10.0 sec 2.83 KBytes 23.2 Kbits/sec</div><div>[ 4] 10.0-11.0 sec 2.83 KBytes 23.2 Kbits/sec</div><div>[ 4] 11.0-12.0 sec 1.41 KBytes 11.6 Kbits/sec</div><div><br></div><div>4) Delete the qdisc , send packet to the container:</div><div>[root@ iperf_2.0.2-4_amd64]# ./iperf -s -p 10086 -i 1</div><div>------------------------------------------------------------</div><div>Server listening on TCP port 10086</div><div>TCP window size: 85.3 KByte (default)</div><div>------------------------------------------------------------</div><div>[ 4] local 
[ 4] local 172.16.10.125 port 10086 connected with 172.16.10.5 port 34653
[ 4] 0.0- 1.0 sec 112 MBytes 940 Mbits/sec
[ 4] 1.0- 2.0 sec 112 MBytes 941 Mbits/sec
[ 4] 2.0- 3.0 sec 112 MBytes 941 Mbits/sec
[ 4] 3.0- 4.0 sec 112 MBytes 938 Mbits/sec
[ 4] 4.0- 5.0 sec 112 MBytes 940 Mbits/sec
[ 4] 5.0- 6.0 sec 112 MBytes 940 Mbits/sec
[ 4] 6.0- 7.0 sec 112 MBytes 941 Mbits/sec
[ 4] 7.0- 8.0 sec 112 MBytes 941 Mbits/sec
[ 4] 0.0- 8.1 sec 912 MBytes 940 Mbits/sec

Clearly the receiving rate in case 3 obeys the policy we added on veth-vps1. Case 2 also slows down, but the point is that the rate limit is applied when packets are sent *to* the container. The simplest way to prove this is to hook the tbf_enqueue function, ping from the container to the host, and look at the stack dump of tbf_enqueue to see when it is called.
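For example, a minimal sketch of that hook using ftrace kprobe events (this assumes CONFIG_KPROBE_EVENTS is enabled, debugfs is mounted at /sys/kernel/debug, and tbf_enqueue is visible in /proc/kallsyms; the probe name "tbf_hit" is just a placeholder):

cd /sys/kernel/debug/tracing
echo 'p:tbf_hit tbf_enqueue' > kprobe_events   # place a probe on tbf_enqueue
echo 1 > events/kprobes/tbf_hit/enable         # turn the probe on
echo 1 > options/stacktrace                    # record a stack dump on each hit
# now ping from the container to the host, then inspect:
cat trace

The recorded stacks should show tbf_enqueue firing for the echo replies being transmitted back toward the container, i.e. the qdisc on the host-side veth shapes host-to-container traffic.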
At 2013-07-01 22:29:16, "Serge Hallyn" <serge.hallyn@ubuntu.com> wrote:
>Quoting lsmushroom (lsmushroom@126.com):
>> Sorry for the late response. To answer your question: you cannot limit
>> the network traffic that way, because TC only limits the
>
>But it worked for me...
>
>Are you saying it will limit only traffic to, but not from, the
>container?
>
>> traffic sent out from the target device. And for a device of the veth
>> type, the device on the host end will “send out” traffic to the
>> container, and it will “receive” traffic coming from the
>> container. Thus, you have to go into the container to run your
>> command, and that is not what we want. So I’ve added a new option,
>> “peer”, to support running the command on the host. Your
>> command would then run like this (on the host):
>>
>> sudo tc qdisc add dev peer xxx root tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540
>>
>> This will take effect on the peer end of xxx. In this way, we can control the container's network traffic from the host end.
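(For what it's worth, a standard alternative that needs no new tc option is ingress policing on the host-side veth: packets the container sends arrive as ingress traffic on the host end. A rough sketch, assuming the host-side device is named xxx:

tc qdisc add dev xxx handle ffff: ingress
tc filter add dev xxx parent ffff: protocol ip u32 match u32 0 0 \
    police rate 0.5mbit burst 5k drop

A policer drops packets instead of queueing them, so it is harsher than tbf, but it caps traffic coming from the container while the tbf qdisc above caps traffic going to it.)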
>>
>> At 2013-06-15 05:27:01,"Serge Hallyn" <<a href="mailto:serge.hallyn@ubuntu.com">serge.hallyn@ubuntu.com</a>> wrote:
>> >Quoting lsmushroom (lsmushroom@126.com):
>> >> Hi All,
>> >> Recently, we have been trying to find a suitable way to
>> >> limit the network traffic generated by processes running in the
>> >> container. The network type we use for our containers is veth.
>> >> We have tried TC combined with the cgroup net_cls subsystem,
>> >> which successfully fulfilled our goal. However, it requires
>> >> adding the configuration inside the container. As we will
>> >> provide the container as a service, it is obviously
>> >> unacceptable to allow the end user to modify the bandwidth
>> >> allocation.
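(For context, the net_cls approach we tried looks roughly like this; it has to run inside the container, which is exactly the problem. This sketch assumes the net_cls hierarchy is mounted at /sys/fs/cgroup/net_cls; the cgroup name "limited" and the device name eth0 are placeholders, and classid 0x00010001 corresponds to tc handle 1:1:

mkdir /sys/fs/cgroup/net_cls/limited
echo 0x00010001 > /sys/fs/cgroup/net_cls/limited/net_cls.classid
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 0.5mbit
tc filter add dev eth0 parent 1: protocol ip prio 10 handle 1: cgroup
echo $PID > /sys/fs/cgroup/net_cls/limited/tasks   # limit this process's traffic

)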
>> >
>> >If I just set the veth pair name to xxx and issue:
>> >
>> >sudo tc qdisc add dev xxx root tbf rate 0.5mbit burst 5kb latency 70ms peakrate 1mbit minburst 1540
>> >
>> >on the host, the container's network is rate-limited.
>> >
>> >Do you want something different?