[Lxc-users] Packet forwarding performance drop with 1000 containers

Benoit Lourdelet blourdel at juniper.net
Thu Apr 25 19:42:18 UTC 2013


I have 250 ARP entries per container and no more by design, so my values
are:

net.ipv4.neigh.default.gc_thresh1 = 280000
net.ipv4.neigh.default.gc_thresh2 = 280000
net.ipv4.neigh.default.gc_thresh3 = 280000
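As a quick sanity check on those numbers, the sizing arithmetic can be
sketched like this (container count and entries-per-container are taken
from this thread; the ~12% headroom reading is my own interpretation):

```shell
#!/bin/sh
# Sizing sketch: total ARP entries across all containers.
containers=1000             # from this thread
entries_per_container=250   # fixed by design, per this thread
needed=$((containers * entries_per_container))
echo "entries needed: $needed"        # 250000
echo "configured gc_thresh: 280000"   # leaves headroom above the total
```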


In a real-life environment, I would configure the values so that garbage
collection works meaningfully!
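For reference, one way to make such settings persist across reboots (the
file name below is hypothetical; adjust the path and values for your own
distribution and entry count) is a sysctl drop-in:

```
# /etc/sysctl.d/90-arp-cache.conf  (hypothetical file name)
# Values from this thread; size gc_thresh* for your own total entry count.
net.ipv4.neigh.default.gc_thresh1 = 280000
net.ipv4.neigh.default.gc_thresh2 = 280000
net.ipv4.neigh.default.gc_thresh3 = 280000
```

Loaded with `sysctl --system`, or `sysctl -p <file>` on older systems.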

Benoit



On 25/04/2013 20:13, "Andrew Grigorev" <andrew at ei-grad.ru> wrote:

>What gc_thresh* values did you set? Having gc_interval=30 should not
>be a bad thing if you have a proper gc_thresh1 value. If you disable
>the garbage collector, as you did by setting a large gc_interval, then
>your system could accidentally be DoS'ed by stopping/starting your
>containers, for example.
>
>On 25.04.2013 21:28, Benoit Lourdelet wrote:
>> Hello,
>>
>> Working with 1000 containers I had already modified gc_thresh* to fit my
>> needs.
>>
>> By mistake I had set gc_interval to too high a value (past 2^32),
>>forcing
>> Linux to fall back to the default value (30), which is not suitable
>>in
>> my case.
>>
>> Setting gc_interval to 3600000 solved my problem.
>>
>> Thanks for pointing me in the right direction
>>
>> Benoit
>>
>> On 24/04/2013 07:21, "Guido Jäkel" <G.Jaekel at DNB.DE> wrote:
>>
>>> Dear Benoit,
>>>
>>> there's a lot of local matching and translation between layer 2 and
>>>layer 3
>>> in your case. I wonder if it is related to the ARP cache size and
>>> garbage collection parameters. I found [http://linux.die.net/man/7/arp]:
>>>
>>>   gc_interval (since Linux 2.2)
>>>      How frequently the garbage collector for neighbor entries should
>>> attempt to run. Defaults to 30 seconds.
>>>   gc_stale_time (since Linux 2.2)
>>>      Determines how often to check for stale neighbor entries. When a
>>> neighbor entry is considered stale, it is resolved again before sending
>>> data to it. Defaults to 60 seconds.
>>>   gc_thresh1 (since Linux 2.2)
>>>      The minimum number of entries to keep in the ARP cache. The
>>>garbage
>>> collector will not run if there are fewer than this number of entries
>>>in
>>> the cache. Defaults to 128.
>>>   gc_thresh2 (since Linux 2.2)
>>>      The soft maximum number of entries to keep in the ARP cache. The
>>> garbage collector will allow the number of entries to exceed this for 5
>>> seconds before collection will be performed. Defaults to 512.
>>>   gc_thresh3 (since Linux 2.2)
>>>      The hard maximum number of entries to keep in the ARP cache. The
>>> garbage collector will always run if there are more than this number of
>>> entries in the cache. Defaults to 1024.
>>>
>>> This still seems to be the default on recent kernels, on a box with
>>>3.3.5
>>> I found
>>>
>>>    root at bladerunner9 ~ # cat /proc/sys/net/ipv4/neigh/default/gc*
>>>    30
>>>    60
>>>    128
>>>    512
>>>    1024
>>>
>>> If the ARP cache gets exhausted, there must be continuous additional
>>>ARP
>>> resolution traffic and latency. Could you check this theory?
>>>
>>>
>>> Greetings
>>>
>>> Guido
>>>
>>>
>>> On 2013-04-23 23:34, Benoit Lourdelet wrote:
>>>> Hello,
>>>>
>>>> Forwarding throughput is decreasing gradually as I add containers. I
>>>> don't
>>>> see any sudden drop.
>>>>
>>>> If we consider aggregated forwarding performance with 100 containers to
>>>> be
>>>> 1, here are the measurements for
>>>>
>>>> # containers    Aggregated throughput
>>>> ------------------------------------
>>>>  100            1.00
>>>>  500            0.71
>>>> 1000            0.27
>>>> 1100            0.23
>>
>>
>> _______________________________________________
>> Lxc-users mailing list
>> Lxc-users at lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/lxc-users
>
>


