[lxc-users] Bug bug bug

CDR venefax at gmail.com
Sun Nov 9 11:31:03 UTC 2014


I think I found the issue. Each instance of Asterisk generates one thread
per peer plus about 18 housekeeping threads, and I have 600 peers, so that
is roughly 618 threads per instance without a single open call. With more
than 18 instances, the LXC container collapses. For testing purposes I got
rid of LXC and installed my app directly on the host, and there I got to 32
instances, so LXC lowers the ceiling considerably. I wanted to use LXC so I
could move the whole app between servers with plain rsync. I am now
switching the SIP engine to PJSIP to see if it is more thread-efficient.
Before I got here I increased the swap space to more than physical memory,
but that is unrelated. It seems clear that a single LXC container can
handle only a few thousand threads. The "taskprocessor" error message may
ultimately come from the kernel.
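
In case it helps anyone reproduce this, the per-instance thread count is
easy to confirm with standard ps fields (a quick sketch, nothing more):

# threads (NLWP) per Asterisk process
ps -o pid,nlwp,cmd -C asterisk
# total threads across all instances
ps -o nlwp= -C asterisk | awk '{s+=$1} END {print s}'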
Any insight into all of this?


On Sun, Nov 9, 2014 at 4:10 AM, Neil Greenwood <neil.greenwood at gmail.com>
wrote:

> Have you tried starting fewer instances of Asterisk, to see if the memory
> consumption is the issue?
>
> Neil
>
> On 8 November 2014 20:41:38 GMT+00:00, CDR <venefax at gmail.com> wrote:
>
>> These are my ulimits:
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 0
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 1048576
>> max locked memory       (kbytes, -l) unlimited
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 1048576
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) unlimited
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited
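>>
>> One line in that listing may matter more than it looks: the 8192K stack
>> size is what glibc gives each new thread by default, so thousands of
>> threads reserve many gigabytes of address space for stacks alone.
>> Lowering the limit before launching Asterisk should raise the
>> per-process thread ceiling. An untested sketch (the 2048 is arbitrary):
>>
>> ulimit -s 2048   # 2MB thread stacks for this shell and its children
>> asterisk         # started from the same shell so it inherits the limit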
>>
>> Also, I added swap space:
>>
>> free -g
>>               total        used        free      shared  buff/cache   available
>> Mem:            177          59         116           0           0         116
>> Swap:           269           0         269
>>
>> and it makes no difference.
>>
>> If this is not a swap issue, nor a ulimit issue, where can the problem
>> be?
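>>
>> One ceiling I have not ruled out is the kernel-wide thread/PID limit,
>> which a container shares with the host: with tens of thousands of
>> threads across 50 instances, the default pid_max of 32768 is within
>> reach. Something I still need to run (the cgroup path is a guess based
>> on the container name):
>>
>> sysctl kernel.threads-max kernel.pid_max
>> # cgroup v1 lists one thread ID per line in the tasks file
>> wc -l < /sys/fs/cgroup/memory/lxc/parallelu/tasks
>>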
>> Federico
>>
>>
>> On Sat, Nov 8, 2014 at 5:23 AM, Guido Jäkel <G.Jaekel at dnb.de> wrote:
>>
>>> Hi,
>>>
>>> Googling for pthread_join leads to
>>> http://www.ibm.com/developerworks/library/l-memory-leaks/ , an article
>>> about memory consumption of POSIX threads (and potential leaks if the
>>> join fails).
>>>
>>> From this, you can see that every thread needs at least memory for its
>>> stack. It is said that the default may be 10MB. And if you want to start
>>> 50 instances of Asterisk, this will lead to 50 * n * 10MB = n * 0.5GB of
>>> stack space, where n is the number of threads this quoted 'taskprocessor'
>>> will try to start per instance.
>>>
>>> Maybe you need more virtual memory to satisfy these requirements (e.g.
>>> http://stackoverflow.com/questions/344203/maximum-number-of-threads-per-process-in-linux).
>>> It seems that you have 19G of swap for your ~180GB RAM machine. Maybe you
>>> need more, even if it will remain unused. For a quick test, you may
>>> consider using a file (instead of a partition) as additional swap space,
>>> and you may assign a lower priority to it. Or maybe you just have to
>>> adjust some 'ulimits'.
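>>>
>>> An untested sketch of the swap-file route (size and path are
>>> placeholders):
>>>
>>> dd if=/dev/zero of=/swapfile bs=1M count=65536   # 64G file
>>> chmod 600 /swapfile
>>> mkswap /swapfile
>>> swapon /swapfile   # auto-assigned priority sorts below existing swap
>>> swapon -s          # verify priorities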
>>>
>>> Guido
>>>
>>>
>>> On 08.11.2014 03:36, CDR wrote:
>>> > There is something very wrong with LXC in general; it does not matter
>>> > which OS or even which kernel version. My OS is Ubuntu 14.04.
>>> > I have a CentOS 6.6 container with mysql and 50 instances of Asterisk
>>> > 12.0, plus opensips.
>>> > The memory is limited to 100G, but it does not matter whether I limit
>>> > it or not. It crashes when I start the 50 Asterisk processes.
>>> > The error message is below.
>>> >
>>> > MySQL starts fine and uses large pages, memlocked. It uses 60G, so
>>> > there is plenty of available memory left.
>>> >
>>> >  free -g
>>> >              total       used       free     shared    buffers     cached
>>> > Mem:           177        163         13          0          0         97
>>> > -/+ buffers/cache:         66        110
>>> > Swap:           19          0         19
>>> >
>>> >
>>> >
>>> > I tried the same container in Fedora 21 and the outcome is identical;
>>> > it does not matter whether the technology is plain lxc or libvirt-lxc.
>>> >
>>> > This is my container's config:
>>> > lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
>>> > lxc.mount.entry = sysfs sys sysfs defaults  0 0
>>> >
>>> >
>>> > lxc.tty = 4
>>> > lxc.pts = 1024
>>> > lxc.cgroup.devices.deny = a
>>> > lxc.cgroup.devices.allow = c 1:3 rwm
>>> > lxc.cgroup.devices.allow = c 1:5 rwm
>>> > lxc.cgroup.devices.allow = c 5:1 rwm
>>> > lxc.cgroup.devices.allow = c 5:0 rwm
>>> > lxc.cgroup.devices.allow = c 4:0 rwm
>>> > lxc.cgroup.devices.allow = c 4:1 rwm
>>> > lxc.cgroup.devices.allow = c 1:9 rwm
>>> > lxc.cgroup.devices.allow = c 1:8 rwm
>>> > lxc.cgroup.devices.allow = c 136:* rwm
>>> > lxc.cgroup.devices.allow = c 5:2 rwm
>>> > lxc.cgroup.devices.allow = c 254:0 rwm
>>> > lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
>>> > lxc.cgroup.devices.allow = b 7:* rwm    # loop*
>>> > lxc.cgroup.memory.limit_in_bytes =  107374182400
>>> > lxc.mount.auto = cgroup
>>> >
>>> > lxc.utsname = parallelu
>>> > lxc.autodev = 1
>>> > lxc.aa_profile = unconfined
>>> >
>>> > lxc.network.type=macvlan
>>> > lxc.network.macvlan.mode=bridge
>>> > lxc.network.link=eth1
>>> > lxc.network.name = eth0
>>> > lxc.network.flags = up
>>> > lxc.network.hwaddr = 00:c8:a0:7d:84:cf
>>> > lxc.network.ipv4 = 0.0.0.0/25
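>>> >
>>> > If the memory cgroup limit were the problem, I would expect its fail
>>> > counter to be non-zero. A check I have not run yet (the cgroup path is
>>> > my guess for this container name):
>>> >
>>> > cat /sys/fs/cgroup/memory/lxc/parallelu/memory.failcnt
>>> > cat /sys/fs/cgroup/memory/lxc/parallelu/memory.usage_in_bytes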
>>> >
>>> >
>>> >
>>> > [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
>>> > default_listener_shutdown: pthread_join(): Cannot allocate memory
>>> > [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:614
>>> > __allocate_taskprocessor: Unable to start taskprocessor listener for
>>> > taskprocessor 2ad8515c-c1eb-46ab-b53a-d63c84a56192
>>> > [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
>>> > default_listener_shutdown: pthread_join(): Cannot allocate memory
>>> > [Nov  7 21:12:05] ERROR[1205]: taskprocessor.c:614
>>> > __allocate_taskprocessor: Unable to start taskprocessor listener for
>>> > taskprocessor 1fe67cd3-b65f-491a-aa59-a089dcba26a5
>>> > [Nov  7 21:12:05] ERROR[1205]: taskprocessor.c:245
>>> > default_listener_shutdown: pthread_join(): Cannot allocate memory
>>> > [Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:614
>>> > __allocate_taskprocessor: Unable to start taskprocessor listener for
>>> > taskprocessor 34d41f19-2936-4e0a-a626-ceb386ff3a1f
>>> > [Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:245
>>> > default_listener_shutdown: pthread_join(): Cannot allocate memory
>>> > [Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:614
>>> > __allocate_taskprocessor: Unable to start taskprocessor listener for
>>> > taskprocessor 204873a6-b595-4e82-ae02-0b2a3ee37fdc
>>> > [Nov  7 21:12:05] ERROR[1562]: taskprocessor.c:245
>>> > default_listener_shutdown: pthread_join(): Cannot allocate memory
>>> > [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:614
>>> > __allocate_taskprocessor: Unable to start taskprocessor listener for
>>> > taskprocessor 7711ffdc-57c6-48e2-8f43-3fe4b396c405
>>> > [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
>>> > default_listener_shutdown: pthread_join(): Cannot allocate memory
>>> > [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:614
>>> > __allocate_taskprocessor: Unable to start taskprocessor listener for
>>> > taskprocessor 3eed34af-a070-4c8b-96ee-c9e1f92756c8
>>> > [Nov  7 21:12:05] ERROR[1480]: taskprocessor.c:245
>>> > default_listener_shutdown: pthread_join(): Cannot allocate memory
>>> >
>>> >
>>> >
>>>
>>
>>
>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>