[Lxc-users] Problem with cgroup with LXC

Miroslav Lednicky, AVONET, s.r.o. lednicky at avonet.cz
Thu Nov 4 11:24:18 UTC 2010


On 11/04/2010 10:20 AM, Daniel Lezcano wrote:
> On 11/01/2010 03:05 PM, Miroslav Lednicky, AVONET, s.r.o. wrote:
>> On 10/26/2010 03:03 PM, Daniel Lezcano wrote:
>>> On 10/26/2010 08:38 AM, Miroslav Lednicky, AVONET, s.r.o. wrote:
>>>> Hello all,
>>>>
>>>> I have started using LXC containers. They are very nice.
>>>> But I have a problem with cgroups.
>>>>
>>>> Sometimes old entries are not removed from the container's cgroup
>>>> subdirectory.
>>>> It happens only occasionally. Typically the zabbix agent inside an LXC
>>>> container triggers the problem, but not always.
>>>>
>>>> Please see:
>>>>
>>>> ls -l /cgroup/test_lxc
>>>>
>>>> drwxr-xr-x 3 root root 0 2010-09-29 23:07 10194
>>>> drwxr-xr-x 3 root root 0 2010-10-01 21:11 11382
>>>> drwxr-xr-x 3 root root 0 2010-10-03 18:29 12632
>>>> drwxr-xr-x 3 root root 0 2010-09-15 15:10 1715
>>>> drwxr-xr-x 3 root root 0 2010-10-15 07:31 20270
>>>> drwxr-xr-x 3 root root 0 2010-10-16 02:05 20468
>>>> drwxr-xr-x 3 root root 0 2010-10-16 22:42 21090
>>>> drwxr-xr-x 3 root root 0 2010-10-19 04:58 22349
>>>> drwxr-xr-x 3 root root 0 2010-08-27 16:09 22455
>>>> drwxr-xr-x 3 root root 0 2010-08-29 10:45 23636
>>>> drwxr-xr-x 3 root root 0 2010-09-16 19:10 2398
>>>> drwxr-xr-x 3 root root 0 2010-10-22 00:27 24182
>>>> drwxr-xr-x 3 root root 0 2010-10-26 06:45 27044
>>>> drwxr-xr-x 3 root root 0 2010-09-04 18:26 27119
>>>> drwxr-xr-x 3 root root 0 2010-09-05 04:24 27187
>>>> drwxr-xr-x 3 root root 0 2010-09-09 21:39 30581
>>>> drwxr-xr-x 3 root root 0 2010-09-20 10:10 4793
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 cgroup.procs
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 cpuacct.stat
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuacct.usage
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 cpuacct.usage_percpu
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpu.rt_period_us
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpu.rt_runtime_us
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.cpu_exclusive
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.cpus
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.mem_exclusive
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.mem_hardwall
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_migrate
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_pressure
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_spread_page
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_spread_slab
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.mems
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.sched_load_balance
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.sched_relax_domain_level
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 cpu.shares
>>>> --w------- 1 root root 0 2010-08-02 13:53 devices.allow
>>>> --w------- 1 root root 0 2010-08-02 13:53 devices.deny
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 devices.list
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 freezer.state
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.failcnt
>>>> --w------- 1 root root 0 2010-08-02 13:53 memory.force_empty
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.limit_in_bytes
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.max_usage_in_bytes
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.failcnt
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.limit_in_bytes
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.max_usage_in_bytes
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.usage_in_bytes
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.soft_limit_in_bytes
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 memory.stat
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.swappiness
>>>> -r--r--r-- 1 root root 0 2010-08-02 13:53 memory.usage_in_bytes
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.use_hierarchy
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 net_cls.classid
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 notify_on_release
>>>> -rw-r--r-- 1 root root 0 2010-08-02 13:53 tasks
>>>>
>>>> ls -R1 10194
>>>> 10194:
>>>> 2
>>>> cgroup.procs
>>>> cpuacct.stat
>>>> cpuacct.usage
>>>> cpuacct.usage_percpu
>>>> cpu.rt_period_us
>>>> cpu.rt_runtime_us
>>>> cpuset.cpu_exclusive
>>>> cpuset.cpus
>>>> cpuset.mem_exclusive
>>>> cpuset.mem_hardwall
>>>> cpuset.memory_migrate
>>>> cpuset.memory_pressure
>>>> cpuset.memory_spread_page
>>>> cpuset.memory_spread_slab
>>>> cpuset.mems
>>>> cpuset.sched_load_balance
>>>> cpuset.sched_relax_domain_level
>>>> cpu.shares
>>>> devices.allow
>>>> devices.deny
>>>> devices.list
>>>> freezer.state
>>>> memory.failcnt
>>>> memory.force_empty
>>>> memory.limit_in_bytes
>>>> memory.max_usage_in_bytes
>>>> memory.memsw.failcnt
>>>> memory.memsw.limit_in_bytes
>>>> memory.memsw.max_usage_in_bytes
>>>> memory.memsw.usage_in_bytes
>>>> memory.soft_limit_in_bytes
>>>> memory.stat
>>>> memory.swappiness
>>>> memory.usage_in_bytes
>>>> memory.use_hierarchy
>>>> net_cls.classid
>>>> notify_on_release
>>>> tasks
>>>>
>>>> 10194/2:
>>>> cgroup.procs
>>>> cpuacct.stat
>>>> cpuacct.usage
>>>> cpuacct.usage_percpu
>>>> cpu.rt_period_us
>>>> cpu.rt_runtime_us
>>>> cpuset.cpu_exclusive
>>>> cpuset.cpus
>>>> cpuset.mem_exclusive
>>>> cpuset.mem_hardwall
>>>> cpuset.memory_migrate
>>>> cpuset.memory_pressure
>>>> cpuset.memory_spread_page
>>>> cpuset.memory_spread_slab
>>>> cpuset.mems
>>>> cpuset.sched_load_balance
>>>> cpuset.sched_relax_domain_level
>>>> cpu.shares
>>>> devices.allow
>>>> devices.deny
>>>> devices.list
>>>> freezer.state
>>>> memory.failcnt
>>>> memory.force_empty
>>>> memory.limit_in_bytes
>>>> memory.max_usage_in_bytes
>>>> memory.memsw.failcnt
>>>> memory.memsw.limit_in_bytes
>>>> memory.memsw.max_usage_in_bytes
>>>> memory.memsw.usage_in_bytes
>>>> memory.soft_limit_in_bytes
>>>> memory.stat
>>>> memory.swappiness
>>>> memory.usage_in_bytes
>>>> memory.use_hierarchy
>>>> net_cls.classid
>>>> notify_on_release
>>>> tasks
>>>>
>>>> It looks like the same problem as this one:
>>>>
>>>> http://www.mail-archive.com/devel@openvz.org/msg19736.html
>>>>
>>>> But I have no solution.
>>>>
>>>> Can somebody help me?
>>>>
>>>> It is a big problem with lxc-stop and lxc-start. I have to restart the server.
>>>
>>> Weird.
>>>
>>> Is it possible the zabbix application creates a new namespace?
>>
>> I don't think so. But sometimes this appears in the logs:
>>
>> warning: process `zabbix_agentd' used the deprecated sysctl system
>> call with 1.55.
>>
>> I run the zabbix agent in most of my containers, but I have the problem
>> with only one of them.
>>
>> I stopped the zabbix agent, but some numeric directories are still left in
>> the container's cgroup.
>>
>> But not too many: 11 directories after one month of running.
>
> Are you using Google Chrome?

No, it is a server installation.

>> I cannot stop and start the LXC container without restarting the server now.
>
> It would be interesting to see where your container is blocked.

Correction:

I can stop the LXC container, but I cannot start it again afterwards because the
/cgroup/test_lxc directory is still there. I cannot delete /cgroup/test_lxc.
I have to restart the server.
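
For anyone who wants to reproduce it, this is roughly what I try by hand before
restarting (only a sketch; it assumes the hierarchy is mounted at /cgroup and the
container is named test_lxc):

# see whether the leftover numeric subdirectories still list any tasks
for d in /cgroup/test_lxc/[0-9]*; do echo "$d:"; cat "$d/tasks"; done

# try to remove the subdirectories first, then the container cgroup itself
rmdir /cgroup/test_lxc/[0-9]*
rmdir /cgroup/test_lxc

The rmdir calls fail here, so /cgroup/test_lxc stays behind and lxc-start refuses
to start the container again.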

> Is it possible to get the stack of the processes inside the container
> via /proc/<pid>/stack when the container is blocked?

I have this in /cgroup/test_lxc now:

10620
11715
16394
21031
29688
32459
32628
3765
3767
3769
3777
6775
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.rt_period_us
cpu.rt_runtime_us
cpuset.cpu_exclusive
cpuset.cpus
cpuset.mem_exclusive
cpuset.mem_hardwall
cpuset.memory_migrate
cpuset.memory_pressure
cpuset.memory_spread_page
cpuset.memory_spread_slab
cpuset.mems
cpuset.sched_load_balance
cpuset.sched_relax_domain_level
cpu.shares
devices.allow
devices.deny
devices.list
freezer.state
memory.failcnt
memory.force_empty
memory.limit_in_bytes
memory.max_usage_in_bytes
memory.memsw.failcnt
memory.memsw.limit_in_bytes
memory.memsw.max_usage_in_bytes
memory.memsw.usage_in_bytes
memory.soft_limit_in_bytes
memory.stat
memory.swappiness
memory.usage_in_bytes
memory.use_hierarchy
net_cls.classid
notify_on_release
tasks

But there is no /proc/<pid> for these numeric entries.

For example, /proc/10620 does not exist.
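
A quick way to list which of the numeric entries are stale (a sketch; it assumes
each numeric directory is named after the PID that created it):

for d in /cgroup/test_lxc/[0-9]*; do
    pid=$(basename "$d")
    if [ ! -d "/proc/$pid" ]; then
        echo "$d: no /proc/$pid, $(wc -l < "$d/tasks") task(s) listed"
    fi
done

Checked this way, none of the numbers above have a matching /proc entry.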

> What is the kernel version?

uname -a
Linux neptun 2.6.32-25-server #44-Ubuntu SMP Fri Sep 17 21:13:39 UTC 
2010 x86_64 GNU/Linux

I tried the 2.6.35 kernel and I had the same problem.
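
If it helps, I can also send the cgroup mount options and the contents of
/proc/cgroups; these are the commands I would run (just a sketch):

mount | grep cgroup    # which subsystems are attached to the /cgroup mount
cat /proc/cgroups      # per-subsystem hierarchy IDs and cgroup counts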

Best regards,

Miroslav.

-- 
Miroslav Lednicky, AVONET, s.r.o.



