[lxc-devel] process number limit

Robert Gierzinger robert.gierzinger at gmx.at
Mon May 20 18:33:51 UTC 2013


Hi,

>> Is there anything planned to restrict exhaustive process generation in a
>> guest or any other means to defend against fork bombs?
> In recent kernels (such as 3.9.x) you have
> `memory.kmem.limit_in_bytes`, which could be used for that purpose.
> see
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/cgroups/memory.txt
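For anyone following along, roughly what I put in place (container name and values are illustrative, not my exact setup):

```shell
# In the container's LXC config, using the generic
# lxc.cgroup.<subsystem>.<file> key syntax:
lxc.cgroup.memory.limit_in_bytes = 2G
lxc.cgroup.memory.kmem.limit_in_bytes = 1G

# Or set it at runtime through the cgroup filesystem
# (path depends on where the memory cgroup is mounted):
echo 1G > /sys/fs/cgroup/memory/lxc/mycontainer/memory.kmem.limit_in_bytes
```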
Thanks for pointing me to the right docs. I managed to get LXC running 
with the kmem limits.
I discovered some strange behaviour; I hope this is the right mailing 
list to report it to.

My scenario:
*) Server: 64-bit Intel i7 CPU with 16 GB RAM, running 64-bit Ubuntu 
13.04 - I built the supplied Ubuntu 3.8 kernel from source with the 
experimental cgroup kmem limit enabled.
*) Inside the container: I tried to figure out how much kernel memory to 
allocate to the container and ran various usual workloads. I found that 
rsync ate up all the kernel memory allocated to the container (1 GB) 
when syncing a directory of about 1500 MB - error "Cannot 
allocate memory (12)"; of course the corresponding failcnt was non-zero. 
Setting vfs_cache_pressure to a very high value and periodically 
writing to drop_caches did not help.
*) It seems that a 512 MB kernel memory limit for the container is 
enough to contain the fork bomb from forkbomb.c in my last mail. The 
strange thing: with a 1 GB limit, fork-bombing the guest takes down the 
host. On the host, even an "ls" is not possible - I get "bash: fork: 
Cannot allocate memory". SysRq is the only thing that still works at 
this stage. However, an htop I had left running in another terminal was 
not killed; it reported around 32k tasks and only around 1100 MB of the 
16 GB of RAM in use!
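For reference, these are the kinds of commands I used to watch the limit and to try to relieve cache pressure (the cgroup path is illustrative; adjust to your mount point and container name):

```shell
# Check kernel-memory accounting for the container;
# failcnt counts how often the kmem limit was hit:
cat /sys/fs/cgroup/memory/lxc/mycontainer/memory.kmem.usage_in_bytes
cat /sys/fs/cgroup/memory/lxc/mycontainer/memory.kmem.failcnt

# What I tried to push the kernel to reclaim dentry/inode caches
# (did not help in my case):
sysctl vm.vfs_cache_pressure=10000
echo 3 > /proc/sys/vm/drop_caches
```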

Thanks in advance for some enlightenment ;-)

Robert




