<div dir="ltr"><div><div style="font-family:arial,sans-serif;font-size:13px">I've also asked this question on serverfault: <a href="http://serverfault.com/questions/516074/why-can-applications-writing-large-files-in-a-memory-limited-lxc-container-get-k" target="_blank">http://serverfault.com/questions/516074/why-can-applications-writing-large-files-in-a-memory-limited-lxc-container-get-k</a> The answer remains inconclusive, but I believe there may be issues with how file I/o caching is handled under the limitations on memory imposed on the the lxc cgroup (cgroup.memory.limit_in_bytes) A similar thing seems to be happening in this post: <a href="http://serverfault.com/questions/488014/limit-private-memory-usage-per-user" style="font-family:arial;font-size:small">http://serverfault.com/questions/488014/limit-private-memory-usage-per-user</a></div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">As a simple way to reproduce this issue, you can do the following:</div><div style="font-family:arial,sans-serif;font-size:13px">
<br></div><div style="font-family:arial,sans-serif;font-size:13px">create an empty lxc container (in my case I did lxc-create -n testcon -t ubuntu -- -r precise)</div><div style="font-family:arial,sans-serif;font-size:13px">
<br></div><div style="font-family:arial,sans-serif;font-size:13px">Modify the configuration of the container to set <span style="line-height:16px;font-size:12px">lxc</span><span style="font-size:12px;line-height:16px;color:rgb(102,102,0)">.</span><span style="line-height:16px;font-size:12px">cgroup</span><span style="font-size:12px;line-height:16px;color:rgb(102,102,0)">.</span><span style="line-height:16px;font-size:12px">memory</span><span style="font-size:12px;line-height:16px;color:rgb(102,102,0)">.</span><span style="line-height:16px;font-size:12px">limit_in_bytes </span><span style="font-size:12px;line-height:16px;color:rgb(102,102,0)">=</span><span style="line-height:16px;font-size:12px"> 300</span><span style="font-size:12px;line-height:16px"><font color="#006666">M</font></span></div>
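<div><br></div><div>For reference, this amounts to adding a line like the following to the container's config file (the path shown assumes the default lxc-create layout; adjust to your setup):</div>
<pre>
# /var/lib/lxc/testcon/config  -- default location used by lxc-create
# Cap the container's memory cgroup at 300 MB
lxc.cgroup.memory.limit_in_bytes = 300M
</pre>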
<div style="font-family:arial,sans-serif;font-size:13px"><span style="font-size:12px;line-height:16px"><font color="#006666"><br></font></span></div><div style="font-family:arial,sans-serif;font-size:13px">After starting the container, run <span style="line-height:18px;font-size:14px;background-color:rgb(238,238,238);font-family:Consolas,Menlo,Monaco,'Lucida Console','Liberation Mono','DejaVu Sans Mono','Bitstream Vera Sans Mono','Courier New',monospace,serif">dd if=/dev/zero of=test2 bs=100k count=5010</span></div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">While this command will most likely succeed, you will see that there are memory allocation failures in memory.failcnt. </div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">However, I have found that when I run such a command on an instance that actually has low amounts of memory (say the t1.micro with only 590 MB), allocation failures are much rarer (<30%) and I cannot reproduce an actual crash out. The fact that errors are less frequent when hardware memory is smaller suggest that the application/kernel's I/O caching manager is aware of the actual memory limitations on the underlying hardware, but is unaware of the container limitation. </div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">In my case, I seek to use lxc as a way to split a single machine into multiple virtualized containers with proportionate resources. Unfortunately, there appear to be fundamental problems with my method of imposing memory limitations on the containers; too much memory is being allocated for I/O cache and when multiple containers are competing for I/O cache memory, processes start to be killed.</div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">I'd be surprised if this hasn't affected more people. Does anyone have an idea for a workaround? Am I missing some needed configuration setting or is there something more?</div>
<div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px"><br></div><div style="font-family:arial,sans-serif;font-size:13px">Thanks,</div><div style="font-family:arial,sans-serif;font-size:13px">
Aaron</div><div class="" style="font-family:arial,sans-serif;font-size:13px"></div></div><br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
Message: 1<br>
Date: Fri, 14 Jun 2013 19:40:24 -0700<br>
From: Aaron Staley <<a href="mailto:aaron@picloud.com">aaron@picloud.com</a>><br>
Subject: [Lxc-users] Do not understand why OOM killer is being<br>
triggered by a DD<br>
To: <a href="mailto:lxc-users@lists.sourceforge.net">lxc-users@lists.sourceforge.net</a><br>
Message-ID:<br>
<<a href="mailto:CAMcjixYO1d8HF0_hnJLJJEMLUG3Qayps8gzCbtGcZj-UyfWyTA@mail.gmail.com">CAMcjixYO1d8HF0_hnJLJJEMLUG3Qayps8gzCbtGcZj-UyfWyTA@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="iso-8859-1"<br>
<br>
Hello,<br>
<br>
I'm running into a troublesome scenario where the OOM killer is hard-killing<br>
processes in my container when I write a file whose size exceeds the memory<br>
limit.<br>
<br>
Scenario:<br>
<br>
I have an LXC container where memory.limit_in_bytes is set to 300 MB.<br>
<br>
I attempt to dd a ~500 MB file as follows:<br>
<br>
dd if=/dev/zero of=test2 bs=100k count=5010<br>
<br>
Roughly 20% of the time, the Linux OOM manager is triggered by this command<br>
and a process is killed. Needless to say, this is highly unintended<br>
behavior; dd is meant to simulate an actual "useful" file write by a<br>
program running inside the container.<br>
<br>
<br>
Details:<br>
While the file cache grows large (~260 MB), rss and mapped_file seem to stay<br>
quite low. Here's an example of what memory.stat looks like during the write:<br>
cache 278667264<br>
rss 20971520<br>
mapped_file 24576<br>
pgpgin 138147<br>
pgpgout 64993<br>
swap 0<br>
pgfault 55054<br>
pgmajfault 2<br>
inactive_anon 10637312<br>
active_anon 10342400<br>
inactive_file 278339584<br>
active_file 319488<br>
unevictable 0<br>
hierarchical_memory_limit 300003328<br>
hierarchical_memsw_limit 300003328<br>
total_cache 278667264<br>
total_rss 20971520<br>
total_mapped_file 24576<br>
total_pgpgin 138147<br>
total_pgpgout 64993<br>
total_swap 0<br>
total_pgfault 55054<br>
total_pgmajfault 2<br>
total_inactive_anon 10637312<br>
total_active_anon 10342400<br>
total_inactive_file 278339584<br>
total_active_file 319488<br>
total_unevictable 0<br>
<br>
<br>
<br>
Here's a paste from dmesg where the OOM killer triggered a kill. I'm not too<br>
familiar with the distinctions between the memory types; one thing that<br>
stands out is that while free memory in "Node 0 Normal" is very low, there<br>
is plenty of free Node 0 DMA32 memory. Can anyone explain why a file write<br>
is causing the OOM? How do I prevent this from happening?<br>
<br>
The log:<br>
<br>
[1801523.686755] Task in /lxc/c-7 killed as a result of limit of /lxc/c-7<br>
[1801523.686758] memory: usage 292972kB, limit 292972kB, failcnt 39580<br>
[1801523.686760] memory+swap: usage 292972kB, limit 292972kB, failcnt 0<br>
[1801523.686762] Mem-Info:<br>
[1801523.686764] Node 0 DMA per-cpu:<br>
[1801523.686767] CPU 0: hi: 0, btch: 1 usd: 0<br>
[1801523.686769] CPU 1: hi: 0, btch: 1 usd: 0<br>
[1801523.686771] CPU 2: hi: 0, btch: 1 usd: 0<br>
[1801523.686773] CPU 3: hi: 0, btch: 1 usd: 0<br>
[1801523.686775] CPU 4: hi: 0, btch: 1 usd: 0<br>
[1801523.686778] CPU 5: hi: 0, btch: 1 usd: 0<br>
[1801523.686780] CPU 6: hi: 0, btch: 1 usd: 0<br>
[1801523.686782] CPU 7: hi: 0, btch: 1 usd: 0<br>
[1801523.686783] Node 0 DMA32 per-cpu:<br>
[1801523.686786] CPU 0: hi: 186, btch: 31 usd: 158<br>
[1801523.686788] CPU 1: hi: 186, btch: 31 usd: 114<br>
[1801523.686790] CPU 2: hi: 186, btch: 31 usd: 133<br>
[1801523.686792] CPU 3: hi: 186, btch: 31 usd: 69<br>
[1801523.686794] CPU 4: hi: 186, btch: 31 usd: 70<br>
[1801523.686796] CPU 5: hi: 186, btch: 31 usd: 131<br>
[1801523.686798] CPU 6: hi: 186, btch: 31 usd: 169<br>
[1801523.686800] CPU 7: hi: 186, btch: 31 usd: 30<br>
[1801523.686802] Node 0 Normal per-cpu:<br>
[1801523.686804] CPU 0: hi: 186, btch: 31 usd: 162<br>
[1801523.686806] CPU 1: hi: 186, btch: 31 usd: 184<br>
[1801523.686809] CPU 2: hi: 186, btch: 31 usd: 99<br>
[1801523.686811] CPU 3: hi: 186, btch: 31 usd: 82<br>
[1801523.686813] CPU 4: hi: 186, btch: 31 usd: 90<br>
[1801523.686815] CPU 5: hi: 186, btch: 31 usd: 99<br>
[1801523.686817] CPU 6: hi: 186, btch: 31 usd: 157<br>
[1801523.686819] CPU 7: hi: 186, btch: 31 usd: 138<br>
[1801523.686824] active_anon:60439 inactive_anon:28841 isolated_anon:0<br>
[1801523.686825] active_file:110417 inactive_file:907078 isolated_file:64<br>
[1801523.686827] unevictable:0 dirty:164722 writeback:1652 unstable:0<br>
[1801523.686828] free:445909 slab_reclaimable:176594<br>
slab_unreclaimable:14754<br>
[1801523.686829] mapped:4753 shmem:66 pagetables:3600 bounce:0<br>
[1801523.686831] Node 0 DMA free:7904kB min:8kB low:8kB high:12kB<br>
active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB<br>
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:7648kB<br>
mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB<br>
slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB<br>
unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0<br>
all_unreclaimable? no<br>
[1801523.686841] lowmem_reserve[]: 0 4016 7048 7048<br>
[1801523.686845] Node 0 DMA32 free:1770072kB min:6116kB low:7644kB<br>
high:9172kB active_anon:22312kB inactive_anon:12128kB active_file:4988kB<br>
inactive_file:2190136kB unevictable:0kB isolated(anon):0kB<br>
isolated(file):256kB present:4112640kB mlocked:0kB dirty:535072kB<br>
writeback:6452kB mapped:4kB shmem:4kB slab_reclaimable:72888kB<br>
slab_unreclaimable:1100kB kernel_stack:120kB pagetables:832kB unstable:0kB<br>
bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no<br>
[1801523.686855] lowmem_reserve[]: 0 0 3031 3031<br>
[1801523.686859] Node 0 Normal free:5660kB min:4616kB low:5768kB<br>
high:6924kB active_anon:219444kB inactive_anon:103236kB<br>
active_file:436680kB inactive_file:1438176kB unevictable:0kB<br>
isolated(anon):0kB isolated(file):0kB present:3104640kB mlocked:0kB<br>
dirty:123816kB writeback:156kB mapped:19008kB shmem:260kB<br>
slab_reclaimable:633488kB slab_unreclaimable:57916kB kernel_stack:2800kB<br>
pagetables:13568kB unstable:0kB bounce:0kB writeback_tmp:0kB<br>
pages_scanned:0 all_unreclaimable? no<br>
[1801523.686869] lowmem_reserve[]: 0 0 0 0<br>
[1801523.686873] Node 0 DMA: 2*4kB 3*8kB 0*16kB 2*32kB 4*64kB 3*128kB<br>
2*256kB 1*512kB 2*1024kB 2*2048kB 0*4096kB = 7904kB<br>
[1801523.686883] Node 0 DMA32: 129*4kB 87*8kB 86*16kB 89*32kB 87*64kB<br>
65*128kB 12*256kB 5*512kB 2*1024kB 13*2048kB 419*4096kB = 1769852kB<br>
[1801523.686893] Node 0 Normal: 477*4kB 23*8kB 1*16kB 5*32kB 0*64kB 3*128kB<br>
3*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 5980kB<br>
[1801523.686903] 1017542 total pagecache pages<br>
[1801523.686905] 0 pages in swap cache<br>
[1801523.686907] Swap cache stats: add 0, delete 0, find 0/0<br>
[1801523.686908] Free swap = 1048572kB<br>
[1801523.686910] Total swap = 1048572kB<br>
[1801523.722319] 1837040 pages RAM<br>
[1801523.722322] 58337 pages reserved<br>
[1801523.722323] 972948 pages shared<br>
[1801523.722324] 406948 pages non-shared<br>
[1801523.722326] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name<br>
[1801523.722396] [31266] 0 31266 6404 511 6 0 0 init<br>
[1801523.722445] [32489] 0 32489 12370 688 7 -17 -1000 sshd<br>
[1801523.722460] [32511] 101 32511 10513 325 0 0 0 rsyslogd<br>
[1801523.722495] [32625] 0 32625 17706 838 2 0 0 sshd<br>
[1801523.722522] [32652] 103 32652 5900 176 0 0 0 dbus-daemon<br>
[1801523.722583] [ 526] 0 526 1553 168 5 0 0 getty<br>
[1801523.722587] [ 530] 0 530 1553 168 1 0 0 getty<br>
[1801523.722593] [ 537] 2007 537 17706 423 5 0 0 sshd<br>
[1801523.722629] [ 538] 2007 538 16974 5191 1 0 0 python<br>
[1801523.722650] [ 877] 2007 877 2106 157 7 0 0 dd<br>
[1801523.722657] Memory cgroup out of memory: Kill process 538 (python) score 71 or sacrifice child<br>
[1801523.722674] Killed process 538 (python) total-vm:67896kB, anon-rss:17464kB, file-rss:3300kB<br>
<br>
I'm running on Linux ip-10-8-139-98 3.2.0-29-virtual #46-Ubuntu SMP Fri Jul<br>
27 17:23:50 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux on Amazon EC2.<br>
<br>
<br>
Thanks for any help you can provide,<br>
Aaron Staley</blockquote></div>
</div></div>