[lxc-users] How to properly find what consumes memory inside the container.
Ivan Kurnosov
zerkms at zerkms.ru
Tue Sep 26 19:26:33 UTC 2017
Here I attach the two files with the output of those commands: one.txt was
captured right after I copied several gigabytes of files to that container
(using samba), and two.txt after I removed them.

It looks like the most relevant number that changes is `total_cache`.
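
For anyone who wants to reproduce the comparison: the numbers come from the
host-side cgroup files quoted below, so a minimal way to watch just the
relevant counters (CONTAINER is a placeholder for the container's name) would
be something like:

host# grep -E '^total_(cache|rss|inactive_file|active_file) ' \
          /sys/fs/cgroup/memory/lxc/CONTAINER/memory.stat
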
On 27 September 2017 at 02:24, Stéphane Graber <stgraber at ubuntu.com> wrote:
> Hi,
>
> This sounds like a lxcfs issue.
>
> Can you file a bug at https://github.com/lxc/lxcfs or find an existing one
> which matches your symptoms?
>
> We'll want at least:
>
> - /proc/meminfo from the container
> - /sys/fs/cgroup/memory/lxc/CONTAINER/memory.usage_in_bytes from the host
> - /sys/fs/cgroup/memory/lxc/CONTAINER/memory.stat from the host
>
> That should let us track down where the memory usage comes from and what
> may be wrong with it.
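>
> For what it's worth, something along these lines should capture all three
> in one go (CONTAINER is just a placeholder for the container's name):
>
> container# cat /proc/meminfo > /tmp/meminfo.txt
> host# cat /sys/fs/cgroup/memory/lxc/CONTAINER/memory.usage_in_bytes \
>           /sys/fs/cgroup/memory/lxc/CONTAINER/memory.stat > /tmp/cgroup.txt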
>
> On Wed, Sep 20, 2017 at 01:18:01PM +1200, Ivan Kurnosov wrote:
> > Hi,
> >
> > there is a server that currently runs ~100 containers.
> >
> > One of those containers is a subject of my interest.
> >
> > Brief details about the container: it runs Ubuntu Xenial, and it's a tiny
> > file server (Samba-based) with next to no traffic at all.
> >
> > I have found that after you upload files to that server, the available
> > memory size decreases (while the "buff/cache" size stays at 0). And if
> > you remove the just-uploaded files, the memory consumption drops back to
> > the same value it had before uploading.
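> >
> > (In case it helps with reproducing: samba itself is probably not essential
> > here; writing a large file inside the container while watching `free -m`
> > in another shell should show the same pattern. The path and size below are
> > just examples.)
> >
> > container# watch -n1 free -m
> > container# dd if=/dev/zero of=/srv/testfile bs=1M count=2048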
> >
> > Here is the output of top (sorted by resident memory size, showing only
> > processes with more than 500 KiB RSS):
> >
> >   PID USER     PR NI    VIRT    RES    SHR S %CPU %MEM    TIME+ COMMAND
> >    48 root     20  0   52048  18880  14428 S  0.0  0.9  0:10.11 /lib/systemd/systemd-journald
> > 18609 www-data 20  0  349208  15404   7516 S  0.0  0.7  0:13.72 /usr/sbin/smbd -D
> >  7176 www-data 20  0  345500  10720   6720 S  6.7  0.5  0:06.91 /usr/sbin/smbd -D
> > 25124 root     20  0  340104   9624   6744 S  0.0  0.5  0:00.12 /usr/sbin/smbd -D
> > 37541 root     20  0  344828   8012   4520 S  0.0  0.4  0:02.36 /usr/sbin/smbd -D
> > 15593 root     20  0  344352   6368   3444 S  0.0  0.3  0:00.39 /usr/sbin/smbd -D
> >  2450 root     20  0  336636   4072   1520 S  0.0  0.2  0:06.09 /usr/sbin/smbd -D
> > 25401 root     20  0   40560   3728   3112 R  0.3  0.2  0:00.49 top
> >  2447 root     20  0  336636   3528    976 S  0.0  0.2  0:04.30 /usr/sbin/smbd -D
> > 25287 root     20  0   19972   3044   2872 S  0.0  0.1  0:00.01 bash
> >  2476 root     20  0  238728   2944   1336 S  0.0  0.1  0:28.52 /usr/sbin/nmbd -D
> > 25271 ivan     20  0   21328   2784   2764 S  0.0  0.1  0:00.04 -bash
> > 24250 root     20  0  858936   2616      0 S  0.0  0.1  0:01.98 /usr/sbin/collectd
> >  2448 root     20  0  426848   2504     20 S  0.3  0.1  0:01.65 /usr/sbin/smbd -D
> >     1 root     20  0   37884   2488   1676 S  0.0  0.1  0:17.13 /sbin/init
> > 25285 root     20  0   51660   2404   2400 S  0.0  0.1  0:00.00 sudo su
> > 25270 ivan     20  0   95368   2172   1960 S  0.0  0.1  0:00.24 sshd: ivan@pts/0
> > 25286 root     20  0   51008   1908   1908 S  0.0  0.1  0:00.00 su
> >  8041 zabbix   20  0   95520   1680   1512 S  0.0  0.1  0:02.10 /usr/sbin/zabbix_agentd: active checks #1 [idle 1 sec]
> > 25240 root     20  0   95368   1620   1572 S  0.0  0.1  0:00.02 sshd: ivan [priv]
> >   145 message+ 20  0   42892   1164    872 S  0.0  0.1  0:01.55 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
> >  6453 www-data 20  0  125348   1152    656 S  0.0  0.1  0:32.26 nginx: worker process
> > 20811 postfix  20  0   67640   1136    656 S  0.0  0.1  0:00.86 qmgr -l -t unix -u
> >  8038 zabbix   20  0   95520   1084    880 S  0.0  0.1  0:01.04 /usr/sbin/zabbix_agentd: listener #1 [waiting for connection]
> >  8039 zabbix   20  0   95520    972    768 S  0.0  0.0  0:01.05 /usr/sbin/zabbix_agentd: listener #2 [waiting for connection]
> >   142 root     20  0   27732    924    636 S  0.0  0.0  0:05.32 /usr/sbin/cron -f
> >  8040 zabbix   20  0   95520    872    668 S  0.0  0.0  0:01.07 /usr/sbin/zabbix_agentd: listener #3 [waiting for connection]
> >  8037 zabbix   20  0   93444    728    628 S  0.0  0.0  0:14.57 /usr/sbin/zabbix_agentd: collector [idle 1 sec]
> >  6462 www-data 20  0  125348    500      0 S  0.0  0.0  0:32.14 nginx: worker process
> >
> >
> >
> > As you can see, the cumulative RSS could barely get to 100 MB.
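> >
> > (For reference, a quick way to double-check that sum with a standard
> > procps `ps`:)
> >
> > container# ps -eo rss= | awk '{s+=$1} END {printf "%.1f MiB\n", s/1024}'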
> >
> > While this is what `free` returns:
> >
> > # free -m
> >               total        used        free      shared  buff/cache   available
> > Mem:           2048        1785         261        1750           0         261
> > Swap:           512          14         497
> >
> >
> > So it clearly shows that about 85% of RAM is occupied.
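> >
> > (As far as I understand, `free` computes "used" roughly as
> > MemTotal - MemFree - Buffers - Cached, and since lxcfs reports Cached as
> > ~0 here, the page cache ends up counted as used:
> >
> >     used ≈ 2048 - 261 - 0 - 0 ≈ 1787 MB, matching the 1785 MB above
> >     modulo rounding.)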
> >
> > `slabtop` (due to cgroup limitations?) does not work:
> >
> > # slabtop
> > fopen /proc/slabinfo: Permission denied
> >
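> > (The kernel-side memory charged to this container can still be read from
> > the host, assuming kernel memory accounting is enabled for the cgroup:)
> >
> > host# cat /sys/fs/cgroup/memory/lxc/CONTAINER/memory.kmem.usage_in_bytes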
> >
> > But if I clear the system caches on the host
> >
> > echo 3 > /proc/sys/vm/drop_caches
> >
> >
> > the container memory consumption drops to the expected <100 MB.
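> >
> > A less drastic variant, if you only want to reclaim this one container's
> > page cache rather than dropping the whole host's caches, might be the
> > cgroup-v1 force_empty knob (run on the host):
> >
> > host# echo 0 > /sys/fs/cgroup/memory/lxc/CONTAINER/memory.force_empty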
> >
> > So the question is: how do I reliably monitor memory consumption from
> > inside the container? And why does `free` count caches as used memory
> > inside the container?
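> >
> > In the meantime, one way to get a more meaningful number seems to be to
> > monitor from the host and subtract the reclaimable page cache from the
> > cgroup's usage (whether to subtract total_cache or only
> > total_inactive_file is debatable), e.g.:
> >
> > host# C=/sys/fs/cgroup/memory/lxc/CONTAINER
> > host# usage=$(cat $C/memory.usage_in_bytes)
> > host# cache=$(awk '/^total_inactive_file /{print $2}' $C/memory.stat)
> > host# echo $(( (usage - cache) / 1024 / 1024 )) MiB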
> >
> > --
> > With best regards, Ivan Kurnosov
>
>
> --
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
>
--
With best regards, Ivan Kurnosov
-------------- one.txt (right after copying the files into the container) --------------
container# cat /proc/meminfo
MemTotal: 2097152 kB
MemFree: 36 kB
MemAvailable: 36 kB
Buffers: 0 kB
Cached: 12 kB
SwapCached: 0 kB
Active: 24 kB
Inactive: 64 kB
Active(anon): 16 kB
Inactive(anon): 60 kB
Active(file): 8 kB
Inactive(file): 4 kB
Unevictable: 0 kB
Mlocked: 29112 kB
SwapTotal: 524288 kB
SwapFree: 513980 kB
Dirty: 79564 kB
Writeback: 0 kB
AnonPages: 19891352 kB
Mapped: 4096016 kB
Shmem: 1521488 kB
Slab: 0 kB
SReclaimable: 0 kB
SUnreclaim: 0 kB
KernelStack: 111328 kB
PageTables: 525296 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 76478396 kB
Committed_AS: 48821700 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 14186496 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 863168 kB
DirectMap2M: 52568064 kB
DirectMap1G: 82837504 kB
host# cat /sys/fs/cgroup/memory/lxc/CONTAINER/memory.usage_in_bytes
2147471360
host# cat /sys/fs/cgroup/memory/lxc/CONTAINER/memory.stat
cache 4096
rss 77824
rss_huge 0
mapped_file 4096
dirty 0
writeback 0
swap 540672
pgpgin 6877
pgpgout 6857
pgfault 8312
pgmajfault 95
inactive_anon 61440
active_anon 16384
inactive_file 0
active_file 4096
unevictable 0
hierarchical_memory_limit 2147483648
hierarchical_memsw_limit 2684354560
total_cache 2107715584
total_rss 39424000
total_rss_huge 6291456
total_mapped_file 17145856
total_dirty 0
total_writeback 0
total_swap 10444800
total_pgpgin 1722046
total_pgpgout 1210617
total_pgfault 2085946
total_pgmajfault 1399
total_inactive_anon 29151232
total_active_anon 19386368
total_inactive_file 2026012672
total_active_file 72556544
total_unevictable 32768
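
(Reading this first snapshot: the page cache accounts for nearly all of the
reported usage, e.g. total_cache / usage_in_bytes = 2107715584 / 2147471360,
i.e. roughly 98%.)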
-------------- two.txt (after removing the files) --------------
container# cat /proc/meminfo
MemTotal: 2097152 kB
MemFree: 1922572 kB
MemAvailable: 1922572 kB
Buffers: 0 kB
Cached: 4 kB
SwapCached: 0 kB
Active: 20 kB
Inactive: 64 kB
Active(anon): 16 kB
Inactive(anon): 64 kB
Active(file): 4 kB
Inactive(file): 0 kB
Unevictable: 0 kB
Mlocked: 29112 kB
SwapTotal: 524288 kB
SwapFree: 513860 kB
Dirty: 4412 kB
Writeback: 0 kB
AnonPages: 18971936 kB
Mapped: 4089980 kB
Shmem: 1521992 kB
Slab: 0 kB
SReclaimable: 0 kB
SUnreclaim: 0 kB
KernelStack: 111152 kB
PageTables: 515284 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 76478396 kB
Committed_AS: 47856592 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 13834240 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 863168 kB
DirectMap2M: 52568064 kB
DirectMap1G: 82837504 kB
host# cat /sys/fs/cgroup/memory/lxc/CONTAINER/memory.usage_in_bytes
178917376
host# cat /sys/fs/cgroup/memory/lxc/CONTAINER/memory.stat
cache 4096
rss 81920
rss_huge 0
mapped_file 4096
dirty 0
writeback 0
swap 536576
pgpgin 6878
pgpgout 6857
pgfault 8312
pgmajfault 95
inactive_anon 65536
active_anon 16384
inactive_file 0
active_file 4096
unevictable 0
hierarchical_memory_limit 2147483648
hierarchical_memsw_limit 2684354560
total_cache 139726848
total_rss 39190528
total_rss_huge 6291456
total_mapped_file 18042880
total_dirty 0
total_writeback 0
total_swap 10674176
total_pgpgin 1723052
total_pgpgout 1692146
total_pgfault 2086960
total_pgmajfault 1433
total_inactive_anon 27209728
total_active_anon 21180416
total_inactive_file 56582144
total_active_file 73912320
total_unevictable 32768
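
(Comparing the two snapshots: the drop in usage_in_bytes,
2147471360 - 178917376 ≈ 1.83 GiB, is almost entirely explained by the drop
in total_cache, 2107715584 - 139726848 ≈ 1.83 GiB.)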