[lxc-users] LXD 2.13 - Containers using lots of swap despite having free RAM

Fajar A. Nugraha list at fajar.net
Tue Jun 6 09:40:42 UTC 2017


On Tue, Jun 6, 2017 at 4:29 PM, Ron Kelley <rkelleyrtp at gmail.com> wrote:

> (Similar to a reddit post: https://www.reddit.com/r/LXD/comments/53l7on/how_does_lxd_manage_swap_space).
>
> Ubuntu 16.04, LXD 2.13 running about 50 containers.  System has 8G RAM and
> 20G swap.  From what I can tell, the containers are using lots of swap
> despite having free memory.
>
>
> Top output from the host:
> -------------------------------
> top - 05:23:24 up 15 days,  4:25,  2 users,  load average: 0.29, 0.45, 0.62
> Tasks: 971 total,   1 running, 970 sleeping,   0 stopped,   0 zombie
> %Cpu(s):  8.9 us,  8.1 sy,  0.0 ni, 81.0 id,  0.7 wa,  0.0 hi,  1.2 si,  0.0 st
> KiB Mem :  8175076 total,   284892 free,  2199792 used,  5690392 buff/cache
> KiB Swap: 19737596 total, 15739612 free,  3997984 used.  3599856 avail Mem
> -------------------------------
>
>
> Top output from a container:
> -------------------------------
> top - 09:19:47 up 10 min,  0 users,  load average: 0.52, 0.61, 0.70
> Tasks:  17 total,   1 running,  16 sleeping,   0 stopped,   0 zombie
> %Cpu(s):  0.3 us,  2.2 sy,  0.0 ni, 96.5 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
> KiB Mem :   332800 total,   148212 free,    79524 used,   105064 buff/cache
> KiB Swap:   998400 total,   867472 free,   130928 used.   148212 avail Mem
> -------------------------------
>
>

Do you have a history of memory usage inside the container? It's actually
normal for Linux to keep some elements in cache (e.g. inode entries) while
pushing program memory out to swap. I'm guessing that's what happened
during "busy" times, and you're now looking at the non-busy times. Linux
won't automatically move swapped-out pages back into memory unless they
are accessed again.
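
If you want to check which processes are actually holding the swapped-out
pages, a rough, untested sketch like the one below (it just sums the
VmSwap field from /proc/<pid>/status; run it as root on the host so every
container's processes are visible) should do it:

-------------------------------
#!/usr/bin/env python3
# Sum the VmSwap field from /proc/<pid>/status for every visible process
# and print the biggest swap consumers. The kernel reports VmSwap in kB.
import glob

usage = []
for status in glob.glob('/proc/[0-9]*/status'):
    try:
        with open(status) as f:
            name, swap_kb = '?', 0
            for line in f:
                if line.startswith('Name:'):
                    name = line.split(None, 1)[1].strip()
                elif line.startswith('VmSwap:'):
                    swap_kb = int(line.split()[1])
        if swap_kb:
            pid = status.split('/')[2]
            usage.append((swap_kb, pid, name))
    except OSError:
        # The process exited between glob() and open(); ignore it.
        pass

for swap_kb, pid, name in sorted(usage, reverse=True)[:20]:
    print('%8d kB  pid %-7s %s' % (swap_kb, pid, name))
-------------------------------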

In the past I had to set vm.vfs_cache_pressure = 1000 to make Linux
release the inode cache from memory. This was especially important on
servers with lots of files. Nowadays I simply don't use swap anymore.
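
For reference, both vm.vfs_cache_pressure and the related vm.swappiness
knob live under /proc/sys/vm. A small sketch to print the current values
(the 1000 above is just what worked for me, not a general recommendation;
changing the values needs root):

-------------------------------
#!/usr/bin/env python3
# Print the current values of the VM tunables mentioned above.
# To change them persistently, put e.g. "vm.vfs_cache_pressure = 1000"
# in a file under /etc/sysctl.d/ and run "sysctl --system" as root.
for knob in ('swappiness', 'vfs_cache_pressure'):
    with open('/proc/sys/vm/' + knob) as f:
        print('vm.%s = %s' % (knob, f.read().strip()))
-------------------------------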

-- 
Fajar