[lxc-users] LXD 2.13 - Containers using lots of swap despite having free RAM
T.C 吳天健
tcwu2005 at gmail.com
Wed Jun 7 09:09:16 UTC 2017
If you remove the container's memory limit, is the result the same? The
operating system tends to avoid using too much memory at once so that it
doesn't trigger out-of-memory killing later.
Removing a container's memory limit is a bit like over-committing, and it
probably encourages the operating system to consume more RAM (and less swap).
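For reference, a minimal sketch of how that test could look with the LXD
client; the container name "c1" below is just a placeholder:
-----------------------------
# Show which memory limits are currently in effect for a container "c1"
lxc config show c1 --expanded | grep -i memory

# Drop the memory limit entirely; LXD normally applies this live, but a
# restart gives a clean baseline
lxc config unset c1 limits.memory
lxc restart c1
-----------------------------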
2017-06-06 17:56 GMT+08:00 Ron Kelley <rkelleyrtp at gmail.com>:
> I don’t have a way to track the memory usage of a container yet, but this
> issue seems very consistent among the containers. In fact, all containers
> have a higher than expected swap usage.
>
>
> As a quick test, I modified the container profile to see if removing the
> swap and memory limits would help (removed the "lxc.cgroup.memory.memsw.limit_in_bytes"
> setting and changed "limits.memory.swap=false"). The odd thing now is that the
> memory available to the container seems to be capped at the
> limits.memory setting and does not include swap:
>
> Top output from container
> -----------------------------
> top - 09:48:50 up 9 min, 0 users, load average: 0.62, 0.67, 0.72
> Tasks: 20 total, 1 running, 19 sleeping, 0 stopped, 0 zombie
> %Cpu(s): 2.7 us, 3.8 sy, 0.0 ni, 93.2 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
> KiB Mem : 332800 total, 52252 free, 99324 used, 181224 buff/cache
> KiB Swap: 19737596 total, 19737596 free, 0 used. 52252 avail Mem
> -----------------------------
>
> Notice the "52252 avail Mem" value. It seems the container is capped at 325MB
> despite having access to 19G of swap.
>
> Confusing…
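As a side note, here is a rough sketch of how those two knobs are usually
expressed on the LXD side and how to check what the kernel actually enforces.
"default" and "c1" are placeholder profile/container names, the 1G value is
just an example, and the cgroup path assumes cgroup v1 with swap accounting
enabled (it can differ between LXC/LXD versions):
-----------------------------
# Allow or forbid swapping of the container's memory via the LXD key
lxc profile set default limits.memory.swap false

# The raw cgroup knob mentioned above can be passed through raw.lxc instead
lxc profile set default raw.lxc "lxc.cgroup.memory.memsw.limit_in_bytes = 1G"

# Check the limits the kernel is enforcing for a running container "c1"
cat /sys/fs/cgroup/memory/lxc/c1/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/lxc/c1/memory.memsw.limit_in_bytes
-----------------------------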
>
>
>
>
>
> On Jun 6, 2017, at 5:40 AM, Fajar A. Nugraha <list at fajar.net> wrote:
>
> On Tue, Jun 6, 2017 at 4:29 PM, Ron Kelley <rkelleyrtp at gmail.com> wrote:
>
>> (Similar to a reddit post: https://www.reddit.com/r/LXD/comments/53l7on/how_does_lxd_manage_swap_space).
>>
>> Ubuntu 16.04, LXD 2.13 running about 50 containers. The system has 8G RAM
>> and 20G swap. From what I can tell, the containers are using lots of swap
>> despite having free memory.
>>
>>
>> Top output from the host:
>> -------------------------------
>> top - 05:23:24 up 15 days, 4:25, 2 users, load average: 0.29, 0.45, 0.62
>> Tasks: 971 total, 1 running, 970 sleeping, 0 stopped, 0 zombie
>> %Cpu(s): 8.9 us, 8.1 sy, 0.0 ni, 81.0 id, 0.7 wa, 0.0 hi, 1.2 si, 0.0 st
>> KiB Mem : 8175076 total, 284892 free, 2199792 used, 5690392 buff/cache
>> KiB Swap: 19737596 total, 15739612 free, 3997984 used. 3599856 avail Mem
>> -------------------------------
>>
>>
>> Top output from a container:
>> -------------------------------
>> top - 09:19:47 up 10 min, 0 users, load average: 0.52, 0.61, 0.70
>> Tasks: 17 total, 1 running, 16 sleeping, 0 stopped, 0 zombie
>> %Cpu(s): 0.3 us, 2.2 sy, 0.0 ni, 96.5 id, 0.0 wa, 0.0 hi, 1.0 si, 0.0 st
>> KiB Mem : 332800 total, 148212 free, 79524 used, 105064 buff/cache
>> KiB Swap: 998400 total, 867472 free, 130928 used. 148212 avail Mem
>> -------------------------------
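One quick way to see which processes actually own the swapped-out pages is to
walk /proc on the host; this is only a rough accounting and needs root to see
every process:
-----------------------------
# Per-process swap usage, largest consumers first (run on the host as root)
grep VmSwap /proc/[0-9]*/status | sort -k2 -nr | head -20
-----------------------------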
>>
>>
>
> Do you have a history of memory usage inside the container? It's actually
> normal for Linux to keep some elements in cache (e.g. inode entries) while
> forcing program memory out to swap. I'm guessing that's what happened
> during "busy" times, and now you're seeing the non-busy times. Linux won't
> automatically move swapped-out pages back to memory unless they're accessed again.
>
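If you ever do want to pull swapped-out pages back into RAM once enough memory
is free, the usual blunt approach on the host is to cycle swap off and on.
This is only an illustration, and it will thrash or fail if the swapped data
does not fit in the free RAM:
-----------------------------
# Host-side: force everything out of swap, then re-enable it
sudo swapoff -a && sudo swapon -a
-----------------------------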
> In the past I had to set vm.vfs_cache_pressure = 1000 to make Linux
> release the inode cache from memory. This was especially important on servers
> with lots of files. Nowadays I simply don't use swap anymore.
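For completeness, that tuning is normally applied with sysctl; the value 1000
is Fajar's figure for inode-heavy servers rather than a general
recommendation, and the drop-in file name below is arbitrary:
-----------------------------
# Apply immediately on the host
sudo sysctl -w vm.vfs_cache_pressure=1000

# Persist across reboots
echo "vm.vfs_cache_pressure = 1000" | sudo tee /etc/sysctl.d/60-vfs-cache-pressure.conf
-----------------------------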
>
> --
> Fajar