[lxc-users] LXD 2.13 - Containers using lots of swap despite having free RAM

Ron Kelley rkelleyrtp at gmail.com
Tue Jun 6 09:29:01 UTC 2017


(Similar to a Reddit post: https://www.reddit.com/r/LXD/comments/53l7on/how_does_lxd_manage_swap_space).

Ubuntu 16.04 with LXD 2.13, running about 50 containers.  The system has 8G of RAM and 20G of swap.  From what I can tell, the containers are using lots of swap despite having free memory.
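
I am estimating per-container swap from the cgroup counters, roughly like this (this assumes the cgroup v1 layout, with containers under /sys/fs/cgroup/memory/lxc/<name>; on cgroup v1, memory.memsw.usage_in_bytes counts RAM plus swap, so subtracting memory.usage_in_bytes leaves just swap):
-------------------------------
#!/bin/sh
# Rough per-container swap usage on cgroup v1.
for c in /sys/fs/cgroup/memory/lxc/*/; do
    mem=$(cat "${c}memory.usage_in_bytes")
    memsw=$(cat "${c}memory.memsw.usage_in_bytes")
    printf '%s: %d KiB swap\n' "$(basename "$c")" $(( (memsw - mem) / 1024 ))
done
-------------------------------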


Top output from the host:
-------------------------------
top - 05:23:24 up 15 days,  4:25,  2 users,  load average: 0.29, 0.45, 0.62
Tasks: 971 total,   1 running, 970 sleeping,   0 stopped,   0 zombie
%Cpu(s):  8.9 us,  8.1 sy,  0.0 ni, 81.0 id,  0.7 wa,  0.0 hi,  1.2 si,  0.0 st
KiB Mem :  8175076 total,   284892 free,  2199792 used,  5690392 buff/cache
KiB Swap: 19737596 total, 15739612 free,  3997984 used.  3599856 avail Mem
-------------------------------


Top output from a container:
-------------------------------
top - 09:19:47 up 10 min,  0 users,  load average: 0.52, 0.61, 0.70
Tasks:  17 total,   1 running,  16 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  2.2 sy,  0.0 ni, 96.5 id,  0.0 wa,  0.0 hi,  1.0 si,  0.0 st
KiB Mem :   332800 total,   148212 free,    79524 used,   105064 buff/cache
KiB Swap:   998400 total,   867472 free,   130928 used.   148212 avail Mem
-------------------------------


The profile associated with the container:
-------------------------------
root@Container-001:/usr/local/tmp# lxc profile show WP_Default
config:
  limits.cpu: "2"
  limits.memory: 325MB
  limits.memory.swap: "true"
  raw.lxc: lxc.cgroup.memory.memsw.limit_in_bytes = 1300M
description: ""
devices:
  eth0:
    name: eth0
    nictype: macvlan
    parent: eth1.2005
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: WP_Default
-------------------------------
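
If I read the limits correctly, limits.memory caps RAM at 325MB while the raw.lxc memsw line caps RAM + swap at 1300M, leaving each container up to roughly 975MB of swap headroom -- which matches the 998400 KiB swap total that top reports inside the container.  To double-check what actually landed in the cgroup (cgroup v1 paths assumed; the container name is just an example):
-------------------------------
# RAM cap (should reflect limits.memory = 325MB):
cat /sys/fs/cgroup/memory/lxc/Container-001/memory.limit_in_bytes
# RAM + swap cap (should reflect the raw.lxc override of 1300M):
cat /sys/fs/cgroup/memory/lxc/Container-001/memory.memsw.limit_in_bytes
-------------------------------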


As it stands now, the host is using 4G of swap, and the kswapd0 kernel thread is using lots of CPU.  As a stopgap, I have a cron job that clears the VM page cache every 5 minutes (/bin/echo 1 > /proc/sys/vm/drop_caches).
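
For reference, the cron entry itself is just this (written here in /etc/cron.d syntax; the exact crontab location is incidental):
-------------------------------
# /etc/cron.d/drop-caches: free the page cache every 5 minutes
*/5 * * * * root /bin/echo 1 > /proc/sys/vm/drop_caches
-------------------------------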

Any pointers?

