[lxc-users] Strange freezes with btrfs backend

Fajar A. Nugraha list at fajar.net
Sat Dec 3 10:51:38 UTC 2016


On Sat, Dec 3, 2016 at 11:30 AM, Pierce Ng <pierce at samadhiweb.com> wrote:

> Hi all,
>
> I'm running LXD on a Ubuntu 16.04 VPS with ~1GB RAM. My setup uses a disk
> image file, running on the default ext4 base filesystem, as the btrfs
> backend. The server runs four containers, of which only one is on the high
> side of lightly loaded.
>
> I'm getting random freezes such that all containers err, although the host
> OS continues running. When I reboot, the server gets stuck at the boot
> prompt, requiring a manual fsck to get going.
>
> Anyone seeing this?
>
> Should I switch to ZFS? Is it sensible to run ZFS on a 1GB RAM VPS?
> Realistically, I'll be using ZFS in the same manner - via a disk image
> file. Is this a stupid idea?
>
>

1GB for 4 containers is probably overkill, depending on what you're trying
to achieve. A "random freeze" might be caused by an out-of-memory (OOM)
condition, where the host is too busy swapping. This is why I disable swap
on most of my systems: at least then, when the OOM killer kicks in, it
immediately kills processes instead of keeping the disk busy.
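
For reference, disabling swap on Ubuntu usually amounts to something like
this (the sed edit assumes a conventional swap entry in /etc/fstab; adjust
to taste):
# swapoff -a                                  # turn swap off immediately
# sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab    # comment out swap entries so it stays off after reboot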

That being said, 1GB with zfs should work. I have several variants running
on EC2 (with swap disabled):
- one with an xfs root (or whatever rootfs the AMI comes with), plus a
separate zfs EBS volume for lxd (see the sketch after this list)
- one with a zfs root (it takes some effort to create, but I now have it as
an AMI). One of the datasets (on the same root pool) is managed by lxd.
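
A rough sketch of the first variant (the device name /dev/xvdf and the pool
name "lxd" are just examples; adjust them to your EBS attachment and LXD
version):
# zpool create lxd /dev/xvdf    # dedicated pool on the extra EBS volume
# lxd init                      # pick the zfs backend and point it at the existing "lxd" pool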

You'd need to set the ZFS ARC to be as small as possible:
# cat /etc/modprobe.d/zfs-arc-max.conf
options zfs zfs_arc_max=67108865

That value is 64 MiB plus one byte; anything lower and the setting is
ignored. After that, it's simply a matter of checking whether your system
has enough memory to run the containers (try "htop", and check syslog for
the OOM killer).
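
To confirm the limit took effect and to spot OOM kills, something like this
should do (the arcstats path is the usual ZFS-on-Linux location):
# grep c_max /proc/spl/kstat/zfs/arcstats    # ARC limit actually in effect after a reboot/module reload
# grep -i 'killed process' /var/log/syslog   # any OOM killer activity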

-- 
Fajar

