[lxc-users] Setting a limit on the disk size that a container can use

Fajar A. Nugraha list at fajar.net
Fri Jun 27 22:36:18 UTC 2014


On second thought, DON'T use scst/LIO in a loopback configuration, or any
other initiator-target configuration on the same host where both the
initiator and the target are in-kernel (this includes nfs). This kind of
setup can lead to a memory allocation deadlock. It should be fine for
testing/migration purposes, or when you can guarantee plenty of available
memory, but it is currently not recommended for production use.

qemu-nbd + lxc's nbd config shouldn't have this problem since qemu-nbd is
in userspace, but it's dreadfully slow, even for simple migration purposes.
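
For reference, a rough sketch of what the qemu-nbd side of that looks like
(the image path and nbd device number here are only examples, not from the
original setup):

  # export an image file as a block device via the nbd kernel module
  modprobe nbd max_part=8
  qemu-nbd --connect=/dev/nbd0 /var/lib/lxc/c1/rootfs.img

  # mount it, copy the data over, then tear everything down
  mount /dev/nbd0 /mnt/c1-rootfs
  # ... copy/migrate the container rootfs here ...
  umount /mnt/c1-rootfs
  qemu-nbd --disconnect /dev/nbd0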

Sorry for the added confusion. More comments inline.

On Fri, Jun 27, 2014 at 4:21 PM, Qiang Huang <h.huangqiang at huawei.com>
wrote:

> The major problem I met from loop device is that it takes all IO as
> buffered IO. If the system crashes or loses power for any reason, an
> unflushed buffer cache can cause corrupted data, or even a file system
> crash (the file system for the image file).

In scst, you can set "nv_cache 0" as a device parameter to solve this problem.
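
For what it's worth, in an scst.conf fileio device definition that would look
roughly like the following (the device name and image path are placeholders,
not from the original setup):

  HANDLER vdisk_fileio {
      DEVICE container1_disk {
          filename /srv/images/container1.img
          nv_cache 0
      }
  }

With nv_cache 0, scst should stop advertising the backing cache as
non-volatile, so sync/flush requests from the initiator get honored instead
of being absorbed by the page cache.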


> Or how do you guys deal with this kind of problem?

The original question was how to set a limit on the disk size that a
container root filesystem can use. zfs (with the quota attribute set) or
thin LVM (for those who don't use zfs) should currently be the best options,
IMHO.
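
To make that concrete, something along these lines (pool/VG names and sizes
are only examples):

  # zfs: cap the container's dataset at 10G
  zfs create -o quota=10G tank/lxc/container1
  # or, for an existing dataset:
  zfs set quota=10G tank/lxc/container1

  # thin LVM: create a thin pool, then a thin volume with a 10G virtual size
  lvcreate --size 100G --thinpool lxc_pool vg0
  lvcreate --virtualsize 10G --thin --name container1 vg0/lxc_pool
  mkfs.ext4 /dev/vg0/container1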

-- 
Fajar