<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On second thought, DON'T use SCST/LIO in a loopback configuration, or any other initiator-target configuration where both initiator and target are in-kernel on the same host (this includes NFS). This kind of setup can lead to memory-allocation deadlock. It should be fine for testing/migration purposes, or when you can guarantee plenty of free memory, but it's not currently recommended for production use.</div>
<div class="gmail_quote"><br></div><div class="gmail_quote">qemu-nbd + lxc's nbd config shouldn't have this problem since qemu-nbd runs in userspace, but it's dreadfully slow, even for simple migration purposes.</div>
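<div class="gmail_quote"><br></div><div class="gmail_quote">For reference, a one-off qemu-nbd setup looks roughly like this (the device node and image path below are examples, not taken from the thread):</div>

```shell
# Load the in-kernel NBD client; the data path itself stays in the
# userspace qemu-nbd process, which avoids the in-kernel deadlock.
modprobe nbd max_part=8

# Export the container image (example path) on /dev/nbd0.
qemu-nbd --connect=/dev/nbd0 /var/lib/lxc/c1/rootfs.img

# ... mount /dev/nbd0pN, migrate data, unmount ...

# Tear down when done.
qemu-nbd --disconnect /dev/nbd0
```

<div class="gmail_quote">Every read/write crosses the kernel/userspace boundary twice, which is where the slowness comes from.</div>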
<div class="gmail_quote"><br></div><div class="gmail_quote">Sorry for the added confusion. More comments inline.</div><div class="gmail_quote"><br></div><div class="gmail_quote">
On Fri, Jun 27, 2014 at 4:21 PM, Qiang Huang <span dir="ltr"><<a href="mailto:h.huangqiang@huawei.com" target="_blank">h.huangqiang@huawei.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">The major problem I met with loop devices is that they treat all IO as buffered IO,<br>
</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
so if the system crashes or loses power for any reason, the unflushed buffer cache can<br>
cause data corruption, or even crash the file system (the file system holding the image file).<br></blockquote><div><br></div><div>In SCST, you can set "nv_cache 0" in the device parameters to solve this problem.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Or how do you guys deal with this kind of<br>
</blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
problem?<br>
<div><br></div></blockquote><div><br></div><div><br></div><div>The original question was "set a limit on the disk size that a container root filesystem can use". ZFS (with the quota attribute set) or thin LVM (for those who don't use ZFS) is probably the best option currently, IMHO.</div>
<div><br></div><div>-- </div><div>Fajar</div></div></div></div>