[lxc-users] zfs disk usage for published lxd images

Fajar A. Nugraha list at fajar.net
Mon May 16 09:55:50 UTC 2016


On Mon, May 16, 2016 at 3:38 PM, Brian Candler <b.candler at pobox.com> wrote:

> root@vtp:~# lxc launch ubuntu:16.04 base1
> Creating base1
> Retrieving image: 100%
> Starting base1
> root@vtp:~# zpool list
> NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> lxd     77G   644M  76.4G         -     0%     0%  1.00x  ONLINE  -
> root@vtp:~# lxc launch ubuntu:16.04 base2
> Creating base2
> Starting base2
> root@vtp:~# lxc launch ubuntu:16.04 base3
> Creating base3
> Starting base3
> root@vtp:~# lxc exec base1 /bin/sh -- -c 'echo hello >/usr/test.txt'
> root@vtp:~# lxc stop base1
> root@vtp:~# zpool list
> NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> lxd     77G   655M  76.4G         -     0%     0%  1.00x  ONLINE  -
>
> So disk space usage is about 645MB for the image, and small change for the
> instances launched from it. Now I want to clone further containers from
> base1, so I publish it:
>
>

Did you know you can set compression on the zfs side?

zfs set compression=lz4 lxd

... would save lots of space with negligible CPU cost.
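
A quick way to see what it actually saves is the compressratio property.
Rough sketch, assuming your pool is called "lxd" as in the transcript above:

# enable lz4 on the pool, then check the setting and the achieved ratio
zfs set compression=lz4 lxd
zfs get compression,compressratio lxd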



>
> Now, what I was hoping for was that the named image (clonemaster) would be
> a snapshot derived directly from the parent, so that it would also share
> disk space. What I'm actually trying to achieve is a workflow like this:
>
> - launch (say) 10 initial master containers
> - customise those 10 containers in different ways (e.g. install different
> software packages in each one)
> - launch multiple instances from each of those master containers
>
> This is for a training lab. The whole lot will then be packaged up and
> distributed as a single VM. It would be hugely helpful if the initial zfs
> usage came to around 650MB not 6.5GB.
>


Are you using the published images on the same lxd instance? If so, you can
use "lxc copy" on a powered-off container. It should correctly use a zfs
clone underneath. You can also copy a copy.
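
Something like this, roughly (the container names are just placeholders; the
zfs list at the end should show each copy's origin and a small USED figure,
confirming they are clones):

# copying a stopped container on a zfs backend should create a clone
lxc stop base1
lxc copy base1 master1
# a copy of a copy works the same way
lxc copy master1 student1
# verify the clone relationship and the shared space
zfs list -r -o name,used,origin lxd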

Add "compression=lz4" on top of that and you'll probably end up with around
400MB of usage. If you want even more savings, use "compression=gzip". Note
that compression only affects newly written data, so the easiest approach is
to start from scratch: create the pool, enable compression, then configure
lxd.
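
For the start-from-scratch route, something along these lines (the device is
a placeholder, and the lxd side depends on your version; lxd init can be
pointed at an existing pool):

# create the pool and enable compression before lxd writes anything to it
zpool create lxd /dev/sdX
zfs set compression=lz4 lxd
# then tell lxd to use the existing "lxd" pool when it asks about zfs storage
lxd init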



> The other option I can think of is zfs dedupe. The finished target system
> won't have the resources to do dedupe continuously. However I could turn on
> dedupe during the cloning, do the cloning, and then turn it back off again
> (*)
>
>
DO NOT USE DEDUPE. EVER.
* there are a few exceptions, but if you still need to ask, then dedupe is
not for you.
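
If you ever want to know whether your data would even be worth it, zdb can
simulate dedup on an existing pool without enabling anything (against the
pool from your transcript):

# prints a simulated dedup table histogram and the estimated dedup ratio
zdb -S lxd

If the projected ratio comes out close to 1.00x, that settles it.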

-- 
Fajar