[lxc-users] Best backing store for 500 containers

Fajar A. Nugraha list at fajar.net
Tue Jun 30 06:03:40 UTC 2015


On Tue, Jun 30, 2015 at 7:16 AM, Federico Alves <venefax at gmail.com> wrote:
> I need to create 500 identical containers, but after the first one, I don't
> want to repeat the same file 500 times. The disk is formatted ext4. What
> should be the best type of format or partition that would be 100% sparse,
> i.e., it would never repeat the same information.

That's not the definition of sparse: https://en.wikipedia.org/wiki/Sparse_file

If you want to "create a container one time and clone it for the other
499", then you can create container_1 using a snapshot-capable
backing store (e.g. zfs), then run "lxc-clone -s container_1
container_2". This creates container_2 using the snapshot/clone
feature of the storage (this works on zfs, and should work on btrfs as
well), so the only additional space used is for changed files/blocks
(e.g. the container config, /etc/hosts, and so on). Note that as the
containers get used, the changed files will grow (e.g. logs, database
files), and those changed files will use additional space.
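
A rough sketch of that workflow (assumes a zfs pool named "lxc" used as
the zfsroot and the download template; adjust pool name, template and
container names to your setup):

  # create the first container on a zfs-backed store
  lxc-create -n container_1 -B zfs --zfsroot=lxc -t download -- \
      -d ubuntu -r trusty -a amd64

  # snapshot-clone it 499 times; each clone only stores changed blocks
  for i in $(seq 2 500); do
      lxc-clone -s container_1 container_$i
  done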

See also "man lxc-clone":
Creates a new container as a clone of an existing container. Two types
of clones are supported: copy and snapshot. A copy clone copies the
root filesystem from the original container to the new. A snapshot
clone uses the backing store's snapshot functionality to create a
very small copy-on-write snapshot of the original container.
Snapshot clones require the new container backing store to support
snapshotting. Currently this includes only aufs, btrfs, lvm, overlayfs
and zfs. LVM devices do not support snapshots of snapshots.
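
In command form, the difference is just the -s flag (hypothetical
container names):

  lxc-clone container_1 container_copy      # copy clone: full rootfs copy
  lxc-clone -s container_1 container_snap   # snapshot clone: copy-on-write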


If you REALLY want a system that "would never repeat the same
information", then you need dedup-capable storage. zfs can do that,
but it comes with high overhead (e.g. much higher memory requirements
than normal, plus the need for a fast L2ARC), and should NOT be used
unless you REALLY know what you're doing.
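
For reference, dedup is a per-dataset zfs property (sketch only,
assuming a pool/dataset named "lxc" backing the containers; not a
recommendation):

  # enable deduplication on the dataset holding the containers
  zfs set dedup=on lxc

  # watch the dedup ratio and the dedup table (DDT) statistics;
  # the DDT has to fit in RAM/L2ARC to keep writes usable
  zpool get dedupratio lxc
  zdb -DD lxc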

-- 
Fajar

