[lxc-users] LXD move, how to reduce downtime without live migration

Fajar A. Nugraha list at fajar.net
Fri Apr 28 01:57:58 UTC 2017


On Thu, Apr 27, 2017 at 9:09 PM, Spike <spike at drba.org> wrote:

> Tamas,
>
> are you actually doing this? any gotchas?
>
> I'm trying to set up exactly the same, have a live node and a backup node,
> both running zfs. I have the same containers, with the same mac, at
> destination, however I'm unclear that just by copying over the rootfs
> dataset from zfs it will be enough to make it work.
>

Yes, that is enough, assuming your container ONLY stores data in the default
rootfs (i.e. no additional disks/paths were added manually to the container).

If you also want to replicate the old MAC address (as well as custom configs,
like physical NIC passthrough), also save the container config: "lxc config
show container_name > container_backup.lxc".
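A small loop makes it easy to save the config for every container at once. A sketch, not part of my setup: BACKUP_DIR is an assumption, and "lxc list -c n --format csv" just prints one container name per line.

```shell
#!/bin/sh
# Sketch: dump each container's config alongside the zfs replication.
# BACKUP_DIR is an assumption -- point it wherever you keep backups.
BACKUP_DIR=${BACKUP_DIR:-/var/backups/lxd-configs}
mkdir -p "$BACKUP_DIR"

# "lxc list -c n --format csv" prints one container name per line.
for c in $(lxc list -c n --format csv); do
        lxc config show "$c" > "$BACKUP_DIR/$c.lxc"
done
```

On the destination, once the container exists, "lxc config edit container_name < container_backup.lxc" puts the saved config back.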


>
> Has anybody done this before? I think somebody (maybe Fajar) in the past
> also mentioned keeping /var/lib/lxd on zfs and replicating that too which
> makes a lot of sense.
>
>

FWIW, here's more info about my setup

(1) relevant part of my sanoid.conf on the container host -> main purpose:
hourly snapshots, so I can roll back easily. The sanoid cron job runs every hour
###
[rpool/ROOT/ubuntu]
        use_template = rootfs
        recursive = no

[data/lib/lxd]
        use_template = container

[data/lxd/containers]
        use_template = container
        recursive = yes
        process_children_only = yes

[template_rootfs]
        hourly = 0
        daily = 7
        monthly = 1
        yearly = 0
        autosnap = yes
        autoprune = yes

[template_container]
        hourly = 36
        daily = 7
        monthly = 1
        yearly = 0
        autosnap = yes
        autoprune = yes
###
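For reference, the hourly run is just a cron entry along these lines (a sketch; the sanoid path is an assumption and depends on how you installed it):

```shell
# /etc/cron.d/sanoid (sketch -- adjust the path to your sanoid install)
# Take and prune snapshots every hour, per the templates above.
0 * * * * root /usr/local/sbin/sanoid --cron
```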


(2) the sync-to-backup-host job -> syncoid, run daily on the backup host.
'backup-cold' is a zfs pool backed by AWS SC1 (cheap, cold HDD with lower
throughput). Note that I DON'T replicate the host's rootfs
(rpool/ROOT/ubuntu), as I can quickly restore a fresh one from my custom
AMI if needed.

###
echo $(date) syncoid start
## generic container backup
# host list
for h in container_host1 container_host2 container_hostN; do
        echo $(date) processing $h
        zfs create -p backup-cold/$h/lxd
        syncoid syncoid@$h:data/lib/lxd backup-cold/$h/lxd/lib
        syncoid -r syncoid@$h:data/lxd/containers backup-cold/$h/lxd/containers
done
echo $(date) syncoid end
###
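For completeness, restoring is essentially syncoid in the other direction. A sketch only, untested here: it assumes the rebuilt container host keeps the same pool layout as above, that you have root (or delegated zfs permissions) on it, and that LXD is stopped on the target while the datasets are received.

```shell
#!/bin/sh
# Sketch: push the latest replica of one host's data back from the backup
# host. Host name and dataset layout follow the backup job above.
# --no-sync-snap replicates the existing snapshots instead of creating
# a fresh sync snapshot on the backup copy.
h=container_host1
syncoid --no-sync-snap backup-cold/$h/lxd/lib root@$h:data/lib/lxd
syncoid -r --no-sync-snap backup-cold/$h/lxd/containers root@$h:data/lxd/containers
```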

(3) relevant part of sanoid.conf on the backup host -> main purpose: prune
old snapshots (sent from the container host)

###
[backup-cold]
        use_template = backup
        recursive = yes
        process_children_only = yes

[template_backup]
        hourly = 0
        daily = 45
        monthly = 6
        yearly = 0
        autoprune = yes
        autosnap = no
###

-- 
Fajar