[lxc-users] LXD move, how to reduce downtime without live migration

Spike spike at drba.org
Sat Apr 29 20:15:25 UTC 2017


Thank you for sharing, Fajar; this is very helpful. A couple of questions:
1. How do you ensure data consistency? I don't think it's safe to take a
snapshot of a MySQL container with mysql running, for example. Other backup
solutions I've used in the past, like Bacula, let you run pre-backup jobs
to, say, make the db read-only. Are you doing anything like that with
sanoid?
2. Related: if you move /var/lib/lxd, is it safe to snapshot with LXD
running? No consistency issues?
3. If you move /var/lib/lxd, which (and maybe I'm wrong) means you're also
replicating all containers' configs, how are you testing your backup, since
you would not be able to spin up a container without causing a conflict?
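[One common answer to question 1, not from this thread: hold a global read
lock in the same MySQL session that triggers the snapshot, so the lock is
still in force when the snapshot is taken. A hedged sketch; the function
name and the MYSQL_CMD/ZFS_CMD parameters are illustrative, and with InnoDB
a ZFS snapshot is crash-consistent even without the lock (the engine
replays its log on restart):]

```shell
# Sketch: take a ZFS snapshot while MySQL holds a global read lock.
# FLUSH TABLES WITH READ LOCK only protects the data while the SAME
# session stays open, so SYSTEM runs zfs from inside that session.
# MYSQL_CMD/ZFS_CMD are illustrative parameters; in real use they
# would simply be mysql and zfs.
snapshot_with_lock() {
    ds=$1
    snap=$2
    ${MYSQL_CMD:-mysql} -e "FLUSH TABLES WITH READ LOCK; SYSTEM ${ZFS_CMD:-zfs} snapshot ${ds}@${snap}; UNLOCK TABLES;"
}
```

[e.g. `snapshot_with_lock data/lxd/containers/mydb pre-backup`, run on the
host and connecting to the container's MySQL over TCP, so that SYSTEM runs
zfs where the pool actually lives.]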

thanks,

Spike

On Thu, Apr 27, 2017 at 6:58 PM Fajar A. Nugraha <list at fajar.net> wrote:

> On Thu, Apr 27, 2017 at 9:09 PM, Spike <spike at drba.org> wrote:
>
>> Tamas,
>>
>> are you actually doing this? any gotchas?
>>
>> I'm trying to set up exactly the same thing: a live node and a backup
>> node, both running zfs. I have the same containers, with the same MACs,
>> at the destination, but I'm not sure that just copying over the rootfs
>> dataset from zfs will be enough to make it work.
>>
>
> Yes, this is enough, assuming your container ONLY stores data in the
> default rootfs (i.e. no additional disk/path added manually to the
> container).
>
> If you also want to replicate the old MAC address (as well as custom
> configs, like physical NIC passthrough), also store the output of "lxc
> config show container_name > container_backup.lxc".
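[A sketch of the round trip implied here; the restore side relies on `lxc
config edit` accepting YAML on stdin when not attached to a terminal, and
the container name is illustrative:]

```shell
# Back up and restore a container's config, per the command above.
# "lxc config edit" reads YAML from stdin when stdin is not a TTY.
backup_config() {
    lxc config show "$1" > "$1-backup.lxc"
}
restore_config() {
    lxc config edit "$1" < "$1-backup.lxc"
}
```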
>
>
>>
>> Has anybody done this before? I think somebody (maybe Fajar) in the past
>> also mentioned keeping /var/lib/lxd on zfs and replicating that too which
>> makes a lot of sense.
>>
>>
>
> FWIW, here's more info about my setup
>
> (1) relevant part of my sanoid.conf on the container host -> main purpose:
> hourly snapshots, to be able to roll back easily. The sanoid cron job runs
> every hour.
> ###
> [rpool/ROOT/ubuntu]
>         use_template = rootfs
>         recursive = no
>
> [data/lib/lxd]
>         use_template = container
>
> [data/lxd/containers]
>         use_template = container
>         recursive = yes
>         process_children_only = yes
>
> [template_rootfs]
>         hourly = 0
>         daily = 7
>         monthly = 1
>         yearly = 0
>         autosnap = yes
>         autoprune = yes
>
> [template_container]
>         hourly = 36
>         daily = 7
>         monthly = 1
>         yearly = 0
>         autosnap = yes
>         autoprune = yes
> ###
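[The "roll back easily" part above could look like this; the container name
is illustrative, and the snapshot name follows sanoid's usual
autosnap_<timestamp>_hourly convention:]

```shell
# Sketch: roll a container back to one of sanoid's hourly snapshots.
# Stop the container first so the rootfs isn't live during the rollback.
rollback_container() {
    ct=$1
    snap=$2
    lxc stop "$ct"
    zfs rollback -r "data/lxd/containers/$ct@$snap"
    lxc start "$ct"
}
```

[e.g. `rollback_container mydb autosnap_2017-04-29_19:00:01_hourly`]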
>
>
> (2) sync-to-backup-host job -> syncoid, runs daily on the backup host.
> 'backup-cold' is a zfs pool backed by AWS SC1 (cheap, cold HDD with lower
> throughput). Note that I DON'T replicate the host's rootfs
> (rpool/ROOT/ubuntu), as I can quickly restore a fresh one from my custom
> AMI if needed.
>
> ###
> echo $(date) syncoid start
> ## generic container backup
> # host list
> for h in container_host1 container_host2 container_hostN; do
>         echo $(date) processing $h
>         zfs create -p backup-cold/$h/lxd
>         syncoid syncoid@$h:data/lib/lxd backup-cold/$h/lxd/lib
>         syncoid -r syncoid@$h:data/lxd/containers backup-cold/$h/lxd/containers
> done
> echo $(date) syncoid end
> ###
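[The restore direction isn't shown; presumably it's the reverse transfer,
something like the sketch below. The dataset layout matches the script
above, and the '-restored' suffix is illustrative, chosen so the restored
dataset can't clash with the live container (Spike's question 3):]

```shell
# Sketch: send one container's dataset back from the backup pool,
# under a different name so it cannot collide with the live copy.
restore_container() {
    host=$1
    ct=$2
    syncoid "backup-cold/$host/lxd/containers/$ct" \
        "syncoid@$host:data/lxd/containers/${ct}-restored"
}
```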
>
> (3) relevant part of sanoid.conf in the backup host -> main purpose: prune
> old snapshots (sent from container host)
>
> ###
> [backup-cold]
>         use_template = backup
>         recursive = yes
>         process_children_only = yes
>
> [template_backup]
>         hourly = 0
>         daily = 45
>         monthly = 6
>         yearly = 0
>         autoprune = yes
>         autosnap = no
> ###
>
> --
> Fajar
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users

