<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Dec 5, 2015 at 7:10 PM, John Lewis <span dir="ltr"><<a href="mailto:oflameo2@gmail.com" target="_blank">oflameo2@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<div>What I do is store my containers in a
disk image with a filesystem, usually ext4. I store the image in
the LXC server's /opt. I mount the LXC's to /srv before starting
them because I haven't figured out how to run them directly out of
the disk images yet. I back up the disk images with rsnapshot with
a sparse option. It saves a lot of time because there is only one
file to backup instead of hundreds for each LXC.<br>
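
For reference, that kind of rsnapshot setup boils down to something like
the following (the paths and retention values are only examples, not your
actual config):

    # /etc/rsnapshot.conf -- fields must be separated by tabs
    snapshot_root   /backup/rsnapshot/
    retain          daily   7
    # add --sparse so holes in the ext4 image files stay sparse on the
    # backup side (the other flags are rsnapshot's defaults)
    rsync_long_args --delete --numeric-ids --relative --delete-excluded --sparse
    # directory on the LXC host that holds the container disk images
    backup          /opt/   lxc-images/
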
... and that is one of the reasons more and more people use zfs :)

tar -> basically can't do incremental snapshots
rsync on rootfs -> very long incremental backup time if you have lots of
files
rsync on disk image -> still needs to read the whole image, checksum every
"block", and compare it (source vs destination), so it is still relatively
slow, particularly if your image is big, even when only a single byte has
changed.

Also, with those three you need to shut down the container to get a
consistent backup (or at least "lxc-freeze" it).

zfs snapshot + send/receive -> should be much faster than any of the above
methods for incremental backups, since zfs already knows what has changed
between snapshots. If you only have a small amount of changed data between
snapshots, the incremental send/receive will be very fast. Plus, in most
scenarios, there is no need to shut down or stop the container.
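
Roughly, that workflow looks like this (the pool, dataset, and backup host
names are just placeholders for whatever you use):

    # take point-in-time snapshots of the container's dataset
    # (the container can keep running)
    zfs snapshot tank/lxc/web@2015-12-05
    zfs snapshot tank/lxc/web@2015-12-06

    # first run: send the full snapshot to the backup pool
    zfs send tank/lxc/web@2015-12-05 | ssh backuphost zfs receive backup/lxc/web

    # later runs: send only the delta between two snapshots
    zfs send -i tank/lxc/web@2015-12-05 tank/lxc/web@2015-12-06 \
        | ssh backuphost zfs receive backup/lxc/web

Restoring is then just a matter of rolling back to (or cloning) one of the
received snapshots on the backup side.
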
> To restore, I mount the disk image and rsync the target file back to the
> original container, or copy the whole container disk image up over the
> one that wasn't in the state I needed it to be in. To back up databases,
> you need to make sure you get a database dump before the backup. The way
> I like to do it is with a remote ssh command, dumping the database over
> an ssh socket from the backup machine: I copy the dump command up using
> standard input and copy the database dump back down using standard
> output. Keeping the database files on a separate image file is helpful
> for reducing the size of backups, but not required.

That's the "normal", commonly recommended method for databases. Safe, but
slow, in particular if you have a large db (e.g. > 10GB).

The "quick-and-relatively-safe" way is to use snapshots (e.g. the zfs
scenario I wrote above). Most modern databases can survive an unclean
shutdown (like what happens when the server crashes or you experience a
power failure), so as long as all the necessary files (usually the data
files and the journal) are snapshotted at the same time, you should be
able to recover using the snapshot.

IIRC btrfs should also support snapshots and incremental send/receive,
but I haven't tested it personally.