[lxc-devel] [PATCH] Add remove_snapshots_entry() (rebased - v2)
Serge Hallyn
serge.hallyn at ubuntu.com
Mon Sep 14 20:31:59 UTC 2015
Quoting Christian Brauner (christianvanbrauner at gmail.com):
> On Mon, Sep 14, 2015 at 07:27:06PM +0000, Serge Hallyn wrote:
> > Quoting Christian Brauner (christianvanbrauner at gmail.com):
> > > On Mon, Sep 14, 2015 at 04:33:05PM +0000, Serge Hallyn wrote:
> > > > Quoting Serge Hallyn (serge.hallyn at ubuntu.com):
> > > > > Quoting Christian Brauner (christianvanbrauner at gmail.com):
> > > > > > On Mon, Sep 14, 2015 at 02:50:39PM +0000, Serge Hallyn wrote:
> > > > > > > Quoting Christian Brauner (christianvanbrauner at gmail.com):
> > > > > > > > Ephemeral containers that have the option lxc.ephemeral = 1 set in their
> > > > > > > > config are destroyed on shutdown. As they are simple overlay clones of an
> > > > > > > > existing container, they should be registered in the lxc_snapshots file of
> > > > > > > > the original container to stay consistent and meet users' expectations.
> > > > > > > > Most of all, it ensures that we cannot remove a container that has clones,
> > > > > > > > even if they are just ephemeral snapshot-clones. The function adds further
> > > > > > > > consistency because remove_snapshots_entry() ensures that ephemeral
> > > > > > > > clone-snapshots deregister themselves from the lxc_snapshots file when
> > > > > > > > they are destroyed.
> > > > > > > >
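For illustration: the deregistration boils down to rewriting the original
container's lxc_snapshots file without the dying clone's entry. A minimal
sketch, assuming a hypothetical one-"lxcpath lxcname"-per-line format (the
real layout is whatever the existing bookkeeping code in
src/lxc/lxccontainer.c writes):

#include <stdio.h>
#include <string.h>

/* sketch only: snapfile is the original container's lxc_snapshots file */
static int remove_snapshots_entry_sketch(const char *snapfile,
					 const char *clone_path,
					 const char *clone_name)
{
	char entry[1024], line[1024], tmppath[1024];
	FILE *f, *tmp;

	snprintf(entry, sizeof(entry), "%s %s\n", clone_path, clone_name);
	snprintf(tmppath, sizeof(tmppath), "%s.tmp", snapfile);

	f = fopen(snapfile, "r");
	if (!f)
		return -1;
	tmp = fopen(tmppath, "w");
	if (!tmp) {
		fclose(f);
		return -1;
	}
	/* copy every entry except the dying clone's */
	while (fgets(line, sizeof(line), f))
		if (strcmp(line, entry) != 0)
			fputs(line, tmp);
	fclose(f);
	fclose(tmp);
	return rename(tmppath, snapfile);
}
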
> > > > > > > > POSSIBLE GLITCH:
> > > > > > > > I was thinking hard about race conditions and concurrent access to the
> > > > > > > > lxc_snapshots file when lxc-destroy is called on the container while we
> > > > > > > > shut down the container from inside. Here are my thoughts so far:
> > > > > > > >
> > > > > > > > There should be no race condition when lxc-destroy including all snapshots is
> > > > > > >
> > > > > > > Note that lxcapi_destroy_with_snapshots() deletes the *snapshots*, not the
> > > > > > > snapshot clones. This is an unfortunate naming clash (which we could try
> > > > > > > to correct henceforth but we need good names :), but they are different.
> > > > > > > So anything under /var/lib/lxc/$container/snaps will be deleted. But if
> > > > > > > you've created an overlayfs clone, then containers listed in
> > > > > > > /var/lib/lxc/$container/lxc_snapshots will not be deleted. There is no
> > > > > > > API call or program to automatically delete those right now.
> > > > > >
> > > > > > I think you are partially wrong here. I was not thinking about problems created
> > > > > > by an API call but by the lxc-destroy executable. A quick walkthrough: With the
> > > > >
> > > > > D'oh. Yeah, you'll need to mutex that somehow.
> > > >
> > > > If you want help up-front with the design, please let me know. If you
> > > > aren't sure what the current container_disk_lock() and container_mem_lock()
> > > > do, please shout. (they are explained in a LOCKING comment above
> > > > lxc_container_free() in src/lxc/lxccontainer.c)
> > > >
> > > > The easiest thing to do might be to disk_lock the container in lxc_destroy.c,
> > > > then make the mod_rdep() helper that you use in lxc_destroy.c a _locked
> > > > variant (to avoid deadlock). So mod_rdep() would turn into something like:
> > > >
> > > > static bool mod_rdep(struct lxc_container *c0, struct lxc_container *c, bool inc)
> > > > {
> > > > 	bool ret;
> > > >
> > > > 	if (container_disk_lock(c0))
> > > > 		return false;
> > > > 	ret = mod_rdep_locked(c0->name, c0->lxcpath, c->name, c->lxcpath, inc);
> > > > 	container_disk_unlock(c0);
> > > > 	return ret;
> > > > }
> > > >
> > > > -serge
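
To spell out the deadlock that the _locked split avoids: lxc_destroy.c would
already hold the disk lock when it needs to edit lxc_snapshots, so the call
site would look roughly like this (illustrative fragment only; the
mod_rdep_locked() name and signature are from the sketch above):

	bool ret;

	if (container_disk_lock(c0))	/* lxc_destroy.c takes the lock once */
		return false;
	/* a self-locking mod_rdep(c0, c, false) here would try to take the
	 * same lock again and deadlock; the _locked variant assumes the
	 * caller already holds it */
	ret = mod_rdep_locked(c0->name, c0->lxcpath, c->name, c->lxcpath, false);
	container_disk_unlock(c0);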
> > >
> > > Thanks, this is really nice! One question:
> > > - I take it that we want to make mod_rdep() public. mod_rdep() will be used
> > > in lxc_destroy.c, lxccontainer.c and start.c. The problem is that in start.c
> > > we do not have a container to pass into mod_rdep(). Do you want me to rewrite
> > > mod_rdep() to take in lxcpath and lxcname? If so, could we still use
> >
> > Ok, I'm not running on all cylinders, sorry.
> >
> > You don't want mod_rdep, you want mod_all_rdeps. So yes, export that,
> > make a struct lxc_container, lock it, and pass that to mod_all_rdeps,
> > which will do the mod_rdeps for you.
>
> Ok, so in start.c we can actually use lxc_container_new(). I could modify my:
Yeah, I think that looks good,
> static void lxc_destroy_container_on_signal(struct lxc_handler *handler,
>                                             const char *name)
> {
> 	char destroy[MAXPATHLEN];
> 	bool bret = true;
> 	int ret = 0;
>
> 	if (handler->conf && handler->conf->rootfs.path && handler->conf->rootfs.mount) {
> 		bret = do_destroy_container(handler->conf);
> 		if (!bret) {
> 			ERROR("Error destroying rootfs for %s", name);
> 			return;
> 		}
> 	}
> 	INFO("Destroyed rootfs for %s", name);
>
> 	ret = snprintf(destroy, MAXPATHLEN, "%s/%s", handler->lxcpath, name);
> 	if (ret < 0 || ret >= MAXPATHLEN) {
> 		ERROR("Error printing path for %s", name);
> 		ERROR("Error destroying directory for %s", name);
> 		return;
> 	}
>
> 	// Where this is the critical and new part.
> 	struct lxc_container *c;
>
> 	c = lxc_container_new(name, handler->lxcpath);
> 	if (!c) {
> 		INFO("Useful message 1");
> 	}
remembering to disk_lock it here,
> 	mod_all_rdeps(c, false);
> 	if (!lxc_container_put(c))
and making sure to not put here if lxc_container_new failed.
> INFO("Useful message 2");
> //
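
Concretely, with the lock taken and the put guarded, that chunk might look
something like this (untested sketch):

	struct lxc_container *c;

	c = lxc_container_new(name, handler->lxcpath);
	if (c) {
		if (container_disk_lock(c)) {
			ERROR("Error locking container %s", name);
		} else {
			mod_all_rdeps(c, false);
			container_disk_unlock(c);
		}
		/* only put a container we actually got */
		lxc_container_put(c);
	}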
>
> 	if (am_unpriv())
> 		ret = userns_exec_1(handler->conf, lxc_rmdir_onedev_wrapper, destroy);
> 	else
> 		ret = lxc_rmdir_onedev(destroy, NULL);
>
> 	if (ret < 0) {
> 		ERROR("Error destroying directory for %s", name);
> 		return;
> 	}
> 	INFO("Destroyed directory for %s", name);
> }
>
> Are you fine with that?
>
> Christian
>
> >
> > > disk_lock() and mem_lock() by e.g. calling lxc_container_new(lxcname, lxcpath)
> > > and then calling disk_lock() or mem_lock() after to protect the container?