[lxc-devel] [PATCH] Add remove_snapshots_entry() (rebased - v2)
Serge Hallyn
serge.hallyn at ubuntu.com
Mon Sep 14 16:33:05 UTC 2015
Quoting Serge Hallyn (serge.hallyn at ubuntu.com):
> Quoting Christian Brauner (christianvanbrauner at gmail.com):
> > On Mon, Sep 14, 2015 at 02:50:39PM +0000, Serge Hallyn wrote:
> > > Quoting Christian Brauner (christianvanbrauner at gmail.com):
> > > > When creating ephemeral containers that have the option lxc.ephemeral = 1 set
> > > > in their config, they will be destroyed on shutdown. As they are simple overlay
> > > > clones of an existing container they should be registered in the lxc_snapshots
> > > > file of the original container to stay consistent and adhere to the
> > > > expectations of the users. Most of all, it ensures that we cannot remove a
> > > > container that has clones, even if they are just ephemeral snapshot-clones. The
> > > > function adds further consistency because remove_snapshots_entry() ensures that
> > > > ephemeral clone-snapshots deregister themselves from the lxc_snapshots file
> > > > when they are destroyed.
> > > >
> > > > POSSIBLE GLITCH:
> > > > I was thinking hard about race conditions and concurrent access to the
> > > > lxc_snapshots file when lxc-destroy is called on the container while we
> > > > shut down the container from inside. Here are my thoughts so far:
> > > >
> > > > There should be no race condition when lxc-destroy including all snapshots is
> > >
> > > Note that lxcapi_destroy_with_snapshots() deletes the *snapshots*, not the
> > > snapshot clones. This is an unfortunate naming clash (which we could try
> > > to correct henceforth but we need good names :), but they are different.
> > > So anything under /var/lib/lxc/$container/snaps will be deleted. But if
> > > you've created an overlayfs clone, then containers listed in
> > > /var/lib/lxc/$container/lxc_snapshots will not be deleted. There is no
> > > API call or program to automatically delete those right now.
> >
> > I think you are partially wrong here. I was not thinking about problems created
> > by an API call but by the lxc-destroy executable. A quick walkthrough: With the
>
> D'oh. Yeah, you'll need to mutex that somehow.
If you want help up-front with the design, please let me know. If you
aren't sure what the current container_disk_lock() and container_mem_lock()
do, please shout. (They are explained in a LOCKING comment above
lxc_container_free() in src/lxc/lxccontainer.c.)
The easiest thing to do might be to disk_lock the container in lxc_destroy.c,
then turn the mod_rdep() helper you use in lxc_destroy.c into a _locked
variant (to avoid deadlock). So mod_rdep() would turn into something like:
static bool mod_rdep(struct lxc_container *c0, struct lxc_container *c, bool inc)
{
	bool ret;

	if (container_disk_lock(c0))
		return false;

	/* the _locked worker does the actual lxc_snapshots file update */
	ret = mod_rdep_locked(c0->name, c0->config_path, c->name,
			      c->config_path, inc);
	container_disk_unlock(c0);

	return ret;
}
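And then lxc_destroy.c, holding the disk lock itself, would call the _locked
worker directly instead of mod_rdep(). Very rough sketch of that side, just
to show the shape of it (mod_rdep_locked() is made up above, and
container_disk_lock()/container_disk_unlock() are static in lxccontainer.c
today, so they'd need to be exported or wrapped before lxc_destroy.c could
use them):

/* Sketch for lxc_destroy.c: take the disk lock once, then do all the
 * lxc_snapshots bookkeeping through the _locked worker so the same lock
 * is never taken twice (which is where the deadlock would come from). */
static bool deregister_clone(struct lxc_container *c0, struct lxc_container *c)
{
	bool bret;

	if (container_disk_lock(c0))
		return false;

	/* anything else that has to be atomic with this update (say, a
	 * check whether c0 still has registered clones) also goes here,
	 * under the same lock */
	bret = mod_rdep_locked(c0->name, c0->config_path,
			       c->name, c->config_path, false);

	container_disk_unlock(c0);
	return bret;
}

The key point is just that only the top-level caller ever takes the lock;
everything below it works on the already-locked container.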
-serge