[lxc-devel] [PATCH 1/1] api locking: take process lock around open/close

Serge Hallyn serge.hallyn at ubuntu.com
Thu May 30 15:14:56 UTC 2013


Quoting Dwight Engen (dwight.engen at oracle.com):
> On Wed, 29 May 2013 22:21:22 -0500
> Serge Hallyn <serge.hallyn at ubuntu.com> wrote:
> 
> > And update the comment explaining the locking.
> > 
> > Also take memlock in want_daemonize.
> 
> Hi Serge, could you explain a bit what the locking is protecting about
> open/close? It looks like you are locking around just the open or
> just the close, and not the whole open to close region so I'm a bit
> confused on what scenario the locks are protecting against.

My understanding is that open() and close() themselves, i.e. the
modification of the fd table, are not thread-safe.  But once
open() has completed, read() and write() on the resulting fd are thread-safe.
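
In other words, the pattern the patch applies is roughly the following
(just a sketch, not verbatim from the patch; read_file_locked() is a
made-up example, but process_lock()/process_unlock() are the existing
wrappers around thread_mutex):

#include <fcntl.h>
#include <unistd.h>

static ssize_t read_file_locked(const char *path, char *buf, size_t len)
{
	int fd;
	ssize_t ret;

	process_lock();            /* serialize the fd-table change */
	fd = open(path, O_RDONLY);
	process_unlock();
	if (fd < 0)
		return -1;

	ret = read(fd, buf, len);  /* no process lock needed on an already-open fd */

	process_lock();            /* and again around the close() */
	close(fd);
	process_unlock();
	return ret;
}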

> > Signed-off-by: Serge Hallyn <serge.hallyn at ubuntu.com>
> > ---
> >  src/lxc/lxccontainer.c | 153 +++++++++++++++++++++++++++++++++++++------------
> >  1 file changed, 115 insertions(+), 38 deletions(-)
> > 
> > diff --git a/src/lxc/lxccontainer.c b/src/lxc/lxccontainer.c
> > index 9bc1caf..145ed7c 100644
> > --- a/src/lxc/lxccontainer.c
> > +++ b/src/lxc/lxccontainer.c
> > @@ -131,7 +131,12 @@ void remove_partial(struct lxc_container *c, int fd)
> >  	char *path = alloca(len);
> >  	int ret;
> >  
> > +	if (process_lock()) {
> > +		ERROR("Failed getting process lock");
> > +		return;
> > +	}
> >  	close(fd);
> > +	process_unlock();
> >  	ret = snprintf(path, len, "%s/%s/partial", c->config_path, c->name);
> >  	if (ret < 0 || ret >= len) {
> >  		ERROR("Error writing partial pathname");
> > @@ -145,11 +150,11 @@ void remove_partial(struct lxc_container *c, int fd)
> >  }
> >  
> >  /* LOCKING
> > - * 1. c->privlock protects the struct lxc_container from multiple threads.
> > - * 2. c->slock protects the on-disk container data
> > + * 1. container_mem_lock(c) protects the struct lxc_container from multiple threads.
> > + * 2. container_disk_lock(c) protects the on-disk container data - in particular the
> > + *    container configuration file.
> > + *    The container_disk_lock also takes the container_mem_lock.
> >   * 3. thread_mutex protects process data (ex: fd table) from multiple threads
> > - * slock is an flock, which does not exclude threads.  Therefore slock should
> > - * always be wrapped inside privlock.
> >   * NOTHING mutexes two independent programs with their own struct
> >   * lxc_container for the same c->name, between API calls.  For instance,
> >   * c->config_read(); c->start();  Between those calls, data on disk
> > @@ -160,7 +165,7 @@ void remove_partial(struct lxc_container *c, int fd)
> >   * due to hung callers.  So I prefer to keep the locks only within our own
> >   * functions, not across functions.
> >   *
> > - * If you're going to fork while holding a lxccontainer, increment
> > + * If you're going to clone while holding a lxccontainer, increment
> >   * c->numthreads (under privlock) before forking.  When deleting,
> >   * decrement numthreads under privlock, then if it hits 0 you can delete.
> >   * Do not ever use a lxccontainer whose numthreads you did not bump.
> > @@ -358,11 +363,14 @@ static pid_t lxcapi_init_pid(struct lxc_container *c)
> >  
> >  static bool load_config_locked(struct lxc_container *c, const char *fname)
> >  {
> > +	bool ret = false;
> >  	if (!c->lxc_conf)
> >  		c->lxc_conf = lxc_conf_init();
> > +	process_lock();
> 
> From here on you start not checking if process_lock() failed. Just

Yeah...

> curious as to why we need to check for failure above but not in the
> rest of these.

I was punting on making the decision :)  Do you know whether it will
ever fail unless we're having a catastrophic error?  I'm tempted to say
that if it fails, process_lock() should log the error and exit the
calling program rather than returning an error.

> FWIW I'd much prefer not having to check in all calls if we can
> convince ourselves it's not needed. The only reasons I can see that
> process_lock()'s pthread_mutex_lock() would fail would be EBUSY, EINVAL,
> EDEADLK all of which would be because of errors in lxc itself so maybe
> we could make process_lock() abort() or exit() if pthread_mutex_lock()
> returns an error?

Sounds good to me then!
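
Concretely, I'm thinking of something like this (sketch only, not
tested; assuming thread_mutex is the static mutex already backing
process_lock(), and ERROR() is our usual log macro):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t thread_mutex = PTHREAD_MUTEX_INITIALIZER;

void process_lock(void)
{
	int ret;

	ret = pthread_mutex_lock(&thread_mutex);
	if (ret != 0) {
		/* this can only happen if lxc itself misuses the mutex, so bail */
		ERROR("pthread_mutex_lock returned %d %s", ret, strerror(ret));
		exit(1);
	}
}

void process_unlock(void)
{
	int ret;

	ret = pthread_mutex_unlock(&thread_mutex);
	if (ret != 0) {
		ERROR("pthread_mutex_unlock returned %d %s", ret, strerror(ret));
		exit(1);
	}
}

That way callers like remove_partial() and load_config_locked() don't
need to check a return value at all.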

-serge



