[lxc-devel] [RFC PATCH] lxclock: Replace named semaphore with flock

Dwight Engen dwight.engen at oracle.com
Fri May 24 16:58:53 UTC 2013


On Fri, 24 May 2013 10:40:30 -0500
Serge Hallyn <serge.hallyn at ubuntu.com> wrote:

> Quoting Dwight Engen (dwight.engen at oracle.com):
> > On Fri, 24 May 2013 08:23:57 -0500
> > Serge Hallyn <serge.hallyn at ubuntu.com> wrote:
> > 
> > > Quoting Serge Hallyn (serge.hallyn at ubuntu.com):
> > > > The problem: if a task is killed while holding a posix
> > > > semaphore, there appears to be no way to have the semaphore be
> > > > reliably automatically released.  The only trick which seemed
> > > > promising is to store the pid of the lock holder in some file
> > > > and have later lock seekers check whether that task has died.
> > > > 
> > > > Instead of going down that route, this patch switches from a
> > > > named posix semaphore to flock.  The advantage is that when
> > > > the task is killed, its fds are closed and locks are
> > > > automatically released.
> > > > 
> > > > The disadvantage of flock is that we can't rely on it to exclude
> > > > threads.  Therefore c->slock must now always be wrapped inside
> > > > c->privlock.
> > > > 
> > > > This patch survived basic testing with the lxcapi_create
> > > > patchset, where now killing lxc-create while it was holding the
> > > > lock did not lock up future api commands.
> > > 
> > > Two more notes:
> > > 
> > > 1. the new lock doesn't support timeouts like the old one did.
> > > There's no caller which is currently using timeouts, so I will
> > > probably remove timeouts from the private semaphore as well.
> > > 
> > > 2. It doesn't seem necessary to require everyone to understand
> > > the details, so I may abstract away knowledge of c->privlock
> > > and c->slock behind two helpers.  Not sure what good names would
> > > be, maybe c->memlock() and c->disklock() ?  c->threadlock()
> > > and c->globallock()?  Something to indicate that the first is
> > > to protect the struct lxc_container from simultaneous updates
> > > from other threads, while the latter is to protect the on-disk
> > > container.
> > 
> > That would be nice. I'm not sure this is what you're getting at but
> > it would also be nice if the caller didn't have to know they needed
> > the thread lock before going for the disk lock (as a result of the
> 
> Exactly.  So c->disklock() would grab c->memlock(), simplifying
> lxccontainer.c quite a bit already.

Totally agree. It's also nice to have it done in one place, to keep the
order of taking locks consistent and prevent a deadly embrace.
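
Roughly what I'm picturing for that wrapper, just as a sketch (the
struct and field names below are stand-ins, not what's in your patch):

#include <semaphore.h>
#include <sys/file.h>

/* privlock excludes other threads in this process, slock_fd is an fd
 * on the container's lock file.  disklock() always takes privlock
 * first, so the lock order lives in one place and flock's lack of
 * thread exclusion doesn't matter to callers. */
struct lxc_locks {
	sem_t *privlock;	/* thread lock for struct lxc_container */
	int slock_fd;		/* flock'd fd protecting the on-disk container */
};

static int memlock(struct lxc_locks *l)
{
	return sem_wait(l->privlock);
}

static int disklock(struct lxc_locks *l)
{
	if (memlock(l) < 0)
		return -1;
	if (flock(l->slock_fd, LOCK_EX) < 0) {
		sem_post(l->privlock);
		return -1;
	}
	return 0;
}

static void diskunlock(struct lxc_locks *l)
{
	flock(l->slock_fd, LOCK_UN);
	sem_post(l->privlock);
}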

> > primitive we're using for the disk lock). So getting the disk lock
> > would automatically acquire the thread lock for the caller.
> 
> Do you have any suggestions for better names?  :)  I don't like mine,
> but can't think of anything better.

I think the names you've got are fine (I don't really have a better
idea); I do think it's good to name locks by what they cover. It's a bit
tricky here because one is a process (and thread) lock and the other
is just a thread lock, and it would be nice to convey that somehow.

In thinking about this a bit, the memlock and the pthread mutex are
really the same thing (a thread lock), they just cover different regions
(i.e. the in-memory struct lxc_container vs. process data). The sem_t
could just as easily be a pthread_mutex_t, or vice versa; I think
they're built on the same primitives in glibc. One reason for
potentially switching the sem to a pthread mutex is that if we decide
to expose the locking in an API, we'll almost certainly want to use
robust locks.
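
For reference, a robust mutex would look something like this (just a
sketch, not something from the patch); the nice part is the EOWNERDEAD
recovery path when a lock holder dies:

#include <errno.h>
#include <pthread.h>

static pthread_mutex_t robust_lock;

static int init_robust_lock(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	/* would also want PTHREAD_PROCESS_SHARED if the mutex lives in
	 * shared memory and is taken across processes */
	pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
	return pthread_mutex_init(&robust_lock, &attr);
}

static int take_robust_lock(void)
{
	int ret = pthread_mutex_lock(&robust_lock);

	if (ret == EOWNERDEAD) {
		/* previous owner died holding the lock; mark the state
		 * consistent and carry on */
		pthread_mutex_consistent(&robust_lock);
		ret = 0;
	}
	return ret;
}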

I wonder if something like this is any clearer?

extern int lxclock(struct lxc_container *c, int type);

lxclock(c, LXC_LOCK_DISK);	-> flock c->slock
lxclock(c, LXC_LOCK_MEM);	-> pthread/sem lock c->privlock
lxclock(NULL, LXC_LOCK_PROC);	-> pthread/sem lock whole process

I don't know if this is any clearer or not, but I do think it's a bit
bad to have thread_mutex exposed and not wrapped.
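
To make that concrete, roughly (again just a sketch; the struct here is
cut down to the lock fields and the process-wide semaphore is a
placeholder):

#include <semaphore.h>
#include <sys/file.h>

#define LXC_LOCK_MEM	0	/* pthread/sem lock on the in-memory struct */
#define LXC_LOCK_DISK	1	/* flock on the on-disk container */
#define LXC_LOCK_PROC	2	/* pthread/sem lock covering the whole process */

struct lxc_container {		/* cut down to just the lock fields */
	sem_t *privlock;
	int slock_fd;
};

static sem_t *process_lock;	/* placeholder for the process-wide lock */

int lxclock(struct lxc_container *c, int type)
{
	switch (type) {
	case LXC_LOCK_MEM:
		return sem_wait(c->privlock);
	case LXC_LOCK_DISK:
		/* thread lock first, then the flock */
		if (sem_wait(c->privlock) < 0)
			return -1;
		if (flock(c->slock_fd, LOCK_EX) < 0) {
			sem_post(c->privlock);
			return -1;
		}
		return 0;
	case LXC_LOCK_PROC:
		return sem_wait(process_lock);
	default:
		return -1;
	}
}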

> > I think I'll have a few more comments/questions after I read
> > through the rest of the patch.
> 
> Excellent, thanks.
> 
> -serge
