[lxc-devel] [PATCH] c/r: remember to chown the cgroup path (correctly)

Tycho Andersen tycho.andersen at canonical.com
Thu Jan 14 13:31:50 UTC 2016


On Thu, Jan 14, 2016 at 09:28:07AM +0000, Serge Hallyn wrote:
> Quoting Tycho Andersen (tycho.andersen at canonical.com):
> > On Wed, Jan 13, 2016 at 09:47:50PM +0000, Serge Hallyn wrote:
> > > Quoting Tycho Andersen (tycho.andersen at canonical.com):
> > > > 1. remember to chown the cgroup path when migrating a container
> > > > 2. when restoring the cgroup path, try to compute the euid for root vs.
> > > >    using geteuid(); geteuid works for start, but it doesn't work for
> > > >    migration since we're still real root at that point.
> > > > 
> > > > Signed-off-by: Tycho Andersen <tycho.andersen at canonical.com>
> > > > ---
> > > >  src/lxc/cgmanager.c | 6 +++++-
> > > >  src/lxc/criu.c      | 5 +++++
> > > >  2 files changed, 10 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/src/lxc/cgmanager.c b/src/lxc/cgmanager.c
> > > > index 357182a..54e6912 100644
> > > > --- a/src/lxc/cgmanager.c
> > > > +++ b/src/lxc/cgmanager.c
> > > > @@ -488,7 +488,11 @@ static bool chown_cgroup(const char *cgroup_path, struct lxc_conf *conf)
> > > >  		return true;
> > > >  
> > > >  	data.cgroup_path = cgroup_path;
> > > > -	data.origuid = geteuid();
> > > > +	data.origuid = mapped_hostid(0, conf, ID_TYPE_UID);
> 
> now, when starting a container, this happens in the
> parent task in original uid.  So the geteuid() returns 1000,
> mapped_hostid(0, conf, ID_TYPE_UID) something like 100000.

Are you sure? Wouldn't it chown everything to 1000 instead of 100000
in that case and be all screwed up?
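
(For reference, a minimal sketch of the distinction, paraphrasing
rather than quoting lxc's conf.c; the struct and function names here
are illustrative, not the real ones. mapped_hostid() walks the
container's configured id map for the host id that container id 0 maps
to, independent of who the calling process is:

#include <stddef.h>

struct id_map_entry {
	unsigned long nsid;   /* first id inside the container */
	unsigned long hostid; /* first id on the host */
	unsigned long range;  /* number of ids in the mapping */
};

/* With "lxc.id_map = u 0 100000 65536", container uid 0 maps to host
 * uid 100000 no matter what the caller's own euid happens to be. */
static long lookup_mapped_root(const struct id_map_entry *map, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (map[i].nsid == 0 && map[i].range > 0)
			return (long)map[i].hostid; /* e.g. 100000 */
	return -1; /* no mapping for container root */
}

geteuid(), by contrast, just reports the caller's effective uid: 1000
for the unprivileged parent at start, but 0 during a migration restore
where we're still real root, which is what the commit message is
getting at.)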

> This is probably ok - but did you run all the lxc tests against
> this to make sure?  You can still run 'lxc-cgroup' etc as an
> unpriv user?

Well, it does change the behavior in ways I don't quite understand
yet, so that's probably worth talking about. Before this series (the
644 one and this patch), I get:

/sys/fs/cgroup/cpuset/lxc/unpriv $ ls -alh
total 0
drwxrwxr-x 2 root 100000 0 Jan 14 06:25 ./
drwxr-xr-x 3 root root   0 Jan 14 06:25 ../
-rw-r--r-- 1 root root   0 Jan 14 06:25 cgroup.clone_children
-rwxrwxr-x 1 root 100000 0 Jan 14 06:26 cgroup.procs*
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.cpu_exclusive
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.cpus
-r--r--r-- 1 root root   0 Jan 14 06:25 cpuset.effective_cpus
-r--r--r-- 1 root root   0 Jan 14 06:25 cpuset.effective_mems
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.mem_exclusive
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.mem_hardwall
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.memory_migrate
-r--r--r-- 1 root root   0 Jan 14 06:25 cpuset.memory_pressure
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.memory_spread_page
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.memory_spread_slab
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.mems
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.sched_load_balance
-rw-r--r-- 1 root root   0 Jan 14 06:25 cpuset.sched_relax_domain_level
-rw-r--r-- 1 root root   0 Jan 14 06:25 notify_on_release
-rwxrwxr-x 1 root 100000 0 Jan 14 06:25 tasks*

and after:

/sys/fs/cgroup/cpuset/lxc/unpriv $ ls -alh
total 0
drwxrwxr-x 2 100000 100000 0 Jan 14 06:27 ./
drwxr-xr-x 3 root   root   0 Jan 14 06:27 ../
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cgroup.clone_children
-rw-rw-r-- 1 100000 100000 0 Jan 14 06:27 cgroup.procs
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.cpu_exclusive
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.cpus
-r--r--r-- 1 root   root   0 Jan 14 06:27 cpuset.effective_cpus
-r--r--r-- 1 root   root   0 Jan 14 06:27 cpuset.effective_mems
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.mem_exclusive
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.mem_hardwall
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.memory_migrate
-r--r--r-- 1 root   root   0 Jan 14 06:27 cpuset.memory_pressure
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.memory_spread_page
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.memory_spread_slab
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.mems
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.sched_load_balance
-rw-r--r-- 1 root   root   0 Jan 14 06:27 cpuset.sched_relax_domain_level
-rw-r--r-- 1 root   root   0 Jan 14 06:27 notify_on_release
-rw-rw-r-- 1 100000 100000 0 Jan 14 06:27 tasks

However, it's not really clear /why/ this patch changes the uid on tasks and
cgroup.procs as well (probably some mask bug? the value was a uid
(ID_TYPE_UID), yet before only the gid was being changed? not sure).

So, what's the correct behavior here? Should we leave the uid on the files as
real root, or chown them to container root as well? (If it's the latter, i.e.
the behavior this patch introduces, it's probably still worth understanding
whether the former works too, just to make sure there are no other latent
bugs; I don't mind figuring that out.)
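
(For anyone following along, here is roughly what the "chown from a
child namespace mapping both our own and the target uid" comment in
the hunk below is describing. This is a hedged sketch, not lxc's code:
the ids 1000/100000, the cgroup path, the synchronization, and the
do_ns_chown() stub are all illustrative. In lxc the ownership change
is actually requested through cgmanager, which translates
namespace-relative ids via the caller's uid_map, and the privileged
map entry comes from the setuid newuidmap helper backed by
/etc/subuid.

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for the real request: ask a privileged manager (cgmanager)
 * to chown the cgroup; it translates ns-relative ids through the
 * requesting process's uid_map. Hypothetical helper, not a real API. */
static int do_ns_chown(const char *cgpath, uid_t uid, gid_t gid)
{
	fprintf(stderr, "would ask cgmanager: chown %s to %u:%u\n",
		cgpath, uid, gid);
	return 0;
}

static void write_file(const char *path, const char *buf)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, buf, strlen(buf)) < 0)
		perror(path);
	if (fd >= 0)
		close(fd);
}

int main(void)
{
	int to_parent[2], to_child[2];
	char c, path[64];
	pid_t pid;

	if (pipe(to_parent) < 0 || pipe(to_child) < 0)
		return 1;

	pid = fork();
	if (pid == 0) {
		/* Child: enter a new user namespace, tell the parent,
		 * then wait for the id maps to be written. */
		if (unshare(CLONE_NEWUSER) < 0)
			return 1;
		write(to_parent[1], "1", 1);
		read(to_child[0], &c, 1);

		/* ns uid 0 is host uid 100000 below, so this asks for
		 * the host-visible "chown to container root". */
		return do_ns_chown("lxc/unpriv", 0, 0);
	}

	read(to_parent[0], &c, 1);

	/* Map ns 0 -> host 100000 (container root) and ns 1000 -> host
	 * 1000 (ourselves). An unprivileged process can only self-map;
	 * the 100000 entry is what newuidmap provides in real use. */
	snprintf(path, sizeof(path), "/proc/%d/setgroups", pid);
	write_file(path, "deny");
	snprintf(path, sizeof(path), "/proc/%d/uid_map", pid);
	write_file(path, "0 100000 1\n1000 1000 1\n");
	snprintf(path, sizeof(path), "/proc/%d/gid_map", pid);
	write_file(path, "0 100000 1\n1000 1000 1\n");

	write(to_child[1], "1", 1);
	waitpid(pid, NULL, 0);
	return 0;
}

The key point is just that, inside such a namespace, "container root"
is an id the caller can legitimately name, so the manager on the other
side can do the privileged chown on its behalf.)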

Tycho

> > > > +	if (data.origuid < 0) {
> > > 
> > > Can you confirm that this does not break
> > > 
> > > sudo lxc-create -t download -n x1 -- -d ubuntu -r trusty -a amd64
> > > sudo lxc-start -n x1
> > > 
> > > Because in that case I think we have no mappings, and mapped_hostid() will
> > > return -1.
> > 
> > You can't see it in the patch, but just above this is a
> 
> D'oh!  I even looked for such a thing this morning but missed it.
> 
> > lxc_list_empty() test, and this whole path isn't executed if
> > lxc_list_empty() is true, so I think it should be ok.
> > 
> > Tycho
> > 
> > > > +		ERROR("failed to get mapped root id");
> > > > +		return false;
> > > > +	}
> > > >  
> > > >  	/* Unpriv users can't chown it themselves, so chown from
> > > >  	 * a child namespace mapping both our own and the target uid
> > > > diff --git a/src/lxc/criu.c b/src/lxc/criu.c
> > > > index 6ef4905..f442612 100644
> > > > --- a/src/lxc/criu.c
> > > > +++ b/src/lxc/criu.c
> > > > @@ -466,6 +466,11 @@ void do_restore(struct lxc_container *c, int pipe, char *directory, bool verbose
> > > >  		goto out_fini_handler;
> > > >  	}
> > > >  
> > > > +	if (!cgroup_chown(handler)) {
> > > > +		ERROR("failed to chown cgroups");
> > > > +		goto out_fini_handler;
> > > > +	}
> > > > +
> > > >  	if (!restore_net_info(c)) {
> > > >  		ERROR("failed restoring network info");
> > > >  		goto out_fini_handler;
> > > > -- 
> > > > 2.6.4
> > > > 