[lxc-users] apparmor profile for systemd containers (WAS: Fedora container thinks it is not running)

Fajar A. Nugraha list at fajar.net
Thu Jun 19 14:32:16 UTC 2014


On Thu, Jun 19, 2014 at 9:01 PM, Michael H. Warfield <mhw at wittsend.com>
wrote:

> All concerned participants:
>
> Was there any further update on this problem?  I'd like to know if we
> (I) should be updating the templates for either this aa_profile thing or
> for the mount sets.
>
>

IIRC Christian was going to try something?

So far, none of my tests with the suggested values of lxc.mount.auto (including
cgroup-full:mixed) have been enough to get the f20 container running under the
default apparmor profile. I either have to:
- use the unconfined profile. Works, but is vulnerable to most known lxc exploits.
- use lxc.hook.mount and lxc.hook.post-stop scripts that create and
bind-mount a new, "empty" systemd cgroup hierarchy onto the container's
/sys/fs/cgroup/systemd.
Somewhat messy, but this way the container is still protected by the apparmor
profile.

The second approach would be ideal if it could be turned into something like a
"lxc.mount.auto=cgroup:systemd-new" setting, but that's well beyond what I'm
capable of.
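For reference, the hook approach described above could look roughly like the
sketch below. This is not a shipped LXC script; the staging path
(/run/lxc-systemd) and the "lxc-empty" subtree name are invented for
illustration, and a real hook would need more error handling. LXC exports
LXC_NAME and LXC_ROOTFS_MOUNT to hook scripts, which is what the sketch relies
on:

```shell
#!/bin/sh
# Hypothetical sketch of the lxc.hook.mount / lxc.hook.post-stop approach.
# Paths and names are illustrative only.

# Called via lxc.hook.mount, before the container starts.
setup_systemd_cgroup() {
    name="$LXC_NAME"
    staging="/run/lxc-systemd"
    target="$LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd"

    # Attach a view of the name=systemd hierarchy (the kernel returns the
    # already-mounted hierarchy when one exists with the same name).
    mkdir -p "$staging"
    mount -t cgroup -o none,name=systemd systemd "$staging"

    # Create a fresh, "empty" subtree for this container and bind only
    # that subtree into the container's /sys/fs/cgroup/systemd.
    mkdir -p "$staging/lxc-empty/$name" "$target"
    mount --bind "$staging/lxc-empty/$name" "$target"

    # The staging mount is no longer needed; the bind mount survives.
    umount "$staging"
}

# Called via lxc.hook.post-stop: remove the per-container subtree again.
teardown_systemd_cgroup() {
    name="$LXC_NAME"
    staging="/run/lxc-systemd"
    mkdir -p "$staging"
    mount -t cgroup -o none,name=systemd systemd "$staging"
    rmdir "$staging/lxc-empty/$name" 2>/dev/null
    umount "$staging"
}
```

The two functions would be wired in from the container config via
lxc.hook.mount and lxc.hook.post-stop respectively.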

For the next lxc release, as a user I suggest simply uncommenting the
aa_profile line.
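Concretely, that means flipping this line in the container's config (the path
below is just an example); the templates ship it commented out:

```
# e.g. in /var/lib/lxc/f20/config
lxc.aa_profile = unconfined
```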

-- 
Fajar


> Regards,
> Mike
>
> On Fri, 2014-05-30 at 01:00 +0200, Christian Seiler wrote:
> > Hi,
> >
> > > # lxc-attach -n f20 -- mount | grep cgroup
> > > cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k,mode=755)
> > > none on /sys/fs/cgroup/cgmanager type tmpfs
> (rw,relatime,size=4k,mode=755)
> > > tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
> >
> > :-( This appears to be a rather nasty bug...
> >
> > > lxc does read the file /etc/lxc/lxc.conf that I created, verified by
> > > the fact that lxc.cgroup.pattern works correctly. It does not,
> > > however, create the directory /sys/fs/cgroup/systemd/lxc-all/f20
> > > (which, if I understand correctly, it should, since I use
> > > lxc.cgroup.use = @all)
> > >
> > > # ls -d /sys/fs/cgroup/*/lxc-all/f20
> > > /sys/fs/cgroup/blkio/lxc-all/f20    /sys/fs/cgroup/cpuset/lxc-all/f20
> > >  /sys/fs/cgroup/hugetlb/lxc-all/f20
> > > /sys/fs/cgroup/cpuacct/lxc-all/f20  /sys/fs/cgroup/devices/lxc-all/f20
> > >  /sys/fs/cgroup/memory/lxc-all/f20
> > > /sys/fs/cgroup/cpu/lxc-all/f20      /sys/fs/cgroup/freezer/lxc-all/f20
> > >  /sys/fs/cgroup/perf_event/lxc-all/f20
> > >
> > > # mount | grep cgroup
> > > none on /sys/fs/cgroup type tmpfs (rw,relatime,size=4k,mode=755)
> > > cgroup on /sys/fs/cgroup/cpuset type cgroup
> > >
> (rw,relatime,cpuset,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuset,clone_children)
> > > cgroup on /sys/fs/cgroup/cpu type cgroup
> > >
> (rw,relatime,cpu,release_agent=/run/cgmanager/agents/cgm-release-agent.cpu)
> > > cgroup on /sys/fs/cgroup/cpuacct type cgroup
> > >
> (rw,relatime,cpuacct,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuacct)
> > > cgroup on /sys/fs/cgroup/memory type cgroup
> > >
> (rw,relatime,memory,release_agent=/run/cgmanager/agents/cgm-release-agent.memory)
> > > cgroup on /sys/fs/cgroup/devices type cgroup
> > >
> (rw,relatime,devices,release_agent=/run/cgmanager/agents/cgm-release-agent.devices)
> > > cgroup on /sys/fs/cgroup/freezer type cgroup
> > >
> (rw,relatime,freezer,release_agent=/run/cgmanager/agents/cgm-release-agent.freezer)
> > > cgroup on /sys/fs/cgroup/blkio type cgroup
> > >
> (rw,relatime,blkio,release_agent=/run/cgmanager/agents/cgm-release-agent.blkio)
> > > cgroup on /sys/fs/cgroup/perf_event type cgroup
> > >
> (rw,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event)
> > > cgroup on /sys/fs/cgroup/hugetlb type cgroup
> > >
> (rw,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)
> > > systemd on /sys/fs/cgroup/systemd type cgroup
> > >
> (rw,nosuid,nodev,noexec,relatime,release_agent=/run/cgmanager/agents/cgm-release-agent.systemd,name=systemd)
> >
> > Hmm, are you running cgmanager at the same time as systemd? I think this
> > might be a problem with the intersection of cgmanager with the cgroup
> > mounting code, i.e. the cgroup mounting code uses the cgfs stuff (which
> > was originally just cgroup before Serge implemented multiple drivers)
> > while the "put the container into cgroup" code uses cgmanager, which may
> > have some weird side effect in this case. I have to confess that so far
> > I haven't tried cgmanager myself (it's on my todo list), so I never
> > tested the interaction between Serge's cgmanager code and my cgroup
> > mounting code...
> >
> > If you are running cgmanager, could you try the same with cgmanager
> > stopped? Then LXC should fall back to the cgfs code, which
> > *should* work in this case, unless something else broke this logic.
> >
> > Anyway, I'll have a chance to look at this more closely on Saturday (I'm
> > busy with other things tomorrow).
> >
> > Regards,
> > Christian
>
>
> --
> Michael H. Warfield (AI4NB) | (770) 978-7061 |  mhw at WittsEnd.com
>    /\/\|=mhw=|\/\/          | (678) 463-0932 |
> http://www.wittsend.com/mhw/
>    NIC whois: MHW9          | An optimist believes we live in the best of
> all
>  PGP Key: 0x674627FF        | possible worlds.  A pessimist is sure of it!
>
>

