<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Jun 19, 2014 at 9:01 PM, Michael H. Warfield <span dir="ltr"><<a href="mailto:mhw@wittsend.com" target="_blank">mhw@wittsend.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">All concerned participants:<br>
<br>
Was there any further update on this problem? I'd like to know if we<br>
(I) should be updating the templates for either this aa_profile thing or<br>
for the mount sets.<br>
<br></blockquote><div><br></div><div><br></div><div>IIRC Christian was going to try something?</div><div><br></div><div>So far, none of my tests with the suggested values of lxc.mount.auto (including cgroup-full:mixed) have been enough to get an f20 container running under the default apparmor profile. I either have to:</div>
<div>- use the unconfined profile. This works, but is vulnerable to most known lxc exploits.</div><div>- use <span style="font-family:arial,sans-serif;font-size:13px">lxc.hook.mount and </span><span style="font-family:arial,sans-serif;font-size:13px">lxc.hook.post-stop scripts that create and bind-mount a new, empty systemd cgroup hierarchy onto the container's </span><font face="arial, sans-serif">/sys/fs/cgroup/systemd. Kind of messy, but this way the container is still protected by the apparmor profile.</font></div>
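For illustration, a mount hook along those lines might look roughly like the following. This is only a sketch: the staging directory under /run, the cleanup strategy, and the exact option string are my assumptions, not a tested implementation. LXC passes LXC_NAME and LXC_ROOTFS_MOUNT to mount hooks, which is what the script relies on.

```shell
#!/bin/sh
# Hypothetical lxc.hook.mount script (sketch, untested as written).
# Creates a fresh named "systemd" cgroup hierarchy on the host, then
# bind-mounts an empty per-container subdirectory of it over the
# container's /sys/fs/cgroup/systemd before pivot_root happens.
set -e

# Assumed host-side staging mountpoint for the named hierarchy.
CGROOT=/run/lxc-systemd-cgroup

mkdir -p "$CGROOT"
# Mount the named systemd hierarchy once; skip if already mounted.
mountpoint -q "$CGROOT" || \
    mount -t cgroup -o none,name=systemd systemd "$CGROOT"

# Empty per-container group; LXC_NAME is set by lxc for hooks.
mkdir -p "$CGROOT/$LXC_NAME"

# Bind it into the container's rootfs, which is mounted at
# LXC_ROOTFS_MOUNT when mount hooks run.
mkdir -p "$LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd"
mount --bind "$CGROOT/$LXC_NAME" \
    "$LXC_ROOTFS_MOUNT/sys/fs/cgroup/systemd"
```

A matching lxc.hook.post-stop script would then just rmdir "$CGROOT/$LXC_NAME" once the container is gone. Both require root, so they cannot be exercised outside a privileged host.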
<div><br></div><div>The second approach would be ideal if it could be turned into something like a "lxc.mount.auto=cgroup:systemd-new" setting, but that's well beyond what I'm capable of.</div><div><br></div><div>
For the next lxc release, as a user I suggest simply uncommenting the aa_profile line.</div><div><br></div><div><font face="arial, sans-serif">-- </font></div><div><font face="arial, sans-serif">Fajar</font></div><div> </div>
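For reference, the line in question in a container's config would look something like this (the config path is an example, and "unconfined" is what the thread above found to work; the template's commented-out value may differ):

```
# e.g. in /var/lib/lxc/f20/config
lxc.aa_profile = unconfined
```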
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Regards,<br>
Mike<br>
<div class=""><div class="h5"><br>
On Fri, 2014-05-30 at 01:00 +0200, Christian Seiler wrote:<br>
> Hi,<br>
><br>
> > # lxc-attach -n f20 -- mount | grep cgroup<br>
> > cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,size=12k,mode=755)<br>
> > none on /sys/fs/cgroup/cgmanager type tmpfs (rw,relatime,size=4k,mode=755)<br>
> > tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)<br>
><br>
> :-( This appears to be a rather nasty bug...<br>
><br>
> > lxc does read the file /etc/lxc/lxc.conf that I created, verified by<br>
> > the fact that lxc.cgroup.pattern works correctly. It does not,<br>
> > however, create the directory /sys/fs/cgroup/systemd/lxc-all/f20<br>
> > (which, if I understand correctly, it should, since I use<br>
> > lxc.cgroup.use = @all)<br>
> ><br>
> > # ls -d /sys/fs/cgroup/*/lxc-all/f20<br>
> > /sys/fs/cgroup/blkio/lxc-all/f20 /sys/fs/cgroup/cpuset/lxc-all/f20<br>
> > /sys/fs/cgroup/hugetlb/lxc-all/f20<br>
> > /sys/fs/cgroup/cpuacct/lxc-all/f20 /sys/fs/cgroup/devices/lxc-all/f20<br>
> > /sys/fs/cgroup/memory/lxc-all/f20<br>
> > /sys/fs/cgroup/cpu/lxc-all/f20 /sys/fs/cgroup/freezer/lxc-all/f20<br>
> > /sys/fs/cgroup/perf_event/lxc-all/f20<br>
> ><br>
> > # mount | grep cgroup<br>
> > none on /sys/fs/cgroup type tmpfs (rw,relatime,size=4k,mode=755)<br>
> > cgroup on /sys/fs/cgroup/cpuset type cgroup<br>
> > (rw,relatime,cpuset,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuset,clone_children)<br>
> > cgroup on /sys/fs/cgroup/cpu type cgroup<br>
> > (rw,relatime,cpu,release_agent=/run/cgmanager/agents/cgm-release-agent.cpu)<br>
> > cgroup on /sys/fs/cgroup/cpuacct type cgroup<br>
> > (rw,relatime,cpuacct,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuacct)<br>
> > cgroup on /sys/fs/cgroup/memory type cgroup<br>
> > (rw,relatime,memory,release_agent=/run/cgmanager/agents/cgm-release-agent.memory)<br>
> > cgroup on /sys/fs/cgroup/devices type cgroup<br>
> > (rw,relatime,devices,release_agent=/run/cgmanager/agents/cgm-release-agent.devices)<br>
> > cgroup on /sys/fs/cgroup/freezer type cgroup<br>
> > (rw,relatime,freezer,release_agent=/run/cgmanager/agents/cgm-release-agent.freezer)<br>
> > cgroup on /sys/fs/cgroup/blkio type cgroup<br>
> > (rw,relatime,blkio,release_agent=/run/cgmanager/agents/cgm-release-agent.blkio)<br>
> > cgroup on /sys/fs/cgroup/perf_event type cgroup<br>
> > (rw,relatime,perf_event,release_agent=/run/cgmanager/agents/cgm-release-agent.perf_event)<br>
> > cgroup on /sys/fs/cgroup/hugetlb type cgroup<br>
> > (rw,relatime,hugetlb,release_agent=/run/cgmanager/agents/cgm-release-agent.hugetlb)<br>
> > systemd on /sys/fs/cgroup/systemd type cgroup<br>
> > (rw,nosuid,nodev,noexec,relatime,release_agent=/run/cgmanager/agents/cgm-release-agent.systemd,name=systemd)<br>
><br>
> Hmm, are you running cgmanager at the same time as systemd? I think this<br>
> might be a problem with the intersection of cgmanager with the cgroup<br>
> mounting code, i.e. the cgroup mounting code uses the cgfs stuff (which<br>
> was originally just cgroup before Serge implemented multiple drivers)<br>
> while the "put the container into cgroup" code uses cgmanager, which may<br>
> have some weird side effect in this case. I have to confess that so far<br>
> I haven't tried cgmanager myself (it's on my todo list), so I never<br>
> tested the interaction between Serge's cgmanager code and my cgroup<br>
> mounting code...<br>
><br>
> If you are running cgmanager, could you try the same with cgmanager<br>
> stopped? Then LXC should fall back to the cgfs code, which<br>
> *should* work in this case, unless something else broke this logic.<br>
><br>
> Anyway, I'll have a chance to look at this more closely on Saturday (I'm<br>
> busy with other things tomorrow).<br>
><br>
> Regards,<br>
> Christian<br>
<br>
<br>
</div></div><span class=""><font color="#888888">--<br>
Michael H. Warfield (AI4NB) | (770) 978-7061 | mhw@WittsEnd.com<br>
/\/\|=mhw=|\/\/ | (678) 463-0932 | <a href="http://www.wittsend.com/mhw/" target="_blank">http://www.wittsend.com/mhw/</a><br>
NIC whois: MHW9 | An optimist believes we live in the best of all<br>
PGP Key: 0x674627FF | possible worlds. A pessimist is sure of it!<br>
<br>
</font></span></blockquote></div><br></div></div>