[Lxc-users] read only rootfs
Michael H. Warfield
mhw at WittsEnd.com
Tue Jul 19 21:17:45 UTC 2011
On Tue, 2011-07-19 at 15:32 -0500, Serge E. Hallyn wrote:
> Quoting Michael H. Warfield (mhw at WittsEnd.com):
> > On Tue, 2011-07-19 at 13:34 -0500, Serge E. Hallyn wrote:
> > > Quoting C Anthony Risinger (anthony at xtfx.me):
> > > > there it would seem. however, while i could *maybe* see the rootfs
> > > > being an unconditional slave, i would NOT want to see any lxc
> > > > default/enforcement preventing container -> host propagation on a
> > > > globally recursive scale. i'm of the opinion that the implementor
> > > > should decide the best tactic ... especially in light of the fact that
> > > > one distro may not even have the same problems as, say,
> > > > ubuntu/fedora/etc, because they keep mount points private by default.
> >
> > > Good point. (I don't see it on ubuntu either fwiw) Perhaps there
> > > should be a toggle in the per-container config file?
> >
> > Quick question.
> >
> > Is there any way to test for these flags (SHARED, PRIVATE, SLAVE)? I
> > don't see them showing up anywhere from mount, in proc mounts or
> > mountstats. How do you check to see if they are set?
> /proc/self/mountinfo is supposed to tell you that. i.e. if you do
> a --make-shared on /mnt, it'll show 'shared' next to the /mnt entry.
> (I say 'is supposed to' because --make-rslave just shows nothing, but
> maybe that's because the way I did it, it wasn't a slave to anything,
> so it was actually private.)
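For the record, those optional fields in /proc/self/mountinfo do seem to
be the place to look; a plain private mount shows no optional field at
all, which would explain why your --make-rslave test printed nothing.
Roughly what I'd expect to see (mount IDs and devices made up):

  mount --make-shared /mnt
  grep ' /mnt ' /proc/self/mountinfo
    70 24 8:3 / /mnt rw,relatime shared:1 - ext4 /dev/sda3 rw,data=ordered
  mount --make-private /mnt
  grep ' /mnt ' /proc/self/mountinfo
    70 24 8:3 / /mnt rw,relatime - ext4 /dev/sda3 rw,data=ordered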
Ok... This just gets weirder.
For giggles, I set my /srv partition (where all my VMs are located) to
"shared". Now, the first machine starts up fine, but the second one,
Plover, and all subsequent ones blow up with this:
[root@forest ~]# lxc-start --name Plover
lxc-start: Invalid argument - pivot_root syscall failed
lxc-start: failed to setup pivot root
lxc-start: failed to set rootfs for 'Plover'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'Plover'
lxc-start: Device or resource busy - failed to remove cgroup '/sys/fs/cgroup/systemd/Plover'
And mount -t devpts shows ALL the devpts mounts for all the attempted
VMs. Ok... Guess that wasn't a good idea.
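For what it's worth, that pivot_root failure looks consistent with
pivot_root(2): it returns EINVAL when the parent mount of the new root
(or of the current root) is shared, and the container rootfs now sits
under the freshly shared /srv, so it inherits that. If anyone wants to
see what got marked after an experiment like this, a quick check (just
a sketch, nothing lxc-specific):

  grep -E 'shared:[0-9]+' /proc/self/mountinfo   # mounts that are now shared
  grep devpts /proc/self/mountinfo               # devpts mounts that leaked out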
But... I got this for the root system on Alcove.
106 55 8:17 /lxc/private/Alcove / rw,relatime master:1 - ext4 /dev/sdb1 rw,barrier=1,data=ordered
Ok... That now says "master:1". Not sure what it signifies...
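Digging into Documentation/filesystems/proc.txt, the optional fields
seem to mean:

  shared:X          mount is shared in peer group X
  master:X          mount is a slave to peer group X
  propagate_from:X  mount is a slave and receives propagation from peer group X
  unbindable        mount is unbindable

So, if I'm reading that right, "master:1" says Alcove's root is a slave
receiving propagation from peer group 1, presumably the /srv I just
marked shared.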
Shut him down and changed /srv to be a slave, and all the containers
come up, but the remount still propagates back. Then ran --make-rslave
on it, with no effect. Seems like we're missing a piece of the puzzle
here.
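One thing that might be worth trying, in case the missing piece is on
the container side rather than the host side: mark everything slave
from inside the container's newly cloned mount namespace, before the
rootfs gets set up. A rough sketch (not something lxc does today, as
far as I know):

  # run inside the new mount namespace only: every inherited mount
  # becomes a slave, so remounts made inside the container should
  # stop propagating back out to the host
  mount --make-rslave /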
Regards,
Mike
--
Michael H. Warfield (AI4NB) | (770) 985-6132 | mhw at WittsEnd.com
/\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/
NIC whois: MHW9 | An optimist believes we live in the best of all
PGP Key: 0x674627FF | possible worlds. A pessimist is sure of it!