[Lxc-users] read only rootfs
C Anthony Risinger
anthony at xtfx.me
Tue Jul 19 22:28:42 UTC 2011
On Tue, Jul 19, 2011 at 4:17 PM, Michael H. Warfield <mhw at wittsend.com> wrote:
> On Tue, 2011-07-19 at 15:32 -0500, Serge E. Hallyn wrote:
>> Quoting Michael H. Warfield (mhw at WittsEnd.com):
>> > On Tue, 2011-07-19 at 13:34 -0500, Serge E. Hallyn wrote:
>> > > Quoting C Anthony Risinger (anthony at xtfx.me):
>> > > > there it would seem. however, while i could *maybe* see the rootfs
>> > > > being an unconditional slave, i would NOT want to see any lxc
>> > > > default/enforcement preventing container -> host propagation on a
>> > > > globally recursive scale. im of the opinion that the implementor
>> > > > should decide the best tactic ... especially in light of the fact the
>> > > > one distro may not even have the same problems as, say,
>> > > > ubuntu/fedora/etc because they keep mount points private by default.
>> >
>> > > Good point. (I don't see it on ubuntu either fwiw) Perhaps there
>> > > should be a toggle in the per-container config file?
>> >
>> > Quick question.
>> >
>> > Is there any way to test for these flags (SHARED, PRIVATE, SLAVE)? I
>> > don't see them showing up anywhere from mount, in proc mounts or
>> > mountstats. How do you check to see if they are set?
>
>> /proc/self/mountinfo is supposed to tell that. i.e. if you do
>> a --make-shared on /mnt, it'll show 'shared' next to the /mnt entry.
>> (I say 'is supposed to' bc --make-rslave just shows nothing, but
>> maybe that's bc the way i did it it wasn't a slave to anything,
>> so it was actually private)
>
> Ok... This just gets weirder.
>
> For giggles, I set my /srv partition (where all my VM's are located) to
> "shared". Now, the first machine starts up fine but the second one,
> Plover, and all subsequent ones blow up with this:
>
> [root at forest ~]# lxc-start --name Plover
> lxc-start: Invalid argument - pivot_root syscall failed
> lxc-start: failed to setup pivot root
> lxc-start: failed to set rootfs for 'Plover'
> lxc-start: failed to setup the container
> lxc-start: invalid sequence number 1. expected 2
> lxc-start: failed to spawn 'Plover'
> lxc-start: Device or resource busy - failed to remove cgroup '/sys/fs/cgroup/systemd/Plover'
>
> And mount -t devpts shows ALL the devpts mounts for all the attempted
> VM's. Ok... Guess that wasn't a good idea.
>
> But... I got this for the root system on Alcove.
>
> 106 55 8:17 /lxc/private/Alcove / rw,relatime master:1 - ext4 /dev/sdb1 rw,barrier=1,data=ordered
>
> Ok... That now says "master:1". Not sure what it signifies...
>
> Shut him down and changed /srv to be slave and all the containers come
> up, but the remount still propagates back. Then I ran --make-rslave on
> it with no effect. Seems like we're missing a piece of the puzzle
> here.
maybe not the best context for this response, but i wanted to point
out one thing that confused me for a while, since it might be related
...
... the fact that the shared/slave context only exists when BOTH
sides are mount points. e.g. if DIR is only a directory:
mount --bind ./DIR ./TARGET
... it will never propagate mounts to TARGET (AFAICT), and does not
respond to --make-* ... before OR after the --bind. in order to get
propagation, one must:
mount --bind ./DIR ./DIR
mount --make-shared ./DIR
mount --bind ./DIR ./TARGET
[mount --make-slave ./TARGET]
... this tripped me up for a while, as it seemed like the semantics were changing.
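as an aside, the question quoted above (how do you check whether
SHARED/PRIVATE/SLAVE is actually set?) can be answered from
/proc/self/mountinfo without root. a minimal sketch, assuming the
modern mountinfo layout (the optional fields start at column 7 and
are terminated by a lone "-"):

```shell
# list every mount point with its propagation flags, read straight
# from /proc/self/mountinfo.  columns 7..n are the "optional fields"
# (shared:N, master:N, propagate_from:N, unbindable), terminated by
# a literal "-"; no optional fields means the mount is private.
awk '{
    prop = ""
    for (i = 7; $i != "-"; i++)
        prop = prop " " $i
    printf "%-40s%s\n", $5, (prop == "" ? " private" : prop)
}' /proc/self/mountinfo
```

so after `mount --make-shared ./DIR` you should see `shared:N` next
to DIR's entry, and a rootfs showing `master:N` (like the Alcove line
above) is a slave receiving propagation from peer group N. newer
util-linux can also show the same thing via `findmnt -o
TARGET,PROPAGATION` (assuming your version has that column).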
C Anthony