[lxc-devel] Container autostart proposal

Dwight Engen dwight.engen at oracle.com
Tue May 28 15:23:33 UTC 2013


On Tue, 28 May 2013 09:57:13 -0400
"Michael H. Warfield" <mhw at WittsEnd.com> wrote:

> On Tue, 2013-05-28 at 09:29 -0400, Stéphane Graber wrote: 
> > On 05/28/2013 05:11 AM, Jäkel, Guido wrote:
> > > Dear Stéphane,
> > > 
> > > In my opinion, we have to deal with two independent things:
> > > system crash recovery and container startup/shutdown dependencies.
> > > 
> > >> Another problem with this implementation is that the autostart
> > >> flag is lost when migrating the container to another host. One
> > >> needs to manually remove the symlinks on the source and recreate
> > >> them on the destination.
> > > 
> > > May I repeat my proposal to (mis-)use the sticky bit (file mode
> > > 1000) of the config file as a marker. I would use it to persist
> > > the fact that a container is currently up. Therefore, it should
> > > be set by a successful lxc-start and cleared by lxc-stop by
> > > default.
> > > 
> > > After a system crash, all containers with such a marker set
> > > should be restarted. For those with an "autostart" declaration,
> > > the conditions would be derived from this framework's ruleset.
> > > But even if the framework is unused, the containers not mentioned
> > > in it should also be brought up again if they were running
> > > before. This could be signaled by a special "powercycle" option
> > > for lxc-start. The same option applied to lxc-stop would prevent
> > > it from clearing the running marker. With this, a reboot could
> > > keep the flags so that the running containers are restarted,
> > > while a shutdown or poweroff of the host would leave all
> > > containers stopped at the next boot of the host.
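> > > As a rough illustration of the idea (this is not an existing LXC
> > > mechanism, and the path is only an example):
> > > 
> > >   chmod +t /var/lib/lxc/foo/config   # marker set by a successful lxc-start
> > >   chmod -t /var/lib/lxc/foo/config   # marker cleared by a regular lxc-stop
> > >   [ -k /var/lib/lxc/foo/config ]     # after a crash: restart if this is true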
> > > 
> > > 
> > > I vote to use lxc config options to declare the meta information
> > > like the start timeout. Instead of ordering via a priority number
> > > I would strongly prefer to model it with a "needs foo,bar" tag,
> > > for several reasons: first, again, portability to another host.
> > > In addition, it's a canonical description and it's much easier to
> > > insert something or to notice that no ordering is required.
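> > > A hypothetical sketch of what such entries could look like in a
> > > container's config (the key names are only illustrative, none of
> > > them exist today):
> > > 
> > >   # /var/lib/lxc/webapp/config
> > >   lxc.start.auto    = 1
> > >   lxc.start.timeout = 30
> > >   lxc.start.needs   = database,ldap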
> > > 
> > > 
> > > I also like Natanael's idea of declaring a group that might be
> > > used. One should even be able to list more than one group name in
> > > the configuration tag, because a concrete container may be needed
> > > by different sets.
> > > 
> > > Maybe a special syntax for the container argument of the
> > > appropriate lxc commands could then be used to deal with such a
> > > set of containers. We could allow a ','-separated list of names
> > > for the -n option, and a prefix like '~' would mean a group name
> > > and be expanded to the list of its members' names. In addition, a
> > > bare '~' could be expanded to all containers and '~~' to all
> > > "groupless" containers.
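> > > For example (purely illustrative syntax, not implemented):
> > > 
> > >   lxc-start -n web01,web02 -d   # explicit list of containers
> > >   lxc-start -n ~frontend -d     # all members of group "frontend"
> > >   lxc-start -n ~ -d             # all containers
> > >   lxc-stop  -n ~~               # all "groupless" containers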
> > > 
> > > 
> > > Greetings
> > > 
> > > Guido
> 
> > So I'd rather not abuse the sticky bit for that kind of thing,
> > especially as there's no good reason to do so.
> > We can very simply create a separate state file that's removed on
> > shutdown, or just use the one we already have (rootfs.hold).
> 
> I have to also concur with this.  I would be appalled and adamantly
> opposed to overloading the sticky bit with that sort of functionality
> that could have untold side effects.  It would end up being poorly
> documented, confusing and very misunderstood.  We've seen that sort of
> thing in the Samba project.  A state file (similar to the .hold file)
> in the container's management directory makes vastly more sense.

I agree, and another reason for not (ab)using the sticky bit is that I
don't think it can be done with the atomic properties that file
creation provides.
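
For illustration, file creation gives you an atomic "create if not
already there" in a single syscall, which a chmod on the config file
cannot. A minimal sketch (not actual LXC code, path handling left out):

  #include <fcntl.h>
  #include <unistd.h>

  /* Atomically create the "running" state file. O_EXCL makes the call
   * fail with EEXIST if the file already exists, so two racing
   * lxc-start invocations cannot both believe they created it. */
  static int mark_running(const char *path)
  {
          int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);

          if (fd < 0)
                  return -1;
          close(fd);
          return 0;
  }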

> Regards,
> Mike
> 
> > Also as I said in my reply to Natanael, I'm not planning on doing
> > any dependency resolution in LXC. It's pretty difficult to do so
> > (I'm also upstream for an init system, so believe me, I know ;))
> > and a job that's much better done by existing init systems.
> > 
> > I think using priority + time is enough to deal with most of
> > everyone's current problems with our startup sequence. Those two
> > keys also allow someone to build a tool which would do dependency
> > resolution and then update those fields accordingly, though I still
> > think it'd be best to just use init scripts for such specific cases.
> > 
> > I also agree with lxc.group needing to be multi-value and we'll need
> > another multi-value field in the system config to list which groups
> > are to be autostarted (provided lxc.start.auto is also set for the
> > containers).
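> > 
> > To illustrate the shape this could take (the exact key names are
> > just a sketch, nothing here is final):
> > 
> >   # container config
> >   lxc.start.auto = 1
> >   lxc.group = frontend
> >   lxc.group = monitoring
> > 
> >   # system-wide configuration
> >   lxc.autostart.groups = frontend,database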
> > 
> > 
> > Now as for state preservation across reboot, it's not something I'm
> > particularly interested in so I'm not planning to spend time on this
> > myself. You're however welcome to contribute to that.
> > 
> > One thing we probably ought to do to simplify any future
> > implementation of this is make lxc.start.auto an integer instead of
> > a boolean. With values:
> >  - 0 => off
> >  - 1 => always auto-start
> >  - 2 => last state (not implemented initially)
> > 
> > So we'd need a patch that essentially changes the new "-a" option of
> > lxc-start to also start containers with lxc.start.auto set to 2 as
> > long as the state file exists in the container's directory.
> > Similarly the "-a" option of lxc-stop would need to be updated NOT
> > to remove the state file for containers with lxc.start.auto set to
> > 2.
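> > 
> > Roughly, the "-a" logic could end up looking like this (pseudo-C
> > with made-up helper names, none of this exists yet):
> > 
> >   /* lxc-start -a: decide whether to bring a container up */
> >   bool should_autostart(struct lxc_container *c)
> >   {
> >           int mode = get_start_auto(c);        /* reads lxc.start.auto */
> > 
> >           if (mode == 1)                       /* always auto-start */
> >                   return true;
> >           if (mode == 2)                       /* last state */
> >                   return state_file_exists(c); /* was it running before? */
> >           return false;
> >   }
> > 
> >   /* lxc-stop -a: only remove the state file when mode != 2 */
> >   if (get_start_auto(c) != 2)
> >           remove_state_file(c);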
> > 
> > 