[lxc-devel] Container autostart proposal V2

Stéphane Graber stgraber at ubuntu.com
Thu May 30 15:39:29 UTC 2013


On 05/30/2013 11:28 AM, Harald Dunkel wrote:
> Hi Stéphane,
> 
> On 05/30/13 15:33, Stéphane Graber wrote:
>>
>> That's already covered by my proposal and I believe covered in the use
>> cases listed within it.
>>
>> "lxc-stop -g any"
>>
>> That'll stop all containers that are in the "any" group. "any" is
>> documented as being a special group that contains all containers even if
>> they have lxc.group set.
>>
> 
> My apologies, I missed that.
> 
> If LXC relies upon a special group to stop containers, wouldn't
> it be more consistent to use an "autostart" group, too?

There's nothing in theory preventing us from doing that, except that I'd
like to keep the number of special groups to a minimum.

If we were to use groups for everything, we'd end up having to reserve
"disabled", "autostart", "last-state".

And then make those 3 conflict so that a container couldn't be in more
than one of those at any given time.

This seems rather complicated and non-obvious for our users, so I'd
rather keep things simple and have separate lxc.start.auto and
lxc.start.disabled config entries.
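
To illustrate, a container's config under this proposal would carry separate keys rather than reserved group names. A hypothetical fragment (key names as discussed in this thread; they may differ in the final implementation):

```
# Start this container automatically at host boot
lxc.start.auto = 1

# Ordering hint and delay before starting the next container
lxc.start.order = 10
lxc.start.delay = 5

# A user-defined group, usable with e.g. lxc-stop -g
lxc.group = webservers

# Alternatively, a container could be explicitly kept out of
# autostart with the proposed lxc.start.disabled = 1
```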

>>> And one question:
>>>
>>> Do lxc-start -a ... and lxc-stop -a ... start/stop all LXC
>>> containers in parallel, if their order and group are the
>>> same? I am concerned about accumulating timeouts or delays
>>> at shutdown or startup time of the host.
>>
>> The usual STOPPED => RUNNING or RUNNING => STOPPED transition takes
>> < 1s, so no, we'll be doing that serially, but you won't really notice
>> because of how quick it is.
>>
>>
> 
> Sorry, but I disagree in this case. Surely the containers start init
> very fast, but at shutdown lxc-stop has to wait for _all_
> processes running in the container. Some Java webapps might take an
> awful lot of time to stop (just as an example, meaning no offense to
> the Java folks).
> 
> The LXC server might have 16 cores or more. It's more efficient if
> the containers are triggered to shut down in parallel, instead of
> shutting down one after the other while the rest are still using
> up CPU time and keeping the disks busy.
> 
> If we assume an LXC server running 30 containers in parallel, and
> if every container needs 10 seconds to stop (this is not uncommon),
> then this means 5 minutes of downtime just to stop all services.

lxc-stop sends SIGKILL by default, which is usually instantaneous. When
it isn't, the delay comes from I/O wait on the kernel side, which
parallelization would only make worse.

>> That's assuming lxc.start.delay isn't set. If it is, then startup will
>> obviously take longer because we'll wait for lxc.start.delay before
>> starting the next container (parallelizing would obviously make the
>> whole priority/delay idea completely pointless).
>>
> 
> That's not obvious at all. The containers might have different order
> numbers. Only the containers with the same order number should
> be started (or stopped) in parallel. LXC could wait for the largest
> start delay before starting the next set of containers with the
> next order number.

I'm not planning on doing anything more clever than simply doing serial
start of the containers, waiting for lxc.start.delay if it's present.

Anyone who needs something more advanced than that should use proper
init scripts for their containers.

The idea here is to behave the same way a good old serial sysvinit
would: we first order the containers by priority and then by name. So
someone can give container test02 the same priority as test01 and be
sure it'll start only after test01 has started and its lxc.start.delay
has passed.
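
The serial-start behaviour described above can be sketched as follows. This is an illustrative sketch, not LXC's implementation (which is in C); the container records, the assumption that a lower priority value starts first, and the `start`/`sleep` hooks are all hypothetical:

```python
import time

# Hypothetical container records: (name, priority, start_delay_seconds),
# loosely mirroring the proposed lxc.start.order / lxc.start.delay keys.
containers = [
    ("test02", 10, 0),
    ("test01", 10, 5),
    ("db01",    0, 2),
]

def serial_start(containers, start=lambda name: None, sleep=time.sleep):
    """Start containers one at a time: order by priority, then by name,
    waiting for each container's lxc.start.delay before moving on."""
    started = []
    # Assumption for this sketch: lower priority value starts earlier.
    for name, _prio, delay in sorted(containers, key=lambda c: (c[1], c[0])):
        start(name)          # stand-in for the equivalent of lxc-start -n <name> -d
        started.append(name)
        if delay:
            sleep(delay)     # honour the per-container start delay
    return started

# db01 starts first, then test01 before test02 (same priority, ordered by name)
print(serial_start(containers, sleep=lambda s: None))
```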

> 
> Regards
> Harri
> 


-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com
