[lxc-users] lxc-users Digest, Vol 204, Issue 2

Thouraya TH thouraya87 at gmail.com
Wed Nov 8 19:12:49 UTC 2017


Hi,
Thank you so much for the answer :)
It didn't work for me.

lxc config set workerTest limits.cpu 2

No command 'lxc' found, did you mean:
 Command 'lpc' from package 'cups-bsd' (main)
 Command 'lpc' from package 'lpr' (universe)
 Command 'lpc' from package 'lprng' (universe)
 Command 'axc' from package 'afnix' (universe)
 Command 'llc' from package 'llvm' (universe)
 Command 'lc' from package 'mono-devel' (main)
lxc: command not found
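The `lxc` command here is the LXD client, which is not installed on this system. With the classic LXC tools, the same limit can be applied through the cpuset cgroup instead; a sketch, assuming cgroup v1 and a running container named workerTest:

```shell
# Pin the running container to two host CPUs (0 and 1).
# This takes effect immediately but lasts only until the container stops:
lxc-cgroup -n workerTest cpuset.cpus 0-1

# To make it persistent across restarts, add the key to the container's
# config file instead (path assumes the default /var/lib/lxc layout):
echo 'lxc.cgroup.cpuset.cpus = 0-1' >> /var/lib/lxc/workerTest/config
```

The first form applies to the live cgroup; the config line is read the next time the container starts.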


Best regards.

2017-11-07 13:00 GMT+01:00 <lxc-users-request at lists.linuxcontainers.org>:

> Send lxc-users mailing list submissions to
>         lxc-users at lists.linuxcontainers.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.linuxcontainers.org/listinfo/lxc-users
> or, via email, send a message with subject or body 'help' to
>         lxc-users-request at lists.linuxcontainers.org
>
> You can reach the person managing the list at
>         lxc-users-owner at lists.linuxcontainers.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of lxc-users digest..."
>
> Today's Topics:
>
>    1. Race condition in IPv6 network configuration (MegaBrutal)
>    2. Number of core for a container (Thouraya TH)
>    3. Re: Number of core for a container (Renato dos Santos)
>    4. Re: Number of core for a container (Stéphane Graber)
>    5. Re: Race condition in IPv6 network configuration (Marat Khalili)
>    6. Re: Race condition in IPv6 network configuration (MegaBrutal)
>    7. Re: Race condition in IPv6 network configuration (Marat Khalili)
>
>
> ---------- Forwarded message ----------
> From: MegaBrutal <megabrutal at gmail.com>
> To: LXC users mailing-list <lxc-users at lists.linuxcontainers.org>
> Cc:
> Bcc:
> Date: Mon, 6 Nov 2017 14:15:00 +0100
> Subject: [lxc-users] Race condition in IPv6 network configuration
> Hi all,
>
> I experience an annoying race condition when my LXC container's
> network interface comes up. By default, nodes are expected to
> configure themselves with Router Advertisements; but for some
> containers, I'd prefer to set a static address. In these containers, I
> use a static configuration in /etc/network/interfaces, and explicitly
> disable RAs. Yet somehow, these containers configure themselves
> through RA before the static configuration occurs. The static
> address is then added, but the default route acquired from the RA
> remains. As RAs are disabled, this default route expires after some
> time, and the host is left without a default route. Then I receive
> ping-test failure alerts from my monitoring system... If I reboot the
> container enough times, eventually I get lucky: the interface
> configuration happens before an RA is received, and I get a permanent
> default route, and everything is fine. But it's annoying to have to
> reboot the container multiple times until that happens.
>
> What am I doing wrong?
>
> Here is my LXC config file:
>
> # Template used to create this container: /usr/share/lxc/templates/lxc-ubuntu
> # Parameters passed to the template: -S /home/megabrutal/.ssh/id_rsa.pub
> # For additional config options, please look at lxc.conf(5)
>
> # Common configuration
> lxc.include = /usr/share/lxc/config/ubuntu.common.conf
>
> # Container specific configuration
> lxc.start.auto = 1
> lxc.rootfs.path = /dev/vmdata-vg/lxc-reverse
> lxc.rootfs.options = subvol=@reverse
> lxc.uts.name = reverse
> lxc.arch = amd64
>
> # Network configuration
> lxc.net.0.type = veth
> lxc.net.0.hwaddr = 00:16:3e:7b:9e:b4
> #lxc.net.0.flags = up
> lxc.net.0.link = br0
>
>
> Note how I explicitly commented out the net flags: I don't want
> the interface to be in the UP state when the container starts; I want
> the container to configure the interface itself. At first, I
> thought commenting out "lxc.net.0.flags = up" would solve the issue,
> and it did help somewhat, because at least now the configuration
> SOMETIMES works as intended. Before that, the static configuration
> never succeeded.
>
> Here is /etc/network/interfaces from within the container:
>
> # This file describes the network interfaces available on your system
> # and how to activate them. For more information, see interfaces(5).
>
> # The loopback network interface
> auto lo
> iface lo inet loopback
>
> auto eth0
> iface eth0 inet dhcp
>
> iface eth0 inet6 static
>         address 2001:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
>         netmask 64
>         gateway fe80::xxxx:xxxx:xxxx:xxxx
>         autoconf 0
>         accept_ra 0
>
>
> So I want the container to receive its IPv4 address through DHCP and
> have a static IPv6 configuration at the same time. What is the best
> practice for this, without race conditions with RAs?
>
> Note: I use the old LXC interface / tools, and have no plans to migrate
> to LXD yet.
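One approach that fits this classic-LXC setup (a sketch, not an established best practice from the list) is to refuse RAs at the kernel level inside the container, so that an RA arriving before ifupdown runs is simply ignored. The `accept_ra`/`autoconf` stanza options in interfaces(5) only take effect when ifup runs, which is exactly the race; the sysctl `default` key covers interfaces as they are created:

```shell
# Inside the container: disable RA processing and SLAAC via sysctl so
# early RAs are ignored regardless of when ifup runs.
cat >> /etc/sysctl.d/60-no-ra.conf <<'EOF'
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.eth0.accept_ra = 0
net.ipv6.conf.eth0.autoconf = 0
EOF

# Apply now (also applied automatically at boot):
sysctl --system
```

The `default` entries matter most here: they apply to an interface at creation time, before any per-interface configuration runs.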
> Thanks for your help in advance!
>
>
> Regards,
> MegaBrutal
>
>
>
> ---------- Forwarded message ----------
> From: Thouraya TH <thouraya87 at gmail.com>
> To: LXC users mailing-list <lxc-users at lists.linuxcontainers.org>
> Cc:
> Bcc:
> Date: Mon, 6 Nov 2017 19:40:03 +0100
> Subject: [lxc-users] Number of core for a container
> Hi all,
>
> Please, how can I fix the number of CPU cores for a container when I use
> lxc-create or lxc-clone?
>
> Thanks a lot for your help.
> Best regards.
>
>
>
> ---------- Forwarded message ----------
> From: Renato dos Santos <renato.santos at wplex.com.br>
> To: lxc-users at lists.linuxcontainers.org
> Cc:
> Bcc:
> Date: Mon, 6 Nov 2017 17:20:32 -0200
> Subject: Re: [lxc-users] Number of core for a container
>
> Hi Thouraya,
>
> You can use this after creating the container:
> Set the container to use any 2 CPUs on the host.
>
>     $ lxc config set your-container limits.cpu 2
>
> Set the container to use physical CPU 0, 3, 7, 8 and 9 on the host.
>
>     $ lxc config set your-container limits.cpu 0,3,7-9
>
> Set the container to use 20% of the available CPU on the host or more if
> it’s available.
>
>     $ lxc config set your-container limits.cpu.allowance 20%
>
> Set the container to use no more than 50% of the available CPU on the
> host, or 100ms for every 200ms of CPU time available.
>
>     $ lxc config set your-container limits.cpu.allowance 100ms/200ms
>
> Another option is to use a profile.
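The profile route mentioned above might look like this in LXD (`default` is the standard profile name; adjust as needed):

```shell
# Set the limit once on a profile so every container using that
# profile inherits it:
lxc profile set default limits.cpu 2

# Verify the key was applied:
lxc profile show default
```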
>
>
> On 06/11/2017 16:40, Thouraya TH wrote:
>
> Hi all,
>
> Please, how can I fix the number of CPU cores for a container when I use
> lxc-create or lxc-clone?
>
> Thanks a lot for your help.
> Best regards.
>
>
>
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
> --
> Renato dos Santos
> Analista de Infraestrutura
>
> 48 3239-2400 Pabx
> WPLEX Software Ltda.
> Rodovia SC 401, 8600 Corporate Park Bloco 5 Sala 101
> 88050-000 Santo Antônio de Lisboa, Florianópolis SC
> wplex.com.br
>
>
> ---------- Forwarded message ----------
> From: "Stéphane Graber" <stgraber at ubuntu.com>
> To: LXC users mailing-list <lxc-users at lists.linuxcontainers.org>
> Cc:
> Bcc:
> Date: Mon, 6 Nov 2017 14:23:12 -0500
> Subject: Re: [lxc-users] Number of core for a container
> Those instructions are for LXD, not for low-level LXC.
>
> For LXC, you'd need to use the lxc.cgroup.cpuset config keys to directly
> configure the cpuset cgroup.
>
> Note that since LXC doesn't have a long running manager like LXD does,
> you're going to need to do the scheduling/balancing of those containers
> yourself.
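Concretely, rough classic-LXC equivalents of the LXD examples above could be (a sketch; paths assume the default /var/lib/lxc layout and cgroup v1):

```shell
# Pin the container to CPUs 0, 3 and 7-9, the analogue of
# "limits.cpu 0,3,7-9":
echo 'lxc.cgroup.cpuset.cpus = 0,3,7-9' >> /var/lib/lxc/your-container/config

# Rough analogue of "limits.cpu.allowance 100ms/200ms" via the CFS
# bandwidth controller (100ms of CPU time every 200ms period):
echo 'lxc.cgroup.cpu.cfs_quota_us = 100000' >> /var/lib/lxc/your-container/config
echo 'lxc.cgroup.cpu.cfs_period_us = 200000' >> /var/lib/lxc/your-container/config
```

These keys are read at container start; for a running container the same values can be pushed with `lxc-cgroup -n your-container cpuset.cpus 0,3,7-9`.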
>
> On Mon, Nov 06, 2017 at 05:20:32PM -0200, Renato dos Santos wrote:
> > Hi Thouraya,
> >
> > You can use this after creating the container:
> >
> > Set the container to use any 2 CPUs on the host.
> >
> >     $ lxc config set your-container limits.cpu 2
> >
> > Set the container to use physical CPU 0, 3, 7, 8 and 9 on the host.
> >
> >     $ lxc config set your-container limits.cpu 0,3,7-9
> >
> > Set the container to use 20% of the available CPU on the host or more if
> > it’s available.
> >
> >     $ lxc config set your-container limits.cpu.allowance 20%
> >
> > Set the container to use no more than 50% of the available CPU on the
> host,
> > or 100ms for every 200ms of CPU time available.
> >
> >     $ lxc config set your-container limits.cpu.allowance 100ms/200ms
> >
> > Another option is to use a profile.
> >
> >
> > On 06/11/2017 16:40, Thouraya TH wrote:
> > > Hi all,
> > >
> > > Please, how can I fix the number of CPU cores for a container when I use
> > > lxc-create or lxc-clone?
> > >
> > > Thanks a lot for your help.
> > > Best regards.
> > >
> > >
> > >
> > > _______________________________________________
> > > lxc-users mailing list
> > > lxc-users at lists.linuxcontainers.org
> > > http://lists.linuxcontainers.org/listinfo/lxc-users
> >
> > --
> > Renato dos Santos
> > Analista de Infraestrutura
> >
> > 48 3239-2400 Pabx
> > WPLEX Software Ltda.
> > Rodovia SC 401, 8600 Corporate Park Bloco 5 Sala 101
> > 88050-000 Santo Antônio de Lisboa, Florianópolis SC
> > wplex.com.br <http://wplex.com.br>
>
> > _______________________________________________
> > lxc-users mailing list
> > lxc-users at lists.linuxcontainers.org
> > http://lists.linuxcontainers.org/listinfo/lxc-users
>
>
> --
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
>
>
> ---------- Forwarded message ----------
> From: Marat Khalili <mkh at rqc.ru>
> To: lxc-users at lists.linuxcontainers.org
> Cc:
> Bcc:
> Date: Tue, 7 Nov 2017 10:17:59 +0300
> Subject: Re: [lxc-users] Race condition in IPv6 network configuration
>
>> If I reboot the
>> container multiple times, once I'll get lucky and the interface
>> configuration happens before an RA is received, and I get a permanent
>> default route and everything's fine. But it's annoying because I have
>> to reboot the container multiple times before it happens.
>>
> I don't know the cause of your problem, but I also encountered a race
> in container network configuration:
> https://lists.linuxcontainers.org/pipermail/lxc-users/2017-June/013456.html
> Since no one knows why it happens, I ended up checking the container
> network configuration from a cron job, restarting resolvconf if it is
> wrong, and then emailing the administrator; the email is necessary since
> in my case some services may have already failed to start by the time
> the configuration is auto-corrected. An even better solution would be to
> put the check in a systemd service with correct dependencies, but I
> haven't implemented that yet.
>
> Until someone finds a proper solution to your problem, I suggest you:
>
> 1) Restart not the whole container but something smaller, like
> networking or a specific interface.
>
> 2) Automate the check and restart as I did.
>
> 3) Write a systemd service for this if you have enough time at hand,
> then share it :)
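The check-and-restart idea from item 2 can be sketched as a small script (eth0, the script path, and the admin address are placeholders):

```shell
#!/bin/sh
# Sketch of a cron-driven network check: verify the container still has
# an IPv6 default route; if not, bounce the interface and email the admin.

# Succeeds when the given `ip -6 route` output contains a default route.
check_default_route() {
    printf '%s\n' "$1" | grep -q '^default'
}

main() {
    routes=$(ip -6 route show dev eth0 2>/dev/null)
    if ! check_default_route "$routes"; then
        ifdown eth0 2>/dev/null
        ifup eth0
        echo "IPv6 default route was missing on $(hostname); restarted eth0." \
            | mail -s "container network check" admin@example.com
    fi
}

# From cron, e.g.:  */5 * * * * /usr/local/bin/check-net.sh --run
if [ "${1:-}" = "--run" ]; then main; fi
```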
>
> --
>
> With Best Regards,
> Marat Khalili
>
>
>
>
> ---------- Forwarded message ----------
> From: MegaBrutal <megabrutal at gmail.com>
> To: LXC users mailing-list <lxc-users at lists.linuxcontainers.org>
> Cc:
> Bcc:
> Date: Tue, 7 Nov 2017 11:45:00 +0100
> Subject: Re: [lxc-users] Race condition in IPv6 network configuration
> Hi Marat,
>
> First of all, I also suggest that you comment out the line
> "lxc.net.0.flags = up" in your LXC container configuration
> (/var/lib/lxc/containername/config). (Note: if you have an older
> version of LXC, the key is "lxc.network.flags", if I remember
> correctly.) It would probably help your container to bring the
> interface up from the DOWN state itself, rather than configure an
> interface which is already UP and possibly in an ambiguous state.
>
>
> 2017-11-07 8:17 GMT+01:00 Marat Khalili <mkh at rqc.ru>:
> >
> > Until someone finds a proper solution to your problem, I suggest you:
> >
> > 1) Restart not the whole container but something smaller, like
> > networking or a specific interface.
>
> Actually, that's more cumbersome. I'd have to remove all IPs and
> routes, bring the interface down (ip link set dev eth0 down), and then
> run ifup. A reboot is quicker, as I don't run heavy applications in the
> problematic containers.
>
> I also tried ifdown, but that usually doesn't work, because as far as
> the system is concerned, the interface is not configured (it failed
> configuration when ifupdown tried to bring it up, so it is in a
> half-configured state).
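For reference, the manual reset described above, spelled out (a sketch; eth0 as in the thread):

```shell
# Tear the half-configured interface down by hand, then let ifupdown
# apply the static configuration from a clean state:
ip -6 addr flush dev eth0     # drop all IPv6 addresses
ip -6 route flush dev eth0    # drop the stale RA default route
ip link set dev eth0 down
ifup eth0                     # re-run the configured static setup
```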
>
> >
> > 2) Automate the check and restart as I did.
>
> At least I monitor it with Zabbix.
>
> >
> > 3) Write a systemd service for this if you have enough time at hand,
> > then share it :)
>
> Yes, but it feels like a workaround. I'd prefer to know the cause and
> find a better solution.
>
>
>
> ---------- Forwarded message ----------
> From: Marat Khalili <mkh at rqc.ru>
> To: lxc-users at lists.linuxcontainers.org
> Cc:
> Bcc:
> Date: Tue, 7 Nov 2017 14:55:47 +0300
> Subject: Re: [lxc-users] Race condition in IPv6 network configuration
> On 07/11/17 13:45, MegaBrutal wrote:
>
>> First of all, I also suggest that you comment out the line
>> "lxc.net.0.flags = up" in your LXC container configuration
>> (/var/lib/lxc/containername/config).
>>
> I will definitely try it, although since it happens so rarely in my case
> (approximately once a month, in different containers), it will take some
> time to confirm the fix.
>
> I also tried ifdown, but that usually doesn't work, because as far as
>> the system is concerned, the interface is not configured (as it failed
>> configuration when ifupdown tried to bring it up, so it is in a
>> half-configured state).
>>
> Looks tough.
>
>>> 3) Write a systemd service for this if you have enough time at hand,
>>> then share it :)
>>>
>> Yes, but it feels like a workaround. I'd prefer to know the cause and
>> find a better solution.
>>
> Me too. Just a wild idea: you could try playing with ip6tables to find
> out what is sending the RAs, and also (another workaround) filter out
> the corresponding packets.
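The filtering half of that idea might look like this (a sketch; Router Advertisements are ICMPv6 type 134, and the LOG rule is there to reveal the sender before anything is dropped):

```shell
# Inside the container: log incoming Router Advertisements (to see which
# router sends them), then drop them so autoconf never fires.
ip6tables -A INPUT -p icmpv6 --icmpv6-type router-advertisement \
    -j LOG --log-prefix "RA seen: "
ip6tables -A INPUT -p icmpv6 --icmpv6-type router-advertisement -j DROP
```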
>
> --
>
> With Best Regards,
> Marat Khalili
>
>
>
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>

