[Lxc-users] Issues with mass-starting and bridge

Serge Hallyn serge.hallyn at canonical.com
Mon Oct 8 22:29:00 UTC 2012


I just created and started 99 containers on a cloud instance.  Those were
using the lxcbr0 bridge, with dnsmasq-assigned rather than statically
assigned IPs, and I had no problems.  So your issue isn't with the
containers' veth devices themselves, though it could still be a bug in
your kernel's bridging code.
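
Roughly, that test amounted to the following (container names and
template are illustrative, not exactly what I ran):

    for i in $(seq 1 99); do
        lxc-create -n c$i -t ubuntu    # default config: lxcbr0 + dnsmasq for IPs
        lxc-start -n c$i -d
    done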

Is there anything in your /var/log/syslog?
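
For example (the filter terms are just a guess at what's relevant):

    grep -iE 'br0|veth|bridge' /var/log/syslog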

Try adding '-l debug -o out$i' to the batch startup script, and see
in out$i exactly where the startup hangs.
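
Something like this, adapting the loop from your mail:

    for i in `seq -f '%03g' 0 99`; do
        echo $i
        lxc-start -n $i -d -l debug -o out$i
    done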

-serge

Quoting Leon Waldman (le.waldman at gmail.com):
> Hi all,
> 
> 
> I'm having a strange issue with the bridge on my lxc host.
> 
> 
> First, some background info:
> I'm running LXC version 0.7.5 on a CentOS 6.3 install with the
> standard kernel (2.6.32-279.9.1.el6.i686). It's running inside a
> VirtualBox VM.
> 
> I created 100 containers, named 000 through 099. They all have
> sequentially assigned IPs and MAC addresses (from 10.0.1.0/8 to
> 10.0.1.99/8 | 80:04:C0:AB:00:00 to 80:04:C0:AB:00:63).
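> 
> The per-container network config was generated along these lines (a
> sketch, assuming the default /var/lib/lxc layout):
> 
>     for i in `seq 0 99`; do
>         n=`printf %03d $i`
>         hex=`printf %02X $i`
>         {
>             echo "lxc.network.hwaddr = 80:04:C0:AB:00:$hex"
>             echo "lxc.network.ipv4 = 10.0.1.$i/8"
>         } >> /var/lib/lxc/$n/config
>     done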
> 
> 
> When starting them en masse with something like:
> for i in `seq -f '%03g' 0 99`; do echo $i; lxc-start -n $i -d; done
> after a random number of containers has started, the br0 bridge
> just stops responding to anything. It simply freezes (easy to
> notice because I'm connecting to the host through ssh over the same
> bridge).
> 
> If I put a sleep 5 in the loop above, things go better, but then
> one or two of the containers become a "remote bridge bomb". The
> bridge keeps working until a packet arrives at the "bomb" container,
> and then the entire bridge freezes again. Which container turns into
> the bomb is also random.
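> 
> The throttled version of the loop:
> 
>     for i in `seq -f '%03g' 0 99`; do
>         echo $i
>         lxc-start -n $i -d
>         sleep 5
>     done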
> 
> If I remove the problematic container's iface from the bridge, it
> unfreezes. If I add it back and send a packet... it freezes again.
> If I stop the container and restart it... everything works and the
> issue vanishes.
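> 
> Concretely, the detach/reattach is just brctl (vethXXXX stands for
> whatever veth name the container was given; brctl show br0 lists
> them):
> 
>     brctl delif br0 vethXXXX    # bridge unfreezes
>     brctl addif br0 vethXXXX    # next packet to the container freezes it again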
> 
> 
> From my point of view, it looks like something sometimes goes wrong
> with the virtual iface while it is created during container start
> (under load, or with too many concurrent starts... I don't know),
> and the broken iface then messes up the entire bridge for as long
> as it stays attached.
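> 
> For anyone trying to reproduce this, the bridge and iface state can
> be inspected with (vethXXXX is a placeholder for the suspect device):
> 
>     brctl show br0            # which ifaces are attached
>     brctl showstp br0         # per-port bridge state
>     ip link show vethXXXX     # flags and state of the suspect iface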
> 
> 
> Does anyone have any input on this? (Has this happened before? Is
> there a limit on concurrent container initializations? Is there a
> kernel setting or some other config change that could solve it?)
> 
> 
> In order to achieve what I need in the production environment, I
> will need to start and stop containers several times per minute.
> Does this sound too problematic?
> 
> 
> If you guys need any additional information, please just let me know.
> 
> 
> Thanks in advance
> 
> --
> *Leon Waldman* <http://leonwaldman.com/>
> Senior Linux/UNIX SysAdmin & Consultant.
> Back-End & Infrastructure Architect.
> <http://br.linkedin.com/in/leonwaldman>
> <https://twitter.com/lewaldman>


> _______________________________________________
> Lxc-users mailing list
> Lxc-users at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/lxc-users




