[Lxc-users] Bonding inside LXC container

wang yao yaowang2014 at gmail.com
Mon Nov 18 04:08:31 UTC 2013


Hi Jake,

First of all, thank you for your reply and I am very sorry for such a late
response.

Just as you said, I had already tried the bonding setup like this:

        eth0--+--bond0--[veth]--eth0
        eth1--/

But when I used bonding mode=6 (balance-alb) this way, there was about 80%
packet loss in the container; I would have to patch the kernel to fix the problem.

On the other hand, my current approach:

        eth0--[phys]--eth0--+--bond0
        eth1--[phys]--eth1--/
My LXC configuration (networking part) looks like this:

# Networking
lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = eth0
lxc.network.name = eth0
lxc.network.ipv4 = 172.19.8.168/16
lxc.network.mtu = 1500
lxc.network.hwaddr = fe:67:f5:42:40:14

lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = eth1
lxc.network.name = eth1
lxc.network.ipv4 = 172.19.8.169/16
lxc.network.mtu = 1500
lxc.network.hwaddr = fe:67:f5:42:40:15
...

I did the bonding in the container; the bonding configuration is the same
as what I had done before on the host. When I started the bonding device in
the container, this message came out:
"Bringing up interface bond0: bonding device bond0 does not seem to be
present, delaying initialization."

I'm not sure, but I suspect the network namespace or something similar may
be causing this problem.
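For reference, here is roughly how I would check from a shell whether the bonding driver is even reachable from inside the container (only a sketch: since a container shares the host kernel, loading the module has to happen on the host, and creating a bond with iproute2 as shown assumes a reasonably recent kernel and iproute2):

```shell
# On the host: make sure the bonding driver is loaded; modprobe run
# from inside the container will usually fail or have no effect.
lsmod | grep bonding || modprobe bonding

# Inside the container: create the bond and enslave the macvlan
# interfaces (slaves must be down before they can be enslaved).
ip link add bond0 type bond mode balance-alb
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up

# Verify the bond state:
cat /proc/net/bonding/bond0
```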

What's your idea?

Regards,
Yao


2013/11/15 Jäkel, Guido <G.Jaekel at dnb.de>

> Dear Yao,
>
> as I understand, you want to bond two physical interfaces of the host
> hardware and use the bond inside a container.
>
>         eth0--[phys]--eth0--+--bond0
>         eth1--[phys]--eth1--/
>
> Because nothing else -- neither the host nor another container -- may use
> either of the NICs in addition, I would suggest putting the virtual bonding
> interface on the host and reaching through the bond into the container via
> a veth pair. To me that seems a better separation of concerns.
>
>         eth0--+--bond0--[veth]--eth0
>         eth1--/
>
> Following this way, you may also share the bond with more than one
> container by putting a virtual bridge between the virtual bonding
> interface and the virtual Ethernet adapters of the containers.
>
>
> By the way, I don't see a clear reason why your current approach should fail.
> Could you please post your configuration here?
>
>
> Greetings
>
> Guido
>
>
> >-----Original Message-----
> >From: wang yao [mailto:yaowang2014 at gmail.com]
> >Sent: Friday, November 15, 2013 4:33 AM
> >To: lxc-users at lists.sourceforge.net
> >Subject: [Lxc-users] Bonding inside LXC container
> >
> >Hi all,
> >I tried to bond two NICs (eth0 and eth1) in the container, but when I
> >finished the bonding configuration (I think my configuration is correct)
> >and started the bonding device inside the container, this message came out:
> >"Bringing up interface bond0: bonding device bond0 does not seem to be
> >present, delaying initialization."
> >So I want to know whether LXC supports bonding configured this way, or
> >whether there is something I can do to make it work.
> >I am glad to talk about "Bonding and LXC" with someone who has interest
> in it.
> >Regards,
> >Yao
>

