[Lxc-users] Bonding inside LXC container

wang yao yaowang2014 at gmail.com
Tue Nov 19 07:58:28 UTC 2013


I am sorry, I made a mistake in my last mail; I am actually using this
layout:

         eth0--[macvlan]--eth0--+--bond0
         eth1--[macvlan]--eth1--/
I have tried the "phys" type to link the host and the container, but some
errors blocked me. After resolving them, I learned that the "phys" type has
some requirements, e.g. support from a newer kernel, but the kernel version
in my project is fixed for other reasons.

I am going to use the following layout in the future:

         eth0--+--bond0--[macvlan]--eth0
         eth1--/

and I will patch the kernel so that bonding mode 6 (balance-alb) works.
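
A sketch of how that planned layout could be configured, assuming a
RHEL-style ifcfg setup on the host (the file names and BONDING_OPTS syntax
are assumptions; the addresses are the ones used elsewhere in this thread):

```
# Host: /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=6 miimon=100"

# Host: ifcfg-eth0 (ifcfg-eth1 is analogous)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# Container config: macvlan on top of the host-side bond
lxc.network.type = macvlan
lxc.network.flags = up
lxc.network.link = bond0
lxc.network.name = eth0
lxc.network.ipv4 = 172.19.8.168/16
```

Note that mode 6 (balance-alb) rewrites slave MAC addresses for load
balancing, which is known to interact badly with macvlan's MAC-based
demultiplexing, hence the kernel patching mentioned above.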

Anyway, thank you very much for your reply and have a nice day!

Regards,
Yao


2013/11/18 Serge Hallyn <serge.hallyn at ubuntu.com>

> Quoting wang yao (yaowang2014 at gmail.com):
> > Hi Jake,
> >
> > First of all, thank you for your reply, and I am very sorry for such a
> > late response.
> >
> > Just as you said, I previously tried a bonding setup like this:
> >
> >         eth0--+--bond0--[veth]--eth0
> >         eth1--/
> >
> > But when I used bonding mode=6 (alb) this way, there was 80% packet
> > loss in the container; I would have to patch the kernel to fix the problem.
> >
> > On the other hand, my current approach:
> >
> >         eth0--[phys]--eth0--+--bond0
> >         eth1--[phys]--eth1--/
>
> You say phys, but below you show macvlan.  Does it help if you
> actually use
>
>         lxc.network.type = phys
>
> ?
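
For reference, a phys-type entry would look much like the macvlan one but
move the NIC itself into the container's namespace (a sketch using the
values from the configuration quoted below):

```
lxc.network.type = phys
lxc.network.flags = up
lxc.network.link = eth0
lxc.network.name = eth0
lxc.network.ipv4 = 172.19.8.168/16
lxc.network.mtu = 1500
```

With phys, eth0 disappears from the host while the container runs, so
nothing else on the host can share that NIC.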
>
> > My LXC configuration looks like this (networking part):
> >
> > # Networking
> > lxc.network.type = macvlan
> > lxc.network.flags = up
> > lxc.network.link = eth0
> > lxc.network.name = eth0
> > lxc.network.ipv4 = 172.19.8.168/16
> > lxc.network.mtu = 1500
> > lxc.network.hwaddr = fe:67:f5:42:40:14
> >
> > lxc.network.type = macvlan
> > lxc.network.flags = up
> > lxc.network.link = eth1
> > lxc.network.name = eth1
> > lxc.network.ipv4 = 172.19.8.169/16
> > lxc.network.mtu = 1500
> > lxc.network.hwaddr = fe:67:f5:42:40:15
> > ...
> >
> > I did the bonding in the container; the bonding configuration is the same
> > as what I had used on the host before. When I started the bonding device
> > in the container, this message came out:
> > "Bringing up interface bond0: bonding device bond0 does not seem to be
> > present, delaying initialization."
> >
> > I'm not sure, but I suspect the network namespace or something similar
> > is causing this problem.
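
One way to test that suspicion is to create the bond device directly inside
the container's network namespace; a recent enough kernel and iproute2 can
do this without relying on the bonding module's auto-created bond0 (a
sketch; older tools may lack some of these subcommands):

```
# Inside the container (sketch)
ip link add bond0 type bond                  # needs newer kernel/iproute2
echo balance-alb > /sys/class/net/bond0/bonding/mode
ip link set eth0 down
ip link set eth0 master bond0                # or 'ifenslave bond0 eth0' on older tools
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
```

If `ip link add ... type bond` fails, that would match the "high version
kernel support" requirement mentioned earlier in this thread.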
> >
> > What's your idea?
> >
> > Regards,
> > Yao
> >
> >
> > 2013/11/15 Jäkel, Guido <G.Jaekel at dnb.de>
> >
> > > Dear Yao,
> > >
> > > as I understand it, you want to bond two physical interfaces of the host
> > > hardware and use the bond inside a container.
> > >
> > >         eth0--[phys]--eth0--+--bond0
> > >         eth1--[phys]--eth1--/
> > >
> > > Because nothing else -- neither the host nor another container -- may
> > > use either of the NICs in addition, I would suggest putting the virtual
> > > bonding interface on the host and reaching through the bond into the
> > > container via a veth. To me that seems a better separation of concerns.
> > >
> > >         eth0--+--bond0--[veth]--eth0
> > >         eth1--/
> > >
> > > Following this way, you may also share the bond with more than one
> > > container by putting a virtual bridge between the virtual bonding
> > > interface and the virtual Ethernet adapters of the containers.
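
That shared variant might look roughly like this (a sketch; the bridge name
br0 and the brctl-based setup are assumptions):

```
# Host side: put the bond into a bridge (sketch)
brctl addbr br0
brctl addif br0 bond0
ip link set br0 up

# Per-container config: veth plugged into the bridge
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
```

Each additional container gets its own veth entry pointing at the same br0.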
> > >
> > >
> > > By the way, I don't see a clear reason why your current approach should
> > > fail. Could you please post your configuration here?
> > >
> > >
> > > Greetings
> > >
> > > Guido
> > >
> > >
> > > >-----Original Message-----
> > > >From: wang yao [mailto:yaowang2014 at gmail.com]
> > > >Sent: Friday, November 15, 2013 4:33 AM
> > > >To: lxc-users at lists.sourceforge.net
> > > >Subject: [Lxc-users] Bonding inside LXC container
> > > >
> > > >Hi all,
> > > >I tried to bond two NICs (eth0 and eth1) in the container, but when I
> > > >finished the bonding configuration (I think my configuration is correct)
> > > >and started the bonding device inside the container, this message came out:
> > > >"Bringing up interface bond0: bonding device bond0 does not seem to be
> > > >present, delaying initialization."
> > > >So I want to know whether LXC cannot support bonding configured this way,
> > > >or whether there is something I can do to make it work.
> > > >I am glad to talk about "Bonding and LXC" with anyone who is interested
> > > >in it.
> > > >Regards,
> > > >Yao
> > >
>
> >
>
> > _______________________________________________
> > Lxc-users mailing list
> > Lxc-users at lists.sourceforge.net
> > https://lists.sourceforge.net/lists/listinfo/lxc-users
>
>

