[Lxc-users] 10GbE Interface

Mayur Gowda m2r0007 at gmail.com
Sun Jul 10 18:59:11 UTC 2011


Hello All,

As planned, I got 10GbE interfaces (with Linux TOE) on my server (a long
wait!) and got two Debian containers up with the following configuration:

> Set up macvlan on the 10GbE physical interface:

      sudo ip link add link eth0 name macvlan0 address 00:00:00:ab:00:01 type macvlan mode bridge
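
For the record, the host-side device also has to be brought up before it
passes traffic; a minimal sketch, assuming eth0 is the 10GbE NIC and using
192.168.10.1/24 purely as an example address:

      sudo ip link set eth0 up
      sudo ip link set macvlan0 up
      # optional: give the host's macvlan0 an address, since in macvlan bridge
      # mode the host cannot reach the containers directly through eth0
      sudo ip addr add 192.168.10.1/24 dev macvlan0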

> Set up the containers with macvlan:

    lxc.network.type = macvlan
    lxc.network.macvlan.mode = bridge
    lxc.network.link = eth0
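
For completeness, each container's config on my side looks roughly like the
following (the hwaddr and ipv4 values are just examples and differ per
container):

    # /var/lib/lxc/<name>/config -- network section only, example values
    lxc.network.type = macvlan
    lxc.network.macvlan.mode = bridge
    lxc.network.link = eth0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:ab:00:01
    lxc.network.ipv4 = 192.168.10.101/24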

The containers can ping each other, but the problem is that they are nowhere
near the 10-gig mark (iperf throughput < 1 Mbps). Am I missing some
configuration here? My goal is to bridge the containers at 10 Gbps using
macvlan. Bridging with veth yields 2.8 Gbps with an MTU tweak, but that's the
maximum I can reach. I need your expertise on pushing the rates higher!
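
In case it helps, the iperf run looks roughly like this (addresses are
placeholders; the parallel run is there to rule out a per-stream limit):

    # inside container 1
    iperf -s

    # inside container 2, 192.168.10.101 being container 1's example address
    iperf -c 192.168.10.101 -t 30         # single TCP stream, 30 s
    iperf -c 192.168.10.101 -t 30 -P 4    # 4 parallel streams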


Thanks & Regards

Mayur



On Mon, Mar 14, 2011 at 7:15 PM, Daniel Lezcano <daniel.lezcano at free.fr> wrote:

> On 03/14/2011 05:23 PM, Mayur Gowda wrote:
>
>> Hello Everyone,
>>                        I plan to use a 10GbE network interface on a PC
>> running Ubuntu and want to pump traffic between 2 LXC containers at close
>> to 10 gig. Previous posts indicate that bridging between containers with
>> veth at 10GbE has limited performance and higher loss; is that true?
>> Macvlan has better performance at 10 gig, but how do I use it to
>> communicate between multiple containers?
>>
>
> Hi Mayur,
>
> I think you can try with macvlan, yes.
> Make sure in your configuration you specify lxc.network.macvlan.mode =
> bridge for both containers.
>
> Let us know about the results
>
> Thanks
>  -- Daniel
>
