[Lxc-users] 10GbE Interface

Oleg Motienko motienko at gmail.com
Mon Jul 11 10:14:01 UTC 2011


Hello,

Did you try to test the speed without bridges (i.e. directly from the host machine)?
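
If not, a quick host-to-host run over the 10GbE link would show whether
the NIC itself reaches line rate before any container networking is
involved. A rough, untested sketch with iperf (the address and options
below are only placeholders):

    # on the remote end of the 10GbE link
    iperf -s

    # on the container host; 10.0.0.2 is just an example address,
    # -P 4 runs four parallel TCP streams, -t 30 runs for 30 seconds
    iperf -c 10.0.0.2 -P 4 -t 30

If the host-to-host numbers are also well below 10Gbps, the bottleneck
is in the NIC/driver/TCP tuning rather than in macvlan or the bridge.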

On Mon, Jul 11, 2011 at 14:03, Mayur Gowda <m2r0007 at gmail.com> wrote:

> Hi Robert,
>
> Tried a few combinations of disabling the netfilter calls and noticed that
> it does seem to make a small difference of around 100Mbps, increasing the
> throughput to around 2.8-2.9Gbps, but it's still far off from the optimum
> range for 10Gig interfaces.
>
>
> Regards
> Mayur
>
> On Mon, Jul 11, 2011 at 1:50 AM, Robert Kawecki <thewanderer at gim11.pl> wrote:
>
>> On Sun, 2011-07-10 at 19:59 +0100, Mayur Gowda wrote:
>> > Hello All,
>> >
>> >
>> > As planned, I got 10GbE interfaces (with Linux TOE) on my server (long
>> > wait...!!) and got two Debian containers up with the following
>> > configuration:
>> >
>> >
>> > > Set up macvlan on the 10GbE physical interface:
>> >        sudo ip link add link eth0 address 00:00:00:ab:00:01 macvlan0 type macvlan mode bridge
>> >
>> >
>> > > Set up containers with macvlan:
>> >     lxc.network.type = macvlan
>> >     lxc.network.macvlan.mode = bridge
>> >     lxc.network.link = eth0
>> > The containers can ping each other, but the problem is they are nowhere
>> > near the 10Gig mark (iperf throughput < 1Mbps). Am I missing any
>> > configuration here? My goal is to bridge the containers at 10Gig using
>> > macvlan. The veth bridging yields 2.8Gbps with an MTU tweak, but that's
>> > the maximum I can get to. Need your expertise on pushing the rates!
>> >
>> > Thanks & Regards
>> > Mayur
>>
>> Hi,
>> I'm not able to provide any hints on macvlan, but have you tried
>> disabling the Netfilter calls on your veth bridge? The control files
>> should be in /proc/sys/net/bridge - turning some of them off might
>> improve performance if you haven't done so already, I think.
>>
>>
>
>
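
For reference, a minimal sketch of the macvlan-in-bridge-mode setup quoted
above, written out in one place (interface and path names are only
illustrative, not a tested recipe):

    # host side: a macvlan device in bridge mode on top of the physical NIC
    sudo ip link add link eth0 name macvlan0 type macvlan mode bridge
    sudo ip link set macvlan0 up

    # container config (e.g. /var/lib/lxc/<name>/config)
    lxc.network.type         = macvlan
    lxc.network.macvlan.mode = bridge
    lxc.network.link         = eth0
    lxc.network.flags        = up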
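
And for the veth + bridge case, the Netfilter calls Robert mentions live
under /proc/sys/net/bridge; roughly something like this (run as root; the
bridge and interface names below are placeholders):

    # skip iptables/ip6tables/arptables traversal for bridged traffic
    echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
    echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
    echo 0 > /proc/sys/net/bridge/bridge-nf-call-arptables

    # jumbo frames (the "MTU tweak") on the physical NIC and the bridge;
    # the veth endpoints attached to the bridge need the larger MTU too
    ip link set eth0 mtu 9000
    ip link set br0 mtu 9000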

-- 
Regards,
Oleg

