[Lxc-users] Still can not get macvlan to work.

Michel Normand michel.mno at free.fr
Mon Feb 8 06:12:35 UTC 2010


On 07/02/2010 23:01, Michael H. Warfield wrote:
> On Sun, 2010-02-07 at 16:40 -0500, Michael H. Warfield wrote:
>> I mentioned in an earlier posting that I was using the veth method
>> with bridges because I could NOT get macvlan to work.  The problem is
>> that the containers will come up and talk on the network, but the
>> host cannot talk to any of the guest containers.  Ping doesn't work
>> and connections don't work, neither over IPv4 nor IPv6.  I can
>> connect to the containers from other systems (both IPv4 and IPv6)
>> but not from the system that's hosting them.  Someone suggested that
>> the problem was an old bug that they thought was fixed in more recent
>> kernels, but wasn't more specific.
>>
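A commonly suggested workaround for this host-to-guest problem (not mentioned in this thread, and it only helps on kernels that support macvlan bridge mode) is to give the host its own macvlan interface on the same physical link, since traffic originating from the physical interface itself is not looped back to its macvlan children. A sketch, with interface names and addresses as placeholders:

```
# Hedged sketch: let the host reach macvlan containers by joining the
# same macvlan segment.  "eth0", "macvlan0", and the address are
# placeholders for your own setup; requires root and bridge-mode support.
ip link add link eth0 name macvlan0 type macvlan mode bridge
ip addr add 192.168.1.250/24 dev macvlan0
ip link set macvlan0 up
```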

There is a reference to a required change in the 2.6.33 kernel
in a previous post on lxc-devel:
http://sourceforge.net/mailarchive/forum.php?thread_name=20091227224036.688337477%40mai-009101017029.toulouse-stg.fr.ibm.com&forum_name=lxc-devel
extract:
"The future kernel 2.6.33 will incorporate the macvlan bridge
mode where all the macvlan will be able to communicate if they are
using the same physical interface ..."
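Assuming the 2.6.33 bridge-mode support described in the extract above, a container's network section would look something like the following. This is a sketch only; the key names follow the lxc config format of that era, and the interface name and address are assumptions:

```
# Hedged sketch: macvlan in bridge mode for an LXC container config.
# "eth0" is a placeholder for the host's physical interface.
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge   # requires kernel >= 2.6.33
lxc.network.link = eth0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.1.101/24
```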

---
Michel

>> I just recently moved several of my test containers from my Fedora 11
>> engine to a newer 64-bit Fedora 12 system.  In the process I thought,
>> what the heck, let's give macvlan another shot, so I reconfigured a
>> couple of the containers from veth to macvlan.  Same problem.  Latest
>> kernel from Fedora and same problem.
>
> Another point on the curve.  Two containers, both on macvlan, cannot
> ping each other, but the other containers, on the veth bridge, on the
> same host, can ping each other and the macvlan containers.  The
> iptables firewall rules are completely flushed, so it's not a
> firewalling issue either.
>
>> The Fedora 11 kernel: kernel-2.6.30.10-105.2.4.fc11.i586
>> The Fedora 12 kernel: kernel-2.6.31.12-174.2.3.fc12.x86_64
>>
>> Anyone with thoughts or suggestions on what to try next?
>
> Mike
>

More information about the lxc-users mailing list