[lxc-users] LXC networking stops working between containers and real network

alex barchiesi alex.barchiesi at garr.it
Tue Jul 19 16:22:15 UTC 2016


We had a similar problem recently.

Apparently the difference is in the host machine:
-ubuntu 14 *has* the bridge netfilter module loaded in the kernel by
default (check with sysctl -a):
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

In this case we used to forward the traffic "from" and "to" the bridges
where the LXC containers were attached, and to masquerade the IPs when needed.

-ubuntu 16 *has not* (the net.bridge.* keys are missing; check with
sysctl -a | grep bridge), and even if you create the bridges and set
iptables to forward the bridge traffic, it will not work unless you add
the following rule (a fuller sketch follows below):
*iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT*

this way you'll have the same behaviour as with Ubuntu 14 (well... more
or less, you may need to trim the forwarding rules a bit).
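
Roughly, the check and the workaround look like this on a 16.04 host
(just a sketch; the modprobe line is only my note on where those
net.bridge.* keys come from, it is not strictly needed for the rule):

  # check whether the bridge netfilter sysctls exist at all
  sysctl -a 2>/dev/null | grep bridge-nf

  # if the net.bridge.* keys are missing, they show up once the
  # br_netfilter module is loaded
  modprobe br_netfilter

  # let bridged traffic through the FORWARD chain
  iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

  # verify the rule is in place
  iptables -L FORWARD -n -v | grep physdev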

Let me know if this helps
ciao


Dr. Alex Barchiesi
____________________________________
Senior cloud architect
Responsible for Art-Science relationships

GARR CSD department
linkedin: alex barchiesi
_____________________________________
I started with nothing and I still have most of it.


On Tue, Jul 19, 2016 at 2:00 PM, <
lxc-users-request at lists.linuxcontainers.org> wrote:

>
>
> ---------- Forwarded message ----------
> From: Ruzsinszky Attila <ruzsinszky.attila at gmail.com>
> To: lxc-users at lists.linuxcontainers.org
> Cc:
> Date: Tue, 19 Jul 2016 07:54:34 +0200
> Subject: [lxc-users] LXC networking stops working between containers and
> real network
> Hi,
>
> There is an up-to-date Ubuntu 14.04 64-bit host.
> LXC version: 2.0.3 (from the backports packages)
> Open vSwitch: 2.0.2.
>
> Container1: Ubuntu 14.04
> Container2: Ubuntu 16.04 (both of them were installed from root.fs.zx,
> because lxc-create doesn't work behind an authenticating Squid proxy)
>
> Both containers work perfectly in "standalone" mode.
> I use lxcbr0 as a bridge between the containers. There is dnsmasq for DHCP
> and it is working, because the containers get IP addresses (from the
> 10.0.3.0/24 range).
> There is an OVS bridge, vbr0, and lxcbr0 is one of its ports on the host.
> The real Ethernet interface is eth0, which is connected to the real
> network. There is an mgmtlxc0 virtual management interface whose IP is
> 10.0.3.2/24. I can ping every machine in the 10.0.3.0/24 range.
> The MAC addresses of the containers are different; I checked them.
> mgmtlxc0 and lxcbr0 are tagged for VLAN (tag=800 in the OVS config).
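>
> For reference, this part is set up roughly as follows (a sketch: the
> lxc-net values are the stock Ubuntu defaults and the ovs-vsctl line is
> indicative, so adapt it to the actual config):
>
>   # /etc/default/lxc-net (stock Ubuntu defaults)
>   LXC_BRIDGE="lxcbr0"
>   LXC_ADDR="10.0.3.1"
>   LXC_NETMASK="255.255.255.0"
>   LXC_NETWORK="10.0.3.0/24"
>   LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
>
>   # attach lxcbr0 to the OVS bridge with the VLAN tag
>   ovs-vsctl add-port vbr0 lxcbr0 tag=800
>   ovs-vsctl show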
>
> I want to MASQUERADE the lxc-net to the real network:
> Chain POSTROUTING (policy ACCEPT 54626 packets, 5252K bytes)
>  pkts bytes target      prot opt in   out   source        destination
>   246 20520 MASQUERADE  all  --  *    *     10.0.3.0/24   !10.0.3.0/24
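>
> (For completeness, that rule corresponds to something like the standard
> lxc-net masquerade command; the exact invocation may differ:)
>
>   iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
>
>   # and the counters can be checked with
>   iptables -t nat -L POSTROUTING -n -v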
>
> Routing table:
> root@fcubi:~# route
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> default         real_router     0.0.0.0         UG    0      0        0 eth0
> LXCnet          *               255.255.255.0   U     0      0        0 mgmtlxc0
> FCnet           *               255.255.255.0   U     1      0        0 eth0
>
> The problem is:
> I try to ping from container1 (lub4) to a host on the real network. It
> works.
> I try to ping from container2 (lub5) to the same host and it does not
> work! DNS resolution is OK, but there is no answer from the real host.
>
> I checked the traffic on eth0 inside the containers (lub4 and lub5). I can
> see the ICMP echo REQ packets.
> They arrive at the host's lxcbr0 interface, so I think that part is fine.
> I checked the host's mgmtlxc0 interface, which is the routing interface at
> the IP level. I can see the REQ packets there too.
> IPv4 forwarding is enabled (net.ipv4.ip_forward = 1).
> The next interface is eth0 and there is no traffic from the containers on
> it! I filtered for ICMP and saw no REQ packets! So the host "filters out"
> (or does not route) my MASQueraded ICMP packets.
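>
> (The checks above were roughly the following; just a sketch, with the
> interface names from my setup:)
>
>   tcpdump -ni lxcbr0 icmp      # echo REQ packets visible here
>   tcpdump -ni mgmtlxc0 icmp    # REQ packets visible here too
>   tcpdump -ni eth0 icmp        # nothing from the containers
>   sysctl net.ipv4.ip_forward   # returns 1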
> I think it is not a MASQ problem, because without MASQUERADING I would
> have seen the outgoing REQ packets with the wrong source IP (10.0.3.x),
> and of course there would be no answer because the real host knows
> nothing about routing to the 10.0.3.0 lxcnet. But there are no outgoing
> packets at all.
> I tried removing all the iptables rules except MASQ and nothing changed.
>
> Pinging between lub4 and lub5 (the virtual network) works, while pinging
> the real network does not.
>
> If I restart the containers one by one and swap the order of the ping
> test (lub5 first, then lub4), the second one won't ping, so it does not
> depend on the container's OS version.
>
> I think the problem may be in the MASQ or in the routing between
> mgmtlxc0 and eth0.
> netstat-nat doesn't work and I don't know why.
> Do you have any clue?
>
> I've got another host, a Fedora 23 64-bit machine (OVS 2.5) with 3 U14.04
> containers, and it seems to work.
>
> I'll do some more tests, for example making a new U14.04 container,
> because on F23 the containers' versions are all the same.
> LXD was installed but not used or configured.
>
> TIA,
> Ruzsi
>