[Lxc-users] Nested container networking problem

Randy Wilson randyedwilson at gmail.com
Thu Feb 7 11:21:29 UTC 2013


Hi,

Here's a brief summary of the issue, as this is quite a lengthy post:

* Ubuntu 12.04 host with eth0 bridged to br0; lxcbr0 is not used
* Ubuntu 12.04 first-level container using macvlan networking and the
lxc-container-with-nesting AppArmor profile, running LXC with lxcbr0
configured as 10.16.0.1/12
* Ubuntu 12.04 nested container using veth, configured as
10.16.4.76/12, with the default AppArmor profile
* The nested container's outbound traffic reaches the remote end, but
the replies are not routed back from the first container to the
nested container.


The full details:

I've followed Stéphane Graber's excellent guide to create a nested
container on Ubuntu 12.04:

https://www.stgraber.org/2012/05/04/lxc-in-ubuntu-12-04-lts/

The only differences in my setup are that the host does not use the
lxcbr0 bridge and that the first-level container uses macvlan
networking:

host# cat /etc/network/interfaces
...
iface eth0 inet manual

auto br0
iface br0 inet static
	address xx.xx.xx.12
	netmask 255.255.255.0
	gateway xx.xx.xx.1
	dns-nameservers 8.8.8.8
	bridge_ports eth0
...
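
As a sanity check, eth0's membership in br0 can be confirmed with
brctl (assuming the bridge-utils package is installed); eth0 should
appear in the interfaces column:

host# brctl show br0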

host# cat /var/lib/lxc/first/config
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = br0
lxc.network.flags = up
...
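
The macvlan mode can be double-checked from inside the first container
with iproute2's detailed link output, which should report "macvlan
mode bridge" (the exact wording depends on the iproute2 version):

first# ip -d link show eth0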

first# cat /etc/network/interfaces
...
auto eth0
iface eth0 inet static
	address xx.xx.xx.13
	netmask 255.255.255.0
	gateway xx.xx.xx.1
	dns-nameservers 8.8.8.8
...


Networking works fine in the first container.

LXC is configured in the first container to use lxcbr0 on a large
subnet, 10.16.0.0/12:

first# cat /etc/default/lxc
...
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.16.0.1"
LXC_NETMASK="255.240.0.0"
LXC_NETWORK="10.16.0.0/12"
LXC_DHCP_RANGE="10.16.0.2,10.31.255.254"
LXC_DHCP_MAX="1048573"
...
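
For completeness, the bridge address and the dnsmasq instance started
by the lxc-net job can be checked inside the first container with:

first# ip addr show lxcbr0
first# ps -C dnsmasq -o args=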


The networking configuration for the second (nested) container is set
up as follows:

first# cat /var/lib/lxc/second/config
...
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.name = eth0
lxc.network.ipv4 = 10.16.4.76/12 10.31.255.255
...
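
Once the second container is running, its veth peer should show up
attached to lxcbr0 in the first container, which can be confirmed
with (again assuming bridge-utils):

first# brctl show lxcbr0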

second# cat /etc/network/interfaces
...
auto eth0
iface eth0 inet static
	address 10.16.4.76
	netmask 255.240.0.0
	gateway 10.16.0.1
...

second# ip route
default via 10.16.0.1 dev eth0
10.16.0.0/12 dev eth0  proto kernel  scope link  src 10.16.4.76


The second container can communicate with the first container:

second# ping -c 1 10.16.0.1
PING 10.16.0.1 (10.16.0.1) 56(84) bytes of data.
64 bytes from 10.16.0.1: icmp_req=1 ttl=64 time=0.521 ms

--- 10.16.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms

second# ping -c 1 xx.xx.xx.13
PING xx.xx.xx.13 (xx.xx.xx.13) 56(84) bytes of data.
64 bytes from xx.xx.xx.13: icmp_req=1 ttl=64 time=0.343 ms

--- xx.xx.xx.13 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms


But not with the outside world:

second# ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
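
One intermediate check is to exercise the NAT path from the first
container itself by sourcing a ping from the bridge address; note that
this only covers the OUTPUT/POSTROUTING path, not the FORWARD chain
that the nested container's traffic takes:

first# ping -I lxcbr0 -c 1 8.8.8.8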


This looks like a NAT issue, but I haven't been able to figure out
what's missing. The first container has the following NAT rules:

first# iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.16.0.0/12        !10.16.0.0/12
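
If I understand the lxc-net job correctly, that MASQUERADE entry is
the one it installs at startup, equivalent to:

first# iptables -t nat -A POSTROUTING -s 10.16.0.0/12 ! -d 10.16.0.0/12 -j MASQUERADE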


I've also added the following forwarding rules to allow the return traffic:

first# iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            10.16.0.0/12         state NEW,RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            10.16.0.0/12         state RELATED,ESTABLISHED
ACCEPT     all  --  10.16.0.0/12         0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Created with:

iptables -A FORWARD -d 10.16.0.0/12 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -d 10.16.0.0/12 -o lxcbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -s 10.16.0.0/12 -i lxcbr0 -j ACCEPT
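
To see whether the nested container's connections are being tracked
and NATed at all, the connection-tracking table can be inspected (this
needs the conntrack-tools package; alternatively, grep
/proc/net/nf_conntrack):

first# conntrack -L -p icmp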


Forwarding is enabled on both the host and the first container:

net.ipv4.ip_forward=1
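
This can be confirmed at runtime in both places with:

host# sysctl net.ipv4.ip_forward
first# sysctl net.ipv4.ip_forward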


The masquerading is working as expected:

first# tcpdump -i eth0 -n icmp
...
17:41:20.154412 IP xx.xx.xx.13 > 8.8.8.8: ICMP echo request, id 242, seq 1, length 64
17:41:20.164829 IP 8.8.8.8 > xx.xx.xx.13: ICMP echo reply, id 242, seq 1, length 64
17:41:21.153603 IP xx.xx.xx.13 > 8.8.8.8: ICMP echo request, id 242, seq 2, length 64
17:41:21.164112 IP 8.8.8.8 > xx.xx.xx.13: ICMP echo reply, id 242, seq 2, length 64
...

But the return traffic is not routed back to the second container:

first# tcpdump -i lxcbr0 -n icmp
...
17:41:29.153490 IP 10.16.4.76 > 8.8.8.8: ICMP echo request, id 242, seq 10, length 64
17:41:30.153562 IP 10.16.4.76 > 8.8.8.8: ICMP echo request, id 242, seq 11, length 64
...
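
To narrow down where the replies are being dropped, the per-rule
packet counters in the first container can be watched while the ping
runs; if the RELATED,ESTABLISHED counters never increment, the replies
are not reaching the FORWARD chain at all:

first# watch -n1 iptables -L FORWARD -v -n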

I've tried changing the AppArmor profiles to unconfined for both
containers but that hasn't made any difference.

If anyone can point me in the right direction, it would be much appreciated.



Thanks,

REW



