[lxc-users] Configuring LXC containers to use a host bridge under CentOS 7
Peter Steele
pwsteele at gmail.com
Fri Aug 28 20:37:45 UTC 2015
We're currently using the CentOS libvirt-LXC tool set for creating and
managing containers under CentOS 7.1. This tool set is being deprecated,
though, so we plan to move our containers to the linuxcontainers.org
framework instead. For simplicity I'll refer to that framework simply as
LXC, as opposed to libvirt-LXC.
Under libvirt-LXC, we have our containers configured to use host
bridging, so they are connected directly to the host network. Each
container has its own static IP and appears as a physical machine on the
network. The containers can see each other as well as other systems
running on the same network.
I've been unable so far to get host bridging to work with LXC. There is
a fair amount of information available on networking for LXC, but there
seem to be a lot of different "flavors"; everyone has their own unique
solution. I've tried various configurations and can get the containers
to see each other, but I have not been able to get them to see the host
network or the external internet. The config I am using for my
containers looks like this:
lxc.utsname = test1
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
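For completeness, a slightly fuller form of this config would look
something like the following; the lxc.network.name, lxc.network.veth.pair,
and lxc.network.hwaddr values here are just illustrative placeholders
rather than anything I've confirmed makes a difference:

lxc.utsname = test1
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
# optional extras, shown with made-up values:
lxc.network.name = eth0
lxc.network.veth.pair = vethtest1
lxc.network.hwaddr = 00:16:3e:xx:xx:xx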
The br0 interface referenced here is the same bridge interface that I
have configured for use with my libvirt-LXC containers. Some of the
sites I've come across that discuss setting up host bridging for LXC say
to configure rules in iptables. However, we do not need any such rules
with libvirt-LXC, and in fact iptables (or more accurately, firewalld
under CentOS 7) isn't even enabled on our servers.
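If bridged traffic were being pushed through iptables somewhere, I
believe the bridge-netfilter sysctls would be the relevant knobs; these
are what I would check on the host (assuming the br_netfilter code is
even loaded):

# sysctl net.bridge.bridge-nf-call-iptables
# sysctl net.bridge.bridge-nf-call-ip6tables
# sysctl net.bridge.bridge-nf-call-arptables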
In addition to the LXC config above, I have also created
/etc/sysconfig/network-scripts/ifcfg-eth0 inside the container with the
following entries:
DEVICE=eth0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.110.222
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
This is a pretty standard configuration for specifying a static IP, and
it's the exact same file I use for my libvirt-LXC based containers.
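For reference, the host side of the bridge uses the standard
network-scripts setup, roughly along these lines (I'm sketching this
from memory and leaving out the bonding details, so treat it as
approximate):

/etc/sysconfig/network-scripts/ifcfg-br0:
DEVICE=br0
TYPE=Bridge
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
# the host's own static IP/NETMASK/GATEWAY go here

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0
# bonding options omitted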
As I stated, the LXC containers I've created can see each other, but
they cannot access the host network. They can't even ping their own
host or the gateway. The routing table, though, is the same for both my
LXC and libvirt-LXC containers:
# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         172.16.0.1      0.0.0.0         UG    0      0        0 eth0
link-local      0.0.0.0         255.255.0.0     U     1021   0        0 eth0
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
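To be concrete, the checks I'm running from inside a container are just
simple pings; the host and container addresses below are placeholders,
since they vary from machine to machine:

# ping -c 3 172.16.0.1                 # the gateway: no response
# ping -c 3 <IP of the host>           # the container's own host: no response
# ping -c 3 <IP of another container>  # another LXC container: this works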
I'm not sure what LXC magic I am missing to open the containers up to
the outside network. I'm using the same container template for both my
LXC and libvirt-LXC tests, and the same host for both. What am I
missing?
The output of "bridge link show br0" with one container running is:
# bridge link show br0
3: bond0 state UP : <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 19
6: virbr0-nic state DOWN : <BROADCAST,MULTICAST> mtu 1500 master virbr0 state disabled priority 32 cost 100
22: veth5BJDXU state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 2
The veth entry is present only when the container is running. In my
equivalent setup using libvirt-LXC with one container, this command
produces essentially the same output, except the generated interface
name is veth0.
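One experiment I'm planning, assuming iproute2 will let me re-enslave a
running interface, is to manually attach the container's veth to br0 and
see whether that changes anything:

# ip link set veth5BJDXU master br0
# bridge link show br0

I haven't tried that yet, though.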
Any advice on how to resolve this issue would be appreciated.
Peter