[lxc-users] Configuring LXC containers to use a host bridge under CentOS 7
Fajar A. Nugraha
list at fajar.net
Sat Aug 29 22:10:39 UTC 2015
On Sat, Aug 29, 2015 at 10:40 PM, Peter Steele <pwsteele at gmail.com> wrote:
> On 08/29/2015 07:29 AM, Mark Constable wrote:
>
>> On 29/08/15 23:54, Peter Steele wrote:
>>
>>> For example, I see references to the file /etc/network/interfaces. Is
>>> this an
>>> LXC thing or is this a standard file in Ubuntu networking?
>>>
>>
>> It's a standard pre-systemd debian/ubuntu network config file.
>>
>
> That's what I was beginning to suspect since creating this in my CentOS
> environment seemed to have no effect on LXC at all. Knowing this will help
> me filter out examples that talk about creating these files.
>
> Do you suppose it's possible that Canonical LXC isn't entirely compatible
> with CentOS?
>
Actually there's no such thing as "canonical lxc".
While lxc's main developers are currently from canonical, the lxc project
itself isn't really tied to a specific distro. For example, since lxc-1.1.0
the bundled init script should function similarly on all distros, with
lxcbr0 (including dnsmasq) running by default.
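If you want to verify that on a centos host, and assuming the bundled
init script has actually been started, something like this should show
the default bridge (lxcbr0 normally carries 10.0.3.1, with dnsmasq
listening on it):

  brctl show
  ip addr show lxcbr0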
The main advantages of ubuntu over other distros w.r.t. lxc, as far as I
can see, are:
- better apparmor integration, so (among other things) it should be
relatively safer to run privileged containers under an ubuntu host
- better container/guest systemd support, where an ubuntu vivid/wily guest
should be able to run as a privileged container out-of-the-box (and a wily
guest should also be able to run as an unprivileged container)
If you only care about "having privileged containers running", then a
centos host should work fine.
Back to your original question, you need to have some basic understanding
of your distro's networking setup. For example, debian/ubuntu uses
/etc/network/interfaces (one file for all network interfaces) while centos
uses /etc/sysconfig/network-scripts/ifcfg-* (one file for each network
interface). To achieve what you want, you basically need to create a bridge
(e.g. br0) on top of your main network interface (e.g. eth0) for the
containers to use. The instructions are specific to your distro (e.g.
centos and ubuntu are different), but not specific to lxc (i.e. the same
bridge setup can be used by kvm/xen).
One bridge setup example (from google):
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-networkscripts-interfaces_network-bridge.html
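A rough sketch of what that might look like on a centos 7 host (the device
names, addresses and netmask below are only placeholders; your host
apparently uses bond0 instead of a plain eth0, and the exact steps differ
if NetworkManager is managing the interfaces):

  /etc/sysconfig/network-scripts/ifcfg-br0:
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=172.16.0.2
    NETMASK=255.255.0.0

  /etc/sysconfig/network-scripts/ifcfg-bond0:
    DEVICE=bond0
    ONBOOT=yes
    BRIDGE=br0
    # keep your existing BONDING_OPTS here; the IP address moves to br0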
From the snippets you posted, you created
"/etc/sysconfig/network-scripts/ifcfg-eth0", but you didn't mention where.
If it's on the host, then that's wrong, since you seem to be using
"bond0" on the host. If it's in the container (which is correct), then the
easiest way to check where the problem lies is with tcpdump:
- on the container: "ping -n 172.16.0.1"
- on the host: "tcpdump -n -i bond0 172.16.0.1" and "tcpdump -n -i
veth5BJDXU 172.16.0.1" (substitute the veth name with whatever you have)
If all goes well, you should see both the icmp request and reply on both
interfaces (bond0 and veth5BJDXU). If you have a forwarding problem, you
will see packets on the veth interface, but not on bond0.
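For completeness, a minimal sketch of the container side (lxc 1.x key
names; the container name, hwaddr and addresses are placeholders, and
172.16.0.1 is assumed to be your gateway):

  /var/lib/lxc/<container>/config:
    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:xx:xx:xx

  /etc/sysconfig/network-scripts/ifcfg-eth0 (inside the container):
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=172.16.0.10
    NETMASK=255.255.0.0
    GATEWAY=172.16.0.1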
--
Fajar