On 08/29/2015 03:26 PM, Fajar A. Nugraha wrote:
> On Sun, Aug 30, 2015 at 5:10 AM, Fajar A. Nugraha <list@fajar.net> wrote:
>>
>> - on the host: "tcpdump -n -i bond0 172.16.0.1" and "tcpdump -n -i
>> veth5BJDXU 172.16.0.1" (substitute the veth name with whatever you
>> have)
>
> It should be "tcpdump -n -i bond0 host 172.16.0.1" and
> "tcpdump -n -i veth5BJDXU host 172.16.0.1"

I will give this a try tomorrow. Just as an FYI, we originally were
running libvirt-qemu under CentOS and were using full VMs to host
our software. The VMs were set up to use bridged networking and had
full visibility of the subnet they were part of, and could also
access the external internet. We do not use iptables, SELinux, or
AppArmor. We did not use the virbr0 interface defined by default by
libvirt, since that provides only NAT-based addressing. We needed our
VMs to have full access to their host's subnet. The br0 bridge is
configured on the host using the ifcfg-br0 file I posted earlier,
along with associated files for ifcfg-bond0 and one or more
ifcfg-ethN interface files, depending on how many NICs are tied to
the bond interface.
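
In case it helps, the general shape of that host-side setup (not our
actual files; the addresses and bonding mode below are just
placeholders) is:

# /etc/sysconfig/network-scripts/ifcfg-eth0  (one per slave NIC)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.0.10
NETMASK=255.255.255.0
GATEWAY=172.16.0.254

The point is that bond0 itself carries no address; the host's IP sits
on br0, and each guest's veth/tap device gets enslaved to br0 right
alongside bond0.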

The VMs have only a single /etc/sysconfig/network-scripts/ifcfg-eth0
file. This file does not directly mention the br0 interface, but the
VMs are of course indirectly connected to the br0 bridge interface of
their host. From the VM's perspective, they see themselves as a
system with a single NIC.
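
(Purely illustrative again; the guest-side file is nothing special,
just ordinary addressing on the host's subnet:)

# /etc/sysconfig/network-scripts/ifcfg-eth0  (inside the guest)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.0.50
NETMASK=255.255.255.0
GATEWAY=172.16.0.254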

We switched to libvirt-lxc and this was basically plug-n-play. No
changes were needed to the CentOS networking configuration we were
using with our VM-based system, on either the host or the containers.
The switch to containers was ultimately painless, and we were even
able to use the same basic CentOS template, with only a few changes
to make it container friendly (such as tweaking the /etc/fstab
file).
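
(For anyone curious, the usual container-friendly fstab ends up
nearly empty, since the root filesystem and swap are handled by the
host; e.g. something like:)

# /etc/fstab inside the container (illustrative)
none   /dev/shm   tmpfs   defaults   0 0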

Our decision to switch to "stock" LXC, as I mentioned in my original
post, is primarily motivated by the fact that libvirt-lxc is being
deprecated. I assume the switch to LXC should go relatively
painlessly as well, but there is clearly more of a learning curve. I
suspect the fact that our containers cannot access their host's
subnet comes down to a missing parameter in the container's config
file. My gut feeling is that they are not talking to the br0 bridge,
despite br0 being specifically listed in their config files. I'll run
the suggested tcpdump tests to see if I can better understand what's
going on.
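
For comparison, this is the sort of complete bridged network stanza I
understand a stock LXC 1.x container config wants (placeholder
values; if any of these lines are missing from ours, that may well be
the culprit):

lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.name = eth0
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
# addressing can either go here or stay inside the container's own
# ifcfg-eth0, as it did with the VMs:
# lxc.network.ipv4 = 172.16.0.50/24
# lxc.network.ipv4.gateway = 172.16.0.254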

Peter