<div dir="auto"><div>Hi Amblard, </div><div dir="auto"><br></div><div dir="auto">I have never used it, but this may be worth taking a look to solve your problem:</div><div dir="auto"><br></div><div dir="auto"><a href="https://wiki.ubuntu.com/FanNetworking">https://wiki.ubuntu.com/FanNetworking</a><br><div class="gmail_extra" dir="auto"><br><div class="gmail_quote">On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" <<a href="mailto:amaury@linux.com">amaury@linux.com</a>> wrote:<br type="attribution"><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div style="font-family:sans-serif;font-size:13.696px" dir="auto"><div style="margin:16px 0px"><div>Hello,<br><br>I am deploying 10< bare metal servers to serve as hosts for containers<br>managed through LXD.<br>As the number of container grows, management of inter-container<br>running on different hosts becomes difficult to manage and need to be<br>streamlined.<br><br>The goal is to setup a <a href="http://192.168.0.0/24" style="text-decoration-line:none;color:rgb(66,133,244)" target="_blank">192.168.0.0/24</a> network over which containers<br>could communicate regardless of their host. The solutions I looked at<br>[1] [2] [3] recommend use of OVS and/or GRE on hosts and the use of<br>bridge.driver: openvswitch configuration for LXD.<br>Note: baremetal servers are hosted on different physical networks and<br>use of multicast was ruled out.<br><br>An illustration of the goal architecture is similar to the image visible on<br><a href="https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=6aJRw15HSf&pg=PA197#v=onepage&q&f=false" style="text-decoration-line:none;color:rgb(66,133,244)" target="_blank">https://books.google.fr/books?<wbr>id=vVMoDwAAQBAJ&lpg=PA168&ots=<wbr>6aJRw15HSf&pg=PA197#v=onepage&<wbr>q&f=false</a><br>Note: this extract is from a book about LXC, not LXD.<br><br>The point that is not clear is<br>- whether each container needs to have as many veth as there are<br>baremetal host, in which case [de]commission of a new baremetal would<br>require configuration updated of all existing containers (and<br>basically rule out this scenario)<br>- or whether it is possible to "hide" this mesh network at the host<br>level and have a single veth inside each container to access the<br>private network and communicate with all the other containers<br>regardless of their physical location and regardeless of the number of<br>physical peers<br><br>Has anyone built such a setup?<br>Does the OVS+GRE setup need to be build prior to LXD init or can LXD<br>automate part of the setup?<br>Online documentation is scarce on the topic so any help would be appreciated.<br><br>Regards,<br>Amaury<br><br>[1] <a href="https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/" style="text-decoration-line:none;color:rgb(66,133,244)" target="_blank">https://stgraber.org/2016/<wbr>10/27/network-management-with-<wbr>lxd-2-3/</a><br>[2] <a href="https://stackoverflow.com/questions/39094971/want-to-use-the-vlan-feature-of-openvswitch-with-lxd-lxc" style="text-decoration-line:none;color:rgb(66,133,244)" target="_blank">https://stackoverflow.com/<wbr>questions/39094971/want-to-use<wbr>-the-vlan-feature-of-openvswit<wbr>ch-with-lxd-lxc</a><br>[3] <a href="https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-networking-on-ubuntu-16-04-lts/" style="text-decoration-line:none;color:rgb(66,133,244)" 
target="_blank">https://bayton.org/docs/li<wbr>nux/lxd/lxd-zfs-and-bridged-ne<wbr>tworking-on-ubuntu-16-04-lts/</a><br></div></div><div style="height:0px"></div></div><br></div>