<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div><br>My comments between lines.</div><div><br>On Aug 3, 2017, at 10:22, Ron Kelley <<a href="mailto:rkelleyrtp@gmail.com">rkelleyrtp@gmail.com</a>> wrote:<br><br></div><blockquote type="cite"><div><span>We have implemented something similar to this using VXLAN (outside the scope of LXC).</span><br><span></span><br><span>Our setup: 6x servers colocated in the data center running LXD 2.15 - each server with 2x NICs: nic(a) for management and nic(b) </span><br><span></span><br><span>* nic(a) is strictly used for all server management tasks (lxd commands)</span><br><span>* nic(b) is used for all VXLAN network segments</span><br><span></span><br><span></span><br><span>On each server, we provision ethernet interface eth1 with a private IP Address (i.e.: 172.20.0.x/24) and run the following script at boot to provision the VXLAN interfaces (via multicast):</span><br><span>---------------------------------------</span><br><span>#!/bin/bash</span><br><span></span><br><span># Script to configure VxLAN networks</span><br><span>ACTION="$1"</span><br><span></span><br><span>case $ACTION in</span><br><span> up)</span><br><span> ip -4 route add 239.0.0.1 dev eth1</span><br><span> for i in {1101..1130}; do ip link add vxlan.${i} type vxlan group 239.0.0.1 dev eth1 dstport 0 id ${i} && ip link set vxlan.${i} up; done</span><br><span> ;;</span><br><span> down)</span><br><span> ip -4 route del 239.0.0.1 dev eth1</span><br><span> for i in {1101..1130}; do ip link set vxlan.${i} down && ip link del vxlan.${i}; done</span><br><span> ;;</span><br><span> *)</span><br><span> echo " ${0} up|down"; exit</span><br><span> ;;</span><br><span>esac</span><br><span>-----------------------</span><br><span></span><br><span>To get the containers talking, we simply assign a container to a respective VXLAN interface via the “lxc network attach” command like this: </span><br><span>/usr/bin/lxc network attach vxlan.${VXLANID} ${HOSTNAME} eth0 eth0.</span><br><span></span><br><span>We have single-armed (i.e.: eth0) containers that live exclusively behind a VXLAN interface, and we have dual-armed servers (eth0 and eth1) hat act as firewall/NAT containers for a VXLAN segment.</span><br><span></span><br><span>It took a while to get it all working, but it works great. We can move containers anywhere in our infrastructure without issue. </span><br><span></span><br><span>Hope this helps!</span><br><span></span><br><span></span><br><span></span><br><span>-Ron</span><br><span></span><br><span></span><br><span></span><br></div></blockquote><div><br></div><div>Second VXLan. Check the RFC is pretty straightforward[1]. In summary, you need a key database to map your remote networks; etcd is a way to implement this, or you can use multicast as Ron explained.</div><div><br></div><div>[1] <span style="font-size: 12pt; font-family: Helvetica;"><a href="https://tools.ietf.org/pdf/rfc7348.pdf">https://tools.ietf.org/pdf/rfc7348.pdf</a></span></div><div><br></div><br><blockquote type="cite"><div><span></span><br><blockquote type="cite"><span>On Aug 3, 2017, at 8:05 AM, Tomasz Chmielewski <<a href="mailto:mangoo@wpkg.org">mangoo@wpkg.org</a>> wrote:</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>I think fan is single server only and / or won't cross different networks.</span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>You may also take a look at <a href="https://www.tinc-vpn.org/">https://www.tinc-vpn.org/</a></span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>Tomasz</span><br></blockquote><blockquote type="cite"><span><a href="https://lxadm.com">https://lxadm.com</a></span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><span>On Thursday, August 03, 2017 20:51 JST, Félix Archambault <<a href="mailto:fel.archambault@gmail.com">fel.archambault@gmail.com</a>> wrote: </span><br></blockquote><blockquote type="cite"><span></span><br></blockquote><blockquote type="cite"><blockquote type="cite"><span>Hi Amblard,</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>I have never used it, but this may be worth taking a look to solve your</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>problem:</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span><a href="https://wiki.ubuntu.com/FanNetworking">https://wiki.ubuntu.com/FanNetworking</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" <<a href="mailto:amaury@linux.com">amaury@linux.com</a>></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>wrote:</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Hello,</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>I am deploying 10< bare metal servers to serve as hosts for containers</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>managed through LXD.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>As the number of container grows, management of inter-container</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>running on different hosts becomes difficult to manage and need to be</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>streamlined.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>The goal is to setup a 192.168.0.0/24 network over which containers</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>could communicate regardless of their host. The solutions I looked at</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>[1] [2] [3] recommend use of OVS and/or GRE on hosts and the use of</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>bridge.driver: openvswitch configuration for LXD.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Note: baremetal servers are hosted on different physical networks and</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>use of multicast was ruled out.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>An illustration of the goal architecture is similar to the image visible on</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span><a href="https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=">https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>6aJRw15HSf&pg=PA197#v=onepage&q&f=false</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Note: this extract is from a book about LXC, not LXD.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>The point that is not clear is</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>- whether each container needs to have as many veth as there are</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>baremetal host, in which case [de]commission of a new baremetal would</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>require configuration updated of all existing containers (and</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>basically rule out this scenario)</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>- or whether it is possible to "hide" this mesh network at the host</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>level and have a single veth inside each container to access the</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>private network and communicate with all the other containers</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>regardless of their physical location and regardeless of the number of</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>physical peers</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Has anyone built such a setup?</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Does the OVS+GRE setup need to be build prior to LXD init or can LXD</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>automate part of the setup?</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Online documentation is scarce on the topic so any help would be</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>appreciated.</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Regards,</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>Amaury</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>[1] <a href="https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/">https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>[2] <a href="https://stackoverflow.com/questions/39094971/want-to-use">https://stackoverflow.com/questions/39094971/want-to-use</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>-the-vlan-feature-of-openvswitch-with-lxd-lxc</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>[3] <a href="https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne">https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-ne</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>tworking-on-ubuntu-16-04-lts/</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>_______________________________________________</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span>lxc-users mailing list</span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span><a href="mailto:lxc-users@lists.linuxcontainers.org">lxc-users@lists.linuxcontainers.org</a></span><br></blockquote></blockquote><blockquote type="cite"><blockquote type="cite"><span><a href="http://lists.linuxcontainers.org/listinfo/lxc-users">http://lists.linuxcontainers.org/listinfo/lxc-users</a></span><br></blockquote></blockquote><blockquote type="cite"><span>_______________________________________________</span><br></blockquote><blockquote type="cite"><span>lxc-users mailing list</span><br></blockquote><blockquote type="cite"><span><a href="mailto:lxc-users@lists.linuxcontainers.org">lxc-users@lists.linuxcontainers.org</a></span><br></blockquote><blockquote type="cite"><span><a href="http://lists.linuxcontainers.org/listinfo/lxc-users">http://lists.linuxcontainers.org/listinfo/lxc-users</a></span><br></blockquote><span></span><br><span>_______________________________________________</span><br><span>lxc-users mailing list</span><br><span><a href="mailto:lxc-users@lists.linuxcontainers.org">lxc-users@lists.linuxcontainers.org</a></span><br><span><a href="http://lists.linuxcontainers.org/listinfo/lxc-users">http://lists.linuxcontainers.org/listinfo/lxc-users</a></span></div></blockquote><br><div>Luis Michael Ibarra</div></body></html>