[lxc-users] OVS / GRE - guest-transparent mesh networking across multiple hosts
Tomasz Chmielewski
mangoo at wpkg.org
Thu Aug 3 12:05:29 UTC 2017
I think Fan networking is single-server only, and/or won't work across different networks.
You may also take a look at https://www.tinc-vpn.org/
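A rough sketch of a tinc-based mesh (untested; "mesh", the host names
and the addresses are all made up): run tincd in switch mode on every
host and attach its interface to the container bridge, so each
container keeps a single veth:

  # /etc/tinc/mesh/tinc.conf on host1 -- tinc builds the full mesh
  # from partial ConnectTo links
  Name = host1
  Mode = switch
  ConnectTo = host2

  # /etc/tinc/mesh/hosts/host1 -- public address of this node; append
  # the public key generated with: tincd -n mesh -K
  Address = 198.51.100.1

  # /etc/tinc/mesh/tinc-up -- bring the VPN up and bridge it
  #!/bin/sh
  ip link set "$INTERFACE" up
  brctl addif lxdbr0 "$INTERFACE"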
Tomasz
https://lxadm.com
On Thursday, August 03, 2017 20:51 JST, Félix Archambault <fel.archambault at gmail.com> wrote:
> Hi Amblard,
>
> I have never used it, but this may be worth taking a look to solve your
> problem:
>
> https://wiki.ubuntu.com/FanNetworking
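>
> If it helps, recent LXD releases are supposed to be able to create a
> fan bridge directly (untested on my side; the underlay subnet below is
> just an example value):
>
>   lxc network create fanbr0 bridge.mode=fan \
>       fan.underlay_subnet=10.0.0.0/16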
>
> On Aug 3, 2017 12:46 AM, "Amaury Amblard-Ladurantie" <amaury at linux.com>
> wrote:
>
> Hello,
>
> I am deploying more than 10 bare-metal servers to serve as hosts for
> containers managed through LXD.
> As the number of containers grows, communication between containers
> running on different hosts becomes difficult to manage and needs to
> be streamlined.
>
> The goal is to set up a 192.168.0.0/24 network over which containers
> could communicate regardless of their host. The solutions I looked at
> [1] [2] [3] recommend the use of OVS and/or GRE on the hosts, and the
> bridge.driver: openvswitch configuration for LXD.
> Note: the bare-metal servers are hosted on different physical networks,
> so the use of multicast was ruled out.
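>
> From what I understand, the OVS side would look roughly like this
> (an untested sketch; bridge name and peer addresses are examples,
> with one GRE port per remote host):
>
>   # let LXD create an OVS-backed bridge ("ovsbr0" is an example name)
>   lxc network create ovsbr0 bridge.driver=openvswitch
>   # add one GRE tunnel per remote host (example peer IPs)
>   ovs-vsctl add-port ovsbr0 gre-host2 -- \
>       set interface gre-host2 type=gre options:remote_ip=203.0.113.2
>   ovs-vsctl add-port ovsbr0 gre-host3 -- \
>       set interface gre-host3 type=gre options:remote_ip=203.0.113.3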
>
> An illustration of the target architecture is similar to the image
> visible at
> https://books.google.fr/books?id=vVMoDwAAQBAJ&lpg=PA168&ots=6aJRw15HSf&pg=PA197#v=onepage&q&f=false
> Note: this extract is from a book about LXC, not LXD.
>
> The point that is not clear is:
> - whether each container needs as many veths as there are bare-metal
> hosts, in which case [de]commissioning a bare-metal host would require
> configuration updates to all existing containers (and would basically
> rule out this scenario)
> - or whether it is possible to "hide" this mesh network at the host
> level and have a single veth inside each container to access the
> private network and communicate with all the other containers,
> regardless of their physical location and of the number of physical
> peers (see the profile sketch just below)
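>
> For the second option, I picture each container with a single NIC
> attached to the shared bridge, e.g. through the default profile
> (untested; the bridge name matches the sketch above):
>
>   lxc profile device add default eth0 nic \
>       nictype=bridged parent=ovsbr0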
>
> Has anyone built such a setup?
> Does the OVS+GRE setup need to be built prior to LXD init, or can LXD
> automate part of the setup?
> Online documentation on the topic is scarce, so any help would be
> appreciated.
>
> Regards,
> Amaury
>
> [1] https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
> [2] https://stackoverflow.com/questions/39094971/want-to-use-the-vlan-feature-of-openvswitch-with-lxd-lxc
> [3] https://bayton.org/docs/linux/lxd/lxd-zfs-and-bridged-networking-on-ubuntu-16-04-lts/
>
>