[lxc-users] "mesh networking" for lxc containers (similar to weave)?
Luis M. Ibarra
michael.ibarra at gmail.com
Mon Jun 22 16:52:12 UTC 2015
Have you checked out Fan?
http://blog.dustinkirkland.com/2015/06/the-bits-have-hit-fan.html?m=1
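For reference, bringing a Fan bridge up and pointing a container at it
looks roughly like the following. This is only a sketch based on the
linked post: the fanctl option syntax, the fan-250 bridge name and the
10.0.0.4/16 underlay address are assumptions that may differ on your
setup.

  # Install the Fan tooling (Ubuntu).
  sudo apt-get install ubuntu-fan

  # Map the 250.0.0.0/8 overlay over this host's underlay address
  # (hypothetical invocation; see fanctl(8) for the exact syntax).
  sudo fanctl up -o 250.0.0.0/8 -u 10.0.0.4/16

  # Then point containers at the resulting bridge in their config:
  #   lxc.network.type = veth
  #   lxc.network.link = fan-250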
2015-06-20 2:16 GMT-04:00 Janjaap Bos <janjaapbos at gmail.com>:
> Yes, ZeroTier provides peer-to-peer virtual networking. It is cloud-,
> container- and virtualiser-agnostic and will work anywhere; we use it to
> connect containers and VMs across clouds, and also to give access to
> users on Windows / OS X.
>
> Within the container you need access to the /dev/net/tun device and,
> depending on the flavour (lxc / lxd / docker), the net_admin capability.
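>
> For plain lxc, that typically means something like the following in the
> container's config (a sketch for LXC 1.x-style configs; names and paths
> are examples):
>
>   # /dev/net/tun is char device 10:200: allow it and bind-mount it in.
>   lxc.cgroup.devices.allow = c 10:200 rwm
>   lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
>   # net_admin stays available unless you drop it via lxc.cap.drop.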
>
> You can download it at https://www.zerotier.com or build it from
> https://github.com/zerotier/ZeroTierOne
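>
> Once the service is running, joining a network is a one-liner (the
> network ID below is a placeholder for one you create in the ZeroTier
> web UI):
>
>   sudo zerotier-cli join <network-id>
>   sudo zerotier-cli listnetworks   # shows the assigned address once the
>                                    # member has been authorised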
>
> Since it is peer-to-peer, there is very little overhead, and packets
> destined for local peers stay within the local net. You can create very
> large distributed flat Ethernet networks, which is great for the kind of
> cloud backplane you described.
>
> Also, this lets you live-migrate instances while keeping their network
> configuration intact.
>
> 2015-06-20 3:37 GMT+02:00 Tomasz Chmielewski <mangoo at wpkg.org>:
>
>> I know this is just "normal networking"; however, there are at least two
>> issues with your suggestion:
>>
>> - it assumes the hosts are in the same subnet (say, connected to the
>> same switch), so it won't work if the hosts have two different public
>> IPs (e.g. 46.1.2.3 and 124.8.9.10)
>>
>> - with just two hosts, you may overcome that limitation with some VPN
>> magic; however, it becomes problematic as the number of hosts grows
>> (imagine 10 or more hosts, set up without a SPOF / central VPN server;
>> ideally, the hosts should talk to each other over the shortest paths
>> possible)
>>
>>
>> Therefore, I'm asking: is there any better "magic", as you say, for LXC
>> networking?
>> Possibly it could be achieved with tinc (http://www.tinc-vpn.org/),
>> running on the hosts only, but I haven't really used it.
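>> For illustration, a minimal two-host tinc sketch in switch mode (the
>> netname "lxcmesh" is made up, and since I haven't used tinc, treat this
>> as unverified):
>>
>>   # /etc/tinc/lxcmesh/tinc.conf on host1
>>   Name = host1
>>   Mode = switch        # switch mode gives one flat layer-2 network
>>   ConnectTo = host2    # tinc meshes with further peers on its own
>>
>>   # /etc/tinc/lxcmesh/tinc-up (run by tincd, which sets $INTERFACE):
>>   # attach the VPN interface to the LXC bridge
>>   ip link set "$INTERFACE" up
>>   brctl addif lxcbr0 "$INTERFACE"
>>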
>> And maybe people have other ideas?
>>
>> --
>> Tomasz Chmielewski
>> http://wpkg.org
>>
>>
>> On 2015-06-20 03:20, Christoph Lehmann wrote:
>>
>>> There is no magic in LXC's networking. It's just a bridge, some
>>> iptables rules for NAT, and a DHCP server.
>>>
>>> You can set up a bridge on your public interface, configure the
>>> container to use that bridge, and do the same on your second host.
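>>>
>>> On Debian/Ubuntu that is roughly the following (a sketch: interface
>>> names and the DHCP choice are examples):
>>>
>>>   # /etc/network/interfaces on each host
>>>   auto br0
>>>   iface br0 inet dhcp
>>>       bridge_ports eth0
>>>
>>>   # in the container's config
>>>   lxc.network.type = veth
>>>   lxc.network.link = br0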
>>>
>>> On 19 June 2015 at 18:15:23 CEST, Tomasz Chmielewski
>>> <mangoo at wpkg.org> wrote:
>>>
>>>> Are there any solutions which would let one build "mesh networking"
>>>> for lxc containers, similar to what weave does for Docker?
>>>>
>>>> Assumptions:
>>>>
>>>> - multiple servers (hosts) which are not in the same subnet (e.g. in
>>>> different DCs in different countries)
>>>> - containers share the same subnet (e.g. 10.0.0.0/8), no matter which
>>>> host they are running on
>>>> - if a container is migrated to a different host, it is still
>>>> reachable at the same IP address without any changes to the networking
>>>>
>>>> I suppose the solution would run only once on each of the hosts,
>>>> rather than in each container.
>>>>
>>>> Is there something similar for LXC?
>>>>
>>>
>>> --
>>> Diese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail
>>> gesendet.
--
Luis M. Ibarra