[lxc-users] Setting up network between containers on different machines

Justin Cormack justin at specialbusservice.com
Sat May 17 12:51:05 UTC 2014


If the internal and external interfaces are on different physical ports you
should be able to move one into a container. Alias interfaces like eth0:0
aren't real devices, so you can't move one of those in. And you can't just
pick a new network address without it being allocated to you.
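
For example, handing a spare port to a container looks roughly like this
(assuming the spare interface is eth1; the addresses are placeholders):

    # container config: the host's eth1 is moved into the container
    lxc.network.type = phys
    lxc.network.link = eth1
    lxc.network.name = eth0
    lxc.network.flags = up
    lxc.network.ipv4 = 192.168.0.10/26
    lxc.network.ipv4.gateway = 192.168.0.1

While the container runs, eth1 disappears from the host; that is expected,
because the device now lives in the container's network namespace.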

Your best solution is to get more IP addresses. IPv6 is great if your
provider allocates a /64, as you get plenty of addresses.
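
For example, with a routed /64 you could give each container its own address
directly in its config (the prefix below is just a placeholder; use whatever
/64 your provider routes to the host):

    lxc.network.type = veth
    lxc.network.link = br0
    lxc.network.flags = up
    lxc.network.ipv6 = 2001:db8:1:2::10/64
    lxc.network.ipv6.gateway = 2001:db8:1:2::1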
 On May 5, 2014 8:29 PM, "Dmitry Demeshchuk" <demeshchuk at gmail.com> wrote:

> Hi, list,
>
> Here's what I'm trying to do: we have multiple physical machines in
> Softlayer network and I'm trying to make the containers (vanilla LXC or
> Docker, doesn't really matter for me) see each other even when being at
> different physical hosts.
>
> The obvious solution would be to give them our internal IP addresses.
> That's what I tried to do, so far with almost no result.
>
> The main problem is Softlayer. Their routers are set up in such a way that
> you have to create an interface with the given IP on the host system, not
> inside the container; otherwise the IP just isn't visible to the network.
> Of course, we could set up every host system as a router for its own
> containers' IPs, but that's obviously very inconvenient.
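>
> (For reference, that routed variant would be roughly the following on each
> host, with interface names and addresses as placeholders:
>
>     sysctl -w net.ipv4.ip_forward=1
>     sysctl -w net.ipv4.conf.eth0.proxy_arp=1
>     ip route add 192.168.10.5/32 dev br0
>
> i.e. the host answers ARP for the container's IP and forwards the traffic
> on to it.)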
>
> Now, here's a bunch of setups I tried:
>
> 1. lxc.network.type = phys with the interface set to eth0. This doesn't
> work for some reason: the interface just disappears from the host and the
> IP address is no longer reachable. lxc.network.ipv4 was set to the same IP
> address as eth0.
>
> 2. lxc.network.type = phys with the interface set to eth0:0 (an alias
> interface I set up myself). This breaks the interface table completely
> (ifconfig keeps failing until a system reboot; even restarting networking
> doesn't help), and needless to say the container is still not reachable
> over the network. I suspect this is a somewhat known bug, but I couldn't
> find it described anywhere. lxc.network.ipv4 is the same as eth0:0.
>
> 3. lxc.network.type = veth, with a bridge br0 bridged to eth0 and given an
> IP address from the same space as the containers. Something like this:
> eth0: 192.168.0.3/26
> br0: 192.168.10.3/26
> lxc.network.ipv4: 192.168.10.5/26
>
> It doesn't work; the IP is just not visible.
>
> 4. The same setup, but with lxc.network.type = macvlan.
>
> 5. The two previous setups, with the local interface being a macvlan
> instead of a bridge.
>
> 6. Finally, something close: create another address space that is visible
> host-wide and use virtual interfaces:
> lxc.network.type = macvlan
> eth0: 192.168.0.3/24
> eth0:0: 192.168.10.5/26
> br0: 10.0.0.1/24
> lxc.network.ipv4 = 10.0.0.2/24
>
> And now, add routes from 192.168.10.5 (eth0:0) to 10.0.0.2/24 (the
> container's eth0). It works, but it involves ugly hacks and iptables,
> roughly along the lines below.
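>
> Roughly, the hacks are something like this (addresses as above, details
> approximate):
>
>     # on this host: forward between the 10.0.0.0/24 space and eth0:0
>     sysctl -w net.ipv4.ip_forward=1
>     iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
>     iptables -A FORWARD -d 10.0.0.0/24 -j ACCEPT
>
>     # on the other hosts: route this container's address via this machine
>     ip route add 10.0.0.2/32 via 192.168.10.5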
>
>
>
> If any of these setups seems close to correct, please let me know and I'll
> provide any extra details: the routes in my routing table, visibility
> issues (e.g. visible from the local machine but not externally, or
> completely invisible), and so on.
>
> And, I guess, my main question is: can I set up an interface on the local
> machine that has an IP *and* make LXC use exactly that interface and that
> IP (though as far as I understand, the answer is "no")?
>
> Thanks!
>
> --
> Best regards,
> Dmitry Demeshchuk
>

