[lxc-users] LXD 2.12 - VXLAN configuration connected to eth1

Ron Kelley rkelleyrtp at gmail.com
Sun Apr 23 21:25:21 UTC 2017


Thanks Stéphane.  Really appreciate the fast reply.  Will be looking forward to the next code drop.


As for the macvlan issue, it turns out the interface was plumbed but never brought up.  After running “ifconfig vxlan1500 up”, I could get both containers pinging properly across the network.

For anyone else who might want to try VXLAN multicast between containers, here is a quick set of commands I used to get it working:
--------------------------------
ip -4 route add 239.0.0.1 dev eth1
ip link add vxlan1500 type vxlan group 239.0.0.1 dev eth1 dstport 0 id 1500
ifconfig vxlan1500 up
<edit LXD profile to match - set the nictype to “macvlan”, and the parent to “vxlan1500”>
--------------------------------
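For the last step, the profile edit can also be done from the CLI instead of the interactive editor; a sketch, assuming the “default” profile and a NIC device named “eth0” (adjust both to your setup):

```shell
# Attach a macvlan NIC backed by the VXLAN interface to the profile.
# "default" and the device name "eth0" are assumptions; change as needed.
lxc profile device add default eth0 nic nictype=macvlan parent=vxlan1500

# Or, if the profile already has an eth0 NIC device, just repoint it:
lxc profile device set default eth0 nictype macvlan
lxc profile device set default eth0 parent vxlan1500
```

Containers using that profile will then get their NIC as a macvlan child of vxlan1500.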

Simply replace “vxlan1500” with your interface name of choice and pick your physical ethernet port (eth1 in the example above).  The parameter “id 1500” specifies the VXLAN Network Identifier (VNI), which can be any value from 0 to 16777215.

For what it’s worth, this is a huge win for me, as I can set up a real environment using software-defined VLANs without modifying any top-of-rack switches.  I simply create a new VXLAN segment for each new customer on our LXD servers and deploy a software firewall that manages traffic between each VXLAN segment and a local gateway.
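The per-customer pattern above can be sketched as a small script; the VNI, interface names, and profile names here are all illustrative, not anything LXD requires:

```shell
#!/bin/sh
# Sketch: carve out one VXLAN segment per customer.
# CUSTOMER_VNI is a hypothetical per-customer VXLAN Network ID (0-16777215).
CUSTOMER_VNI=1501
IFACE="vxlan${CUSTOMER_VNI}"

# Create the VXLAN interface on the customer-facing port and bring it up.
ip link add "$IFACE" type vxlan group 239.0.0.1 dev eth1 dstport 0 id "$CUSTOMER_VNI"
ip link set "$IFACE" up    # equivalent to "ifconfig $IFACE up"

# Clone the default profile for this customer and point its NIC at the
# new segment ("customer${CUSTOMER_VNI}" is an assumed naming convention).
lxc profile copy default "customer${CUSTOMER_VNI}"
lxc profile device set "customer${CUSTOMER_VNI}" eth0 nictype macvlan
lxc profile device set "customer${CUSTOMER_VNI}" eth0 parent "$IFACE"
```

Containers launched with the per-customer profile then only see traffic on that customer’s VNI.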

Awesome!


-Ron

 

> On Apr 23, 2017, at 5:00 PM, Stéphane Graber <stgraber at ubuntu.com> wrote:
> 
> Hi,
> 
> I sent a pull request which allows overriding the interface in multicast mode:
>    https://github.com/lxc/lxd/pull/3210
> 
> When writing that code, I did notice that in my earlier implementation I
> always selected the default interface for those, so that explains why no
> amount of routing trickery would help.
> 
> Stéphane
> 
> On Sun, Apr 23, 2017 at 04:36:43PM -0400, Ron Kelley wrote:
>> Thanks for the speedy reply!  From my testing, the VXLAN tunnel always seems to use eth0.  After running the “ip -4 route add” command per your note below, I disabled eth1 on one of the hosts but was still able to ping between the two containers.  I re-enabled that interface and disabled eth0; the ping stopped.  It seems the VXLAN tunnel is bound to eth0.
>> 
>> By chance, is there a workaround to make this work properly?  I also tried using the macvlan interface type specifying a VXLAN tunnel interface and it would not work either.  For clarity, this is what I did:
>> 
>> ip link add vxlan500 type vxlan group 239.0.0.1 dev eth1 dstport 0 id 500
>> ip -4 route add 239.0.0.1 dev eth1
>> <edit the LXD default profile; set the nictype to “macvlan”, and the parent to “vxlan500”>
>> 
>> I was hoping a raw VXLAN interface would work instead of using the LXD create command.
>> 
>> 
>> -Ron
>> 
>> 
>>> On Apr 23, 2017, at 4:18 PM, Stéphane Graber <stgraber at ubuntu.com> wrote:
>>> 
>>> Hi,
>>> 
>>> VXLAN in multicast mode (as is used in your case) will use 239.0.0.1
>>> when no multicast address is specified.
>>> 
>>> This means that whatever route you have to reach "239.0.0.1" will be
>>> used by the kernel for the VXLAN tunnel, or so I would expect.
>>> 
>>> 
>>> Does:
>>> ip -4 route add 239.0.0.1 dev eth1
>>> 
>>> Cause the VXLAN traffic to now use eth1?
>>> 
>>> If it doesn't, then that'd suggest that the multicast VXLAN interface
>>> does in fact get tied to a particular parent interface and we should
>>> therefore add an option to LXD to let you choose that interface.
>>> 
>>> Stéphane
>>> 
>>> On Sun, Apr 23, 2017 at 04:04:03PM -0400, Ron Kelley wrote:
>>>> Greetings all.
>>>> 
>>>> I have been following Stéphane’s excellent guide on using multicast VXLAN with LXD (https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/).  In my lab, I have set up a few servers running Ubuntu 16.04 with LXD 2.12 and multiple interfaces (eth0, eth1, eth2).  My goal is to build a multi-tenant computing solution using VXLAN to separate network traffic.  I want to dedicate eth0 as the mgmt-only interface and use eth1 (or other additional interfaces) as customer-only interfaces.  I have read a number of guides but can’t find anything that clearly spells out how to create bridged interfaces using eth1, eth2, etc. for LXD.
>>>> 
>>>> I can get everything working using a single “eth0” interface on my LXD hosts using the following commands:
>>>> -----------------------------------------------------------
>>>> lxc network create vxlan100 ipv4.address=none ipv6.address=none tunnel.vxlan100.protocol=vxlan tunnel.vxlan100.id=100
>>>> lxc launch ubuntu: testvm01
>>>> lxc network attach vxlan100 testvm01
>>>> -----------------------------------------------------------
>>>> 
>>>> All good so far.  I created two test containers running on separate LXD servers using the above VXLAN ID and gave each a static IP Address (i.e.: 10.1.1.1/24 and 10.1.1.2/24).  Both can ping back and forth.  100% working.
>>>> 
>>>> The next step is to use eth1 instead of eth0 on my LXD servers, but I can’t find a keyword in the online docs (https://github.com/lxc/lxd/blob/master/doc/networks.md) that specifies which interface to bind.
>>>> 
>>>> Any pointers/clues?
>>>> 
>>>> Thanks,
>>>> 
>>>> -Ron
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
