[lxc-users] LXD Bridge issues with Openstack
Steve Searles
SSearles at zimcom.net
Fri Jun 17 14:29:48 UTC 2016
Thanks, I have tabled it for the next week or so while I work on other projects. If I find something I will let you know as well.
Steven Searles | ssearles at zimcom.net
Zimcom Internet Solutions | www.zimcom.net
O: 513.231.9500 | D: 513.233.4130
From: lxc-users <lxc-users-bounces at lists.linuxcontainers.org> on behalf of Paul Hummer <paul.hummer at canonical.com>
Reply-To: LXC users mailing-list <lxc-users at lists.linuxcontainers.org>
Date: Friday, June 17, 2016 at 10:22 AM
To: LXC users mailing-list <lxc-users at lists.linuxcontainers.org>
Subject: Re: [lxc-users] LXD Bridge issues with Openstack
Hi Steve-
I'm currently investigating some issues in devstack that are identical to this. I don't have answers for you just yet, but please know that I'm actively trying to figure this out as well. It's likely a configuration issue with neutron, but I'm not entirely sure.
Cheers,
Paul
On Wed, Jun 8, 2016 at 12:10 PM, Steve Searles <SSearles at zimcom.net> wrote:
Hello everyone, I have configured a nova-compute-lxd node in our OpenStack environment. We are currently running OpenStack Mitaka (not Devstack) with neutron. I am able to provision instances, but they have no network connectivity. When using a regular provider network (not a VxLAN network), if I manually add the provider interface (em2 in my case) to the bridge, it functions properly. So it appears that the interface is not being added to the bridge during the provisioning process.
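In shell terms, the manual fix is a single bridge add against the bridge neutron creates for that network (full transcript further down; the brq name is derived from this network's UUID):

brctl addif brq5d71d51f-4b em2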
My neutron config on the host has:
physical_interface_mappings = public:em2
And these options are for VxLAN support:
enable_vxlan = True
local_ip = 172.26.3.1
l2_population = True
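For completeness, these options live in the ML2 linux bridge agent config; a minimal sketch of the relevant sections, assuming the stock Mitaka layout (the file path below is my assumption, not copied from the host):

# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (assumed path)
[linux_bridge]
physical_interface_mappings = public:em2

[vxlan]
enable_vxlan = True
local_ip = 172.26.3.1
l2_population = True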
After adding an instance:
root at lxd01:~# nova list
+--------------------------------------+------+--------+------------+-------------+--------------------+
| ID                                   | Name | Status | Task State | Power State | Networks           |
+--------------------------------------+------+--------+------------+-------------+--------------------+
| d98ec618-1d98-47d1-95d1-31d39af38f3a | test | ACTIVE | -          | Running     | public=70.36.33.15 |
+--------------------------------------+------+--------+------------+-------------+--------------------+
root at lxd01:~# lxc list
+-------------------+---------+------+------+------------+-----------+
| NAME              | STATE   | IPV4 | IPV6 | TYPE       | SNAPSHOTS |
+-------------------+---------+------+------+------------+-----------+
| instance-0000001b | RUNNING |      |      | PERSISTENT | 0         |
+-------------------+---------+------+------+------------+-----------+
root at lxd01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
brq5d71d51f-4b  8000.feec5f898aa5       no              vethNV5DFH
lxdbr0          8000.000000000000       no
virbr0          8000.52540009839a       yes             virbr0-nic
root at lxd01:~#
root at lxd01:~# lxc exec instance-0000001b -- /bin/bash
root at ubuntu:~# ifconfig
eth0 Link encap:Ethernet HWaddr fa:16:3e:06:30:fc
inet6 addr: fe80::f816:3eff:fe06:30fc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:187 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:718 (718.0 B) TX bytes:61866 (61.8 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:128 errors:0 dropped:0 overruns:0 frame:0
TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:9968 (9.9 KB) TX bytes:9968 (9.9 KB)
root at ubuntu:~#
exit
root at lxd01:~# brctl addif brq5d71d51f-4b em2
root at lxd01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
brq5d71d51f-4b  8000.0026b942da6f       no              em2
                                                        vethNV5DFH
lxdbr0          8000.000000000000       no
virbr0          8000.52540009839a       yes             virbr0-nic
root at lxd01:~#
root at lxd01:~# lxc restart instance-0000001b
root at lxd01:~# lxc list
+-------------------+---------+--------------------+------+------------+-----------+
| NAME              | STATE   | IPV4               | IPV6 | TYPE       | SNAPSHOTS |
+-------------------+---------+--------------------+------+------------+-----------+
| instance-0000001b | RUNNING | 70.36.33.15 (eth0) |      | PERSISTENT | 0         |
+-------------------+---------+--------------------+------+------------+-----------+
root at lxd01:~# nova list
+--------------------------------------+------+--------+------------+-------------+--------------------+
| ID                                   | Name | Status | Task State | Power State | Networks           |
+--------------------------------------+------+--------+------------+-------------+--------------------+
| d98ec618-1d98-47d1-95d1-31d39af38f3a | test | ACTIVE | -          | Running     | public=70.36.33.15 |
+--------------------------------------+------+--------+------------+-------------+--------------------+
root at lxd01:~#
root at lxd01:~# lxc exec instance-0000001b -- /bin/bash
root at ubuntu:~# ifconfig
eth0 Link encap:Ethernet HWaddr fa:16:3e:06:30:fc
inet addr:70.36.33.15 Bcast:70.36.33.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe06:30fc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:491 errors:0 dropped:0 overruns:0 frame:0
TX packets:182 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:39890 (39.8 KB) TX bytes:17244 (17.2 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
root at ubuntu:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8<http://8.8.8.8>: icmp_seq=1 ttl=48 time=20.0 ms
64 bytes from 8.8.8.8<http://8.8.8.8>: icmp_seq=2 ttl=48 time=19.5 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 19.516/19.803/20.090/0.287 ms
root at ubuntu:~#
Yay, all good.
Any idea what I might be missing? Does the LXC implementation support VxLAN? When I provision an instance with VxLAN, the vxlan-XX interfaces do not seem to get created either.
This is what brctl looks like after provisioning a VxLAN instance:
root at lxd01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
brq5d71d51f-4b  8000.0026b942da6f       no              em2
                                                        veth8UXD1A
brq6d747849-47  8000.fec52a511007       no              veth4N38LY
lxdbr0          8000.000000000000       no
virbr0          8000.52540009839a       yes             virbr0-nic
root at lxd01:~#
A new bridge was created with the veth from the container, but no vxlan interface.
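A quick way to confirm that from the host is iproute2 (a generic check, nothing OpenStack-specific):

ip -d link show type vxlan    # empty output means no vxlan devices exist at all

And for testing by hand, the device the linuxbridge agent would normally create can be approximated with something like this (a sketch only; 42 stands in for the network's real segmentation ID, and dstport 8472 is the legacy Linux default rather than anything read off this host):

ip link add vxlan-42 type vxlan id 42 local 172.26.3.1 dstport 8472
ip link set vxlan-42 up
brctl addif brq6d747849-47 vxlan-42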
Can someone point me in the right direction?
Steven Searles | ssearles at zimcom.net
Zimcom Internet Solutions | www.zimcom.net
O: 513.231.9500 | D: 513.233.4130
_______________________________________________
lxc-users mailing list
lxc-users at lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users