[lxc-users] Nova LXD - Timeout waiting for vif plugging callback
Joshua Schaeffer
jschaeffer0922 at gmail.com
Tue Aug 14 01:03:40 UTC 2018
I'm trying to get OpenStack set up with nova-compute-lxd and am having problems with the networking portion. I'm wondering if anybody has run into this problem before. All my servers run Ubuntu 16.04. On the compute node I have nova-compute-lxd 15.0.2 and LXD 3.0.1 installed (not sure the LXD version matters):
root@bllcloudcmp02:~# dpkg -l | grep lxd
ii lxd 3.0.1-0ubuntu1~16.04.4 amd64 Container hypervisor based on LXC - daemon
ii lxd-client 3.0.1-0ubuntu1~16.04.4 amd64 Container hypervisor based on LXC - client
ii nova-compute-lxd 15.0.2-0ubuntu1~cloud0 all Openstack Compute - LXD container hypervisor support
ii python-nova-lxd 15.0.2-0ubuntu1~cloud0 all OpenStack Compute Python libraries - LXD driver
ii python-pylxd 2.2.4-0ubuntu0.17.04.1~cloud0 all Python library for interacting with LXD REST API
Everything from OpenStack appears to be operational:
root@bllcloudctl02:~# openstack compute service list
+----+------------------+---------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+---------------+----------+---------+-------+----------------------------+
| 3 | nova-conductor | bllcloudapi02 | internal | enabled | up | 2018-08-14T00:52:31.000000 |
| 5 | nova-scheduler | bllcloudapi02 | internal | enabled | up | 2018-08-14T00:52:32.000000 |
| 6 | nova-consoleauth | bllcloudapi02 | internal | enabled | up | 2018-08-14T00:52:34.000000 |
| 7 | nova-compute | bllcloudcmp02 | nova | enabled | up | 2018-08-14T00:52:31.000000 |
+----+------------------+---------------+----------+---------+-------+----------------------------+
root@bllcloudctl02:~# openstack network agent list
+--------------------------------------+--------------------+---------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+---------------+-------------------+-------+-------+---------------------------+
| 2e0e53b1-7c99-4acb-ac69-79c3bd9aed12 | DHCP agent | bllcloudnet02 | nova | True | UP | neutron-dhcp-agent |
| 86f9449f-1d85-4b80-9a4e-5c35b198f695 | Metadata agent | bllcloudnet02 | None | True | UP | neutron-metadata-agent |
| c8a6fc41-9f93-4ca8-a222-83b12baada8f | Linux bridge agent | bllcloudnet02 | None | True | UP | neutron-linuxbridge-agent |
| cc4073ce-664a-4046-aac1-8c0f1bfeee94 | Linux bridge agent | bllcloudcmp02 | None | True | UP | neutron-linuxbridge-agent |
| dca89565-7d65-49d6-8d26-60c7fc075d86 | L3 agent | bllcloudnet02 | nova | True | UP | neutron-l3-agent |
+--------------------------------------+--------------------+---------------+-------------------+-------+-------+---------------------------+
When I try to start a server I get the following in the nova log (some text removed for readability):
INFO nova.scheduler.client.report [...] Submitted allocation for instance
INFO os_vif [...] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:5b:d1:28,bridge_name='brqd03aa728-2e',has_traffic_filtering=True,id=7758ba79-b165-4883-95d8-83d39316f4bb,network=Network(d03aa728-2e48-4f99-8ba3-017559cd0890),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap7758ba79-b1')
WARNING nova.virt.lxd.driver [...] Timeout waiting for vif plugging callback for instance instance-00000015
This ultimately results in an error and the container not being created, because I have vif_plugging_is_fatal set to true (the default). I haven't been able to figure out why Nova never receives the vif-plugged callback after the interface is created. If I set vif_plugging_is_fatal to false the container does get created, but I can't ping the IP address that gets assigned to it. Any help would be appreciated.
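For reference, the two settings I'm talking about live in the [DEFAULT] section of nova.conf on the compute node. This is just a minimal sketch of the relevant lines, not my full config; the values shown are the upstream defaults:

```ini
# /etc/nova/nova.conf on the compute node (bllcloudcmp02)
[DEFAULT]
# Abort the boot if Neutron never sends the network-vif-plugged event.
# true is the default; setting it to false lets the container be created,
# but networking still doesn't work.
vif_plugging_is_fatal = true
# Seconds to wait for the callback before timing out (default 300).
vif_plugging_timeout = 300
```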
Thanks,
Joshua Schaeffer