[lxc-users] LXD Cluster & Ceph storage: "Config key ... may not be used as node-specific key"
Robert Johnson
robert.j at bendtel.com
Thu May 16 00:01:41 UTC 2019
On 5/15/19 4:20 PM, Stéphane Graber wrote:
> On Wed, May 15, 2019 at 03:00:34PM -0700, Robert Johnson wrote:
>> I seem to be stuck in a catch-22 with adding a ceph storage pool to an
>> existing LXD cluster.
>>
>> When attempting to add a ceph storage pool, I am prompted to specify the
>> target node, but when I do, the config keys are not allowed. And once a
>> ceph pool is created, it's not possible to add config keys. Is there
>> something I'm missing in the process of adding a ceph pool to an LXD
>> cluster?
>>
>> The documentation and examples that I have found all assume a stand-alone
>> LXD instance.
>>
>>
>> Example commands showing what I am trying to accomplish:
>>
>> rob at stack1b:~$ lxd --version
>> 3.13
>>
>> rob at stack1b:~$ lxc cluster list
>> +---------+-------------------------------------+----------+--------+-------------------+
>> |  NAME   |                 URL                 | DATABASE | STATE  |      MESSAGE      |
>> +---------+-------------------------------------+----------+--------+-------------------+
>> | stack1a | https://[....................]:8443 | YES      | ONLINE | fully operational |
>> +---------+-------------------------------------+----------+--------+-------------------+
>> | stack1b | https://[....................]:8443 | YES      | ONLINE | fully operational |
>> +---------+-------------------------------------+----------+--------+-------------------+
>> | stack1c | https://[....................]:8443 | YES      | ONLINE | fully operational |
>> +---------+-------------------------------------+----------+--------+-------------------+
>>
>> rob at stack1b:~$ lxc storage list
>> +-------+-------------+--------+---------+---------+
>> | NAME | DESCRIPTION | DRIVER | STATE | USED BY |
>> +-------+-------------+--------+---------+---------+
>> | local | | zfs | CREATED | 10 |
>> +-------+-------------+--------+---------+---------+
>>
>> rob at stack1b:~$ lxc storage create lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user
>> Error: Pool not pending on any node (use --target <node> first)
>>
>> rob at stack1b:~$ lxc storage create --target stack1b lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user
>> Error: Config key 'ceph.osd.pool_name' may not be used as node-specific key
>>
>> rob at stack1b:~$ lxc storage create --target stack1b lxd-slow ceph ceph.user.name=user
>> Error: Config key 'ceph.user.name' may not be used as node-specific key
>>
>> rob at stack1b:~$ lxc storage create --target stack1b lxd-slow ceph
>> Storage pool lxd-slow pending on member stack1b
>>
>> rob at stack1b:~$ lxc storage list
>> +----------+-------------+--------+---------+---------+
>> | NAME | DESCRIPTION | DRIVER | STATE | USED BY |
>> +----------+-------------+--------+---------+---------+
>> | local | | zfs | CREATED | 10 |
>> +----------+-------------+--------+---------+---------+
>> | lxd-slow | | ceph | PENDING | 0 |
>> +----------+-------------+--------+---------+---------+
>>
>> rob at stack1b:~$ lxc storage set lxd-slow ceph.osd.pool_name lxd-slow
>> Error: failed to notify peer [....................]:8443: The [ceph.osd.pool_name] properties cannot be changed for "ceph" storage pools
>>
>> rob at stack1b:~$ lxc storage set lxd-slow ceph.user.name user
>> Error: failed to notify peer [....................]:8443: The [ceph.user.name] properties cannot be changed for "ceph" storage pools
>
> lxc storage create lxd-slow ceph --target stack1a
> lxc storage create lxd-slow ceph --target stack1b
> lxc storage create lxd-slow ceph --target stack1c
> lxc storage create lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user
>
>
Thank you!
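
For anyone who finds this in the archives: the key point is that ceph.* keys are cluster-wide, so they are rejected on the per-node (--target) calls, and as the errors above show they also cannot be changed after the pool is created. The pool must first be marked pending on every member, and the final create (without --target) then carries the cluster-wide keys. A minimal sketch of the full sequence, using the member names from my cluster list above (the loop is just shorthand for the three per-member commands in Stéphane's reply):

# Stage a pending pool entry on each cluster member. No ceph.*
# keys here; they are cluster-wide and rejected with --target.
for member in stack1a stack1b stack1c; do
    lxc storage create lxd-slow ceph --target "$member"
done

# One final create without --target supplies the cluster-wide keys
# and moves the pool from PENDING to CREATED on all members.
lxc storage create lxd-slow ceph ceph.osd.pool_name=lxd-slow ceph.user.name=user

# Confirm the pool state:
lxc storage list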