[lxc-users] How to set default volume size using the "volume.size" property on the pool

Kees Bakker keesb at ghs.com
Wed Sep 26 13:26:15 UTC 2018


Ah, OK. Thanks.

I just happened to do exactly that. I was a little surprised that
it changed all containers; I can't yet foresee all the consequences.

For now I will create new profiles for containers that need larger
volumes. Luckily, LXD allows me to change a container's profile
and it will adjust its volume accordingly. Great.

lxc profile copy default bigvol
lxc profile set bigvol root.size 500GB
lxc stop somecontainer
lxc config edit somecontainer  (change profile to "bigvol")
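
In "lxc config edit" the change boils down to pointing the profiles list at
the new profile; the relevant part of the YAML looks roughly like this:

    profiles:
    - bigvol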

On 26-09-18 14:10, Stéphane Graber wrote:
> You set the size property on the root device of the container, then
> restart the container; that should cause LXD to resize it on startup.
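>
> For example, assuming the container is named "somecontainer" and already has
> its own root disk device (if the root device is only inherited from a
> profile, it would first have to be added to the container itself):
>
>     lxc config device set somecontainer root size 50GB
>     lxc restart somecontainer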
>
> On Wed, Sep 26, 2018 at 09:20:02AM +0200, Kees Bakker wrote:
>> Thanks, that works.
>>
>> Next, how do I change the volume size of an existing container?
>>
>> On 25-09-18 17:01, Stéphane Graber wrote:
>>> No worries, we tend to prefer support requests to go here or to
>>> https://discuss.linuxcontainers.org
>>>
>>> The command should have actually been:
>>>
>>>     lxc profile device set default root size 50GB
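>>>
>>> Afterwards, "lxc profile show default" should list the new size under the
>>> root device, roughly like this:
>>>
>>>     devices:
>>>       root:
>>>         path: /
>>>         pool: local
>>>         size: 50GB
>>>         type: disk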
>>>
>>> On Tue, Sep 25, 2018 at 04:11:05PM +0200, Kees Bakker wrote:
>>>> This is a follow up of https://github.com/lxc/lxd/issues/5069
>>>> (( Sorry for creating the issue, Stéphane. I thought that "issues"
>>>> were not just for bugs. ))
>>>>
>>>> You said: "lxc profile set default root size 50GB should do the trick"
>>>>
>>>> Alas, ...
>>>>
>>>> root at maas:~# lxc profile set default root size 50GB
>>>> Description:
>>>>   Set profile configuration keys
>>>>
>>>> Usage:
>>>>   lxc profile set [<remote>:]<profile> <key> <value> [flags]
>>>>
>>>> Global Flags:
>>>>       --debug         Show all debug messages
>>>>       --force-local   Force using the local unix socket
>>>>   -h, --help          Print help
>>>>   -v, --verbose       Show all information messages
>>>>       --version       Print version number
>>>> Error: Invalid number of arguments
>>>>
>>>> Furthermore:
>>>> Somewhere else I read your suggestion to set volume.size, but
>>>> I was not able to get any useful result. I did set the
>>>> volume.size of my storage pool, but new containers were still
>>>> created with the 10GB default.
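>>>>
>>>> (For reference, what I ran was something along these lines, "local" being
>>>> my pool:
>>>>
>>>>     lxc storage set local volume.size 50GB
>>>>
>>>> and then launched a new container.)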
>>>>
>>>> And, perhaps related: if a container's volume is bigger than 10GB,
>>>> copying that container fails. (Copying it to another LXD server
>>>> that uses BTRFS succeeds without problems.)
>>>>
>>>> Note that I'm using LVM storage on Ubuntu 18.04 with LXD/LXC 3.0.1.
>>>> -- Kees
>>>>
>>>>
>>>>
>>>> On 24-09-18 08:56, Kees Bakker wrote:
>>>>> This is still unanswered.
>>>>>
>>>>> How do I set the default volume size of the storage pool?
>>>>>
>>>>> On 13-09-18 10:19, Kees Bakker wrote:
>>>>>> Hey,
>>>>>>
>>>>>> Forgive my ignorance, but how would you do that? I have a setup with LVM
>>>>>> and the default volume size is 10G. I wish to increase that default;
>>>>>> what would be the command syntax? I'd also like to see the current
>>>>>> default settings, just so I know I'm on the right track.
>>>>>>
>>>>>> My pool is called "local".
>>>>>>
>>>>>> # lxc storage show local
>>>>>> config:
>>>>>>   lvm.thinpool_name: LXDThinPool
>>>>>>   lvm.vg_name: local
>>>>>> description: ""
>>>>>> name: local
>>>>>> driver: lvm
>>>>>> used_by:
>>>>>> - /1.0/containers/bionic01
>>>>>> - /1.0/containers/kanboard
>>>>>> - /1.0/containers/license4
>>>>>> - /1.0/containers/usrv1
>>>>>> - /1.0/containers/usrv1/snapshots/after-aptinstall-freeipa
>>>>>> - /1.0/images/7079d12b3253102b829d0fdd6f1f693a1654057ec054542e9e7506c7cf54fa2e
>>>>>> - /1.0/images/c395a7105278712478ec1dbfaab1865593fc11292f99afe01d5b94f1c34a9a3a
>>>>>> - /1.0/profiles/default
>>>>>> - /1.0/profiles/default_pub
>>>>>> - /1.0/profiles/testprof
>>>>>> status: Created
>>>>>> locations:
>>>>>> - maas
>>>>>>
>>>>>> There is no volume.size. Should I just add it?
