[lxc-users] How to copy "manually" a container ?
Fajar A. Nugraha
list at fajar.net
Thu Aug 23 10:14:41 UTC 2018
On Thu, Aug 23, 2018 at 2:38 PM, Pierre Couderc <pierre at couderc.eu> wrote:
> On 08/23/2018 09:24 AM, Fajar A. Nugraha wrote:
>
> On Thu, Aug 23, 2018 at 2:07 PM, Pierre Couderc <pierre at couderc.eu> wrote:
>
>> On 08/23/2018 07:37 AM, Tamas Papp wrote:
>>
>>>
>>> On 08/23/2018 05:36 AM, Pierre Couderc wrote:
>>>
>>>> If for any reason, "lxc copy" does not work, is it enough to copy
>>>> (rsync) /var/lib/lxd/containers/xxxx to another lxd on another computer in
>>>> /var/lib/lxd/containers/ ?
>>>>
>>>
>>> Copy the folder (watch out for rsync flags) to /var/lib/lxd/storage-pools/default/containers/,
>>> symlink it to /var/lib/lxd/containers and run 'lxd import'.
>>>
>> Thank you very much. It nearly worked.
>> Anyway, it fails (in this case) because:
>> Error: The storage pool's "default" driver "dir" conflicts with the
>> driver "btrfs" recorded in the container's backup file
>>
>
> If you know how lxd uses btrfs to create the container storage (using a
> subvolume?), you can probably create it manually and rsync there.
>
> Or you can create another storage pool, but backed by dir (e.g. 'lxc
> storage create pool2 dir') instead of btrfs/zfs.
>
> Or yet another way:
> - create a new container
> - take note where its storage is (e.g. by looking at mount options, "df
> -h", etc)
> - shutdown the container
> - replace the storage with the one you need to restore
>
> --
> Fajar
>
> Thank you, I will think about that.
> But what is certain is that my "old" container is labelled as btrfs, and after
> rsync onto a "non-btrfs" volume, the btrfs label remains....
>
You can edit backup.yaml to reflect the changes. Here's an example on my
system:
-> my default pool is on zfs
# lxc storage show default
config:
  source: HD/lxd
  volatile.initial_source: HD/lxd
  zfs.pool_name: HD/lxd
description: ""
name: default
driver: zfs
...
-> create a test container
# lxc launch images:alpine/3.8 test1
Creating test1
Starting test1
# df -h | grep test1
HD/lxd/containers/test1 239G 5.2M 239G 1%
/var/lib/lxd/storage-pools/default/containers/test1
-> copy it manually to a "directory" with rsync, then "lxd import". As
expected, it doesn't work.
# mkdir /var/lib/lxd/storage-pools/default/containers/test2
# rsync -a /var/lib/lxd/storage-pools/default/containers/test1/.
/var/lib/lxd/storage-pools/default/containers/test2/.
# sed -i 's/name: test1/name: test2/g'
/var/lib/lxd/storage-pools/default/containers/test2/backup.yaml
# lxd import test2
# lxc start test2
Error: no such file or directory
Try `lxc info --show-log test2` for more info
-> cleanup before next test
# rm -rf /var/lib/lxd/storage-pools/default/containers/test2
# lxc delete test2
-> now create a zfs dataset properly, mount it, and THEN rsync (or replace
the whole thing with 'zfs send | zfs receive') + lxd import. This works.
# zfs create -o
mountpoint=/var/lib/lxd/storage-pools/default/containers/test2
HD/lxd/containers/test2
# rsync -a /var/lib/lxd/storage-pools/default/containers/test1/.
/var/lib/lxd/storage-pools/default/containers/test2/.
# sed -i 's/name: test1/name: test2/g'
/var/lib/lxd/storage-pools/default/containers/test2/backup.yaml
# lxd import test2
# lxc start test2
# lxc list test2
+-------+---------+-------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+-------------------+------+------------+-----------+
| test2 | RUNNING | 10.0.3.122 (eth0) | | PERSISTENT | 0 |
+-------+---------+-------------------+------+------------+-----------+
-> cleanup again
# lxc stop --force test2
# lxc delete test2
-> try again, this time using a different storage pool ('dir'). MUCH more
complicated, but possible
# lxc storage create testpool dir source=/tmp/testpool
Storage pool testpool created
# mkdir -p /tmp/testpool/containers/test2
# rsync -a /var/lib/lxd/storage-pools/default/containers/test1/.
/tmp/testpool/containers/test2/.
# sed -i 's/name: test1/name: test2/g'
/tmp/testpool/containers/test2/backup.yaml
# sed -i 's/pool: default/pool: testpool/g'
/tmp/testpool/containers/test2/backup.yaml
-> edit /tmp/testpool/containers/test2/backup.yaml manually.
Change the devices section to this:
###
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: testpool
    type: disk
###
and change the pool section to this:
###
pool:
  config:
    source: /tmp/testpool
    volatile.initial_source: /tmp/testpool
  description: ""
  name: testpool
  driver: dir
  used_by: []
  status: Created
  locations:
  - none
###
-> import and start it
# lxd import test2
# lxc start test2
# lxc list test2
+-------+---------+-------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+-------------------+------+------------+-----------+
| test2 | RUNNING | 10.0.3.122 (eth0) | | PERSISTENT | 0 |
+-------+---------+-------------------+------+------------+-----------+
-> final cleanup
# lxc stop --force test2
# lxc delete test2
# lxc storage delete testpool
Storage pool testpool deleted
In the case of a "copy" (instead of a backup and restore), as in this case,
you'd want to change "volatile.eth0.hwaddr" too. Otherwise you'd end up
with multiple containers with the same MAC and IP address.
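One way to handle the hwaddr is a sketch like the following, run on a
made-up backup.yaml fragment (the key name is from the thread; deleting the
volatile key should let LXD generate a fresh random MAC on import):

```shell
#!/bin/sh
# Sketch: remove the copied MAC from backup.yaml before 'lxd import'.
# The file below is a minimal made-up stand-in for the real backup.yaml.
set -e

cat > /tmp/backup.yaml <<'EOF'
config:
  volatile.eth0.hwaddr: 00:16:3e:aa:bb:cc
  volatile.idmap.base: "0"
EOF

# Delete the hwaddr line; with the key absent, LXD assigns a new MAC.
sed -i '/volatile\.eth0\.hwaddr/d' /tmp/backup.yaml

grep -c hwaddr /tmp/backup.yaml || true   # prints 0: no hwaddr lines left
```

The other volatile keys (and the container name, as in the sed commands
earlier in the thread) are edited the same way.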
--
Fajar