[lxc-devel] [lxd/master] Custom volume backups

monstermunchkin on Github lxc-bot at linuxcontainers.org
Wed Sep 2 21:07:43 UTC 2020


From 14d5ae93e2e02d766f8635ed95fee0cd7ddbd6b5 Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Mon, 24 Aug 2020 12:27:13 +0200
Subject: [PATCH 01/14] shared/version: Add custom_volume_backup API extension

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 shared/version/api.go | 1 +
 1 file changed, 1 insertion(+)

diff --git a/shared/version/api.go b/shared/version/api.go
index 4771c18f21..e328a3c42d 100644
--- a/shared/version/api.go
+++ b/shared/version/api.go
@@ -225,6 +225,7 @@ var APIExtensions = []string{
 	"container_syscall_intercept_bpf_devices",
 	"network_type_ovn",
 	"network_bridge_ovn_bridge",
+	"custom_volume_backup",
 }
 
 // APIExtensionsCount returns the number of available API extensions.

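The extension string added above is what clients probe before touching the new endpoints: the server advertises its extension list, and the client checks membership (in the Go client this is `ProtocolLXD.HasExtension`). A minimal standalone sketch of that check, with an illustrative extension slice rather than a real server:

```go
package main

import "fmt"

// hasExtension reports whether the server advertises a given API extension.
// In the real client this is ProtocolLXD.HasExtension; here the extension
// list is a plain slice for illustration.
func hasExtension(extensions []string, name string) bool {
	for _, ext := range extensions {
		if ext == name {
			return true
		}
	}
	return false
}

func main() {
	serverExtensions := []string{
		"network_type_ovn",
		"network_bridge_ovn_bridge",
		"custom_volume_backup",
	}

	fmt.Println(hasExtension(serverExtensions, "custom_volume_backup")) // true
	fmt.Println(hasExtension(serverExtensions, "missing_extension"))    // false
}
```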
From f558a328f6fbb87c5a19a8c57b7470cc2e48fcdd Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Mon, 24 Aug 2020 14:10:25 +0200
Subject: [PATCH 02/14] doc/rest-api: Add custom volume backups

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 doc/rest-api.md | 96 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/doc/rest-api.md b/doc/rest-api.md
index 1444de1ba0..5e0e26b99a 100644
--- a/doc/rest-api.md
+++ b/doc/rest-api.md
@@ -278,6 +278,9 @@ much like `/1.0/containers` will only show you instances of that type.
          * [`/1.0/storage-pools/<pool>/volumes/<type>/<name>`](#10storage-poolspoolvolumestypename)
            * [`/1.0/storage-pools/<pool>/volumes/<type>/<name>/snapshots`](#10storage-poolspoolvolumestypenamesnapshots)
              * [`/1.0/storage-pools/<pool>/volumes/<type>/<volume>/snapshots/<name>`](#10storage-poolspoolvolumestypevolumesnapshotsname)
+           * [`/1.0/storage-pools/<pool>/volumes/<type>/<name>/backups`](#10storage-poolspoolvolumestypenamebackups)
+             * [`/1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>`](#10storage-poolspoolvolumestypevolumebackupsname)
+               * [`/1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>/export`](#10storage-poolspoolvolumestypevolumebackupsnameexport)
  * [`/1.0/resources`](#10resources)
  * [`/1.0/cluster`](#10cluster)
    * [`/1.0/cluster/members`](#10clustermembers)
@@ -3200,6 +3203,99 @@ Input:
 
 HTTP code for this should be 202 (Accepted).
 
+### `/1.0/storage-pools/<pool>/volumes/<type>/<name>/backups`
+#### GET
+ * Description: List of backups for the volume
+ * Introduced: with API extension `custom_volume_backup`
+ * Authentication: trusted
+ * Operation: sync
+ * Return: a list of backups for the volume
+
+Return value:
+
+```json
+[
+    "/1.0/storage-pools/pool1/volumes/custom/vol1/backups/backup0",
+    "/1.0/storage-pools/pool1/volumes/custom/vol1/backups/backup1"
+]
+```
+
+#### POST
+ * Description: Create a new backup
+ * Introduced: with API extension `custom_volume_backup`
+ * Authentication: trusted
+ * Operation: async
+ * Returns: background operation or standard error
+
+Input:
+
+```js
+{
+    "name": "backupName",      // unique identifier for the backup
+    "expiry": 3600,            // when to delete the backup automatically
+    "volume_only": true,       // if True, snapshots aren't included
+    "optimized_storage": true  // if True, btrfs send or zfs send is used for volume and snapshots
+}
+```
+
+### `/1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>`
+#### GET
+ * Description: Backup information
+ * Introduced: with API extension `custom_volume_backup`
+ * Authentication: trusted
+ * Operation: sync
+ * Returns: dict of the backup
+
+Output:
+
+```json
+{
+    "name": "backupName",
+    "created_at": "2018-04-23T12:16:09+02:00",
+    "expires_at": "2018-04-23T12:16:09+02:00",
+    "volume_only": false,
+    "optimized_storage": false
+}
+```
+
+#### DELETE
+ * Description: remove the backup
+ * Introduced: with API extension `custom_volume_backup`
+ * Authentication: trusted
+ * Operation: async
+ * Return: background operation or standard error
+
+#### POST
+ * Description: used to rename the backup
+ * Introduced: with API extension `custom_volume_backup`
+ * Authentication: trusted
+ * Operation: async
+ * Return: background operation or standard error
+
+Input:
+
+```json
+{
+    "name": "new-name"
+}
+```
+
+### `/1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>/export`
+#### GET
+ * Description: fetch the backup tarball
+ * Introduced: with API extension `custom_volume_backup`
+ * Authentication: trusted
+ * Operation: sync
+ * Return: dict containing the backup tarball
+
+Output:
+
+```json
+{
+    "data": "<byte-stream>"
+}
+```
+
 ### `/1.0/resources`
 #### GET
  * Description: information about the resources available to the LXD server

From 086c7a63542586564ecc05a1276d8530f4f23567 Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 26 Aug 2020 13:56:23 +0200
Subject: [PATCH 03/14] lxd: Rename backup.Backup to backup.InstanceBackup

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxd/backup/backup.go                | 24 ++++++++++++------------
 lxd/instance/drivers/driver_lxc.go  |  4 ++--
 lxd/instance/drivers/driver_qemu.go |  4 ++--
 lxd/instance/instance_interface.go  |  2 +-
 lxd/instance/instance_utils.go      |  2 +-
 5 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/lxd/backup/backup.go b/lxd/backup/backup.go
index 5d20ac00f9..91716d989f 100644
--- a/lxd/backup/backup.go
+++ b/lxd/backup/backup.go
@@ -121,8 +121,8 @@ func GetInfo(r io.ReadSeeker) (*Info, error) {
 	return &result, nil
 }
 
-// Backup represents a container backup
-type Backup struct {
+// InstanceBackup represents an instance backup
+type InstanceBackup struct {
 	state    *state.State
 	instance Instance
 
@@ -137,8 +137,8 @@ type Backup struct {
 }
 
 // New instantiates a new Backup struct.
-func New(state *state.State, inst Instance, ID int, name string, creationDate, expiryDate time.Time, instanceOnly, optimizedStorage bool) *Backup {
-	return &Backup{
+func New(state *state.State, inst Instance, ID int, name string, creationDate, expiryDate time.Time, instanceOnly, optimizedStorage bool) *InstanceBackup {
+	return &InstanceBackup{
 		state:            state,
 		instance:         inst,
 		id:               ID,
@@ -151,33 +151,33 @@ func New(state *state.State, inst Instance, ID int, name string, creationDate, e
 }
 
 // CompressionAlgorithm returns the compression used for the tarball.
-func (b *Backup) CompressionAlgorithm() string {
+func (b *InstanceBackup) CompressionAlgorithm() string {
 	return b.compressionAlgorithm
 }
 
 // SetCompressionAlgorithm sets the tarball compression.
-func (b *Backup) SetCompressionAlgorithm(compression string) {
+func (b *InstanceBackup) SetCompressionAlgorithm(compression string) {
 	b.compressionAlgorithm = compression
 }
 
 // InstanceOnly returns whether only the instance itself is to be backed up.
-func (b *Backup) InstanceOnly() bool {
+func (b *InstanceBackup) InstanceOnly() bool {
 	return b.instanceOnly
 }
 
 // Name returns the name of the backup.
-func (b *Backup) Name() string {
+func (b *InstanceBackup) Name() string {
 	return b.name
 }
 
 // OptimizedStorage returns whether the backup is to be performed using
 // optimization supported by the storage driver.
-func (b *Backup) OptimizedStorage() bool {
+func (b *InstanceBackup) OptimizedStorage() bool {
 	return b.optimizedStorage
 }
 
 // Rename renames a container backup
-func (b *Backup) Rename(newName string) error {
+func (b *InstanceBackup) Rename(newName string) error {
 	oldBackupPath := shared.VarPath("backups", project.Instance(b.instance.Project(), b.name))
 	newBackupPath := shared.VarPath("backups", project.Instance(b.instance.Project(), newName))
 
@@ -215,12 +215,12 @@ func (b *Backup) Rename(newName string) error {
 }
 
 // Delete removes an instance backup
-func (b *Backup) Delete() error {
+func (b *InstanceBackup) Delete() error {
 	return DoBackupDelete(b.state, b.instance.Project(), b.name, b.instance.Name())
 }
 
 // Render returns an InstanceBackup struct of the backup.
-func (b *Backup) Render() *api.InstanceBackup {
+func (b *InstanceBackup) Render() *api.InstanceBackup {
 	return &api.InstanceBackup{
 		Name:             strings.SplitN(b.name, "/", 2)[1],
 		CreatedAt:        b.creationDate,
diff --git a/lxd/instance/drivers/driver_lxc.go b/lxd/instance/drivers/driver_lxc.go
index 196e70cdbc..215301c326 100644
--- a/lxd/instance/drivers/driver_lxc.go
+++ b/lxd/instance/drivers/driver_lxc.go
@@ -3268,7 +3268,7 @@ func (c *lxc) Snapshots() ([]instance.Instance, error) {
 }
 
 // Backups returns the backups of the instance.
-func (c *lxc) Backups() ([]backup.Backup, error) {
+func (c *lxc) Backups() ([]backup.InstanceBackup, error) {
 	// Get all the backups
 	backupNames, err := c.state.Cluster.GetInstanceBackups(c.project, c.name)
 	if err != nil {
@@ -3276,7 +3276,7 @@ func (c *lxc) Backups() ([]backup.Backup, error) {
 	}
 
 	// Build the backup list
-	backups := []backup.Backup{}
+	backups := []backup.InstanceBackup{}
 	for _, backupName := range backupNames {
 		backup, err := instance.BackupLoadByName(c.state, c.project, backupName)
 		if err != nil {
diff --git a/lxd/instance/drivers/driver_qemu.go b/lxd/instance/drivers/driver_qemu.go
index 1d14ca7555..2d81b58118 100644
--- a/lxd/instance/drivers/driver_qemu.go
+++ b/lxd/instance/drivers/driver_qemu.go
@@ -2464,8 +2464,8 @@ func (vm *qemu) Snapshots() ([]instance.Instance, error) {
 }
 
 // Backups returns a list of backups.
-func (vm *qemu) Backups() ([]backup.Backup, error) {
-	return []backup.Backup{}, nil
+func (vm *qemu) Backups() ([]backup.InstanceBackup, error) {
+	return []backup.InstanceBackup{}, nil
 }
 
 // Rename the instance.
diff --git a/lxd/instance/instance_interface.go b/lxd/instance/instance_interface.go
index f0c117f6e7..c89fb230a4 100644
--- a/lxd/instance/instance_interface.go
+++ b/lxd/instance/instance_interface.go
@@ -59,7 +59,7 @@ type Instance interface {
 	// Snapshots & migration & backups.
 	Restore(source Instance, stateful bool) error
 	Snapshots() ([]Instance, error)
-	Backups() ([]backup.Backup, error)
+	Backups() ([]backup.InstanceBackup, error)
 	UpdateBackupFile() error
 
 	// Config handling.
diff --git a/lxd/instance/instance_utils.go b/lxd/instance/instance_utils.go
index 5dd419c8b6..dbd0a34c93 100644
--- a/lxd/instance/instance_utils.go
+++ b/lxd/instance/instance_utils.go
@@ -661,7 +661,7 @@ func DeviceNextInterfaceHWAddr() (string, error) {
 }
 
 // BackupLoadByName load an instance backup from the database.
-func BackupLoadByName(s *state.State, project, name string) (*backup.Backup, error) {
+func BackupLoadByName(s *state.State, project, name string) (*backup.InstanceBackup, error) {
 	// Get the backup database record
 	args, err := s.Cluster.GetInstanceBackup(project, name)
 	if err != nil {

From 09ae7a21d523e3d56f63f0ffcb785a4c1f211929 Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 26 Aug 2020 16:27:28 +0200
Subject: [PATCH 04/14] lxd: Rename backup.New to backup.NewInstance

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxd/backup/backup.go           | 4 ++--
 lxd/instance/instance_utils.go | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/lxd/backup/backup.go b/lxd/backup/backup.go
index 91716d989f..dcc1c2ada1 100644
--- a/lxd/backup/backup.go
+++ b/lxd/backup/backup.go
@@ -136,8 +136,8 @@ type InstanceBackup struct {
 	compressionAlgorithm string
 }
 
-// New instantiates a new Backup struct.
-func New(state *state.State, inst Instance, ID int, name string, creationDate, expiryDate time.Time, instanceOnly, optimizedStorage bool) *InstanceBackup {
+// NewInstance instantiates a new InstanceBackup struct.
+func NewInstance(state *state.State, inst Instance, ID int, name string, creationDate, expiryDate time.Time, instanceOnly, optimizedStorage bool) *InstanceBackup {
 	return &InstanceBackup{
 		state:            state,
 		instance:         inst,
diff --git a/lxd/instance/instance_utils.go b/lxd/instance/instance_utils.go
index dbd0a34c93..db40cd1a76 100644
--- a/lxd/instance/instance_utils.go
+++ b/lxd/instance/instance_utils.go
@@ -674,7 +674,7 @@ func BackupLoadByName(s *state.State, project, name string) (*backup.InstanceBac
 		return nil, errors.Wrap(err, "Load instance from database")
 	}
 
-	return backup.New(s, instance, args.ID, name, args.CreationDate, args.ExpiryDate, args.InstanceOnly, args.OptimizedStorage), nil
+	return backup.NewInstance(s, instance, args.ID, name, args.CreationDate, args.ExpiryDate, args.InstanceOnly, args.OptimizedStorage), nil
 }
 
 // ResolveImage takes an instance source and returns a hash suitable for instance creation or download.

From b183b8a5f8a7cfc8677b2014fcb85ee6461bfe1e Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 26 Aug 2020 18:20:58 +0200
Subject: [PATCH 05/14] lxd/db/cluster: Add storage_volumes_backups table

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxd/db/cluster/schema.go | 13 ++++++++++++-
 lxd/db/cluster/update.go | 24 ++++++++++++++++++++++++
 2 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/lxd/db/cluster/schema.go b/lxd/db/cluster/schema.go
index de1981674d..051f0ec846 100644
--- a/lxd/db/cluster/schema.go
+++ b/lxd/db/cluster/schema.go
@@ -532,6 +532,17 @@ CREATE VIEW storage_volumes_all (
          storage_volumes.content_type
     FROM storage_volumes
     JOIN storage_volumes_snapshots ON storage_volumes.id = storage_volumes_snapshots.storage_volume_id;
+CREATE TABLE storage_volumes_backups (
+    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
+    storage_volume_id INTEGER NOT NULL,
+    name VARCHAR(255) NOT NULL,
+    creation_date DATETIME,
+    expiry_date DATETIME,
+    volume_only INTEGER NOT NULL default 0,
+    optimized_storage INTEGER NOT NULL default 0,
+    FOREIGN KEY (storage_volume_id) REFERENCES "storage_volumes" (id) ON DELETE CASCADE,
+    UNIQUE (storage_volume_id, name)
+);
 CREATE TRIGGER storage_volumes_check_id
   BEFORE INSERT ON storage_volumes
   WHEN NEW.id IN (SELECT id FROM storage_volumes_snapshots)
@@ -573,5 +584,5 @@ CREATE TABLE storage_volumes_snapshots_config (
     UNIQUE (storage_volume_snapshot_id, key)
 );
 
-INSERT INTO schema (version, updated_at) VALUES (36, strftime("%s"))
+INSERT INTO schema (version, updated_at) VALUES (37, strftime("%s"))
 `
diff --git a/lxd/db/cluster/update.go b/lxd/db/cluster/update.go
index 52b8c96997..714b8fcf35 100644
--- a/lxd/db/cluster/update.go
+++ b/lxd/db/cluster/update.go
@@ -73,6 +73,30 @@ var updates = map[int]schema.Update{
 	34: updateFromV33,
 	35: updateFromV34,
 	36: updateFromV35,
+	37: updateFromV36,
+}
+
+// Add storage_volumes_backups table.
+func updateFromV36(tx *sql.Tx) error {
+	stmt := `
+CREATE TABLE storage_volumes_backups (
+    id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
+    storage_volume_id INTEGER NOT NULL,
+    name VARCHAR(255) NOT NULL,
+    creation_date DATETIME,
+    expiry_date DATETIME,
+    volume_only INTEGER NOT NULL default 0,
+    optimized_storage INTEGER NOT NULL default 0,
+    FOREIGN KEY (storage_volume_id) REFERENCES "storage_volumes" (id) ON DELETE CASCADE,
+    UNIQUE (storage_volume_id, name)
+);
+`
+	_, err := tx.Exec(stmt)
+	if err != nil {
+		return err
+	}
+
+	return nil
 }
 
 // This fixes node IDs of storage volumes on non-remote pools which were

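The `updates` map above is keyed by target schema version; on startup the schema package applies every update function after the recorded version, in order, then bumps the stored version (hence the 36 → 37 change in schema.go). A toy version of that loop, with no SQL involved and illustrative names:

```go
package main

import "fmt"

// update is a single schema migration step.
type update func() error

// applyUpdates runs every pending migration after `current`, in order,
// and returns the new schema version. This mirrors the shape of the
// updates map in lxd/db/cluster/update.go, without any SQL.
func applyUpdates(current int, updates map[int]update) (int, error) {
	for {
		next, ok := updates[current+1]
		if !ok {
			return current, nil
		}
		if err := next(); err != nil {
			return current, err
		}
		current++
	}
}

func main() {
	applied := []int{}
	updates := map[int]update{
		36: func() error { applied = append(applied, 36); return nil },
		37: func() error { applied = append(applied, 37); return nil },
	}

	version, _ := applyUpdates(35, updates)
	fmt.Println(version, applied) // 37 [36 37]
}
```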
From 49436dd0ec9a6f899be22777c4f4f1476fea3e8a Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 22:05:50 +0200
Subject: [PATCH 06/14] shared/api: Add custom volume backup structs

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 shared/api/storage_pool_volume.go | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/shared/api/storage_pool_volume.go b/shared/api/storage_pool_volume.go
index 85950c8461..c67f7a0a9c 100644
--- a/shared/api/storage_pool_volume.go
+++ b/shared/api/storage_pool_volume.go
@@ -1,5 +1,7 @@
 package api
 
+import "time"
+
 // StorageVolumesPost represents the fields of a new LXD storage pool volume
 //
 // API extension: storage
@@ -91,6 +93,29 @@ type StorageVolumeSource struct {
 	VolumeOnly bool `json:"volume_only" yaml:"volume_only"`
 }
 
+// StoragePoolVolumeBackup represents a LXD volume backup
+//
+// API extension: custom_volume_backup
+type StoragePoolVolumeBackup struct {
+	Name             string    `json:"name" yaml:"name"`
+	CreatedAt        time.Time `json:"created_at" yaml:"created_at"`
+	ExpiresAt        time.Time `json:"expires_at" yaml:"expires_at"`
+	VolumeOnly       bool      `json:"volume_only" yaml:"volume_only"`
+	OptimizedStorage bool      `json:"optimized_storage" yaml:"optimized_storage"`
+}
+
+// StoragePoolVolumeBackupsPost represents the fields available for a new LXD volume backup
+//
+// API extension: custom_volume_backup
+type StoragePoolVolumeBackupsPost struct {
+	Name                 string    `json:"name" yaml:"name"`
+	ExpiresAt            time.Time `json:"expires_at" yaml:"expires_at"`
+	VolumeOnly           bool      `json:"volume_only" yaml:"volume_only"`
+	OptimizedStorage     bool      `json:"optimized_storage" yaml:"optimized_storage"`
+	CompressionAlgorithm string    `json:"compression_algorithm" yaml:"compression_algorithm"`
+}
+
+// StoragePoolVolumeBackupPost represents the fields available for renaming a volume backup
+//
+// API extension: custom_volume_backup
+type StoragePoolVolumeBackupPost struct {
+	Name string `json:"name" yaml:"name"`
+}
+
 // Writable converts a full StorageVolume struct into a StorageVolumePut struct
 // (filters read-only fields).
 func (storageVolume *StorageVolume) Writable() StorageVolumePut {

From 22f15d30fe507291a0d6c6f69c36bdfe0284469c Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 26 Aug 2020 18:22:57 +0200
Subject: [PATCH 07/14] client: Add custom volume backup functions

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 client/interfaces.go          |  17 ++++
 client/lxd_storage_volumes.go | 176 ++++++++++++++++++++++++++++++++++
 2 files changed, 193 insertions(+)

diff --git a/client/interfaces.go b/client/interfaces.go
index e0dfdd59cf..e676a12cdc 100644
--- a/client/interfaces.go
+++ b/client/interfaces.go
@@ -283,6 +283,16 @@ type InstanceServer interface {
 	RenameStoragePoolVolumeSnapshot(pool string, volumeType string, volumeName string, snapshotName string, snapshot api.StorageVolumeSnapshotPost) (op Operation, err error)
 	UpdateStoragePoolVolumeSnapshot(pool string, volumeType string, volumeName string, snapshotName string, volume api.StorageVolumeSnapshotPut, ETag string) (err error)
 
+	// Storage volume backup functions ("custom_volume_backup" API extension)
+	GetStoragePoolVolumeBackupNames(pool string, volName string) (names []string, err error)
+	GetStoragePoolVolumeBackups(pool string, volName string) (backups []api.StoragePoolVolumeBackup, err error)
+	GetStoragePoolVolumeBackup(pool string, volName string, name string) (backup *api.StoragePoolVolumeBackup, ETag string, err error)
+	CreateStoragePoolVolumeBackup(pool string, volName string, backup api.StoragePoolVolumeBackupsPost) (op Operation, err error)
+	RenameStoragePoolVolumeBackup(pool string, volName string, name string, backup api.StoragePoolVolumeBackupPost) (op Operation, err error)
+	DeleteStoragePoolVolumeBackup(pool string, volName string, name string) (op Operation, err error)
+	GetStoragePoolVolumeBackupFile(pool string, volName string, name string, req *BackupFileRequest) (resp *BackupFileResponse, err error)
+	CreateStoragePoolVolumeFromBackup(pool string, args StoragePoolVolumeBackupArgs) (op Operation, err error)
+
 	// Cluster functions ("cluster" API extensions)
 	GetCluster() (cluster *api.Cluster, ETag string, err error)
 	UpdateCluster(cluster api.ClusterPut, ETag string) (op Operation, err error)
@@ -422,6 +432,13 @@ type StoragePoolVolumeMoveArgs struct {
 	StoragePoolVolumeCopyArgs
 }
 
+// The StoragePoolVolumeBackupArgs struct is used when creating a storage volume from a backup.
+// API extension: custom_volume_backup
+type StoragePoolVolumeBackupArgs struct {
+	// The backup file
+	BackupFile io.Reader
+}
+
 // The InstanceBackupArgs struct is used when creating a instance from a backup.
 type InstanceBackupArgs struct {
 	// The backup file
diff --git a/client/lxd_storage_volumes.go b/client/lxd_storage_volumes.go
index 368255b2d2..34df864bd4 100644
--- a/client/lxd_storage_volumes.go
+++ b/client/lxd_storage_volumes.go
@@ -2,10 +2,15 @@ package lxd
 
 import (
 	"fmt"
+	"io"
+	"net/http"
 	"net/url"
 	"strings"
 
 	"github.com/lxc/lxd/shared/api"
+	"github.com/lxc/lxd/shared/cancel"
+	"github.com/lxc/lxd/shared/ioprogress"
+	"github.com/lxc/lxd/shared/units"
 )
 
 // Storage volumes handling function
@@ -607,3 +612,174 @@ func (r *ProtocolLXD) RenameStoragePoolVolume(pool string, volType string, name
 
 	return nil
 }
+
+// GetStoragePoolVolumeBackupNames returns a list of volume backup names.
+func (r *ProtocolLXD) GetStoragePoolVolumeBackupNames(pool string, volName string) ([]string, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Fetch the raw value
+	urls := []string{}
+	_, err := r.queryStruct("GET", fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups", url.PathEscape(pool), url.PathEscape(volName)), nil, "", &urls)
+	if err != nil {
+		return nil, err
+	}
+
+	// Parse it
+	names := []string{}
+	for _, uri := range urls {
+		fields := strings.Split(uri, fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups", url.PathEscape(pool), url.PathEscape(volName)))
+		names = append(names, fields[len(fields)-1])
+	}
+
+	return names, nil
+}
+
+// GetStoragePoolVolumeBackups returns a list of custom volume backups.
+func (r *ProtocolLXD) GetStoragePoolVolumeBackups(pool string, volName string) ([]api.StoragePoolVolumeBackup, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Fetch the raw value
+	backups := []api.StoragePoolVolumeBackup{}
+
+	_, err := r.queryStruct("GET", fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups?recursion=1", url.PathEscape(pool), url.PathEscape(volName)), nil, "", &backups)
+	if err != nil {
+		return nil, err
+	}
+
+	return backups, nil
+}
+
+// GetStoragePoolVolumeBackup returns a custom volume backup.
+func (r *ProtocolLXD) GetStoragePoolVolumeBackup(pool string, volName string, name string) (*api.StoragePoolVolumeBackup, string, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, "", fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Fetch the raw value
+	backup := api.StoragePoolVolumeBackup{}
+	etag, err := r.queryStruct("GET", fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups/%s", url.PathEscape(pool), url.PathEscape(volName), url.PathEscape(name)), nil, "", &backup)
+	if err != nil {
+		return nil, "", err
+	}
+
+	return &backup, etag, nil
+}
+
+// CreateStoragePoolVolumeBackup creates a new custom volume backup.
+func (r *ProtocolLXD) CreateStoragePoolVolumeBackup(pool string, volName string, backup api.StoragePoolVolumeBackupsPost) (Operation, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Send the request
+	op, _, err := r.queryOperation("POST", fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups", url.PathEscape(pool), url.PathEscape(volName)), backup, "")
+	if err != nil {
+		return nil, err
+	}
+
+	return op, nil
+}
+
+// RenameStoragePoolVolumeBackup renames a custom volume backup.
+func (r *ProtocolLXD) RenameStoragePoolVolumeBackup(pool string, volName string, name string, backup api.StoragePoolVolumeBackupPost) (Operation, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Send the request
+	op, _, err := r.queryOperation("POST", fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups/%s", url.PathEscape(pool), url.PathEscape(volName), url.PathEscape(name)), backup, "")
+	if err != nil {
+		return nil, err
+	}
+
+	return op, nil
+}
+
+// DeleteStoragePoolVolumeBackup deletes a custom volume backup.
+func (r *ProtocolLXD) DeleteStoragePoolVolumeBackup(pool string, volName string, name string) (Operation, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Send the request
+	op, _, err := r.queryOperation("DELETE", fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups/%s", url.PathEscape(pool), url.PathEscape(volName), url.PathEscape(name)), nil, "")
+	if err != nil {
+		return nil, err
+	}
+
+	return op, nil
+}
+
+// GetStoragePoolVolumeBackupFile requests the contents of a custom volume backup.
+func (r *ProtocolLXD) GetStoragePoolVolumeBackupFile(pool string, volName string, name string, req *BackupFileRequest) (*BackupFileResponse, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Build the URL
+	uri := fmt.Sprintf("%s/1.0/storage-pools/%s/volumes/custom/%s/backups/%s/export", r.httpHost, url.PathEscape(pool), url.PathEscape(volName), url.PathEscape(name))
+
+	if r.project != "" {
+		uri += fmt.Sprintf("?project=%s", url.QueryEscape(r.project))
+	}
+
+	// Prepare the download request
+	request, err := http.NewRequest("GET", uri, nil)
+	if err != nil {
+		return nil, err
+	}
+
+	if r.httpUserAgent != "" {
+		request.Header.Set("User-Agent", r.httpUserAgent)
+	}
+
+	// Start the request
+	response, doneCh, err := cancel.CancelableDownload(req.Canceler, r.http, request)
+	if err != nil {
+		return nil, err
+	}
+	defer response.Body.Close()
+	defer close(doneCh)
+
+	if response.StatusCode != http.StatusOK {
+		_, _, err := lxdParseResponse(response)
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	// Handle the data
+	body := response.Body
+	if req.ProgressHandler != nil {
+		body = &ioprogress.ProgressReader{
+			ReadCloser: response.Body,
+			Tracker: &ioprogress.ProgressTracker{
+				Length: response.ContentLength,
+				Handler: func(percent int64, speed int64) {
+					req.ProgressHandler(ioprogress.ProgressData{Text: fmt.Sprintf("%d%% (%s/s)", percent, units.GetByteSizeString(speed, 2))})
+				},
+			},
+		}
+	}
+
+	size, err := io.Copy(req.BackupFile, body)
+	if err != nil {
+		return nil, err
+	}
+
+	resp := BackupFileResponse{}
+	resp.Size = size
+
+	return &resp, nil
+}
+
+// CreateStoragePoolVolumeFromBackup creates a custom volume from a backup file.
+func (r *ProtocolLXD) CreateStoragePoolVolumeFromBackup(pool string, args StoragePoolVolumeBackupArgs) (Operation, error) {
+	if !r.HasExtension("custom_volume_backup") {
+		return nil, fmt.Errorf("The server is missing the required \"custom_volume_backup\" API extension")
+	}
+
+	// Send the request
+	op, _, err := r.queryOperation("POST", fmt.Sprintf("/storage-pools/%s/volumes", url.PathEscape(pool)), args.BackupFile, "")
+	if err != nil {
+		return nil, err
+	}
+
+	return op, nil
+}

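`GetStoragePoolVolumeBackupNames` recovers bare backup names from the URLs returned by the backups endpoint by splitting on the collection prefix. That parsing can be sketched standalone (an illustrative re-implementation, not the client API itself):

```go
package main

import (
	"fmt"
	"strings"
)

// backupNames extracts backup names from the URLs returned by
// GET /1.0/storage-pools/<pool>/volumes/custom/<volume>/backups,
// mirroring the splitting done in GetStoragePoolVolumeBackupNames.
func backupNames(urls []string, pool string, volName string) []string {
	prefix := fmt.Sprintf("/storage-pools/%s/volumes/custom/%s/backups/", pool, volName)

	names := []string{}
	for _, uri := range urls {
		fields := strings.Split(uri, prefix)
		names = append(names, fields[len(fields)-1])
	}
	return names
}

func main() {
	urls := []string{
		"/1.0/storage-pools/pool1/volumes/custom/vol1/backups/backup0",
		"/1.0/storage-pools/pool1/volumes/custom/vol1/backups/backup1",
	}
	fmt.Println(backupNames(urls, "pool1", "vol1")) // [backup0 backup1]
}
```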
From f6cb4bdac958cdea7c31c2d37e9e33c37036d18a Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 21:58:41 +0200
Subject: [PATCH 08/14] doc/api-extensions: Add custom_volume_backup

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 doc/api-extensions.md | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/doc/api-extensions.md b/doc/api-extensions.md
index fa749cb371..a65d5efb44 100644
--- a/doc/api-extensions.md
+++ b/doc/api-extensions.md
@@ -1158,3 +1158,21 @@ Adds the `ovn.ovs_bridge` setting to `bridge` networks to allow the `ovn` networ
 
 If missing, the first `ovn` network to specify a `bridge` network as its parent `network` will cause the
 setting to be populated with a random interface name prefixed with "ovn".
+
+## custom\_volume\_backup
+Add custom volume backup support.
+
+This includes the following new endpoints (see [RESTful API](rest-api.md) for details):
+
+* `GET /1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups`
+* `POST /1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups`
+
+* `GET /1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>`
+* `POST /1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>`
+* `DELETE /1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>`
+
+* `GET /1.0/storage-pools/<pool>/volumes/<type>/<volume>/backups/<name>/export`
+
+The following existing endpoint has been modified:
+
+* `POST /1.0/storage-pools/<pool>/volumes/<type>` accepts the new source type `backup`

From 6ad23ac1acbae6d8188d08719e94e7905b22addb Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 22:05:18 +0200
Subject: [PATCH 09/14] lxd/db: Handle custom volume backups

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxd/db/backups.go                  | 174 ++++++++++++++++++++++++++++-
 lxd/db/operations_types.go         |  20 ++++
 lxd/db/storage_volume_snapshots.go |  21 ++++
 3 files changed, 214 insertions(+), 1 deletion(-)

diff --git a/lxd/db/backups.go b/lxd/db/backups.go
index d22892d4df..2ed8b024ce 100644
--- a/lxd/db/backups.go
+++ b/lxd/db/backups.go
@@ -12,7 +12,7 @@ import (
 	log "github.com/lxc/lxd/shared/log15"
 )
 
-// InstanceBackup is a value object holding all db-related details about a backup.
+// InstanceBackup is a value object holding all db-related details about an instance backup.
 type InstanceBackup struct {
 	ID                   int
 	InstanceID           int
@@ -24,6 +24,18 @@ type InstanceBackup struct {
 	CompressionAlgorithm string
 }
 
+// StoragePoolVolumeBackup is a value object holding all db-related details about a storage volume backup.
+type StoragePoolVolumeBackup struct {
+	ID                   int
+	VolumeID             int64
+	Name                 string
+	CreationDate         time.Time
+	ExpiryDate           time.Time
+	VolumeOnly           bool
+	OptimizedStorage     bool
+	CompressionAlgorithm string
+}
+
 // Returns the ID of the instance backup with the given name.
 func (c *Cluster) getInstanceBackupID(name string) (int, error) {
 	q := "SELECT id FROM instances_backups WHERE name=?"
@@ -225,3 +237,163 @@ func (c *Cluster) GetExpiredInstanceBackups() ([]InstanceBackup, error) {
 
 	return result, nil
 }
+
+// GetStoragePoolVolumeBackups returns the names of all backups of the storage volume with the
+// given name.
+func (c *Cluster) GetStoragePoolVolumeBackups(project, volumeName string, poolID int64) ([]string, error) {
+	var result []string
+
+	q := `SELECT storage_volumes_backups.name FROM storage_volumes_backups
+JOIN storage_volumes ON storage_volumes_backups.storage_volume_id=storage_volumes.id
+JOIN projects ON projects.id=storage_volumes.project_id
+WHERE projects.name=? AND storage_volumes.name=? AND storage_volumes.storage_pool_id=?`
+	inargs := []interface{}{project, volumeName, poolID}
+	outfmt := []interface{}{""}
+	dbResults, err := queryScan(c, q, inargs, outfmt)
+	if err != nil {
+		return nil, err
+	}
+
+	for _, r := range dbResults {
+		result = append(result, r[0].(string))
+	}
+
+	return result, nil
+}
+
+// CreateStoragePoolVolumeBackup creates a new storage volume backup.
+func (c *Cluster) CreateStoragePoolVolumeBackup(args StoragePoolVolumeBackup) error {
+	_, err := c.getStoragePoolVolumeBackupID(args.Name)
+	if err == nil {
+		return ErrAlreadyDefined
+	}
+
+	err = c.Transaction(func(tx *ClusterTx) error {
+		volumeOnlyInt := 0
+		if args.VolumeOnly {
+			volumeOnlyInt = 1
+		}
+
+		optimizedStorageInt := 0
+		if args.OptimizedStorage {
+			optimizedStorageInt = 1
+		}
+
+		str := "INSERT INTO storage_volumes_backups (storage_volume_id, name, creation_date, expiry_date, volume_only, optimized_storage) VALUES (?, ?, ?, ?, ?, ?)"
+		stmt, err := tx.tx.Prepare(str)
+		if err != nil {
+			return err
+		}
+		defer stmt.Close()
+		result, err := stmt.Exec(args.VolumeID, args.Name,
+			args.CreationDate.Unix(), args.ExpiryDate.Unix(), volumeOnlyInt,
+			optimizedStorageInt)
+		if err != nil {
+			return err
+		}
+
+		_, err = result.LastInsertId()
+		if err != nil {
+			return fmt.Errorf("Error inserting %q into database", args.Name)
+		}
+
+		return nil
+	})
+
+	return err
+}
+
+// Returns the ID of the storage volume backup with the given name.
+func (c *Cluster) getStoragePoolVolumeBackupID(name string) (int, error) {
+	q := "SELECT id FROM storage_volumes_backups WHERE name=?"
+	id := -1
+	arg1 := []interface{}{name}
+	arg2 := []interface{}{&id}
+	err := dbQueryRowScan(c, q, arg1, arg2)
+	if err == sql.ErrNoRows {
+		return -1, ErrNoSuchObject
+	}
+
+	return id, err
+}
+
+// DeleteStoragePoolVolumeBackup removes the storage volume backup with the given name from the database.
+func (c *Cluster) DeleteStoragePoolVolumeBackup(name string) error {
+	id, err := c.getStoragePoolVolumeBackupID(name)
+	if err != nil {
+		return err
+	}
+
+	err = exec(c, "DELETE FROM storage_volumes_backups WHERE id=?", id)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// GetStoragePoolVolumeBackup returns the volume backup with the given name.
+func (c *Cluster) GetStoragePoolVolumeBackup(projectName string, poolName string, backupName string) (StoragePoolVolumeBackup, error) {
+	args := StoragePoolVolumeBackup{}
+	args.Name = backupName
+
+	volumeOnlyInt := -1
+	optimizedStorageInt := -1
+	q := `
+SELECT storage_volumes_backups.id, storage_volumes_backups.storage_volume_id,
+       storage_volumes_backups.creation_date, storage_volumes_backups.expiry_date,
+       storage_volumes_backups.volume_only, storage_volumes_backups.optimized_storage
+    FROM storage_volumes_backups
+    JOIN storage_volumes ON storage_volumes.id=storage_volumes_backups.storage_volume_id
+    JOIN projects ON projects.id=storage_volumes.project_id
+    WHERE projects.name=? AND storage_volumes_backups.name=? AND storage_volumes.storage_pool_id=(SELECT id FROM storage_pools WHERE name=?)
+`
+	arg1 := []interface{}{projectName, backupName, poolName}
+	arg2 := []interface{}{&args.ID, &args.VolumeID, &args.CreationDate,
+		&args.ExpiryDate, &volumeOnlyInt, &optimizedStorageInt}
+
+	err := dbQueryRowScan(c, q, arg1, arg2)
+	if err != nil {
+		if err == sql.ErrNoRows {
+			return args, ErrNoSuchObject
+		}
+
+		return args, err
+	}
+
+	if volumeOnlyInt == 1 {
+		args.VolumeOnly = true
+	}
+
+	if optimizedStorageInt == 1 {
+		args.OptimizedStorage = true
+	}
+
+	return args, nil
+}
+
+// RenameVolumeBackup renames a volume backup from the given current name
+// to the new one.
+func (c *Cluster) RenameVolumeBackup(oldName, newName string) error {
+	err := c.Transaction(func(tx *ClusterTx) error {
+		str := "UPDATE storage_volumes_backups SET name = ? WHERE name = ?"
+		stmt, err := tx.tx.Prepare(str)
+		if err != nil {
+			return err
+		}
+		defer stmt.Close()
+
+		logger.Debug(
+			"Calling SQL Query",
+			log.Ctx{
+				"query":   str,
+				"oldName": oldName,
+				"newName": newName})
+		if _, err := stmt.Exec(newName, oldName); err != nil {
+			return err
+		}
+
+		return nil
+	})
+	return err
+}
diff --git a/lxd/db/operations_types.go b/lxd/db/operations_types.go
index 3ff629a5d6..893a6db4ea 100644
--- a/lxd/db/operations_types.go
+++ b/lxd/db/operations_types.go
@@ -56,6 +56,10 @@ const (
 	OperationBackupsExpire
 	OperationSnapshotsExpire
 	OperationCustomVolumeSnapshotsExpire
+	OperationCustomVolumeBackupCreate
+	OperationCustomVolumeBackupRemove
+	OperationCustomVolumeBackupRename
+	OperationCustomVolumeBackupRestore
 )
 
 // Description return a human-readable description of the operation type.
@@ -153,6 +157,14 @@ func (t OperationType) Description() string {
 		return "Cleaning up expired instance snapshots"
 	case OperationCustomVolumeSnapshotsExpire:
 		return "Cleaning up expired volume snapshots"
+	case OperationCustomVolumeBackupCreate:
+		return "Creating custom volume backup"
+	case OperationCustomVolumeBackupRemove:
+		return "Deleting custom volume backup"
+	case OperationCustomVolumeBackupRename:
+		return "Renaming custom volume backup"
+	case OperationCustomVolumeBackupRestore:
+		return "Restoring custom volume backup"
 	default:
 		return "Executing operation"
 	}
@@ -224,6 +236,14 @@ func (t OperationType) Permission() string {
 
 	case OperationCustomVolumeSnapshotsExpire:
 		return "operate-volumes"
+	case OperationCustomVolumeBackupCreate:
+		return "manage-storage-volumes"
+	case OperationCustomVolumeBackupRemove:
+		return "manage-storage-volumes"
+	case OperationCustomVolumeBackupRename:
+		return "manage-storage-volumes"
+	case OperationCustomVolumeBackupRestore:
+		return "manage-storage-volumes"
 	}
 
 	return ""
diff --git a/lxd/db/storage_volume_snapshots.go b/lxd/db/storage_volume_snapshots.go
index 2fbb390732..e0755d8d33 100644
--- a/lxd/db/storage_volume_snapshots.go
+++ b/lxd/db/storage_volume_snapshots.go
@@ -103,6 +103,27 @@ func (c *Cluster) UpdateStorageVolumeSnapshot(project, volumeName string, volume
 	return err
 }
 
+// GetStorageVolumeSnapshotsNames gets the snapshot names of a storage volume.
+func (c *Cluster) GetStorageVolumeSnapshotsNames(volumeID int64) ([]string, error) {
+	var snapshotName string
+	query := "SELECT name FROM storage_volumes_snapshots WHERE storage_volume_id=?"
+	inargs := []interface{}{volumeID}
+	outargs := []interface{}{snapshotName}
+
+	result, err := queryScan(c, query, inargs, outargs)
+	if err != nil {
+		return []string{}, err
+	}
+
+	var out []string
+
+	for _, r := range result {
+		out = append(out, r[0].(string))
+	}
+
+	return out, nil
+}
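GetStorageVolumeSnapshotsNames uses the same queryScan idiom as the backup helpers: an outargs slice whose element types drive the scan, then a loop collecting column 0 of each row. The collection step in isolation (the [][]interface{} result shape is assumed from the patch context):

```go
package main

import "fmt"

// collectFirstColumn gathers column 0 of each row returned by a
// queryScan-style helper, asserting the values back to string.
func collectFirstColumn(rows [][]interface{}) []string {
	var out []string
	for _, r := range rows {
		out = append(out, r[0].(string))
	}
	return out
}

func main() {
	rows := [][]interface{}{{"snap0"}, {"snap1"}}
	fmt.Println(collectFirstColumn(rows)) // [snap0 snap1]
}
```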
+
 // GetStorageVolumeSnapshotExpiry gets the expiry date of a storage volume snapshot.
 func (c *Cluster) GetStorageVolumeSnapshotExpiry(volumeID int64) (time.Time, error) {
 	var expiry time.Time

From 2e25f1c722bf22b4cd5e1d8c3cdc2c5c7bd57531 Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 22:11:38 +0200
Subject: [PATCH 10/14] lxd: Add custom volume backup functionality

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxd/backup.go         |   2 +-
 lxd/backup/backup.go  | 164 +++++++++++++++++++++++++++++++++++++++---
 lxd/instances_post.go |   2 +-
 3 files changed, 158 insertions(+), 10 deletions(-)

diff --git a/lxd/backup.go b/lxd/backup.go
index a938311e14..685bc1eb00 100644
--- a/lxd/backup.go
+++ b/lxd/backup.go
@@ -282,7 +282,7 @@ func pruneExpiredContainerBackups(ctx context.Context, d *Daemon) error {
 			return errors.Wrapf(err, "Error deleting instance backup %s", b.Name)
 		}
 
-		err = backup.DoBackupDelete(d.State(), inst.Project(), b.Name, inst.Name())
+		err = backup.DoInstanceBackupDelete(d.State(), inst.Project(), b.Name, inst.Name())
 		if err != nil {
 			return errors.Wrapf(err, "Error deleting instance backup %s", b.Name)
 		}
diff --git a/lxd/backup/backup.go b/lxd/backup/backup.go
index dcc1c2ada1..ea9acd34e4 100644
--- a/lxd/backup/backup.go
+++ b/lxd/backup/backup.go
@@ -33,11 +33,12 @@ type Info struct {
 	Snapshots        []string         `json:"snapshots,omitempty" yaml:"snapshots,omitempty"`
 	OptimizedStorage *bool            `json:"optimized,omitempty" yaml:"optimized,omitempty"`               // Optional field to handle older optimized backups that don't have this field.
 	OptimizedHeader  *bool            `json:"optimized_header,omitempty" yaml:"optimized_header,omitempty"` // Optional field to handle older optimized backups that don't have this field.
-	Type             api.InstanceType `json:"type" yaml:"type"`
+	Type             api.InstanceType `json:"type,omitempty" yaml:"type,omitempty"`                         // Type is only set for instance backups.
+	ContentType      string           `json:"content_type,omitempty" yaml:"content_type,omitempty"`         // ContentType is only set for custom volumes as there is no other way of knowing what kind it is.
 }
 
 // GetInfo extracts backup information from a given ReadSeeker.
-func GetInfo(r io.ReadSeeker) (*Info, error) {
+func GetInfo(r io.ReadSeeker, isInstance bool) (*Info, error) {
 	result := Info{}
 	hasIndexFile := false
 
@@ -80,8 +81,12 @@ func GetInfo(r io.ReadSeeker) (*Info, error) {
 			hasIndexFile = true
 
 			// Default to container if index doesn't specify instance type.
-			if result.Type == api.InstanceTypeAny {
-				result.Type = api.InstanceTypeContainer
+			if isInstance {
+				if result.Type == api.InstanceTypeAny {
+					result.Type = api.InstanceTypeContainer
+				}
+			} else {
+				result.Type = ""
 			}
 
 			// Default to no optimized header if not specified.
@@ -102,7 +107,7 @@ func GetInfo(r io.ReadSeeker) (*Info, error) {
 		}
 
 		// If the tarball contains a binary dump of the container, then this is an optimized backup.
-		if hdr.Name == "backup/container.bin" {
+		if hdr.Name == "backup/container.bin" || hdr.Name == "backup/volume.bin" {
 			optimizedStorageTrue := true
 			result.OptimizedStorage = &optimizedStorageTrue
 
@@ -136,6 +141,23 @@ type InstanceBackup struct {
 	compressionAlgorithm string
 }
 
+// VolumeBackup represents a volume backup
+type VolumeBackup struct {
+	state       *state.State
+	projectName string
+	poolName    string
+	volumeName  string
+
+	// Properties
+	id                   int
+	name                 string
+	creationDate         time.Time
+	expiryDate           time.Time
+	volumeOnly           bool
+	optimizedStorage     bool
+	compressionAlgorithm string
+}
+
 // NewInstance instantiates a new Backup struct.
 func NewInstance(state *state.State, inst Instance, ID int, name string, creationDate, expiryDate time.Time, instanceOnly, optimizedStorage bool) *InstanceBackup {
 	return &InstanceBackup{
@@ -216,7 +238,7 @@ func (b *InstanceBackup) Rename(newName string) error {
 
 // Delete removes an instance backup
 func (b *InstanceBackup) Delete() error {
-	return DoBackupDelete(b.state, b.instance.Project(), b.name, b.instance.Name())
+	return DoInstanceBackupDelete(b.state, b.instance.Project(), b.name, b.instance.Name())
 }
 
 // Render returns an InstanceBackup struct of the backup.
@@ -231,8 +253,104 @@ func (b *InstanceBackup) Render() *api.InstanceBackup {
 	}
 }
 
-// DoBackupDelete deletes a backup.
-func DoBackupDelete(s *state.State, projectName, backupName, containerName string) error {
+// NewVolume instantiates a new VolumeBackup struct.
+func NewVolume(state *state.State, projectName, poolName, volumeName string, ID int, name string, creationDate, expiryDate time.Time, volumeOnly, optimizedStorage bool) *VolumeBackup {
+	return &VolumeBackup{
+		state:            state,
+		projectName:      projectName,
+		poolName:         poolName,
+		volumeName:       volumeName,
+		id:               ID,
+		name:             name,
+		creationDate:     creationDate,
+		expiryDate:       expiryDate,
+		volumeOnly:       volumeOnly,
+		optimizedStorage: optimizedStorage,
+	}
+}
+
+// CompressionAlgorithm returns the compression used for the tarball.
+func (b *VolumeBackup) CompressionAlgorithm() string {
+	return b.compressionAlgorithm
+}
+
+// SetCompressionAlgorithm sets the tarball compression.
+func (b *VolumeBackup) SetCompressionAlgorithm(compression string) {
+	b.compressionAlgorithm = compression
+}
+
+// VolumeOnly returns whether only the volume itself is to be backed up.
+func (b *VolumeBackup) VolumeOnly() bool {
+	return b.volumeOnly
+}
+
+// Name returns the name of the backup.
+func (b *VolumeBackup) Name() string {
+	return b.name
+}
+
+// OptimizedStorage returns whether the backup is to be performed using
+// optimization supported by the storage driver.
+func (b *VolumeBackup) OptimizedStorage() bool {
+	return b.optimizedStorage
+}
+
+// Rename renames a volume backup
+func (b *VolumeBackup) Rename(newName string) error {
+	oldBackupPath := shared.VarPath("storage-pools", b.poolName, "custom-backups", project.StorageVolume(b.projectName, b.name))
+	newBackupPath := shared.VarPath("storage-pools", b.poolName, "custom-backups", project.StorageVolume(b.projectName, newName))
+
+	// Create the new backup path
+	backupsPath := shared.VarPath("storage-pools", b.poolName, "custom-backups", project.StorageVolume(b.projectName, b.volumeName))
+	if !shared.PathExists(backupsPath) {
+		err := os.MkdirAll(backupsPath, 0700)
+		if err != nil {
+			return err
+		}
+	}
+
+	// Rename the backup directory
+	err := os.Rename(oldBackupPath, newBackupPath)
+	if err != nil {
+		return err
+	}
+
+	// Check if we can remove the volume's backup directory
+	empty, _ := shared.PathIsEmpty(backupsPath)
+	if empty {
+		err := os.Remove(backupsPath)
+		if err != nil {
+			return err
+		}
+	}
+
+	// Rename the database record
+	err = b.state.Cluster.RenameVolumeBackup(b.name, newName)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// Delete removes a volume backup
+func (b *VolumeBackup) Delete() error {
+	return DoVolumeBackupDelete(b.state, b.projectName, b.poolName, b.name, b.volumeName)
+}
+
+// Render returns a VolumeBackup struct of the backup.
+func (b *VolumeBackup) Render() *api.StoragePoolVolumeBackup {
+	return &api.StoragePoolVolumeBackup{
+		Name:             strings.SplitN(b.name, "/", 2)[1],
+		CreatedAt:        b.creationDate,
+		ExpiresAt:        b.expiryDate,
+		VolumeOnly:       b.volumeOnly,
+		OptimizedStorage: b.optimizedStorage,
+	}
+}
+
+// DoInstanceBackupDelete deletes a backup.
+func DoInstanceBackupDelete(s *state.State, projectName, backupName, containerName string) error {
 	backupPath := shared.VarPath("backups", project.Instance(projectName, backupName))
 
 	// Delete the on-disk data
@@ -261,3 +379,33 @@ func DoBackupDelete(s *state.State, projectName, backupName, containerName strin
 
 	return nil
 }
+
+// DoVolumeBackupDelete deletes a volume backup.
+func DoVolumeBackupDelete(s *state.State, projectName, poolName, backupName, volumeName string) error {
+	backupPath := shared.VarPath("storage-pools", poolName, "custom-backups", project.StorageVolume(projectName, backupName))
+	// Delete the on-disk data
+	if shared.PathExists(backupPath) {
+		err := os.RemoveAll(backupPath)
+		if err != nil {
+			return err
+		}
+	}
+
+	// Check if we can remove the volume's backup directory
+	backupsPath := shared.VarPath("storage-pools", poolName, "custom-backups", project.StorageVolume(projectName, volumeName))
+	empty, _ := shared.PathIsEmpty(backupsPath)
+	if empty {
+		err := os.Remove(backupsPath)
+		if err != nil {
+			return err
+		}
+	}
+
+	// Remove the database record
+	err := s.Cluster.DeleteStoragePoolVolumeBackup(backupName)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
diff --git a/lxd/instances_post.go b/lxd/instances_post.go
index b3a83e89b8..dfcaa176c9 100644
--- a/lxd/instances_post.go
+++ b/lxd/instances_post.go
@@ -598,7 +598,7 @@ func createFromBackup(d *Daemon, project string, data io.Reader, pool string) re
 	// Parse the backup information.
 	backupFile.Seek(0, 0)
 	logger.Debug("Reading backup file info")
-	bInfo, err := backup.GetInfo(backupFile)
+	bInfo, err := backup.GetInfo(backupFile, true)
 	if err != nil {
 		return response.BadRequest(err)
 	}
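VolumeBackup.Render in the patch above assumes backup names are stored as "<volume>/<backup>" and keeps only the backup half via strings.SplitN. A defensive sketch of that naming convention (splitBackupName is a hypothetical helper that adds the missing-separator fallback SplitN(...)[1] lacks):

```go
package main

import (
	"fmt"
	"strings"
)

// splitBackupName splits a stored backup name of the form "<volume>/<backup>"
// into its two halves. If no separator is present, the whole string is
// treated as the backup name, avoiding an index-out-of-range panic.
func splitBackupName(full string) (volume, backup string) {
	parts := strings.SplitN(full, "/", 2)
	if len(parts) == 1 {
		return "", parts[0]
	}
	return parts[0], parts[1]
}

func main() {
	v, b := splitBackupName("vol1/backup0")
	fmt.Println(v, b) // vol1 backup0
}
```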

From 19a6278c17f800982a1d873c3523bd57c0457314 Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 22:02:03 +0200
Subject: [PATCH 11/14] lxd/storage: Handle custom volume backups

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxd/storage/backend_lxd.go                  | 65 +++++++++++++++++++++
 lxd/storage/backend_mock.go                 |  8 +++
 lxd/storage/drivers/driver_btrfs_volumes.go |  8 +++
 lxd/storage/drivers/driver_zfs_volumes.go   |  8 +++
 lxd/storage/drivers/generic_vfs.go          | 31 ++++++++--
 lxd/storage/drivers/volume.go               |  2 +-
 lxd/storage/pool_interface.go               |  4 ++
 lxd/storage/storage.go                      |  6 ++
 8 files changed, 127 insertions(+), 5 deletions(-)

diff --git a/lxd/storage/backend_lxd.go b/lxd/storage/backend_lxd.go
index c4f34c08d6..6e8cb5d800 100644
--- a/lxd/storage/backend_lxd.go
+++ b/lxd/storage/backend_lxd.go
@@ -3397,3 +3397,68 @@ func (b *lxdBackend) CheckInstanceBackupFileSnapshots(backupConf *backup.Instanc
 
 	return existingSnapshots, nil
 }
+
+// BackupCustomVolume creates a backup of a custom volume and writes it to tarWriter.
+func (b *lxdBackend) BackupCustomVolume(projectName string, volName string, tarWriter *instancewriter.InstanceTarWriter, optimized bool, snapshots bool, op *operations.Operation) error {
+	logger := logging.AddContext(b.logger, log.Ctx{"project": projectName, "volume": volName, "optimized": optimized, "snapshots": snapshots})
+	logger.Debug("BackupCustomVolume started")
+	defer logger.Debug("BackupCustomVolume finished")
+
+	// Get the volume name on storage.
+	volStorageName := project.StorageVolume(projectName, volName)
+
+	_, volume, err := b.state.Cluster.GetLocalStoragePoolVolume(projectName, volName, db.StoragePoolVolumeTypeCustom, b.id)
+	if err != nil {
+		return err
+	}
+
+	vol := b.newVolume(drivers.VolumeTypeCustom, drivers.ContentType(volume.ContentType), volStorageName, volume.Config)
+
+	err = b.driver.BackupVolume(vol, tarWriter, optimized, snapshots, op)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// CreateCustomVolumeFromBackup creates a custom volume from the given backup info and data.
+func (b *lxdBackend) CreateCustomVolumeFromBackup(srcBackup backup.Info, srcData io.ReadSeeker, op *operations.Operation) error {
+	logger := logging.AddContext(b.logger, log.Ctx{"project": srcBackup.Project, "volume": srcBackup.Name, "snapshots": srcBackup.Snapshots, "optimizedStorage": *srcBackup.OptimizedStorage})
+	logger.Debug("CreateCustomVolumeFromBackup started")
+	defer logger.Debug("CreateCustomVolumeFromBackup finished")
+
+	// Get the volume name on storage.
+	volStorageName := project.StorageVolume(srcBackup.Project, srcBackup.Name)
+
+	// We don't know the volume's config yet as the tarball hasn't been unpacked.
+	// If the driver needs it, the config is applied by the post hook function returned below.
+	vol := b.newVolume(drivers.VolumeTypeCustom, drivers.ContentType(srcBackup.ContentType), volStorageName, nil)
+
+	revert := revert.New()
+	defer revert.Fail()
+
+	// Unpack the backup into the new storage volume(s).
+	volPostHook, revertHook, err := b.driver.CreateVolumeFromBackup(vol, srcBackup, srcData, op)
+	if err != nil {
+		return err
+	}
+
+	if revertHook != nil {
+		revert.Add(revertHook)
+	}
+
+	logger.Debug("CreateCustomVolumeFromBackup post hook started")
+	defer logger.Debug("CreateCustomVolumeFromBackup post hook finished")
+
+	// If the driver returned a post hook, run it now.
+	if volPostHook != nil {
+		vol := b.newVolume(drivers.VolumeTypeCustom, drivers.ContentType(srcBackup.ContentType), volStorageName, nil)
+
+		err = volPostHook(vol)
+		if err != nil {
+			return err
+		}
+	}
+
+	revert.Success()
+	return nil
+}
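CreateCustomVolumeFromBackup leans on LXD's revert package: cleanups are queued, run in reverse order by the deferred Fail(), and disarmed by Success(). A self-contained sketch of that pattern (reverter is a stand-in type, not the real package):

```go
package main

import "fmt"

// reverter queues undo hooks that run in reverse order on Fail(), unless
// Success() has disarmed them first — the shape CreateCustomVolumeFromBackup
// uses around the driver's revertHook.
type reverter struct {
	hooks   []func()
	success bool
}

func (r *reverter) Add(f func()) { r.hooks = append(r.hooks, f) }
func (r *reverter) Success()     { r.success = true }

func (r *reverter) Fail() {
	if r.success {
		return
	}
	for i := len(r.hooks) - 1; i >= 0; i-- {
		r.hooks[i]()
	}
}

func main() {
	r := &reverter{}
	defer r.Fail()
	r.Add(func() { fmt.Println("undo step 1") })
	// An error returned before Success() would trigger the undo; here we succeed,
	// so the deferred Fail() is a no-op and nothing is printed.
	r.Success()
}
```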
diff --git a/lxd/storage/backend_mock.go b/lxd/storage/backend_mock.go
index 48d8be7ecd..857e52e559 100644
--- a/lxd/storage/backend_mock.go
+++ b/lxd/storage/backend_mock.go
@@ -242,3 +242,11 @@ func (b *mockBackend) UpdateCustomVolumeSnapshot(projectName string, volName str
 func (b *mockBackend) RestoreCustomVolume(projectName string, volName string, snapshotName string, op *operations.Operation) error {
 	return nil
 }
+
+func (b *mockBackend) BackupCustomVolume(projectName string, volName string, tarWriter *instancewriter.InstanceTarWriter, optimized bool, snapshots bool, op *operations.Operation) error {
+	return nil
+}
+
+func (b *mockBackend) CreateCustomVolumeFromBackup(srcBackup backup.Info, srcData io.ReadSeeker, op *operations.Operation) error {
+	return nil
+}
diff --git a/lxd/storage/drivers/driver_btrfs_volumes.go b/lxd/storage/drivers/driver_btrfs_volumes.go
index 222efcd031..9d2e9bdaf3 100644
--- a/lxd/storage/drivers/driver_btrfs_volumes.go
+++ b/lxd/storage/drivers/driver_btrfs_volumes.go
@@ -291,6 +291,8 @@ func (d *btrfs) CreateVolumeFromBackup(vol Volume, srcBackup backup.Info, srcDat
 		} else {
 			srcFilePrefix = "virtual-machine"
 		}
+	} else if vol.volType == VolumeTypeCustom {
+		srcFilePrefix = "volume"
 	}
 
 	err = unpackVolume(vol, srcFilePrefix)
@@ -1029,6 +1031,10 @@ func (d *btrfs) BackupVolume(vol Volume, tarWriter *instancewriter.InstanceTarWr
 
 		// Create temporary file to store output of btrfs send.
 		backupsPath := shared.VarPath("backups")
+
+		if vol.volType == VolumeTypeCustom {
+			backupsPath = shared.VarPath("storage-pools", d.name, "custom-backups")
+		}
 		tmpFile, err := ioutil.TempFile(backupsPath, "lxd_backup_btrfs")
 		if err != nil {
 			return errors.Wrapf(err, "Failed to open temporary file for BTRFS backup")
@@ -1187,6 +1193,8 @@ func (d *btrfs) BackupVolume(vol Volume, tarWriter *instancewriter.InstanceTarWr
 		} else {
 			fileNamePrefix = "virtual-machine"
 		}
+	} else if vol.volType == VolumeTypeCustom {
+		fileNamePrefix = "volume"
 	}
 
 	err = addVolume(vol, targetVolume, lastVolPath, fileNamePrefix)
diff --git a/lxd/storage/drivers/driver_zfs_volumes.go b/lxd/storage/drivers/driver_zfs_volumes.go
index 5c548061e0..e9cbb19898 100644
--- a/lxd/storage/drivers/driver_zfs_volumes.go
+++ b/lxd/storage/drivers/driver_zfs_volumes.go
@@ -367,6 +367,8 @@ func (d *zfs) CreateVolumeFromBackup(vol Volume, srcBackup backup.Info, srcData
 		} else {
 			fileName = "virtual-machine.bin"
 		}
+	} else if vol.volType == VolumeTypeCustom {
+		fileName = "volume.bin"
 	}
 
 	err = unpackVolume(srcData, unpacker, fmt.Sprintf("backup/%s", fileName), d.dataset(vol, false))
@@ -1390,6 +1392,10 @@ func (d *zfs) BackupVolume(vol Volume, tarWriter *instancewriter.InstanceTarWrit
 
 		// Create temporary file to store output of ZFS send.
 		backupsPath := shared.VarPath("backups")
+
+		if vol.volType == VolumeTypeCustom {
+			backupsPath = shared.VarPath("storage-pools", d.name, "custom-backups")
+		}
 		tmpFile, err := ioutil.TempFile(backupsPath, "lxd_backup_zfs")
 		if err != nil {
 			return errors.Wrapf(err, "Failed to open temporary file for ZFS backup")
@@ -1475,6 +1481,8 @@ func (d *zfs) BackupVolume(vol Volume, tarWriter *instancewriter.InstanceTarWrit
 		} else {
 			fileName = "virtual-machine.bin"
 		}
+	} else if vol.volType == VolumeTypeCustom {
+		fileName = "volume.bin"
 	}
 
 	err = sendToFile(srcSnapshot, finalParent, fmt.Sprintf("backup/%s", fileName))
diff --git a/lxd/storage/drivers/generic_vfs.go b/lxd/storage/drivers/generic_vfs.go
index 8c2c7e12c9..2ebfc8ec6c 100644
--- a/lxd/storage/drivers/generic_vfs.go
+++ b/lxd/storage/drivers/generic_vfs.go
@@ -459,7 +459,12 @@ func genericVFSBackupVolume(d Driver, vol Volume, tarWriter *instancewriter.Inst
 			if v.IsVMBlock() {
 				blockPath, err := d.GetVolumeDiskPath(v)
 				if err != nil {
-					return errors.Wrapf(err, "Error getting VM block volume disk path")
+					errMsg := "Error getting VM block volume disk path"
+					if vol.volType == VolumeTypeCustom {
+						errMsg = "Error getting custom block volume disk path"
+					}
+
+					return errors.Wrapf(err, errMsg)
 				}
 
 				var blockDiskSize int64
@@ -477,7 +482,12 @@ func genericVFSBackupVolume(d Driver, vol Volume, tarWriter *instancewriter.Inst
 					exclude = append(exclude, blockPath)
 				}
 
-				d.Logger().Debug("Copying virtual machine config volume", log.Ctx{"sourcePath": mountPath, "prefix": prefix})
+				logMsg := "Copying virtual machine config volume"
+				if vol.volType == VolumeTypeCustom {
+					logMsg = "Copying custom config volume"
+				}
+
+				d.Logger().Debug(logMsg, log.Ctx{"sourcePath": mountPath, "prefix": prefix})
 				err = filepath.Walk(mountPath, func(srcPath string, fi os.FileInfo, err error) error {
 					if err != nil {
 						return err
@@ -501,7 +511,13 @@ func genericVFSBackupVolume(d Driver, vol Volume, tarWriter *instancewriter.Inst
 				}
 
 				name := fmt.Sprintf("%s.img", prefix)
-				d.Logger().Debug("Copying virtual machine block volume", log.Ctx{"sourcePath": blockPath, "file": name, "size": blockDiskSize})
+
+				logMsg = "Copying virtual machine block volume"
+				if vol.volType == VolumeTypeCustom {
+					logMsg = "Copying custom block volume"
+				}
+
+				d.Logger().Debug(logMsg, log.Ctx{"sourcePath": blockPath, "file": name, "size": blockDiskSize})
 				from, err := os.Open(blockPath)
 				if err != nil {
 					return errors.Wrapf(err, "Error opening file for reading %q", blockPath)
@@ -520,7 +536,12 @@ func genericVFSBackupVolume(d Driver, vol Volume, tarWriter *instancewriter.Inst
 					return errors.Wrapf(err, "Error copying %q as %q to tarball", blockPath, name)
 				}
 			} else {
-				d.Logger().Debug("Copying container filesystem volume", log.Ctx{"sourcePath": mountPath, "prefix": prefix})
+				logMsg := "Copying container filesystem volume"
+				if vol.volType == VolumeTypeCustom {
+					logMsg = "Copying custom filesystem volume"
+				}
+
+				d.Logger().Debug(logMsg, log.Ctx{"sourcePath": mountPath, "prefix": prefix})
 				return filepath.Walk(mountPath, func(srcPath string, fi os.FileInfo, err error) error {
 					if err != nil {
 						if os.IsNotExist(err) {
@@ -576,6 +597,8 @@ func genericVFSBackupVolume(d Driver, vol Volume, tarWriter *instancewriter.Inst
 	prefix := "backup/container"
 	if vol.IsVMBlock() {
 		prefix = "backup/virtual-machine"
+	} else if vol.volType == VolumeTypeCustom {
+		prefix = "backup/volume"
 	}
 
 	err := backupVolume(vol, prefix)
diff --git a/lxd/storage/drivers/volume.go b/lxd/storage/drivers/volume.go
index d5678e0b2f..3365f7936b 100644
--- a/lxd/storage/drivers/volume.go
+++ b/lxd/storage/drivers/volume.go
@@ -59,7 +59,7 @@ const ContentTypeBlock = ContentType("block")
 // BaseDirectories maps volume types to the expected directories.
 var BaseDirectories = map[VolumeType][]string{
 	VolumeTypeContainer: {"containers", "containers-snapshots"},
-	VolumeTypeCustom:    {"custom", "custom-snapshots"},
+	VolumeTypeCustom:    {"custom", "custom-snapshots", "custom-backups"},
 	VolumeTypeImage:     {"images"},
 	VolumeTypeVM:        {"virtual-machines", "virtual-machines-snapshots"},
 }
diff --git a/lxd/storage/pool_interface.go b/lxd/storage/pool_interface.go
index 82a424f07f..5f5829279a 100644
--- a/lxd/storage/pool_interface.go
+++ b/lxd/storage/pool_interface.go
@@ -88,4 +88,8 @@ type Pool interface {
 	MigrationTypes(contentType drivers.ContentType, refresh bool) []migration.Type
 	CreateCustomVolumeFromMigration(projectName string, conn io.ReadWriteCloser, args migration.VolumeTargetArgs, op *operations.Operation) error
 	MigrateCustomVolume(projectName string, conn io.ReadWriteCloser, args *migration.VolumeSourceArgs, op *operations.Operation) error
+
+	// Custom volume backups.
+	BackupCustomVolume(projectName string, volName string, tarWriter *instancewriter.InstanceTarWriter, optimized bool, snapshots bool, op *operations.Operation) error
+	CreateCustomVolumeFromBackup(srcBackup backup.Info, srcData io.ReadSeeker, op *operations.Operation) error
 }
diff --git a/lxd/storage/storage.go b/lxd/storage/storage.go
index 2c680aa8f6..4a16f3e5fd 100644
--- a/lxd/storage/storage.go
+++ b/lxd/storage/storage.go
@@ -72,6 +72,12 @@ func GetStoragePoolVolumeSnapshotMountPoint(poolName string, snapshotName string
 	return shared.VarPath("storage-pools", poolName, "custom-snapshots", snapshotName)
 }
 
+// GetStoragePoolVolumeBackupMountPoint returns the mountpoint of the given pool volume backup.
+// ${LXD_DIR}/storage-pools/<pool>/custom-backups/<custom volume name>/<backup name>
+func GetStoragePoolVolumeBackupMountPoint(poolName string, backupName string) string {
+	return shared.VarPath("storage-pools", poolName, "custom-backups", backupName)
+}
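GetStoragePoolVolumeBackupMountPoint is a plain path join under LXD's state directory. A standalone sketch (varPath stands in for shared.VarPath, and /var/lib/lxd is an assumed LXD_DIR):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// varPath stands in for shared.VarPath, joining against an assumed LXD_DIR.
func varPath(parts ...string) string {
	return filepath.Join(append([]string{"/var/lib/lxd"}, parts...)...)
}

// volumeBackupMountPoint mirrors GetStoragePoolVolumeBackupMountPoint:
// ${LXD_DIR}/storage-pools/<pool>/custom-backups/<backup name>.
func volumeBackupMountPoint(poolName, backupName string) string {
	return varPath("storage-pools", poolName, "custom-backups", backupName)
}

func main() {
	fmt.Println(volumeBackupMountPoint("default", "proj_vol1/backup0"))
	// /var/lib/lxd/storage-pools/default/custom-backups/proj_vol1/backup0
}
```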
+
 // CreateContainerMountpoint creates the provided container mountpoint and symlink.
 func CreateContainerMountpoint(mountPoint string, mountPointSymlink string, privileged bool) error {
 	mntPointSymlinkExist := shared.PathExists(mountPointSymlink)

From 02339940f88bfb33aa8917382a915164bd86813a Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 22:12:42 +0200
Subject: [PATCH 12/14] lxd: Handle custom volume backups

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxd/api_1.0.go                |   3 +
 lxd/backup.go                 | 200 +++++++++++++
 lxd/storage_volumes.go        | 165 ++++++++++-
 lxd/storage_volumes_backup.go | 515 ++++++++++++++++++++++++++++++++++
 lxd/storage_volumes_utils.go  |  16 ++
 5 files changed, 892 insertions(+), 7 deletions(-)
 create mode 100644 lxd/storage_volumes_backup.go

diff --git a/lxd/api_1.0.go b/lxd/api_1.0.go
index 7c34d158fb..a9cc397146 100644
--- a/lxd/api_1.0.go
+++ b/lxd/api_1.0.go
@@ -84,6 +84,9 @@ var api10 = []APIEndpoint{
 	storagePoolVolumeTypeCustomCmd,
 	storagePoolVolumeTypeImageCmd,
 	storagePoolVolumeTypeVMCmd,
+	storagePoolVolumeTypeCustomBackupsCmd,
+	storagePoolVolumeTypeCustomBackupCmd,
+	storagePoolVolumeTypeCustomBackupExportCmd,
 }
 
 func api10Get(d *Daemon, r *http.Request) response.Response {
diff --git a/lxd/backup.go b/lxd/backup.go
index 685bc1eb00..ae46040763 100644
--- a/lxd/backup.go
+++ b/lxd/backup.go
@@ -290,3 +290,203 @@ func pruneExpiredContainerBackups(ctx context.Context, d *Daemon) error {
 
 	return nil
 }
+
+func volumeBackupCreate(s *state.State, args db.StoragePoolVolumeBackup, projectName string, poolName string, volumeName string) error {
+	logger := logging.AddContext(logger.Log, log.Ctx{"project": projectName, "storage_volume": volumeName, "name": args.Name})
+	logger.Debug("Volume backup started")
+	defer logger.Debug("Volume backup finished")
+
+	revert := revert.New()
+	defer revert.Fail()
+
+	// Get storage pool.
+	pool, err := storagePools.GetPoolByName(s, poolName)
+	if err != nil {
+		return errors.Wrap(err, "Load storage pool")
+	}
+
+	_, vol, err := s.Cluster.GetLocalStoragePoolVolume(projectName, volumeName, db.StoragePoolVolumeTypeCustom, pool.ID())
+	if err != nil {
+		return err
+	}
+
+	// Ignore requests for optimized backups when pool driver doesn't support it.
+	if args.OptimizedStorage && !pool.Driver().Info().OptimizedBackups {
+		args.OptimizedStorage = false
+	}
+
+	// Create the database entry.
+	err = s.Cluster.CreateStoragePoolVolumeBackup(args)
+	if err != nil {
+		if err == db.ErrAlreadyDefined {
+			return fmt.Errorf("Backup %q already exists", args.Name)
+		}
+
+		return errors.Wrap(err, "Insert backup info into database")
+	}
+
+	revert.Add(func() { s.Cluster.DeleteStoragePoolVolumeBackup(args.Name) })
+
+	backup, err := s.Cluster.GetStoragePoolVolumeBackup(projectName, poolName, args.Name)
+	if err != nil {
+		return errors.Wrap(err, "Failed to get backup from database")
+	}
+
+	// Detect compression method.
+	var compress string
+
+	backup.CompressionAlgorithm = args.CompressionAlgorithm
+
+	if backup.CompressionAlgorithm != "" {
+		compress = backup.CompressionAlgorithm
+	} else {
+		compress, err = cluster.ConfigGetString(s.Cluster, "backups.compression_algorithm")
+		if err != nil {
+			return err
+		}
+	}
+
+	// Create the target path if needed.
+	backupsPath := storagePools.GetStoragePoolVolumeBackupMountPoint(poolName, project.StorageVolume(projectName, volumeName))
+
+	if !shared.PathExists(backupsPath) {
+		err := os.MkdirAll(backupsPath, 0700)
+		if err != nil {
+			return err
+		}
+
+		revert.Add(func() { os.Remove(backupsPath) })
+	}
+
+	target := storagePools.GetStoragePoolVolumeBackupMountPoint(poolName, project.StorageVolume(projectName, backup.Name))
+
+	// Setup the tarball writer.
+	logger.Debug("Opening backup tarball for writing", log.Ctx{"path": target})
+	tarFileWriter, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY, 0600)
+	if err != nil {
+		return errors.Wrapf(err, "Error opening backup tarball for writing %q", target)
+	}
+	defer tarFileWriter.Close()
+	revert.Add(func() { os.Remove(target) })
+
+	// Create the tarball.
+	tarPipeReader, tarPipeWriter := io.Pipe()
+	defer tarPipeWriter.Close() // Ensure that the goroutine below always ends.
+	tarWriter := instancewriter.NewInstanceTarWriter(tarPipeWriter, nil)
+
+	// Setup tar writer goroutine, with optional compression.
+	tarWriterRes := make(chan error, 1) // Buffered so the writer goroutine never blocks if we bail out early.
+	var compressErr error
+
+	go func(resCh chan<- error) {
+		logger.Debug("Started backup tarball writer")
+		defer logger.Debug("Finished backup tarball writer")
+		var writeErr error
+		if compress != "none" {
+			compressErr = compressFile(compress, tarPipeReader, tarFileWriter)
+			// If a compression error occurred, close the tarPipeWriter to end the export.
+			if compressErr != nil {
+				tarPipeWriter.Close()
+			}
+		} else {
+			_, writeErr = io.Copy(tarFileWriter, tarPipeReader)
+		}
+		resCh <- writeErr
+	}(tarWriterRes)
+
+	// Write index file.
+	logger.Debug("Adding backup index file")
+	err = volumeBackupWriteIndex(s, projectName, volumeName, pool, backup.OptimizedStorage, !backup.VolumeOnly, vol.ContentType, tarWriter)
+
+	// Check compression errors.
+	if compressErr != nil {
+		return compressErr
+	}
+
+	// Check backupWriteIndex for errors.
+	if err != nil {
+		return errors.Wrap(err, "Error writing backup index file")
+	}
+
+	err = pool.BackupCustomVolume(projectName, volumeName, tarWriter, backup.OptimizedStorage, !backup.VolumeOnly, nil)
+	if err != nil {
+		return errors.Wrap(err, "Backup create")
+	}
+
+	// Close off the tarball file.
+	err = tarWriter.Close()
+	if err != nil {
+		return errors.Wrap(err, "Error closing tarball writer")
+	}
+
+	// Close off the tarball pipe writer (this will end the goroutine above).
+	err = tarPipeWriter.Close()
+	if err != nil {
+		return errors.Wrap(err, "Error closing tarball pipe writer")
+	}
+
+	err = <-tarWriterRes
+	if err != nil {
+		return errors.Wrap(err, "Error writing tarball")
+	}
+
+	revert.Success()
+	return nil
+}
+
+// volumeBackupWriteIndex generates an index.yaml file and then writes it to the root of the backup tarball.
+func volumeBackupWriteIndex(s *state.State, projectName string, volumeName string, pool storagePools.Pool, optimized bool, snapshots bool, contentType string, tarWriter *instancewriter.InstanceTarWriter) error {
+	// Indicate whether the driver will include a driver-specific optimized header.
+	poolDriverOptimizedHeader := false
+	if optimized {
+		poolDriverOptimizedHeader = pool.Driver().Info().OptimizedBackupHeader
+	}
+
+	indexInfo := backup.Info{
+		Name:             volumeName,
+		Pool:             pool.Name(),
+		Snapshots:        []string{},
+		Backend:          pool.Driver().Info().Name,
+		OptimizedStorage: &optimized,
+		OptimizedHeader:  &poolDriverOptimizedHeader,
+		ContentType:      contentType,
+	}
+
+	volID, err := s.Cluster.GetStoragePoolNodeVolumeID(projectName, volumeName, db.StoragePoolVolumeTypeCustom, pool.ID())
+	if err != nil {
+		return err
+	}
+
+	if snapshots {
+		snaps, err := s.Cluster.GetStorageVolumeSnapshotsNames(volID)
+		if err != nil {
+			return err
+		}
+
+		for _, snap := range snaps {
+			indexInfo.Snapshots = append(indexInfo.Snapshots, snap)
+		}
+	}
+
+	// Convert to YAML.
+	indexData, err := yaml.Marshal(&indexInfo)
+	if err != nil {
+		return err
+	}
+	r := bytes.NewReader(indexData)
+
+	indexFileInfo := instancewriter.FileInfo{
+		FileName:    "backup/index.yaml",
+		FileSize:    int64(len(indexData)),
+		FileMode:    0644,
+		FileModTime: time.Now(),
+	}
+
+	// Write to tarball.
+	err = tarWriter.WriteFileFromReader(r, &indexFileInfo)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
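For orientation, the index file that volumeBackupWriteIndex embeds at `backup/index.yaml` ends up looking roughly like the fragment below. The field names are taken from the `backup.Info` literal in this patch; the exact YAML keys depend on that struct's tags, which are not shown here, so treat the spelling as illustrative:

```yaml
# Illustrative only; actual keys follow the struct tags on backup.Info.
name: vol1
pool: default
backend: zfs
optimized: false
optimized_header: false
snapshots:
- snap0
content_type: filesystem
```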
diff --git a/lxd/storage_volumes.go b/lxd/storage_volumes.go
index c4a8b865c8..18bb39d8d5 100644
--- a/lxd/storage_volumes.go
+++ b/lxd/storage_volumes.go
@@ -6,16 +6,21 @@ import (
 	"encoding/json"
 	"encoding/pem"
 	"fmt"
+	"io"
+	"io/ioutil"
 	"net/http"
+	"os"
 	"strings"
 
 	"github.com/gorilla/mux"
 	"github.com/gorilla/websocket"
+	"github.com/lxc/lxd/lxd/backup"
 	"github.com/lxc/lxd/lxd/db"
 	"github.com/lxc/lxd/lxd/instance"
 	"github.com/lxc/lxd/lxd/operations"
 	"github.com/lxc/lxd/lxd/project"
 	"github.com/lxc/lxd/lxd/response"
+	"github.com/lxc/lxd/lxd/revert"
 	"github.com/lxc/lxd/lxd/state"
 	storagePools "github.com/lxc/lxd/lxd/storage"
 	"github.com/lxc/lxd/lxd/util"
@@ -24,6 +29,7 @@ import (
 	log "github.com/lxc/lxd/shared/log15"
 	"github.com/lxc/lxd/shared/logger"
 	"github.com/lxc/lxd/shared/version"
+	"github.com/pkg/errors"
 )
 
 var storagePoolVolumesCmd = APIEndpoint{
@@ -274,15 +280,27 @@ func storagePoolVolumesTypeGet(d *Daemon, r *http.Request) response.Response {
 // /1.0/storage-pools/{name}/volumes/{type}
 // Create a storage volume in a given storage pool.
 func storagePoolVolumesTypePost(d *Daemon, r *http.Request) response.Response {
+	poolName := mux.Vars(r)["name"]
+
+	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
 	resp := forwardedResponseIfTargetIsRemote(d, r)
 	if resp != nil {
 		return resp
 	}
 
+	// If the request body is binary content, process it as a backup file upload.
+	if r.Header.Get("Content-Type") == "application/octet-stream" {
+		return createStoragePoolVolumeFromBackup(d, projectName, poolName, r.Body)
+	}
+
 	req := api.StorageVolumesPost{}
 
 	// Parse the request.
-	err := json.NewDecoder(r.Body).Decode(&req)
+	err = json.NewDecoder(r.Body).Decode(&req)
 	if err != nil {
 		return response.BadRequest(err)
 	}
@@ -314,12 +332,6 @@ func storagePoolVolumesTypePost(d *Daemon, r *http.Request) response.Response {
 		return response.BadRequest(fmt.Errorf(`Currently not allowed to create storage volumes of type %q`, req.Type))
 	}
 
-	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
-	if err != nil {
-		return response.SmartError(err)
-	}
-
-	poolName := mux.Vars(r)["name"]
 	poolID, err := d.cluster.GetStoragePoolID(poolName)
 	if err != nil {
 		return response.SmartError(err)
@@ -1219,3 +1231,142 @@ func storagePoolVolumeTypeCustomDelete(d *Daemon, r *http.Request) response.Resp
 func storagePoolVolumeTypeImageDelete(d *Daemon, r *http.Request) response.Response {
 	return storagePoolVolumeTypeDelete(d, r, "image")
 }
+
+func createStoragePoolVolumeFromBackup(d *Daemon, project string, pool string, data io.Reader) response.Response {
+	revert := revert.New()
+	defer revert.Fail()
+
+	// Create temporary file to store uploaded backup data.
+	backupFile, err := ioutil.TempFile(shared.VarPath("storage-pools", pool, "custom-backups"), "lxd_backup_")
+	if err != nil {
+		return response.InternalError(err)
+	}
+	defer os.Remove(backupFile.Name())
+	revert.Add(func() { backupFile.Close() })
+
+	// Stream uploaded backup data into temporary file.
+	_, err = io.Copy(backupFile, data)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	// Detect squashfs compression and convert to tarball.
+	backupFile.Seek(0, 0)
+	_, algo, decomArgs, err := shared.DetectCompressionFile(backupFile)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	if algo == ".squashfs" {
+		// Pass the temporary file as program argument to the decompression command.
+		decomArgs := append(decomArgs, backupFile.Name())
+
+		// Create temporary file to store the decompressed tarball in.
+		tarFile, err := ioutil.TempFile(shared.VarPath("storage-pools", pool, "custom-backups"), "lxd_backup_decompress_")
+		if err != nil {
+			return response.InternalError(err)
+		}
+		defer os.Remove(tarFile.Name())
+
+		// Decompress to tarData temporary file.
+		err = shared.RunCommandWithFds(nil, tarFile, decomArgs[0], decomArgs[1:]...)
+		if err != nil {
+			return response.InternalError(err)
+		}
+
+		// We don't need the original squashfs file anymore.
+		backupFile.Close()
+		os.Remove(backupFile.Name())
+
+		// Replace the backup file handle with the handle to the tar file.
+		backupFile = tarFile
+	}
+
+	// Parse the backup information.
+	backupFile.Seek(0, 0)
+	logger.Debug("Reading backup file info")
+	bInfo, err := backup.GetInfo(backupFile, false)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+	bInfo.Project = project
+
+	// Override pool.
+	if pool != "" {
+		bInfo.Pool = pool
+	}
+
+	logger.Debug("Backup file info loaded", log.Ctx{
+		"name":      bInfo.Name,
+		"project":   bInfo.Project,
+		"backend":   bInfo.Backend,
+		"pool":      bInfo.Pool,
+		"optimized": *bInfo.OptimizedStorage,
+		"snapshots": bInfo.Snapshots,
+	})
+
+	// Check storage pool exists.
+	_, _, err = d.State().Cluster.GetStoragePoolInAnyState(bInfo.Pool)
+	if errors.Cause(err) == db.ErrNoSuchObject {
+		// The storage pool doesn't exist. If the backup is in binary
+		// format (so we cannot alter the backup.yaml) or the user specified
+		// the pool directly, we cannot proceed, so return an error.
+		if *bInfo.OptimizedStorage || pool != "" {
+			return response.InternalError(errors.Wrap(err, "Storage pool not found"))
+		}
+
+		// Otherwise try and restore to the project's default profile pool.
+		_, profile, err := d.State().Cluster.GetProfile(bInfo.Project, "default")
+		if err != nil {
+			return response.InternalError(errors.Wrap(err, "Failed to get default profile"))
+		}
+
+		_, v, err := shared.GetRootDiskDevice(profile.Devices)
+		if err != nil {
+			return response.InternalError(errors.Wrap(err, "Failed to get root disk device"))
+		}
+
+		// Use the default-profile's root pool.
+		bInfo.Pool = v["pool"]
+	} else if err != nil {
+		return response.InternalError(err)
+	}
+
+	// Clone the reverter so it can be used inside run after this function has returned.
+	runRevert := revert.Clone()
+
+	run := func(op *operations.Operation) error {
+		defer backupFile.Close()
+		defer runRevert.Fail()
+
+		pool, err := storagePools.GetPoolByName(d.State(), bInfo.Pool)
+		if err != nil {
+			return err
+		}
+
+		// If the backup is optimized, check that the source pool driver matches the target pool driver.
+		if *bInfo.OptimizedStorage && pool.Driver().Info().Name != bInfo.Backend {
+			return fmt.Errorf("Optimized backup storage driver %q differs from the target storage pool driver %q", bInfo.Backend, pool.Driver().Info().Name)
+		}
+
+		// Dump tarball to storage.
+		err = pool.CreateCustomVolumeFromBackup(*bInfo, backupFile, nil)
+		if err != nil {
+			return errors.Wrap(err, "Create custom volume from backup")
+		}
+
+		runRevert.Success()
+		return nil
+	}
+
+	resources := map[string][]string{}
+	resources["storage_volumes"] = []string{bInfo.Name}
+
+	op, err := operations.OperationCreate(d.State(), project, operations.OperationClassTask, db.OperationCustomVolumeBackupRestore, resources, nil, run, nil, nil)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	revert.Success()
+	return operations.OperationResponse(op)
+}
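createStoragePoolVolumeFromBackup leans on `shared.DetectCompressionFile` to spot squashfs uploads before converting them to a tarball. As a rough standalone sketch (the real helper recognises several formats and also returns the matching decompression command, neither of which is reproduced here), the squashfs case boils down to checking the superblock magic:

```go
package main

import (
	"bytes"
	"fmt"
)

// isSquashfs checks for the squashfs superblock magic ("hsqs",
// i.e. little-endian 0x73717368). Standalone illustration only;
// shared.DetectCompressionFile additionally recognises gzip, xz
// and friends, and reports the decompression arguments to use.
func isSquashfs(header []byte) bool {
	return bytes.HasPrefix(header, []byte("hsqs"))
}

func main() {
	fmt.Println(isSquashfs([]byte("hsqs\x00\x00\x00\x00"))) // true
	fmt.Println(isSquashfs([]byte{0x1f, 0x8b}))             // false (gzip magic)
}
```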
diff --git a/lxd/storage_volumes_backup.go b/lxd/storage_volumes_backup.go
new file mode 100644
index 0000000000..e69248beca
--- /dev/null
+++ b/lxd/storage_volumes_backup.go
@@ -0,0 +1,515 @@
+package main
+
+import (
+	"encoding/json"
+	"fmt"
+	"net/http"
+	"strings"
+	"time"
+
+	"github.com/gorilla/mux"
+	"github.com/lxc/lxd/lxd/backup"
+	"github.com/lxc/lxd/lxd/db"
+	"github.com/lxc/lxd/lxd/operations"
+	"github.com/lxc/lxd/lxd/project"
+	"github.com/lxc/lxd/lxd/response"
+	storagePools "github.com/lxc/lxd/lxd/storage"
+	"github.com/lxc/lxd/lxd/util"
+	"github.com/lxc/lxd/shared"
+	"github.com/lxc/lxd/shared/api"
+	"github.com/lxc/lxd/shared/version"
+	"github.com/pkg/errors"
+)
+
+var storagePoolVolumeTypeCustomBackupsCmd = APIEndpoint{
+	Path: "storage-pools/{pool}/volumes/{type}/{name}/backups",
+
+	Get:  APIEndpointAction{Handler: storagePoolVolumeTypeCustomBackupsGet, AccessHandler: allowProjectPermission("storage-volumes", "view")},
+	Post: APIEndpointAction{Handler: storagePoolVolumeTypeCustomBackupsPost, AccessHandler: allowProjectPermission("storage-volumes", "manage-storage-volumes")},
+}
+
+var storagePoolVolumeTypeCustomBackupCmd = APIEndpoint{
+	Path: "storage-pools/{pool}/volumes/{type}/{name}/backups/{backupName}",
+
+	Get:    APIEndpointAction{Handler: storagePoolVolumeTypeCustomBackupGet, AccessHandler: allowProjectPermission("storage-volumes", "view")},
+	Post:   APIEndpointAction{Handler: storagePoolVolumeTypeCustomBackupPost, AccessHandler: allowProjectPermission("storage-volumes", "manage-storage-volumes")},
+	Delete: APIEndpointAction{Handler: storagePoolVolumeTypeCustomBackupDelete, AccessHandler: allowProjectPermission("storage-volumes", "manage-storage-volumes")},
+}
+
+var storagePoolVolumeTypeCustomBackupExportCmd = APIEndpoint{
+	Path: "storage-pools/{pool}/volumes/{type}/{name}/backups/{backupName}/export",
+
+	Get: APIEndpointAction{Handler: storagePoolVolumeTypeCustomBackupExportGet, AccessHandler: allowProjectPermission("storage-volumes", "view")},
+}
+
+func storagePoolVolumeTypeCustomBackupsGet(d *Daemon, r *http.Request) response.Response {
+	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	// Get the name of the storage volume.
+	volumeName := mux.Vars(r)["name"]
+	// Get the name of the storage pool the volume is supposed to be attached to.
+	poolName := mux.Vars(r)["pool"]
+	// Get the volume type.
+	volumeTypeName := mux.Vars(r)["type"]
+
+	// Convert the volume type name to our internal integer representation.
+	volumeType, err := storagePools.VolumeTypeNameToType(volumeTypeName)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	// Check that the storage volume type is valid.
+	if volumeType != db.StoragePoolVolumeTypeCustom {
+		return response.BadRequest(fmt.Errorf("Invalid storage volume type %q", volumeTypeName))
+	}
+
+	poolID, _, err := d.cluster.GetStoragePool(poolName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	// Handle requests targeted to a volume on a different node.
+	resp := forwardedResponseIfVolumeIsRemote(d, r, poolID, volumeName, db.StoragePoolVolumeTypeCustom)
+	if resp != nil {
+		return resp
+	}
+
+	recursion := util.IsRecursionRequest(r)
+
+	backupNames, err := d.State().Cluster.GetStoragePoolVolumeBackups(projectName, volumeName, poolID)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	backups := make([]*backup.VolumeBackup, len(backupNames))
+
+	for i, backupName := range backupNames {
+		b, err := d.State().Cluster.GetStoragePoolVolumeBackup(projectName, poolName, backupName)
+		if err != nil {
+			return response.SmartError(err)
+		}
+
+		backups[i] = backup.NewVolume(d.State(), projectName, poolName, volumeName, b.ID, b.Name, b.CreationDate, b.ExpiryDate, b.VolumeOnly, b.OptimizedStorage)
+	}
+
+	resultString := []string{}
+	resultMap := []*api.StoragePoolVolumeBackup{}
+
+	for _, backup := range backups {
+		if !recursion {
+			url := fmt.Sprintf("/%s/storage-pools/%s/volumes/custom/%s/backups/%s",
+				version.APIVersion, poolName, volumeName, strings.Split(backup.Name(), "/")[1])
+			resultString = append(resultString, url)
+		} else {
+			render := backup.Render()
+			resultMap = append(resultMap, render)
+		}
+	}
+
+	if !recursion {
+		return response.SyncResponse(true, resultString)
+	}
+
+	return response.SyncResponse(true, resultMap)
+}
+
+func storagePoolVolumeTypeCustomBackupsPost(d *Daemon, r *http.Request) response.Response {
+	// Get the name of the storage volume.
+	volumeName := mux.Vars(r)["name"]
+	// Get the name of the storage pool the volume is supposed to be attached to.
+	poolName := mux.Vars(r)["pool"]
+	// Get the volume type.
+	volumeTypeName := mux.Vars(r)["type"]
+
+	// Convert the volume type name to our internal integer representation.
+	volumeType, err := storagePools.VolumeTypeNameToType(volumeTypeName)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	// Check that the storage volume type is valid.
+	if volumeType != db.StoragePoolVolumeTypeCustom {
+		return response.BadRequest(fmt.Errorf("Invalid storage volume type %q", volumeTypeName))
+	}
+
+	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp := forwardedResponseIfTargetIsRemote(d, r)
+	if resp != nil {
+		return resp
+	}
+
+	poolID, _, err := d.cluster.GetStoragePool(poolName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp = forwardedResponseIfVolumeIsRemote(d, r, poolID, volumeName, db.StoragePoolVolumeTypeCustom)
+	if resp != nil {
+		return resp
+	}
+
+	volumeID, _, err := d.cluster.GetLocalStoragePoolVolume(projectName, volumeName, db.StoragePoolVolumeTypeCustom, poolID)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	rj := shared.Jmap{}
+	err = json.NewDecoder(r.Body).Decode(&rj)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	expiry, _ := rj.GetString("expires_at")
+	if expiry == "" {
+		// Disable expiration by setting it to zero time.
+		rj["expires_at"] = time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC)
+	}
+
+	// Create body with correct expiry.
+	body, err := json.Marshal(rj)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	req := api.StoragePoolVolumeBackupsPost{}
+
+	err = json.Unmarshal(body, &req)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	if req.Name == "" {
+		// Come up with a name.
+		backups, err := d.cluster.GetStoragePoolVolumeBackups(projectName, volumeName, poolID)
+		if err != nil {
+			return response.BadRequest(err)
+		}
+
+		base := volumeName + shared.SnapshotDelimiter + "backup"
+		length := len(base)
+		max := 0
+
+		for _, backup := range backups {
+			// Ignore backups not containing base.
+			if !strings.HasPrefix(backup, base) {
+				continue
+			}
+
+			substr := backup[length:]
+			var num int
+			count, err := fmt.Sscanf(substr, "%d", &num)
+			if err != nil || count != 1 {
+				continue
+			}
+			if num >= max {
+				max = num + 1
+			}
+		}
+
+		req.Name = fmt.Sprintf("backup%d", max)
+	}
+
+	// Validate the name.
+	if strings.Contains(req.Name, "/") {
+		return response.BadRequest(fmt.Errorf("Backup names may not contain slashes"))
+	}
+
+	fullName := volumeName + shared.SnapshotDelimiter + req.Name
+	volumeOnly := req.VolumeOnly
+
+	backup := func(op *operations.Operation) error {
+		args := db.StoragePoolVolumeBackup{
+			Name:                 fullName,
+			VolumeID:             volumeID,
+			CreationDate:         time.Now(),
+			ExpiryDate:           req.ExpiresAt,
+			VolumeOnly:           volumeOnly,
+			OptimizedStorage:     req.OptimizedStorage,
+			CompressionAlgorithm: req.CompressionAlgorithm,
+		}
+
+		err := volumeBackupCreate(d.State(), args, projectName, poolName, volumeName)
+		if err != nil {
+			return errors.Wrap(err, "Create volume backup")
+		}
+
+		return nil
+	}
+
+	resources := map[string][]string{}
+	resources["storage_volumes"] = []string{volumeName}
+	resources["backups"] = []string{req.Name}
+
+	op, err := operations.OperationCreate(d.State(), projectName, operations.OperationClassTask,
+		db.OperationCustomVolumeBackupCreate, resources, nil, backup, nil, nil)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	return operations.OperationResponse(op)
+}
+
+func storagePoolVolumeTypeCustomBackupGet(d *Daemon, r *http.Request) response.Response {
+	// Get the name of the storage volume.
+	volumeName := mux.Vars(r)["name"]
+	// Get the name of the storage pool the volume is supposed to be attached to.
+	poolName := mux.Vars(r)["pool"]
+	// Get the volume type.
+	volumeTypeName := mux.Vars(r)["type"]
+	// Get backup name.
+	backupName := mux.Vars(r)["backupName"]
+
+	// Convert the volume type name to our internal integer representation.
+	volumeType, err := storagePools.VolumeTypeNameToType(volumeTypeName)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	// Check that the storage volume type is valid.
+	if volumeType != db.StoragePoolVolumeTypeCustom {
+		return response.BadRequest(fmt.Errorf("Invalid storage volume type %q", volumeTypeName))
+	}
+
+	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp := forwardedResponseIfTargetIsRemote(d, r)
+	if resp != nil {
+		return resp
+	}
+
+	poolID, _, err := d.cluster.GetStoragePool(poolName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp = forwardedResponseIfVolumeIsRemote(d, r, poolID, volumeName, db.StoragePoolVolumeTypeCustom)
+	if resp != nil {
+		return resp
+	}
+
+	fullName := volumeName + shared.SnapshotDelimiter + backupName
+
+	backup, err := storagePoolVolumeBackupLoadByName(d.State(), projectName, poolName, fullName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	return response.SyncResponse(true, backup.Render())
+}
+
+func storagePoolVolumeTypeCustomBackupPost(d *Daemon, r *http.Request) response.Response {
+	// Get the name of the storage volume.
+	volumeName := mux.Vars(r)["name"]
+	// Get the name of the storage pool the volume is supposed to be attached to.
+	poolName := mux.Vars(r)["pool"]
+	// Get the volume type.
+	volumeTypeName := mux.Vars(r)["type"]
+	// Get backup name.
+	backupName := mux.Vars(r)["backupName"]
+
+	// Convert the volume type name to our internal integer representation.
+	volumeType, err := storagePools.VolumeTypeNameToType(volumeTypeName)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	// Check that the storage volume type is valid.
+	if volumeType != db.StoragePoolVolumeTypeCustom {
+		return response.BadRequest(fmt.Errorf("Invalid storage volume type %q", volumeTypeName))
+	}
+
+	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp := forwardedResponseIfTargetIsRemote(d, r)
+	if resp != nil {
+		return resp
+	}
+
+	poolID, _, err := d.cluster.GetStoragePool(poolName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp = forwardedResponseIfVolumeIsRemote(d, r, poolID, volumeName, db.StoragePoolVolumeTypeCustom)
+	if resp != nil {
+		return resp
+	}
+
+	req := api.StoragePoolVolumeBackupPost{}
+	err = json.NewDecoder(r.Body).Decode(&req)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	// Validate the name.
+	if strings.Contains(req.Name, "/") {
+		return response.BadRequest(fmt.Errorf("Backup names may not contain slashes"))
+	}
+
+	oldName := volumeName + shared.SnapshotDelimiter + backupName
+
+	backup, err := storagePoolVolumeBackupLoadByName(d.State(), projectName, poolName, oldName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	newName := volumeName + shared.SnapshotDelimiter + req.Name
+
+	rename := func(op *operations.Operation) error {
+		err := backup.Rename(newName)
+		if err != nil {
+			return err
+		}
+
+		return nil
+	}
+
+	resources := map[string][]string{}
+	resources["volume"] = []string{volumeName}
+
+	op, err := operations.OperationCreate(d.State(), projectName, operations.OperationClassTask,
+		db.OperationCustomVolumeBackupRename, resources, nil, rename, nil, nil)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	return operations.OperationResponse(op)
+}
+
+func storagePoolVolumeTypeCustomBackupDelete(d *Daemon, r *http.Request) response.Response {
+	// Get the name of the storage volume.
+	volumeName := mux.Vars(r)["name"]
+	// Get the name of the storage pool the volume is supposed to be attached to.
+	poolName := mux.Vars(r)["pool"]
+	// Get the volume type.
+	volumeTypeName := mux.Vars(r)["type"]
+	// Get backup name.
+	backupName := mux.Vars(r)["backupName"]
+
+	// Convert the volume type name to our internal integer representation.
+	volumeType, err := storagePools.VolumeTypeNameToType(volumeTypeName)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	// Check that the storage volume type is valid.
+	if volumeType != db.StoragePoolVolumeTypeCustom {
+		return response.BadRequest(fmt.Errorf("Invalid storage volume type %q", volumeTypeName))
+	}
+
+	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp := forwardedResponseIfTargetIsRemote(d, r)
+	if resp != nil {
+		return resp
+	}
+
+	poolID, _, err := d.cluster.GetStoragePool(poolName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp = forwardedResponseIfVolumeIsRemote(d, r, poolID, volumeName, db.StoragePoolVolumeTypeCustom)
+	if resp != nil {
+		return resp
+	}
+
+	fullName := volumeName + shared.SnapshotDelimiter + backupName
+
+	backup, err := storagePoolVolumeBackupLoadByName(d.State(), projectName, poolName, fullName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	remove := func(op *operations.Operation) error {
+		err := backup.Delete()
+		if err != nil {
+			return err
+		}
+
+		return nil
+	}
+
+	resources := map[string][]string{}
+	resources["volume"] = []string{volumeName}
+
+	op, err := operations.OperationCreate(d.State(), projectName, operations.OperationClassTask,
+		db.OperationCustomVolumeBackupRemove, resources, nil, remove, nil, nil)
+	if err != nil {
+		return response.InternalError(err)
+	}
+
+	return operations.OperationResponse(op)
+}
+
+func storagePoolVolumeTypeCustomBackupExportGet(d *Daemon, r *http.Request) response.Response {
+	// Get the name of the storage volume.
+	volumeName := mux.Vars(r)["name"]
+	// Get the name of the storage pool the volume is supposed to be attached to.
+	poolName := mux.Vars(r)["pool"]
+	// Get the volume type.
+	volumeTypeName := mux.Vars(r)["type"]
+	// Get backup name.
+	backupName := mux.Vars(r)["backupName"]
+
+	// Convert the volume type name to our internal integer representation.
+	volumeType, err := storagePools.VolumeTypeNameToType(volumeTypeName)
+	if err != nil {
+		return response.BadRequest(err)
+	}
+
+	// Check that the storage volume type is valid.
+	if volumeType != db.StoragePoolVolumeTypeCustom {
+		return response.BadRequest(fmt.Errorf("Invalid storage volume type %q", volumeTypeName))
+	}
+
+	projectName, err := project.StorageVolumeProject(d.State().Cluster, projectParam(r), db.StoragePoolVolumeTypeCustom)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp := forwardedResponseIfTargetIsRemote(d, r)
+	if resp != nil {
+		return resp
+	}
+
+	poolID, _, err := d.cluster.GetStoragePool(poolName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	resp = forwardedResponseIfVolumeIsRemote(d, r, poolID, volumeName, db.StoragePoolVolumeTypeCustom)
+	if resp != nil {
+		return resp
+	}
+
+	fullName := volumeName + shared.SnapshotDelimiter + backupName
+
+	// Ensure the backup exists.
+	_, err = storagePoolVolumeBackupLoadByName(d.State(), projectName, poolName, fullName)
+	if err != nil {
+		return response.SmartError(err)
+	}
+
+	ent := response.FileResponseEntry{
+		Path: shared.VarPath("storage-pools", poolName, "custom-backups", project.StorageVolume(projectName, fullName)),
+	}
+
+	return response.FileResponse(r, []response.FileResponseEntry{ent}, nil, false)
+}
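The default-name loop in storagePoolVolumeTypeCustomBackupsPost (scan existing `backupN` names, take one past the highest suffix) can be pulled out as a standalone sketch. This uses "/" literally in place of `shared.SnapshotDelimiter` and a hypothetical helper name; it is not part of the patch:

```go
package main

import (
	"fmt"
	"strings"
)

// nextBackupName mirrors the auto-naming loop in
// storagePoolVolumeTypeCustomBackupsPost: given the existing backup
// names for a volume, it returns the first unused "backupN" suffix,
// one past the highest number already in use.
func nextBackupName(volumeName string, existing []string) string {
	base := volumeName + "/backup" // "/" stands in for shared.SnapshotDelimiter
	max := 0

	for _, name := range existing {
		// Ignore backups not matching the "volume/backupN" pattern.
		if !strings.HasPrefix(name, base) {
			continue
		}

		var num int
		count, err := fmt.Sscanf(name[len(base):], "%d", &num)
		if err != nil || count != 1 {
			continue
		}
		if num >= max {
			max = num + 1
		}
	}

	return fmt.Sprintf("backup%d", max)
}

func main() {
	fmt.Println(nextBackupName("vol1", []string{"vol1/backup0", "vol1/backup2"})) // backup3
}
```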
diff --git a/lxd/storage_volumes_utils.go b/lxd/storage_volumes_utils.go
index 9de58c8cdf..aea3525555 100644
--- a/lxd/storage_volumes_utils.go
+++ b/lxd/storage_volumes_utils.go
@@ -3,7 +3,9 @@ package main
 import (
 	"fmt"
 	"path/filepath"
+	"strings"
 
+	"github.com/lxc/lxd/lxd/backup"
 	"github.com/lxc/lxd/lxd/db"
 	"github.com/lxc/lxd/lxd/instance"
 	"github.com/lxc/lxd/lxd/project"
@@ -311,3 +313,17 @@ func profilesUsingPoolVolumeGetNames(db *db.Cluster, volumeName string, volumeTy
 
 	return usedBy, nil
 }
+
+func storagePoolVolumeBackupLoadByName(s *state.State, projectName, poolName, backupName string) (*backup.VolumeBackup, error) {
+	b, err := s.Cluster.GetStoragePoolVolumeBackup(projectName, poolName, backupName)
+	if err != nil {
+		return nil, err
+	}
+
+	volumeName := strings.Split(backupName, "/")[0]
+
+	backup := backup.NewVolume(s, projectName, poolName, volumeName, b.ID, b.Name, b.CreationDate,
+		b.ExpiryDate, b.VolumeOnly, b.OptimizedStorage)
+
+	return backup, nil
+}

From 00e1c2bcbf27e6a66ff08d970f326854b24ebf06 Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 22:13:10 +0200
Subject: [PATCH 13/14] lxc: Add import and export for custom volumes

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 lxc/storage_volume.go | 236 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 236 insertions(+)

diff --git a/lxc/storage_volume.go b/lxc/storage_volume.go
index a5093b5da5..7f53a618e5 100644
--- a/lxc/storage_volume.go
+++ b/lxc/storage_volume.go
@@ -2,6 +2,7 @@ package main
 
 import (
 	"fmt"
+	"io"
 	"io/ioutil"
 	"os"
 	"sort"
@@ -9,6 +10,7 @@ import (
 	"strings"
 	"time"
 
+	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 	"gopkg.in/yaml.v2"
 
@@ -18,7 +20,9 @@ import (
 	"github.com/lxc/lxd/shared/api"
 	cli "github.com/lxc/lxd/shared/cmd"
 	"github.com/lxc/lxd/shared/i18n"
+	"github.com/lxc/lxd/shared/ioprogress"
 	"github.com/lxc/lxd/shared/termios"
+	"github.com/lxc/lxd/shared/units"
 )
 
 type cmdStorageVolume struct {
@@ -67,10 +71,18 @@ Unless specified through a prefix, all volume operations affect "custom" (user c
 	storageVolumeEditCmd := cmdStorageVolumeEdit{global: c.global, storage: c.storage, storageVolume: c}
 	cmd.AddCommand(storageVolumeEditCmd.Command())
 
+	// Export
+	storageVolumeExportCmd := cmdStorageVolumeExport{global: c.global, storage: c.storage, storageVolume: c}
+	cmd.AddCommand(storageVolumeExportCmd.Command())
+
 	// Get
 	storageVolumeGetCmd := cmdStorageVolumeGet{global: c.global, storage: c.storage, storageVolume: c}
 	cmd.AddCommand(storageVolumeGetCmd.Command())
 
+	// Import
+	storageVolumeImportCmd := cmdStorageVolumeImport{global: c.global, storage: c.storage, storageVolume: c}
+	cmd.AddCommand(storageVolumeImportCmd.Command())
+
 	// List
 	storageVolumeListCmd := cmdStorageVolumeList{global: c.global, storage: c.storage, storageVolume: c}
 	cmd.AddCommand(storageVolumeListCmd.Command())
@@ -1631,3 +1643,227 @@ func (c *cmdStorageVolumeRestore) Run(cmd *cobra.Command, args []string) error {
 
 	return client.UpdateStoragePoolVolume(resource.name, "custom", args[1], req, etag)
 }
+
+// Export
+type cmdStorageVolumeExport struct {
+	global        *cmdGlobal
+	storage       *cmdStorage
+	storageVolume *cmdStorageVolume
+
+	flagVolumeOnly           bool
+	flagOptimizedStorage     bool
+	flagCompressionAlgorithm string
+}
+
+func (c *cmdStorageVolumeExport) Command() *cobra.Command {
+	cmd := &cobra.Command{}
+	cmd.Use = i18n.G("export [<remote>:]<pool> <volume> [<path>]")
+	cmd.Short = i18n.G("Export custom storage volume")
+	cmd.Long = cli.FormatSection(i18n.G("Description"), i18n.G(
+		`Export custom storage volume`))
+
+	cmd.Flags().BoolVar(&c.flagVolumeOnly, "volume-only", false, i18n.G("Export the volume without its snapshots"))
+	cmd.Flags().BoolVar(&c.flagOptimizedStorage, "optimized-storage", false,
+		i18n.G("Use storage driver optimized format (can only be restored on a similar pool)"))
+	cmd.Flags().StringVar(&c.flagCompressionAlgorithm, "compression", "", i18n.G("Define a compression algorithm: for backup or none")+"``")
+	cmd.RunE = c.Run
+
+	return cmd
+}
+
+func (c *cmdStorageVolumeExport) Run(cmd *cobra.Command, args []string) error {
+	conf := c.global.conf
+
+	// Sanity checks
+	exit, err := c.global.CheckArgs(cmd, args, 2, 3)
+	if exit {
+		return err
+	}
+
+	// Connect to LXD
+	remote, name, err := conf.ParseRemote(args[0])
+	if err != nil {
+		return err
+	}
+
+	d, err := conf.GetInstanceServer(remote)
+	if err != nil {
+		return err
+	}
+
+	volumeOnly := c.flagVolumeOnly
+
+	volName, volType := c.storageVolume.parseVolume("custom", args[1])
+	if volType != "custom" {
+		return fmt.Errorf(i18n.G("Only \"custom\" volumes can be exported"))
+	}
+
+	req := api.StoragePoolVolumeBackupsPost{
+		Name:                 "",
+		ExpiresAt:            time.Now().Add(24 * time.Hour),
+		VolumeOnly:           volumeOnly,
+		OptimizedStorage:     c.flagOptimizedStorage,
+		CompressionAlgorithm: c.flagCompressionAlgorithm,
+	}
+
+	op, err := d.CreateStoragePoolVolumeBackup(name, volName, req)
+	if err != nil {
+		return errors.Wrap(err, "Failed to create storage volume backup")
+	}
+
+	// Watch the background operation
+	progress := utils.ProgressRenderer{
+		Format: i18n.G("Backing up storage volume: %s"),
+		Quiet:  c.global.flagQuiet,
+	}
+
+	_, err = op.AddHandler(progress.UpdateOp)
+	if err != nil {
+		progress.Done("")
+		return err
+	}
+
+	// Wait until backup is done
+	err = utils.CancelableWait(op, &progress)
+	if err != nil {
+		progress.Done("")
+		return err
+	}
+	progress.Done("")
+
+	err = op.Wait()
+	if err != nil {
+		return err
+	}
+
+	// Get name of backup
+	backupName := strings.TrimPrefix(op.Get().Resources["backups"][0],
+		"/1.0/backups/")
+
+	defer func() {
+		// Delete backup after we're done
+		op, err = d.DeleteStoragePoolVolumeBackup(name, volName, backupName)
+		if err == nil {
+			op.Wait()
+		}
+	}()
+
+	var targetName string
+	if len(args) > 2 {
+		targetName = args[2]
+	} else {
+		targetName = "backup.tar.gz"
+	}
+
+	target, err := os.Create(shared.HostPath(targetName))
+	if err != nil {
+		return err
+	}
+	defer target.Close()
+
+	// Prepare the download request
+	progress = utils.ProgressRenderer{
+		Format: i18n.G("Exporting the backup: %s"),
+		Quiet:  c.global.flagQuiet,
+	}
+	backupFileRequest := lxd.BackupFileRequest{
+		BackupFile:      io.WriteSeeker(target),
+		ProgressHandler: progress.UpdateProgress,
+	}
+
+	// Export tarball
+	_, err = d.GetStoragePoolVolumeBackupFile(name, volName, backupName, &backupFileRequest)
+	if err != nil {
+		os.Remove(shared.HostPath(targetName))
+		progress.Done("")
+		return errors.Wrap(err, "Fetch storage volume backup file")
+	}
+
+	progress.Done(i18n.G("Backup exported successfully!"))
+	return nil
+}
+
+// Import
+type cmdStorageVolumeImport struct {
+	global        *cmdGlobal
+	storage       *cmdStorage
+	storageVolume *cmdStorageVolume
+}
+
+func (c *cmdStorageVolumeImport) Command() *cobra.Command {
+	cmd := &cobra.Command{}
+	cmd.Use = i18n.G("import [<remote>:]<pool> <backup file>")
+	cmd.Short = i18n.G("Import custom storage volumes")
+	cmd.Long = cli.FormatSection(i18n.G("Description"), i18n.G(
+		`Import backups of custom volumes including their snapshots.`))
+	cmd.Example = cli.FormatSection("", i18n.G(
+		`lxc storage volume import default backup0.tar.gz
+		Create a new custom volume using backup0.tar.gz as the source.`))
+
+	cmd.RunE = c.Run
+
+	return cmd
+}
+
+func (c *cmdStorageVolumeImport) Run(cmd *cobra.Command, args []string) error {
+	conf := c.global.conf
+
+	// Sanity checks
+	exit, err := c.global.CheckArgs(cmd, args, 1, 2)
+	if exit {
+		return err
+	}
+
+	// Connect to LXD
+	remote, name, err := conf.ParseRemote(args[0])
+	if err != nil {
+		return err
+	}
+
+	d, err := conf.GetInstanceServer(remote)
+	if err != nil {
+		return err
+	}
+
+	file, err := os.Open(shared.HostPath(args[len(args)-1]))
+	if err != nil {
+		return err
+	}
+	defer file.Close()
+
+	fstat, err := file.Stat()
+	if err != nil {
+		return err
+	}
+
+	progress := utils.ProgressRenderer{
+		Format: i18n.G("Importing custom volume: %s"),
+		Quiet:  c.global.flagQuiet,
+	}
+
+	createArgs := lxd.StoragePoolVolumeBackupArgs{
+		BackupFile: &ioprogress.ProgressReader{
+			ReadCloser: file,
+			Tracker: &ioprogress.ProgressTracker{
+				Length: fstat.Size(),
+				Handler: func(percent int64, speed int64) {
+					progress.UpdateProgress(ioprogress.ProgressData{Text: fmt.Sprintf("%d%% (%s/s)", percent, units.GetByteSizeString(speed, 2))})
+				},
+			},
+		},
+	}
+
+	op, err := d.CreateStoragePoolVolumeFromBackup(name, createArgs)
+	if err != nil {
+		return err
+	}
+
+	// Wait for operation to finish
+	err = utils.CancelableWait(op, &progress)
+	if err != nil {
+		progress.Done("")
+		return err
+	}
+
+	progress.Done("")
+
+	return nil
+}

From 9c4d4c98217f16ef8d81af3df617898c25f6fcdc Mon Sep 17 00:00:00 2001
From: Thomas Hipp <thomas.hipp at canonical.com>
Date: Wed, 2 Sep 2020 23:01:58 +0200
Subject: [PATCH 14/14] test/suites: Test custom volume backups

Signed-off-by: Thomas Hipp <thomas.hipp at canonical.com>
---
 test/suites/backup.sh | 114 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)

diff --git a/test/suites/backup.sh b/test/suites/backup.sh
index e006ada601..5292bae8ab 100644
--- a/test/suites/backup.sh
+++ b/test/suites/backup.sh
@@ -413,3 +413,117 @@ test_backup_rename() {
 
   lxc delete --force c2
 }
+
+test_volume_backup_export() {
+  test_volume_backup_export_with_project
+  # test_volume_backup_export_with_project foo
+}
+
+test_volume_backup_export_with_project() {
+  pool="lxdtest-$(basename "${LXD_DIR}")"
+
+  if [ "$#" -ne 0 ]; then
+  # Create a project
+    lxc project create foo
+    lxc project switch foo
+
+    deps/import-busybox --project foo --alias testimage
+
+    # Add a root device to the default profile of the project
+    lxc profile device add default root disk path="/" pool="${pool}"
+  fi
+
+  ensure_import_testimage
+  ensure_has_localhost_remote "${LXD_ADDR}"
+
+  mkdir "${LXD_DIR}/optimized" "${LXD_DIR}/non-optimized"
+  lxd_backend=$(storage_backend "$LXD_DIR")
+
+  # Create test container
+  lxc init testimage c1
+  # Create custom storage volume
+  lxc storage volume create "${pool}" testvol
+  # Attach storage volume to the test container
+  lxc storage volume attach "${pool}" testvol c1 /mnt
+  # Start container
+  lxc start c1
+  # Create file on the custom volume
+  echo foo | lxc file push - c1/mnt/test
+  # Snapshot the custom volume
+  lxc storage volume snapshot "${pool}" testvol
+  # Change the content (the snapshot will contain the old value)
+  echo bar | lxc file push - c1/mnt/test
+
+  # Create backup without snapshots
+
+  if [ "$lxd_backend" = "btrfs" ] || [ "$lxd_backend" = "zfs" ]; then
+    # Create optimized backup
+    lxc storage volume export "${pool}" testvol "${LXD_DIR}/testvol-optimized.tar.gz" --volume-only --optimized-storage
+
+    [ -f "${LXD_DIR}/testvol-optimized.tar.gz" ]
+
+    # Extract backup tarball
+    tar -xzf "${LXD_DIR}/testvol-optimized.tar.gz" -C "${LXD_DIR}/optimized"
+
+    [ -f "${LXD_DIR}/optimized/backup/index.yaml" ]
+    [ -f "${LXD_DIR}/optimized/backup/volume.bin" ]
+    [ ! -d "${LXD_DIR}/optimized/backup/snapshots" ]
+  fi
+
+  # Create non-optimized backup
+  lxc storage volume export "${pool}" testvol "${LXD_DIR}/testvol.tar.gz" --volume-only
+
+  [ -f "${LXD_DIR}/testvol.tar.gz" ]
+
+  # Extract backup tarball
+  tar -xzf "${LXD_DIR}/testvol.tar.gz" -C "${LXD_DIR}/non-optimized"
+
+  # Check tarball content
+  [ -f "${LXD_DIR}/non-optimized/backup/index.yaml" ]
+  [ -d "${LXD_DIR}/non-optimized/backup/volume" ]
+  [ ! -d "${LXD_DIR}/non-optimized/backup/snapshots" ]
+
+  ! grep -q -- '- snap0' "${LXD_DIR}/non-optimized/backup/index.yaml" || false
+
+  rm -rf "${LXD_DIR}/non-optimized/"*
+  rm "${LXD_DIR}/testvol.tar.gz"
+
+  # Create backup with snapshots
+
+  if [ "$lxd_backend" = "btrfs" ] || [ "$lxd_backend" = "zfs" ]; then
+    # Create optimized backup
+    lxc storage volume export "${pool}" testvol "${LXD_DIR}/testvol-optimized.tar.gz" --optimized-storage
+
+    [ -f "${LXD_DIR}/testvol-optimized.tar.gz" ]
+
+    # Extract backup tarball
+    tar -xzf "${LXD_DIR}/testvol-optimized.tar.gz" -C "${LXD_DIR}/optimized"
+
+    [ -f "${LXD_DIR}/optimized/backup/index.yaml" ]
+    [ -f "${LXD_DIR}/optimized/backup/volume.bin" ]
+    [ -f "${LXD_DIR}/optimized/backup/snapshots/snap0.bin" ]
+  fi
+
+  # Create non-optimized backup
+  lxc storage volume export "${pool}" testvol "${LXD_DIR}/testvol.tar.gz"
+
+  [ -f "${LXD_DIR}/testvol.tar.gz" ]
+
+  # Extract backup tarball
+  tar -xzf "${LXD_DIR}/testvol.tar.gz" -C "${LXD_DIR}/non-optimized"
+
+  # Check tarball content
+  [ -f "${LXD_DIR}/non-optimized/backup/index.yaml" ]
+  [ -d "${LXD_DIR}/non-optimized/backup/volume" ]
+  [ -d "${LXD_DIR}/non-optimized/backup/snapshots/snap0" ]
+
+  grep -q -- '- snap0' "${LXD_DIR}/non-optimized/backup/index.yaml"
+
+  # clean up
+  rm -rf "${LXD_DIR}/non-optimized/"* "${LXD_DIR}/optimized/"*
+  lxc storage volume detach "${pool}" testvol c1
+  lxc storage volume rm "${pool}" testvol
+  lxc rm -f c1
+}
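The directory layout the test asserts (a `backup/index.yaml`, a `backup/volume` tree, and per-snapshot directories under `backup/snapshots`) can be illustrated with a small, self-contained sketch that assembles a mock non-optimized backup tarball in memory and lists its entries. File names and contents here are illustrative only, taken from the test's own checks:

```go
package main

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// buildBackupTarball assembles an in-memory gzip'd tarball mimicking the
// non-optimized layout checked by the test above: index.yaml, a volume/
// directory, and one snapshot directory.
func buildBackupTarball() []byte {
	var buf bytes.Buffer
	gz := gzip.NewWriter(&buf)
	tw := tar.NewWriter(gz)

	entries := []struct {
		name string
		body string
		dir  bool
	}{
		{"backup/index.yaml", "name: testvol\nsnapshots:\n- snap0\n", false},
		{"backup/volume/", "", true},
		{"backup/volume/test", "bar\n", false},
		{"backup/snapshots/snap0/", "", true},
		{"backup/snapshots/snap0/test", "foo\n", false},
	}
	for _, e := range entries {
		hdr := &tar.Header{Name: e.name, Mode: 0644, Size: int64(len(e.body))}
		if e.dir {
			hdr.Typeflag = tar.TypeDir
			hdr.Mode = 0755
		}
		if err := tw.WriteHeader(hdr); err != nil {
			panic(err)
		}
		if !e.dir {
			if _, err := io.WriteString(tw, e.body); err != nil {
				panic(err)
			}
		}
	}
	tw.Close()
	gz.Close()
	return buf.Bytes()
}

// listTarball returns the entry names found in a gzip'd tarball, in order.
func listTarball(data []byte) []string {
	gz, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		panic(err)
	}
	tr := tar.NewReader(gz)
	var names []string
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		names = append(names, hdr.Name)
	}
	return names
}

func main() {
	for _, n := range listTarball(buildBackupTarball()) {
		fmt.Println(n)
	}
}
```

This mirrors why the test greps `index.yaml` for `- snap0`: the snapshot list lives in the index, while snapshot data sits in its own subtree, so a `--volume-only` export simply omits both.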

