[lxc-users] Args for lxd init via script
Fajar A. Nugraha
list at fajar.net
Mon May 22 02:28:28 UTC 2017
On Sun, May 21, 2017 at 10:05 PM, Mark Constable <markc at renta.net> wrote:
> On 5/21/17 11:16 PM, gunnar.wagner wrote:
>> just for my understanding ... you want to monitor disk usage on the
>> LXD host, right?
> Yes but I also want the current disk usage to be available inside the
> container so that, for instance, df returns realistic results.
Have you tried lxd with zfs?
> Using a zfs pool per container works just fine for this purpose but I
> am concerned that having potentially many 100s of zfs pools per server
> may not be very efficient. This sums up what I am after...
Did you mean zfs dataset?
Using a separate POOL per container should be possible in newer lxd (e.g. in
xenial-backports). However, that would also negate some of the benefits of
using zfs, as you'd need a separate block device or loopback file for each
pool.
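To illustrate the overhead, here is a rough sketch of what a pool-per-container scheme would require (paths and names are hypothetical, not from the thread):

```shell
# Each pool needs its own backing store, e.g. a loopback file:
truncate -s 10G /var/lib/lxd-pools/c1.img
zpool create c1pool /var/lib/lxd-pools/c1.img
# ...and this has to be repeated for every container, so free space
# can no longer be shared between containers the way datasets in a
# single pool share it.
```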
Using a default zfs pool, with a separate DATASET (or, to be more accurate,
filesystem) per container, is the default setup. That provides correct disk
usage statistics (e.g. for "df" and such), and it's perfectly normal to have
several hundred or thousand datasets (which would include snapshots as well)
on a single pool.
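A minimal sketch of that default setup, assuming a pool named "lxd" and a container "c1" (lxd normally creates the dataset itself, and the exact dataset path depends on the lxd storage layout):

```shell
zfs create lxd/containers/c1
zfs set quota=5G lxd/containers/c1          # cap the dataset at 5 GB
zfs list -o name,used,avail,quota lxd/containers/c1
# Inside the container, "df -h /" then reports the 5 GB quota as the
# filesystem size, so usage figures look like a dedicated disk.
```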
> For example, PHP's disk_total_space() and disk_free_space()
> functions do work accurately with a zfs pool and seeing that I am
> working towards a LXD plugin for my hosting control panel I really
> need disk limits to work similar to a VPS or Xen VM.
> IOW if I supply 5GB of space to a paying client I need to have a way
> for both them and myself to easily monitor that disk space. It's the
> one thing that has stopped me from using LXD for real. Well that and
> not having an open source PHP control panel that runs on Ubuntu servers.
IIRC ubuntu's roadmap is to integrate lxd (and zfs) into openstack (which
should have lots of control panels already).
In the meantime, your best bet is probably to create your own (possibly based
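For the 5GB-per-client case above, a sketch of how the limit could be applied and checked from both sides, assuming a container "c1" with a container-local root disk device (a profile-defined root device would need to be overridden first):

```shell
# Newer lxd releases can apply the zfs quota through the device config:
lxc config device set c1 root size 5GB
# The client can then verify the cap from inside the container:
lxc exec c1 -- df -h /
# And the host can check usage against the quota directly:
zfs get used,quota lxd/containers/c1
```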