[lxc-users] zfs disk usage for published lxd images
Ron Kelley
rkelleyrtp at gmail.com
Mon May 16 14:21:05 UTC 2016
Thanks for that. Honestly, the only issue I have seen thus far is the out-of-space condition caused by metadata exhaustion. This happens frequently on our backup servers (55TB with thousands of snapshots). We have plenty of disk space available, but the metadata space is always >80% full no matter how much I remove/clean. Looks like I need to upgrade to the latest 4.6 mainline kernel and see what happens.
From my experience, btrfs is much better than zfs for the features I need (snapshots, compression, dedup). My systems don't slow down and don't require nearly as much RAM.
On 5/16/2016 7:46 AM, Tomasz Chmielewski wrote:
> I've been using btrfs quite a lot and it's great technology. There are
> some shortcomings though:
>
> 1) compression only really works with compress-force mount argument
>
> On a system which only stores text logs (receiving remote rsyslog logs),
> I was gaining around 10% with compress=zlib mount argument - not that
> great for text files/logs. With compress-force=zlib, I'm getting over
> 85% compress ratio (i.e. using just 165 GB of disk space to store 1.2 TB
> data). Maybe that's the consequence of receiving log streams, not sure
> (but, compress-force fixed bad compression ratio).
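For reference, the difference is just the mount option; a sketch with a hypothetical device and mount point (needs root and a btrfs filesystem, so treat this as a config fragment):

```shell
# compress=zlib lets btrfs skip compression when its early heuristic
# decides the data is incompressible -- log streams can defeat it.
mount -o compress=zlib /dev/sdb1 /srv/logs

# compress-force=zlib compresses unconditionally, which is what gave
# the >85% ratio described above.
mount -o remount,compress-force=zlib /srv/logs
```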
>
>
> 2) the first kernel where I'm not getting out-of-space issues is 4.6
> (which was released yesterday). If you're using a distribution kernel,
> you will probably be seeing out-of-space issues. A likely scenario for
> hitting out-of-space with a kernel older than 4.6 is running a database
> (postgresql, mongo, etc.) and snapshotting the volume. Ubuntu users can
> download kernel packages from
> http://kernel.ubuntu.com/~kernel-ppa/mainline/
>
>
> 3) had some really bad experiences with btrfs quota stability in older
> kernels, and judging by the amount of changes in this area on the
> linux-btrfs mailing list, I'd rather wait a few stable kernel releases
> before using it again
>
>
> 4) if you use databases, you should chattr +C the database dir; otherwise,
> performance will suffer. Please remember that chattr +C does not have
> any effect on existing files, so you might need to stop your database,
> copy the files out, chattr +C the database dir, and copy the files back in
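A sketch of that migration for PostgreSQL; the service name and data path are assumptions, adjust for your setup (run as root, offline):

```shell
systemctl stop postgresql                # stop all writers first
mv /var/lib/postgresql /var/lib/postgresql.old
mkdir /var/lib/postgresql
chattr +C /var/lib/postgresql            # NOCOW only applies to files created afterwards
cp -a /var/lib/postgresql.old/. /var/lib/postgresql/   # new copies inherit +C
systemctl start postgresql
# remove /var/lib/postgresql.old once you have verified the data
```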
>
>
> Other than that - works fine, snapshots are very useful.
>
> It's hard for me to say what's "more stable" on Linux (btrfs or zfs); my
> bets would be btrfs getting more attention in the coming year, as it's
> getting its remaining bugs fixed.
>
>
> Tomasz Chmielewski
> http://wpkg.org
>
>
>
>
> On 2016-05-16 20:20, Ron Kelley wrote:
>> I tried ZFS on various linux/FreeBSD builds in the past and the
>> performance was awful. It simply required too much RAM to perform
>> properly. This is why I went the BTRFS route.
>>
>> Maybe I should look at ZFS again on Ubuntu 16.04...
>>
>>
>>
>> On 5/16/2016 6:59 AM, Fajar A. Nugraha wrote:
>>> On Mon, May 16, 2016 at 5:38 PM, Ron Kelley <rkelleyrtp at gmail.com>
>>> wrote:
>>>> For what it's worth, I use BTRFS, and it works great.
>>>
>>> Btrfs also works in nested lxd, so if that's your primary use I highly
>>> recommend btrfs.
>>>
>>> Of course, you could also keep using zfs-backed containers, but
>>> manually assign a zvol-formatted-as-btrfs for first-level-container's
>>> /var/lib/lxd.
>>>
>>>> Container copies are almost instant. I can use compression with
>>>> minimal overhead,
>>>
>>> zfs and btrfs are almost identical in that aspect (snapshot/clone, and
>>> lz4 vs lzop in compression time and ratio). However, lz4 (used in zfs)
>>> is MUCH faster at decompression compared to lzop (used in btrfs),
>>> while lzop uses less memory.
>>>
>>>> use quotas to limit container disk space,
>>>
>>> zfs does that too
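On the btrfs side, per-subvolume quotas look roughly like this; the LXD path and container name are hypothetical, and the commands need root on a btrfs filesystem:

```shell
btrfs quota enable /var/lib/lxd
# cap a container's subvolume at 10 GiB
btrfs qgroup limit 10G /var/lib/lxd/containers/mycontainer
# inspect current usage and limits per qgroup
btrfs qgroup show /var/lib/lxd
```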
>>>
>>>> and can schedule a deduplication task via cron to save even more space.
>>>
>>> That is, indeed, only available in btrfs
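A minimal cron sketch for that, using duperemove (one of the offline dedup tools for btrfs); the path and hashfile location are assumptions:

```shell
#!/bin/sh
# /etc/cron.weekly/btrfs-dedup (hypothetical location)
# -d : actually submit dedup requests (otherwise only reports)
# -r : recurse into subdirectories
# --hashfile caches block checksums so repeat runs are cheaper
duperemove -dr --hashfile=/var/cache/duperemove.hash /var/lib/lxd
```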
>>>
>> _______________________________________________
>> lxc-users mailing list
>> lxc-users at lists.linuxcontainers.org
>> http://lists.linuxcontainers.org/listinfo/lxc-users
>