<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sat, Dec 3, 2016 at 7:56 PM, Ron Kelley <span dir="ltr"><<a href="mailto:rkelleyrtp@gmail.com" target="_blank">rkelleyrtp@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div style="color:black">
<div style="color:black">
<p style="margin:0px 0px 1em;color:black">My $0.02</p>
<p style="margin:0px 0px 1em;color:black">We have been using btrfs in
production for more than a year on other projects and about six months with
LXD. It has been rock solid. I have multiple LXD servers each
with >20 containers. We have a separate btrfs filesystem (with
compression enabled) to store the LXD containers. I take nightly snapshots
for all containers, and each server probably has 2000 snapshots. The only
issue thus far is the IO hit when deleting lots of snapshots at one
time. You need to delete them in batches: delete 10, pause for 60 seconds, then
delete the next 10.</p></div></div></div></blockquote><div><br></div><div>Ultimately, IMHO it comes down to what you're most comfortable with.</div><div><br></div><div>I like the fact that btrfs can be used in nested lxd, but I don't like the fact that you can't get the "disk usage of one container" with btrfs. My compromise so far has been to always use zfs, but assign a btrfs-formatted zvol when I need nested lxd.</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="color:black"><div style="color:black">
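</div></div></div></blockquote><div><br></div><div>The zvol compromise could look roughly like this (a hedged sketch; the pool name "tank", the volume name, the size, and the mountpoint are all placeholders, and the device path follows the usual /dev/zvol/ convention):</div>

```shell
# Hypothetical sketch: back a btrfs filesystem for nested LXD with a zfs zvol.
# Pool name "tank", volume name, size, and mountpoint are placeholders.
zfs create -V 50G tank/nested-lxd        # create a 50GB block device (zvol)
mkfs.btrfs /dev/zvol/tank/nested-lxd     # format the zvol as btrfs
mkdir -p /var/lib/lxd-nested
mount /dev/zvol/tank/nested-lxd /var/lib/lxd-nested
```

<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="color:black"><div style="color:black">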
<p style="margin:0px 0px 1em;color:black">I have used ZFS on Linux in the
past and could never get adequate performance, regardless of tuning or the
amount of RAM given to ZFS. In fact, I started using ZFS for our
backup server (64TB raw storage with 32GB RAM) but had to move back to XFS
due to severe performance issues. Nothing fancy; I did a by-the-book
install and enabled compression and snapshots. I tried every tuning option
available (including an SSD for L2ARC). Nothing improved the
performance.</p></div></div></div></blockquote><div><br></div><div>AFAIK the recommendation is 1GB of RAM (for zfs use) for every 1TB of raw disk, on top of whatever RAM the OS and applications require. Depending on your load, a SLOG might be more useful than L2ARC (in fact, when configured incorrectly, L2ARC can do more harm than good). Testing this is easy enough, though: if you see much better performance with "sync=disabled", then you need a SLOG.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="color:black"><div style="color:black">
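</div></div></div></blockquote><div><br></div><div>That test can be run like this (a sketch; "tank" is a placeholder pool name, and sync=disabled trades data safety on power loss for speed, so revert it once the benchmark is done):</div>

```shell
# Hypothetical sketch: check whether a SLOG would help (pool name "tank"
# is a placeholder). sync=disabled risks losing recent writes on power
# failure, so only use it for a short benchmark.
zfs get sync tank            # note the current setting
zfs set sync=disabled tank   # temporarily skip synchronous writes
# ... run the normal workload and compare performance ...
zfs inherit sync tank        # revert to the inherited/default setting
```

<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="color:black"><div style="color:black">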
<p style="margin:0px 0px 1em;color:black">To the OP: are you sure btrfs
is causing your issues? Have you traced the I/O activity during the
hiccup moments?</p>
</div>
<div style="color:black"><div><div class="gmail-h5">
<p style="color:black;font-size:10pt;font-family:arial,sans-serif;margin:10pt 0px"></p></div></div></div></div></div></blockquote><div>... hence my earlier recommendation: run htop, and check syslog for OOM messages.</div><div><br></div><div>@Pierce: Add "iostat -mx 3" to that (especially to monitor IOPS usage), and also follow Tomasz's advice: don't use a disk image file.</div><div>If your provider doesn't allow additional disk images (or makes it REALLY hard to do so, like many cheap KVM-SSD VPS providers), then I highly recommend you check out EC2: their free tier includes a VPS with 1GB of RAM, and you can easily attach additional block devices.<br></div><div><br></div><div>-- </div><div>Fajar</div><div><br></div></div></div></div>