[lxc-users] Container scaling - LXD 2.0

Umberto Nicoletti umberto.nicoletti at gmail.com
Mon May 9 17:42:05 UTC 2016


On Mon, May 9, 2016 at 6:50 PM, Fajar A. Nugraha <list at fajar.net> wrote:

> On Mon, May 9, 2016 at 11:49 PM, Ron Kelley <rkelleyrtp at gmail.com> wrote:
> > Thanks Fajar,
> >
> > Appreciate the pointers.  We have already set up MariaDB with the
> small-instance tuning and configured php-fpm with the on-demand option.
> The big issue now is RAM.
> >
> > A brief background:
> > ---------------------
> > A few years back, one of our customers asked us to host a small website
> for them.  As word got out, we started hosting a few more.  Fast forward a
> few years and we are now hosting > 1300 sites.  We are currently running
> monolithic VMs (2vCPUs 2G RAM) that host about 60-70 sites each, and we are
> looking to move away from these huge VMs to something more scalable and
> secure like LXC.  The downside to this approach is the extra RAM overhead
> since each container will run its own copy of nginx/php-fpm/mariadb (for
> ease of portability).
>
>
> Separate mysql for each site would always require more RAM compared to
> a shared setup. If your goal is to be as resource-efficient as a
> shared hosting setup, then AFAIK there's simply no way to achieve
> that. There's always a tradeoff.
>
>
> >
> > After doing some research, it seems KSM is available in the Ubuntu 16.04
> kernel but is disabled by default.  I will be running some tests over the
> next few days to see if KSM can provide any benefit.
>
> Not only is it disabled by default, but you won't be able to save any
> memory from a "normal" (i.e. non-KVM) app unless you also use
> ksm_preload.
>
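To make Fajar's ksm_preload point concrete, the setup would look roughly like this. This is a sketch only: the library path is an assumption (it depends on where you built/installed ksm_preload), and the key fact is that KSM only merges pages an application has marked with madvise(MADV_MERGEABLE), which is what ksm_preload does on the app's behalf.

```shell
# Enable the KSM daemon (disabled by default on Ubuntu 16.04)
echo 1 | sudo tee /sys/kernel/mm/ksm/run

# Run a "normal" (non-KVM) app with ksm_preload so its allocations are
# marked MADV_MERGEABLE; the .so path below is an assumption and depends
# on where you installed ksm_preload
LD_PRELOAD=/usr/local/lib/libksm_preload.so php-fpm7.0 --nodaemonize
```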

FYI: CentOS ships with ksmd enabled by default in the Virtualization Host
profile.
I just checked one KVM server: out of ~34GB RAM used by 9 VMs, KSM reports
a saving of only ~80MB.

Nothing to write home about, IMHO.
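For anyone who wants to repeat the check: a minimal sketch of how the saving can be computed from the KSM counters in /sys, assuming the usual 4 KiB page size (on my box above, pages_sharing was around 20k pages):

```shell
# Each entry in pages_sharing is a 4 KiB page that would otherwise be
# duplicated, so savings (MB) ~= pages_sharing * 4096 / 2^20
ksm_saved_mb() {
    echo $(( $1 * 4096 / 1024 / 1024 ))
}

# On a live host you would feed it the real counter:
#   ksm_saved_mb "$(cat /sys/kernel/mm/ksm/pages_sharing)"
ksm_saved_mb 20480   # prints 80
```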

Umberto


> >  As for the 5G RAM question; our proposed model is to run a large VM
> instance (5-8G RAM, 4-6vCPUs) to host the same (or more) sites via LXC
> containers.  We are looking to protect each site from the others as well as
> provide more fine-tuned system resources per site (limit RAM/CPU per
> site).  This is our main driver behind LXC.
>
> Here are some other scenarios you can look at, as alternatives:
> - use a big-enough dedicated server (or a big-enough VM, if you're a
> fan of EC2 and friends). There are 4-core servers with 64GB RAM and
> 2x500GB SSDs available for less than $100/mo, if you want to go the
> low-cost route. Use NO swap (so you can at least eliminate THAT as a
> possible IO hog), and simply use lxd to serve several hundred
> containers on each server. Let each container use whichever cores are
> available (you'd most likely want to limit each container to at most
> one core). Use zfs as lxd storage, and set up automatic
> snapshot+send/receive for remote backup (this one is mostly zfs, not
> really lxd-specific).
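A sketch of the storage and backup part of that first scenario. The pool name, container name, snapshot dates, and backup host are all placeholders; the zfs send/receive part is plain zfs, as Fajar says, and assumes the backup host already holds the earlier snapshot for the incremental to apply against:

```shell
# Point LXD 2.0 at a zfs pool for container storage
lxd init --storage-backend zfs --storage-pool lxd

# Cap each container at one core, per the suggestion above
lxc config set web001 limits.cpu 1

# Periodic remote backup: snapshot the containers dataset recursively,
# then send the increment since yesterday's snapshot to another box
zfs snapshot -r lxd/containers@2016-05-09
zfs send -R -i lxd/containers@2016-05-08 lxd/containers@2016-05-09 \
  | ssh backuphost zfs receive -F backup/lxd-containers
```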
>
> - similar to the above, but group the containers into several groups.
> Create a big-enough container for each group, and then use each of
> them as a nested lxd host (e.g.
> https://insights.ubuntu.com/2016/04/15/lxd-2-0-lxd-in-lxd-812/). That
> way you can group small, rarely-accessed sites together (in the
> same first-level container) and give them a large-enough resource pool
> (including being able to use all available cores), and isolate the
> small number of "abusive" sites in another first-level container (on
> its own specific cores) so they don't mess with your other sites.
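A rough sketch of the grouping idea. Container names and limit values here are made up for illustration; the key LXD 2.0 pieces are security.nesting (so a container can run lxd itself) and limits.cpu:

```shell
# A first-level container per group; allow nesting so it can run lxd
lxc launch ubuntu:16.04 shared-group -c security.nesting=true
lxc config set shared-group limits.memory 6GB

# Isolate the abusive sites in their own group, pinned to one core
# ("3-3" pins to cpu 3; a bare number would mean a core *count* instead)
lxc launch ubuntu:16.04 abusive-group -c security.nesting=true
lxc config set abusive-group limits.cpu 3-3

# Inside each first-level container, set up lxd and launch the
# per-site containers as usual (see the lxd-in-lxd post above)
lxc exec shared-group -- lxd init --auto
```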
>
>
> Those two setups should be more efficient than lxd-on-VMs, and you
> might not even need KSM anymore.
>
> --
> Fajar
>
>
> >
> >
> > Thanks again for the info.
> >
> > -Ron
> >
> >
> >
> > On 5/9/2016 12:48 AM, Fajar A. Nugraha wrote:
> >> On Mon, May 9, 2016 at 7:18 AM, Ronald Kelley <rkelleyrtp at gmail.com>
> wrote:
> >>> Greetings all,
> >>>
> >>> I am trying to get some data points on how many containers we can run
> on a single host if all the containers run the same applications (eg:
> Wordpress w/nginx, php-fpm, mysql).  We have a number of LXD 2.0 servers
> running on Ubuntu 16.04 - each server has 5G RAM, 20G Swap, and 4 CPUs.
> >>
> >> When you use lxd you can already "overprovision" (as in, the sum of
> >> "limits.memory" on all running containers can be MUCH greater than
> >> total memory you have). See
> >>
> https://insights.ubuntu.com/2015/06/11/how-many-containers-can-you-run-on-your-machine/
> >> for example.
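In concrete terms, the per-container cap is just a config key, and nothing stops the sum across containers from exceeding physical RAM; it works as long as the actual working sets stay small. Values below are illustrative:

```shell
# 256 MB per site: forty such containers "commit" 10 GB on a 5 GB host
lxc config set site001 limits.memory 256MB

# Or bake the cap into a profile applied to every site container
lxc profile set default limits.memory 256MB
```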
> >>
> >> I can say that swapping will -- most of the time -- kill performance.
> >> Big time. Often to the point that it'd be hard to even ssh into the
> >> server to "fix" things. Which is why most of my servers are now
> >> swapless. YMMV though.
> >>
> >> Do some experiments, monitor your swap activity (e.g. use "vmstat" to
> >> monitor swap in and swap out), and determine whether swap actually
> >> helps you, or causes more harm than good.
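The vmstat check might look like this; the column positions assume procps vmstat output, where fields 7 and 8 are si (swap-in) and so (swap-out) in KB/s. Occasional blips are fine; sustained nonzero values mean the host is thrashing:

```shell
# Sample swap activity every 5 seconds, 12 times; print only the
# intervals where pages were actually swapped in or out
vmstat 5 12 | awk 'NR > 2 && ($7 + $8) > 0 {
    print "swapping: si=" $7 " KB/s, so=" $8 " KB/s"
}'
```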
> >>
> >> Also, what's the story with the 5G RAM? Even my NUCs has 32GB RAM
> nowadays.
> >>
> >>> I have read about Kernel Samepage Merging (KSM), and it seems to be
> included in the Ubuntu 16.04 kernel.  So, in theory, we can overprovision
> our containers by using KSM.
> >>>
> >>>
> >>> Any pointers?
> >>
> >> I'd actually suggest "try other methods first". For example:
> >> - you can easily save some memory from php-fpm by using "pm =
> >> ondemand" and a small number in "pm.max_children" (e.g. 2).
> >> - use a shared mysql instance when possible. If not, use smaller memory
> >> settings, e.g.
> http://www.tocker.ca/2014/03/10/configuring-mysql-to-use-minimal-memory.html
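Concretely, the two tunings above might look like this. The values are illustrative starting points rather than recommendations, and the file paths vary by distro:

```ini
; /etc/php/7.0/fpm/pool.d/www.conf -- spawn workers only when a request
; arrives, and keep the per-site ceiling low
pm = ondemand
pm.max_children = 2
pm.process_idle_timeout = 10s

; /etc/mysql/conf.d/minimal.cnf -- shrink mysql's big buffers
; (per the tocker.ca article linked above)
[mysqld]
innodb_buffer_pool_size = 16M
performance_schema = OFF
key_buffer_size = 4M
max_connections = 25
```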
> >>
> >> This entry from openvz should be relevant if you still want to use KSM
> >> for generic applications running inside a container:
> >>
> https://openvz.org/KSM_(kernel_same-page_merging)#Enabling_memory_deduplication_in_applications
> >>
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users
>