[lxc-users] User input on resource limits for containers
Stéphane Graber
stgraber at ubuntu.com
Wed Aug 5 15:53:46 UTC 2015
Hello,
The LXD team is currently busy working on resource limitations and reporting.
The goal is to design a user-friendly experience around CPU, memory and
I/O limits that doesn't require any specific understanding of the
implementation (cgroup knobs, ...).
As we work through ideas, it would be very useful for us to know how
LXC users currently use resource limits (lxc.cgroup.*, ...), and what
is and isn't working for you, so we can improve things as much as
possible.
Here are a few questions to try and get things going. Please don't feel
limited to those though, any feedback is appreciated!
- Are you using resource limits with LXC?
- What kind of resource limits are you setting (cpu, memory, I/O, ...)?
- Are you updating the resource limits of running containers?
- Are you reading the current resource usage of your containers?
- Are you using resource limits only to prevent containers from using
  all the host resources, or as a way to provide different tiers of
  containers, some faster than others?
- Would percentage based limits (percentage of the host resources) be
useful to you?
- Are you using the cpuset controller only as a way to limit the number of
CPUs exposed to the container or is pinning to specific physical CPUs
actually important to you?
- Would you be interested in being able to limit network IOps and
bandwidth for a container?
- Is the split between memory, swap and kernel memory useful to you?
- Would you like a way to prevent overprovisioning, causing container
  failure if the stated resource limits exceed what's available on the
  host?
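For reference, the kind of limits we're asking about are the raw
cgroup knobs you can set today in a container's config. A typical
setup might look something like this (the values here are only
illustrative):

```
# Illustrative cgroup v1 limits in an LXC container config
# (all values made up for the example):
lxc.cgroup.memory.limit_in_bytes = 512M       # memory cap
lxc.cgroup.memory.memsw.limit_in_bytes = 1G   # memory + swap cap
lxc.cgroup.cpu.shares = 512                   # relative CPU weight
lxc.cgroup.cpuset.cpus = 0-1                  # pin to CPUs 0 and 1
lxc.cgroup.blkio.weight = 500                 # relative I/O weight
```

If you're doing something along these lines (or something quite
different), we'd love to hear the details.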
Thanks!
--
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com