[lxc-users] not allowed to change kernel parameters inside container

Stéphane Graber stgraber at ubuntu.com
Sun May 26 01:52:30 UTC 2019


So for the missing ones, there's really nothing you can do about them;
normally, though, that shouldn't cause applying the sysctls to fail, as it's
fairly common for systems to have a different set of sysctls.

In this case, it's because the network namespace is filtering some of them.

If your container doesn't need isolated networking, using the host's
namespace for the network would in theory cause those to show back up, but
note that sharing a network namespace with the host can have some very
weird side effects (such as systemd in the container interacting with
the host's).
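
As a rough sketch of that option (assuming LXC 3.x config keys; the path to
the container config is the usual /var/lib/lxc/<name>/config and is only
illustrative), sharing the host's network namespace is done by declaring a
network device of type "none":

# In the container's config, replace the existing lxc.net.0.* entries with:
# type "none" shares the host's network namespace, so the netns-filtered
# sysctls under /proc/sys/net/ become visible in the container again.
lxc.net.0.type = none

With a shared namespace those sysctls are no longer per-container, so writing
them from inside the container changes them for the host as well.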

On Sat, May 25, 2019 at 09:36:25PM -0400, Saint Michael wrote:
> some things do not work inside the container
>  sysctl -p
> fs.aio-max-nr = 1048576
> fs.aio-max-nr = 655360
> fs.inotify.max_user_instances = 8192
> kernel.pty.max = 16120
> kernel.randomize_va_space = 1
> kernel.shmall = 4294967296
> kernel.shmmax = 990896795648
> net.ipv4.conf.all.arp_announce = 2
> net.ipv4.conf.all.arp_filter = 1
> net.ipv4.conf.all.arp_ignore = 1
> net.ipv4.conf.all.rp_filter = 1
> net.ipv4.conf.default.accept_source_route = 0
> net.ipv4.conf.default.arp_filter = 1
> net.ipv4.conf.default.rp_filter = 1
> net.ipv4.ip_forward = 1
> net.ipv4.ip_local_port_range = 5000 65535
> net.ipv4.ip_nonlocal_bind = 0
> net.ipv4.ip_no_pmtu_disc = 0
> net.ipv4.tcp_tw_reuse = 1
> vm.hugepages_treat_as_movable = 0
> vm.hugetlb_shm_group = 128
> vm.nr_hugepages = 250
> vm.nr_hugepages_mempolicy = 250
> vm.overcommit_memory = 0
> vm.swappiness = 0
> vm.vfs_cache_pressure = 150
> vm.dirty_ratio = 10
> vm.dirty_background_ratio = 5
> kernel.hung_task_timeout_secs = 0
> sysctl: cannot stat /proc/sys/net/core/rmem_max: No such file or directory
> sysctl: cannot stat /proc/sys/net/core/wmem_max: No such file or directory
> sysctl: cannot stat /proc/sys/net/core/rmem_default: No such file or directory
> sysctl: cannot stat /proc/sys/net/core/wmem_default: No such file or directory
> net.ipv4.tcp_rmem = 10240 87380 10485760
> net.ipv4.tcp_wmem = 10240 87380 10485760
> sysctl: cannot stat /proc/sys/net/ipv4/udp_rmem_min: No such file or directory
> sysctl: cannot stat /proc/sys/net/ipv4/udp_wmem_min: No such file or directory
> sysctl: cannot stat /proc/sys/net/ipv4/udp_mem: No such file or directory
> sysctl: cannot stat /proc/sys/net/ipv4/tcp_mem: No such file or directory
> sysctl: cannot stat /proc/sys/net/core/optmem_max: No such file or directory
> net.core.somaxconn = 65535
> sysctl: cannot stat /proc/sys/net/core/netdev_max_backlog: No such file or directory
> fs.file-max = 500000
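
One generic way to keep the rest of such a file applying despite the entries
hidden by the network namespace (plain procps sysctl behaviour, nothing
LXC-specific) is to ignore unknown keys:

# -e / --ignore skips keys that don't exist in this namespace instead of
# erroring out on them; the remaining settings are still applied.
sysctl -e -p

This doesn't make the missing sysctls settable, it only silences the
"cannot stat" failures for them.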
> 
> 
> On Sat, May 25, 2019 at 9:28 PM Saint Michael <venefax at gmail.com> wrote:
> 
> > Thanks
> > Finally some help!
> >
> > On Sat, May 25, 2019 at 9:07 PM Stéphane Graber <stgraber at ubuntu.com>
> > wrote:
> >
> >> On Sat, May 25, 2019 at 02:02:59PM -0400, Saint Michael wrote:
> >> > Thanks to all. I am sorry I touched a heated point. For me using
> >> > hard-virtualization for Linux apps is dementia. It should be kept only
> >> > for Windows VMs.
> >> > For me, the single point of using LXC is to be able to redeploy a
> >> > complex app from host to host in a few minutes. I use
> >> > one-host->one-Container. So what is the issue of giving all power to
> >> > the containers?
> >> >
> >> > > On Sat, May 25, 2019 at 1:56 PM jjs - mainphrame <jjs at mainphrame.com>
> >> > > wrote:
> >> >
> >> > > Given the developers stance, perhaps a temporary workaround is in
> >> > > order, e.g. ssh-key root login to physical host e.g. "ssh <host> sysctl
> >> > > key=value..."
> >> > >
> >> > > Jake
> >> > >
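
A minimal sketch of that workaround, with "host1" standing in for the real
hostname and key-based root SSH already set up (the value is just one taken
from the sysctl.conf output quoted earlier):

# Run from inside the container (or wherever the key lives); the sysctl
# is applied on the physical host rather than in the container.
ssh root@host1 sysctl -w kernel.shmmax=990896795648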
> >> > > On Mon, May 20, 2019 at 9:25 AM Saint Michael <venefax at gmail.com>
> >> > > wrote:
> >> > >
> >> > >> I am trying to use sysctl -p inside an LXC container and it says
> >> > >> "read only file system".
> >> > >> How do I give my container all possible rights?
> >> > >> Right now I have
> >> > >>
> >> > >> lxc.mount.auto = cgroup:mixed
> >> > >> lxc.tty.max = 10
> >> > >> lxc.pty.max = 1024
> >> > >> lxc.cgroup.devices.allow = c 1:3 rwm
> >> > >> lxc.cgroup.devices.allow = c 1:5 rwm
> >> > >> lxc.cgroup.devices.allow = c 5:1 rwm
> >> > >> lxc.cgroup.devices.allow = c 5:0 rwm
> >> > >> lxc.cgroup.devices.allow = c 4:0 rwm
> >> > >> lxc.cgroup.devices.allow = c 4:1 rwm
> >> > >> lxc.cgroup.devices.allow = c 1:9 rwm
> >> > >> lxc.cgroup.devices.allow = c 1:8 rwm
> >> > >> lxc.cgroup.devices.allow = c 136:* rwm
> >> > >> lxc.cgroup.devices.allow = c 5:2 rwm
> >> > >> lxc.cgroup.devices.allow = c 254:0 rwm
> >> > >> lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
> >> > >> lxc.cgroup.devices.allow = b 7:* rwm    # loop*
> >> > >> lxc.cgroup.devices.allow = c 10:229 rwm #fuse
> >> > >> lxc.cgroup.devices.allow = c 10:200 rwm #docker
> >> > >> #lxc.cgroup.memory.limit_in_bytes = 92536870910
> >> > >> lxc.apparmor.profile= unconfined
> >> > >> lxc.cgroup.devices.allow= a
> >> > >> lxc.cap.drop=
> >> > >> lxc.cgroup.devices.deny=
> >> > >> #lxc.mount.auto= proc:rw sys:ro cgroup:ro
> >> > >> lxc.autodev= 1
> >>
> >> Set:
> >>
> >> lxc.mount.auto=
> >> lxc.mount.auto=proc:rw sys:rw cgroup:rw
> >> lxc.apparmor.profile=unconfined
> >>
> >>
> >> For a privileged container, this should allow all writes through /proc
> >> and /sys.
> >> As some pointed out, it's not usually a good idea for a container, but
> >> given it's the only thing on your system, that may be fine.
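
One possible way to apply and check this (the container name "ct1" is
hypothetical, and the commands assume classic LXC tooling rather than LXD):

# On the host: restart the container so the new mount/apparmor settings
# take effect.
lxc-stop -n ct1
lxc-start -n ct1

# A write that previously failed with "read only file system" should now
# succeed, at least for sysctls not hidden by the network namespace.
lxc-attach -n ct1 -- sysctl -w kernel.shmmax=990896795648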
> >>
> >> --
> >> Stéphane Graber
> >> Ubuntu developer
> >> http://www.ubuntu.com
> >



-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com