[lxc-users] not allowed to change kernel parameters inside container

Saint Michael venefax at gmail.com
Sun May 26 02:15:06 UTC 2019


I am fine with having full interaction with the host. The host does not do
anything itself; it is like a glove for my app, which uses UDP very heavily,
around 500 Mbit/s. I need to be able to fine-tune all of its parameters.
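For reference, the knobs in question are mostly the global socket-buffer
sysctls under net.core and net.ipv4; a rough sketch of what tuning them looks
like, with purely illustrative values:

    # Run these on the host; inside the container's network namespace they
    # are not exposed (see the "cannot stat" errors further down the thread).
    sysctl -w net.core.rmem_max=26214400
    sysctl -w net.core.wmem_max=26214400
    sysctl -w net.ipv4.udp_rmem_min=8192
    sysctl -w net.ipv4.udp_wmem_min=8192
    sysctl -w net.ipv4.udp_mem="102400 873800 16777216"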



On Sat, May 25, 2019 at 9:52 PM Stéphane Graber <stgraber at ubuntu.com> wrote:

> There's really nothing you can do about the missing ones, though normally
> that shouldn't cause the sysctl run to fail, as it's fairly common for
> systems to have a different set of sysctls.
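In practice, sysctl can also be told to skip unknown keys instead of erroring
out, e.g. (a minimal sketch):

    # -e tells sysctl to ignore errors about unknown keys and apply the rest
    sysctl -e -p /etc/sysctl.conf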
>
> In this case, it's because the network namespace is filtering some of them.
>
> If your container doesn't need isolated networking, then in theory using
> the host's namespace for the network would make those show back up, but
> note that sharing the network namespace with the host may have some very
> weird side effects (such as systemd in the container interacting with
> the host's).
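A minimal sketch of what that would look like in the container config,
assuming the LXC 3.x key names used elsewhere in this thread:

    # Share the host's network namespace instead of creating an isolated one;
    # the filtered net.* sysctls show up again, with the caveats noted above.
    lxc.net.0.type = none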
>
> On Sat, May 25, 2019 at 09:36:25PM -0400, Saint Michael wrote:
> > some things do not work inside the container
> >  sysctl -p
> > fs.aio-max-nr = 1048576
> > fs.aio-max-nr = 655360
> > fs.inotify.max_user_instances = 8192
> > kernel.pty.max = 16120
> > kernel.randomize_va_space = 1
> > kernel.shmall = 4294967296
> > kernel.shmmax = 990896795648
> > net.ipv4.conf.all.arp_announce = 2
> > net.ipv4.conf.all.arp_filter = 1
> > net.ipv4.conf.all.arp_ignore = 1
> > net.ipv4.conf.all.rp_filter = 1
> > net.ipv4.conf.default.accept_source_route = 0
> > net.ipv4.conf.default.arp_filter = 1
> > net.ipv4.conf.default.rp_filter = 1
> > net.ipv4.ip_forward = 1
> > net.ipv4.ip_local_port_range = 5000 65535
> > net.ipv4.ip_nonlocal_bind = 0
> > net.ipv4.ip_no_pmtu_disc = 0
> > net.ipv4.tcp_tw_reuse = 1
> > vm.hugepages_treat_as_movable = 0
> > vm.hugetlb_shm_group = 128
> > vm.nr_hugepages = 250
> > vm.nr_hugepages_mempolicy = 250
> > vm.overcommit_memory = 0
> > vm.swappiness = 0
> > vm.vfs_cache_pressure = 150
> > vm.dirty_ratio = 10
> > vm.dirty_background_ratio = 5
> > kernel.hung_task_timeout_secs = 0
> > sysctl: cannot stat /proc/sys/net/core/rmem_max: No such file or directory
> > sysctl: cannot stat /proc/sys/net/core/wmem_max: No such file or directory
> > sysctl: cannot stat /proc/sys/net/core/rmem_default: No such file or directory
> > sysctl: cannot stat /proc/sys/net/core/wmem_default: No such file or directory
> > net.ipv4.tcp_rmem = 10240 87380 10485760
> > net.ipv4.tcp_wmem = 10240 87380 10485760
> > sysctl: cannot stat /proc/sys/net/ipv4/udp_rmem_min: No such file or directory
> > sysctl: cannot stat /proc/sys/net/ipv4/udp_wmem_min: No such file or directory
> > sysctl: cannot stat /proc/sys/net/ipv4/udp_mem: No such file or directory
> > sysctl: cannot stat /proc/sys/net/ipv4/tcp_mem: No such file or directory
> > sysctl: cannot stat /proc/sys/net/core/optmem_max: No such file or directory
> > net.core.somaxconn = 65535
> > sysctl: cannot stat /proc/sys/net/core/netdev_max_backlog: No such file or directory
> > fs.file-max = 500000
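A quick way to see which of these keys the container's network namespace
exposes at all, rather than finding out from sysctl -p failures, is something
like:

    # Inside the container: list what the namespace actually provides.
    ls /proc/sys/net/core/ 2>/dev/null
    sysctl -a 2>/dev/null | grep -E '^net\.(core|ipv4\.udp)'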
> >
> >
> > On Sat, May 25, 2019 at 9:28 PM Saint Michael <venefax at gmail.com> wrote:
> >
> > > Thanks
> > > Finally some help!
> > >
> > > On Sat, May 25, 2019 at 9:07 PM Stéphane Graber <stgraber at ubuntu.com>
> > > wrote:
> > >
> > >> On Sat, May 25, 2019 at 02:02:59PM -0400, Saint Michael wrote:
> > >> > Thanks to all. I am sorry I touched a heated point. For me, using
> > >> > hard virtualization for Linux apps is dementia; it should be kept
> > >> > only for Windows VMs.
> > >> > For me, the whole point of using LXC is to be able to redeploy a
> > >> > complex app from host to host in a few minutes. I use one host ->
> > >> > one container. So what is the issue with giving all power to the
> > >> > containers?
> > >> >
> > >> > On Sat, May 25, 2019 at 1:56 PM jjs - mainphrame <jjs at mainphrame.com>
> > >> > wrote:
> > >> >
> > >> > > Given the developers' stance, perhaps a temporary workaround is in
> > >> > > order, e.g. ssh-key root login to the physical host, e.g. "ssh <host>
> > >> > > sysctl key=value..."
> > >> > >
> > >> > > Jake
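A rough sketch of that workaround, assuming key-based root SSH to the
physical host and purely illustrative key/value pairs:

    # Run from wherever the container is managed; <host> is the physical host.
    ssh root@<host> sysctl -w net.core.rmem_max=26214400 net.core.wmem_max=26214400
    # or re-apply the host's whole sysctl configuration:
    ssh root@<host> sysctl -p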
> > >> > >
> > >> > > On Mon, May 20, 2019 at 9:25 AM Saint Michael <venefax at gmail.com>
> > >> > > wrote:
> > >> > >
> > >> > >> I am trying to use sysctl -p inside an LXC container, and it says
> > >> > >> "read-only file system". How do I give my container all possible
> > >> > >> rights? Right now I have:
> > >> > >>
> > >> > >> lxc.mount.auto = cgroup:mixed
> > >> > >> lxc.tty.max = 10
> > >> > >> lxc.pty.max = 1024
> > >> > >> lxc.cgroup.devices.allow = c 1:3 rwm
> > >> > >> lxc.cgroup.devices.allow = c 1:5 rwm
> > >> > >> lxc.cgroup.devices.allow = c 5:1 rwm
> > >> > >> lxc.cgroup.devices.allow = c 5:0 rwm
> > >> > >> lxc.cgroup.devices.allow = c 4:0 rwm
> > >> > >> lxc.cgroup.devices.allow = c 4:1 rwm
> > >> > >> lxc.cgroup.devices.allow = c 1:9 rwm
> > >> > >> lxc.cgroup.devices.allow = c 1:8 rwm
> > >> > >> lxc.cgroup.devices.allow = c 136:* rwm
> > >> > >> lxc.cgroup.devices.allow = c 5:2 rwm
> > >> > >> lxc.cgroup.devices.allow = c 254:0 rwm
> > >> > >> lxc.cgroup.devices.allow = c 10:137 rwm # loop-control
> > >> > >> lxc.cgroup.devices.allow = b 7:* rwm    # loop*
> > >> > >> lxc.cgroup.devices.allow = c 10:229 rwm #fuse
> > >> > >> lxc.cgroup.devices.allow = c 10:200 rwm #docker
> > >> > >> #lxc.cgroup.memory.limit_in_bytes = 92536870910
> > >> > >> lxc.apparmor.profile= unconfined
> > >> > >> lxc.cgroup.devices.allow= a
> > >> > >> lxc.cap.drop=
> > >> > >> lxc.cgroup.devices.deny=
> > >> > >> #lxc.mount.auto= proc:rw sys:ro cgroup:ro
> > >> > >> lxc.autodev= 1
> > >>
> > >> Set:
> > >>
> > >> lxc.mount.auto=
> > >> lxc.mount.auto=proc:rw sys:rw cgroup:rw
> > >> lxc.apparmor.profile=unconfined
> > >>
> > >>
> > >> For a privileged container, this should allow all writes through /proc
> > >> and /sys.
> > >> As some pointed out, this is not usually a good idea for a container,
> > >> but given it's the only thing on your system, it may be fine.
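A rough sketch of applying and checking that, with "mycontainer" standing in
for the real container name:

    # After editing the container's config on the host:
    lxc-stop -n mycontainer
    lxc-start -n mycontainer
    lxc-attach -n mycontainer -- sysctl -p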
> > >>
> > >> --
> > >> Stéphane Graber
> > >> Ubuntu developer
> > >> http://www.ubuntu.com
> > >>
> > >
>
>
>
> --
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
>

