[lxc-users] openvpn in container starts manually, but not via init script?

Chaetoo3 Chaetoo3 at protonmail.com
Thu Jun 21 20:00:39 UTC 2018


Hi,

I've been using lxc for a while and it's really great :).  I like it very much.  I have been moving from privileged (P) to unprivileged (!P) containers, and mostly that has gone fine.  I had a few initial troubles and was able to solve most of them by googling, but one persists:

I have a container that is used to launch openvpn.  I can do this successfully in the container by manually launching the openvpn client, but for some reason it doesn't work through openvpn's init script the way it used to in my P container.  Has anyone else seen this?  Here is some data about it:

In /etc/openvpn I have a file called default-start.conf.  I can start the VPN successfully with "sudo openvpn default-start.conf".  I won't bore you with all the output, but it works: the VPN starts and is 100% functional.  Obviously /dev/net/tun is passed through.
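(For completeness, this is all I do by hand inside the container; the bare filename argument is just shorthand for --config:)

$ cd /etc/openvpn
$ sudo openvpn --config default-start.conf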

Yet I cannot start it via the system service, neither on container start nor by manually (re)starting the service.
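(To be precise, it's the templated unit, with the instance name matching my .conf file; this is the restart I attempt:)

$ sudo systemctl restart openvpn@default-start

When I do, I get these lines in /var/log/syslog.  I've redacted a little bit: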


Jun 21 19:02:49 ubuntu-c1 ovpn-default-start[1507]: OpenVPN 2.4.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 10 2018
Jun 21 19:02:49 ubuntu-c1 ovpn-default-start[1507]: library versions: OpenSSL 1.1.0g  2 Nov 2017, LZO 2.08
Jun 21 19:02:49 ubuntu-c1 systemd[1]: Started OpenVPN connection to default-start.
Jun 21 19:02:49 ubuntu-c1 ovpn-default-start[1507]: WARNING: No server certificate verification method has been enabled.  See http://openvpn.net/howto.html#mitm for more info.
Jun 21 19:02:49 ubuntu-c1 ovpn-default-start[1507]: TCP/UDP: Preserving recently used remote address: [AF_INET]xx.xx.xx.xx:80
Jun 21 19:02:49 ubuntu-c1 ovpn-default-start[1507]: Attempting to establish TCP connection with [AF_INET]xx.xx.xx.xx:80 [nonblock]
Jun 21 19:02:50 ubuntu-c1 ovpn-default-start[1507]: TCP connection established with [AF_INET]xx.xx.xx.xx:80
Jun 21 19:02:50 ubuntu-c1 ovpn-default-start[1507]: TCP_CLIENT link local: (not bound)
Jun 21 19:02:50 ubuntu-c1 ovpn-default-start[1507]: TCP_CLIENT link remote: [AF_INET]xx.xx.xx.xx:80
Jun 21 19:02:50 ubuntu-c1 ovpn-default-start[1507]: WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this

Jun 21 19:02:52 ubuntu-c1 ovpn-default-start[1507]: [PureVPN] Peer Connection Initiated with [AF_INET]xx.xx.xx.xx:80
Jun 21 19:02:53 ubuntu-c1 ovpn-default-start[1507]: TUN/TAP device tun0 opened
Jun 21 19:02:53 ubuntu-c1 ovpn-default-start[1507]: Note: Cannot set tx queue length on tun0: Operation not permitted (errno=1)
Jun 21 19:02:53 ubuntu-c1 ovpn-default-start[1507]: do_ifconfig, tt->did_ifconfig_ipv6_setup=0
Jun 21 19:02:53 ubuntu-c1 ovpn-default-start[1507]: /sbin/ip link set dev tun0 up mtu 1500
Jun 21 19:02:53 ubuntu-c1 ovpn-default-start[1507]: openvpn_execve: unable to fork: Resource temporarily unavailable (errno=11)
Jun 21 19:02:53 ubuntu-c1 ovpn-default-start[1507]: Exiting due to fatal error
Jun 21 19:02:53 ubuntu-c1 networkd-dispatcher[167]: WARNING:Unknown index 3 seen, reloading interface list
Jun 21 19:02:53 ubuntu-c1 NetworkManager[924]: <info>  [1529607773.9836] manager: (tun0): new Tun device (/org/freedesktop/NetworkManager/Devices/4)
Jun 21 19:02:54 ubuntu-c1 systemd[1]: openvpn@default-start.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 19:02:54 ubuntu-c1 systemd[1]: openvpn@default-start.service: Failed with result 'exit-code'.


I guess this is the troubling line:  "openvpn_execve: unable to fork: Resource temporarily unavailable (errno=11)"
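One theory I still want to test: "unable to fork" with errno=11 (EAGAIN) is what you get when fork() hits RLIMIT_NPROC, and I believe the packaged unit caps that (upstream ships LimitNPROC=10, if I remember right).  Since that rlimit is accounted per host UID, every process owned by my mapped root UID (700000) across all containers would count against it.  If that's the cause, a drop-in override along these lines should work around it (unit name taken from the log above):

$ sudo systemctl edit openvpn@default-start
  # in the editor that opens, add:
  [Service]
  LimitNPROC=infinity
$ sudo systemctl restart openvpn@default-start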

Here is my container config, again with minor redaction.  Everything else works alright: accelerated graphics, web browsers, and so on.


# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: -d ubuntu -r bionic -a amd64
# Template script checksum (SHA-1): 5f6cea9c51537459a7ab5f81e2c1eac6a94b5e08
# For additional config options, please look at lxc.container.conf(5)
# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
# Distribution configuration

lxc.include = /usr/share/lxc/config/common.conf

# For Ubuntu 14.04
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none bind,optional 0 0
lxc.mount.entry = /sys/kernel/security sys/kernel/security none bind,optional 0 0
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry = mqueue dev/mqueue mqueue rw,relatime,create=dir,optional 0 0
lxc.include = /usr/share/lxc/config/userns.conf
lxc.arch = linux64

# Container specific configuration
lxc.idmap = u 0 700000 100000
lxc.idmap = g 0 700000 100000

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:4b:ba:ee

# mounts
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,optional,create=file
lxc.mount.entry = tmpfs tmp tmpfs defaults
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none ro,bind,optional,create=dir
lxc.mount.entry = /home/cuser/.pulse home/cuser/.pulse none bind,optional,create=dir
lxc.mount.entry = /home/cuser/.config/pulse home/cuser/.config/pulse none bind,optional,create=dir

lxc.rootfs.path = overlay:/home/cuser/.local/share/lxc/ubuntu-root/rootfs:/home/cuser/.local/share/lxc/ubuntu-c1/delta0
lxc.uts.name = ubuntu-c1
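
(The idmap above assumes my user owns that range on the host; the matching entries, with my username as in the mount lines, are:)

# in both /etc/subuid and /etc/subgid:
cuser:700000:100000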


Versions:

$ lxc-start --version
3.0.1

# host and container are alike:
$ lsb_release -a
LSB Version:    core-9.20170808ubuntu1-noarch:security-9.20170808ubuntu1-noarch
Distributor ID: Ubuntu
Description:    Ubuntu 18.04 LTS
Release:        18.04
Codename:       bionic


This is not a major problem because I can start the VPN by hand, but it would be more convenient if the container started it for me.  The openvpn config is copied verbatim from the P container.

Also, another small thing.  I think (?) I had lxc version 3.0.0 when I upgraded to Ubuntu 18.04, and it seems like I could freely have both P and !P containers launched at once.  Now (3.0.1) I have some trouble with this.  I can start all the !P containers I want, but once I start one P container, I cannot start any more !P ones.  (Well... I think that's the trigger.  I need to try a few more times to be confident.)  The symptom is:

lxc-start ubuntu-root 20180621191539.470 ERROR    lxc_cgfsng - cgroups/cgfsng.c:cg_legacy_set_data:2199 - Failed to setup limits for the "memory" controller. The controller seems to be unused by "cgfsng" cgroup driver or not enabled on the cgroup hierarchy
lxc-start ubuntu-root 20180621191539.470 ERROR    lxc_start - start.c:lxc_spawn:1676 - Failed to setup cgroup limits for container "ubuntu-root"
lxc-start ubuntu-root 20180621191539.470 ERROR    lxc_container - lxccontainer.c:wait_on_daemonized_start:834 - Received container state "ABORTING" instead of "RUNNING"
lxc-start ubuntu-root 20180621191539.471 ERROR    lxc_start - start.c:__lxc_start:1887 - Failed to spawn container "ubuntu-root"


It persists even after all !P containers stop, and the only way I know past it is a reboot.  I googled the symptom and found this thread: https://github.com/lxc/lxc/issues/1991 but it was talking about apparmor.  I tried disabling apparmor entirely, but that didn't help, and anyway !P is fine provided I don't start a P container.  This is also a minor issue; it won't matter once I've moved 100% over to !P.
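
For what it's worth, when it happens I check whether the memory controller is still mounted and where my shell sits in the hierarchy (cgroup v1 layout assumed, as on stock bionic):

$ mount | grep 'cgroup.*memory'
$ grep memory /proc/self/cgroup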

My only lament with !P containers is this: https://github.com/lxc/lxd/issues/3990 which it sounds like lxc can't do anything about.  I hope that gets improved in the other project though, since I miss my ecryptfs in containers...



