[lxc-users] Running docker inside unprivileged LXC containers

Akshay Karle akshay.a.karle at gmail.com
Wed Jun 10 14:39:18 UTC 2015


>
> You'll need to coordinate between the container and the host to create
> the devices.  This is something I do want to think about, but have not
> yet had time to do so.  It may involve updating Docker to use a service,
> when available, to request devices be created.  This could be a dbus
> service which gets (vetted and) passed through to the host.
>

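If I understand the idea correctly, the manual version of that coordination today would be to create the device node on the host and pass it into the container, rather than letting the container (or docker inside it) mknod it itself. Just as an untested sketch, assuming the fuse module is loaded on the host:

# on the host, only if /dev/fuse doesn't already exist there (fuse is char device 10:229)
sudo mknod -m 0666 /dev/fuse c 10 229
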
So running docker inside an unprivileged container is definitely not possible
with the current version of docker, at least for now, right? I've never
really used LXC in production; I'm working on a migration from OpenVZ to
LXC, so I'm new to both LXC and Docker. Is there any way I could help you
add support for this in docker? We may have to change the way docker
containers are started when using the lxc driver. I filed an issue on docker
<https://github.com/docker/docker/issues/13806> but haven't heard back from
them yet.
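
In the meantime, for the /dev/fuse failure specifically, I'm going to try
bind-mounting the host's device node into the unprivileged container instead
of letting docker mknod it, with something like this in the container config
(untested; docker may simply fail on the next device it tries to create):

lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file,optional 0 0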


>
> Quoting Akshay Karle (akshay.a.karle at gmail.com):
> > Hello,
> >
> > I'm currently working on a project that requires running docker containers
> > inside unprivileged LXC containers. I've managed to run unprivileged
> > containers on an Ubuntu 14.04 host, and I've also managed to get the docker
> > daemon running using the LXC driver instead of the native docker exec driver.
> > Right now I'm stuck when trying to start a docker container: it attempts
> > to create special devices, which fails because it doesn't have the
> > permissions to do so in the unprivileged container.
> >
> > root at u1:/# sudo docker run hello-world
> > INFO[0006] POST /v1.18/containers/create
> > INFO[0006] +job create()
> > INFO[0006] +job log(create, a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335, hello-world:latest)
> > INFO[0006] -job log(create, a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335, hello-world:latest) = OK (0)
> > INFO[0006] -job create() = OK (0)
> > INFO[0006] POST /v1.18/containers/a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335/attach?stderr=1&stdout=1&stream=1
> > INFO[0006] +job container_inspect(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335)
> > INFO[0006] -job container_inspect(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335) = OK (0)
> > INFO[0006] +job attach(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335)
> > INFO[0006] POST /v1.18/containers/a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335/start
> > INFO[0006] +job start(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335)
> > INFO[0006] +job allocate_interface(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335)
> > INFO[0006] -job allocate_interface(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335) = OK (0)
> > INFO[0006] +job log(start, a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335, hello-world:latest)
> > INFO[0006] -job log(start, a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335, hello-world:latest) = OK (0)
> > INFO[0006] -job attach(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335) = OK (0)
> > INFO[0006] +job release_interface(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335)
> > INFO[0006] -job release_interface(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335) = OK (0)
> > INFO[0006] +job release_interface(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335)
> > INFO[0006] -job release_interface(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335) = OK (0)
> > INFO[0006] +job log(die, a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335, hello-world:latest)
> > INFO[0006] -job log(die, a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335, hello-world:latest) = OK (0)
> > Cannot start container a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335: mknod /dev/fuse operation not permitted
> > INFO[0006] -job start(a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335) = ERR (1)
> > ERRO[0006] Handler for POST /containers/{name:.*}/start returned error: Cannot start container a4b9f1286eca35e5f6afc62aad466dfa80061086ccf309171941eb70e88a8335: mknod /dev/fuse operation not permitted
> >
> > # uname -a
> > Linux u1 3.13.0-53-generic #89-Ubuntu SMP Wed May 20 10:34:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> >
> > LXC version on the host and container: 1.0.7
> >
> > Unprivileged container config:
> > # Template used to create this container: /usr/share/lxc/templates/lxc-download
> > # Parameters passed to the template: -d ubuntu -r trusty -a amd64
> > # For additional config options, please look at lxc.container.conf(5)
> >
> > # Distribution configuration
> > lxc.include = /usr/share/lxc/config/ubuntu.common.conf
> > lxc.include = /usr/share/lxc/config/ubuntu.userns.conf
> > lxc.arch = x86_64
> >
> > # Container specific configuration
> > lxc.mount.auto = cgroup
> > lxc.aa_profile = unconfined
> > lxc.id_map = u 0 100000 65536
> > lxc.id_map = g 0 100000 65536
> > lxc.rootfs = /home/vagrant/.local/share/lxc/u1/rootfs
> > lxc.utsname = u1
> >
> > # Network configuration
> > lxc.network.type = veth
> > lxc.network.flags = up
> > lxc.network.link = lxcbr0
> > lxc.network.hwaddr = 00:16:3e:53:e6:a2
> >
> > Has anyone had any success in doing this? Any ideas if this is even
> > possible?
>
> _______________________________________________
> lxc-users mailing list
> lxc-users at lists.linuxcontainers.org
> http://lists.linuxcontainers.org/listinfo/lxc-users