[lxc-users] LXD Based Container For Desktop Applications - Some Success - Help
rob e
redgerhoo at yahoo.com.au
Thu Jul 21 02:27:10 UTC 2016
I'm trying to use an LXD based container to run desktop applications on
my standard desktop, in much the same way as this
https://www.stgraber.org/2014/02/09/lxc-1-0-gui-in-containers/
So far I can run an application in a Xephyr screen, but not on the host
desktop itself (the ultimate aim).
For a Xephyr screen
1) Install Xephyr
2) run Xephyr with "Xephyr -a -br -noreset -name xephyr_screen_101
-title Browse_Danger -screen 1800x1080 :101"
3) log into the container and run the program, directing output to
display :101, i.e.
a) lxc exec <container-name> bash
b) DISPLAY=:101 firefox
4) Firefox will duly appear on the Xephyr screen
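Collected into a single host-side script, steps 1-4 look roughly like this (a sketch; `browse-danger` is a hypothetical container name — substitute your own):

```shell
#!/bin/bash
set -e
CONTAINER=browse-danger   # hypothetical container name

# Steps 1-2: start a nested X server on display :101
Xephyr -a -br -noreset -name xephyr_screen_101 \
       -title Browse_Danger -screen 1800x1080 :101 &
sleep 2   # give Xephyr a moment to create its socket

# Step 3: run Firefox inside the container against the Xephyr display
lxc exec "$CONTAINER" -- sh -c 'DISPLAY=:101 firefox'
```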
To run this from outside the container:
1) Create a shell script inside the container containing just the
command in 3b, i.e.
#!/bin/bash
DISPLAY=:101 firefox
2) Make the script executable, i.e. chmod ug+x <shell-program-above>, and
possibly change its ownership
3) Execute it with
lxc exec <container-name> su <user name> -- <shell-program-above>
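The same thing can be done in one shot, without a helper script inside the container, by running the command through su -c (a sketch; `browse-danger` and `someuser` are hypothetical container/user names):

```shell
# Run Firefox as an unprivileged user inside the container,
# pointed at the Xephyr display on :101
lxc exec browse-danger -- su someuser -c 'DISPLAY=:101 firefox'
```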
The minimum config required to make this work seems to be:
name: <container-name>
profiles:
- default
config:
raw.lxc: lxc.aa_profile=lxc-container-default-with-mounting
devices:
root:
path: /
type: disk
x11-unix:
path: /tmp/.X11-unix
source: /tmp/.X11-unix
type: disk
ephemeral: false
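The same minimal config can be applied to an existing container from the LXD CLI rather than by editing the YAML (a sketch; `browse-danger` is a hypothetical container name):

```shell
# Relax the AppArmor profile so the container can perform mounts
lxc config set browse-danger raw.lxc lxc.aa_profile=lxc-container-default-with-mounting

# Bind-mount the host's X11 socket directory into the container
lxc config device add browse-danger x11-unix disk \
    source=/tmp/.X11-unix path=/tmp/.X11-unix
```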
By adding a few more mounts, we can get a full desktop for user <user>
to run in Xephyr, e.g.
devices:
dri:
path: /dev/dri
source: /dev/dri
type: disk
iceauthority-<user>:
path: /home/<user>/.ICEauthority
source: /home/<user>/.ICEauthority
type: disk
root:
path: /
type: disk
x11-unix:
path: /tmp/.X11-unix
source: /tmp/.X11-unix
type: disk
xauthority-<user>:
path: /home/<user>/.Xauthority
source: /home/<user>/.Xauthority
type: disk
ephemeral: false
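The extra bind mounts can likewise be added from the CLI (a sketch; `browse-danger` and `someuser` are hypothetical names — substitute your own):

```shell
CONTAINER=browse-danger   # hypothetical container name
U=someuser                # hypothetical user name

for entry in \
    "dri:/dev/dri" \
    "iceauthority-$U:/home/$U/.ICEauthority" \
    "xauthority-$U:/home/$U/.Xauthority"
do
    name=${entry%%:*}   # device name (text before the first colon)
    path=${entry#*:}    # host path, reused as the container path
    lxc config device add "$CONTAINER" "$name" disk source="$path" path="$path"
done
```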
Obviously <user> needs to have been added first, with a home directory
(which "adduser" creates by default), and the desktop must then be run
while logged in as <user>.
No matter what I do, I cannot get a program to display on the host
screen, e.g. DISPLAY=:0 firefox. This returns an error message:
$ DISPLAY=:0 firefox
No protocol specified
Failed to connect to Mir: Failed to connect to server socket: No
such file or directory
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :0
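"No protocol specified" usually points at X authentication rather than the socket itself: the uid the program runs as inside the container doesn't match any cookie the host X server will accept. One thing worth trying (a sketch; xhost must be run on the host, as the user who owns display :0):

```shell
# On the host: allow all local (unix-socket) connections to the display.
# This is coarse-grained; re-enable access control when done testing.
xhost +local:

# ... test DISPLAY=:0 from the container ...

# Undo afterwards:
xhost -local:
```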
These messages turn up in the host dmesg
* [508499.335953] audit: type=1400 audit(1469067007.731:3225):
apparmor="STATUS" operation="profile_load" profile="unconfined"
name="lxd-xenial-browse-danger-test_</var/lib/lxd>" pid=29368
comm="apparmor_parser"
* [508499.342613] device vethO9YPDN entered promiscuous mode
* [508499.342650] IPv6: ADDRCONF(NETDEV_UP): vethO9YPDN: link is not ready
* [508499.385405] eth0: renamed from vethE393S7
* [508499.408826] IPv6: ADDRCONF(NETDEV_CHANGE): vethO9YPDN: link
becomes ready
* [508499.408877] lxcbr0: port 4(vethO9YPDN) entered forwarding state
* [508499.408886] lxcbr0: port 4(vethO9YPDN) entered forwarding state
* [508499.438414] audit: type=1400 audit(1469067007.835:3226):
apparmor="DENIED" operation="mount" info="failed type match"
error=-13 profile="lxc-container-default-with-mounting"
name="/sys/fs/cgroup/systemd/" pid=29377 comm="systemd"
fstype="cgroup" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
* [508499.438529] audit: type=1400 audit(1469067007.835:3227):
apparmor="DENIED" operation="mount" info="failed type match"
error=-13 profile="lxc-container-default-with-mounting"
name="/sys/fs/cgroup/systemd/" pid=29377 comm="systemd"
fstype="cgroup" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
* [508514.445340] lxcbr0: port 4(vethO9YPDN) entered forwarding state
and from the host syslog:
* Jul 21 12:10:07 virt-host kernel: [508499.438414] audit: type=1400
audit(1469067007.835:3226): apparmor="DENIED" operation="mount"
info="failed type match" error=-13
profile="lxc-container-default-with-mounting"
name="/sys/fs/cgroup/systemd/" pid=29377 comm="systemd"
fstype="cgroup" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
* Jul 21 12:10:07 virt-host kernel: [508499.438529] audit: type=1400
audit(1469067007.835:3227): apparmor="DENIED" operation="mount"
info="failed type match" error=-13
profile="lxc-container-default-with-mounting"
name="/sys/fs/cgroup/systemd/" pid=29377 comm="systemd"
fstype="cgroup" srcname="cgroup" flags="rw, nosuid, nodev, noexec"
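Switching the container to the unconfined AppArmor profile (tried next, below) can be done from the LXD CLI — a sketch, where `browse-danger` is a hypothetical container name:

```shell
# Disable AppArmor confinement for the container entirely,
# then restart it so the change takes effect
lxc config set browse-danger raw.lxc lxc.aa_profile=unconfined
lxc restart browse-danger
```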
If I then change the AppArmor profile to unconfined and re-run, I see
the following in the host dmesg:
* [508796.382044] audit: type=1400 audit(1469067304.766:3230):
apparmor="DENIED" operation="open" profile="/usr/sbin/cupsd"
name="/etc/ld.so.preload" pid=5259 comm="cupsd" requested_mask="r"
denied_mask="r" fsuid=0 ouid=0
* [508796.395527] audit: type=1400 audit(1469067304.782:3231):
apparmor="DENIED" operation="open" profile="/usr/sbin/cupsd"
name="/etc/ld.so.preload" pid=5266 comm="cups-exec"
requested_mask="r" denied_mask="r" fsuid=0 ouid=0
* [508796.395578] audit: type=1400 audit(1469067304.782:3232):
apparmor="DENIED" operation="open" profile="/usr/sbin/cupsd"
name="/etc/ld.so.preload" pid=5265 comm="cups-exec"
requested_mask="r" denied_mask="r" fsuid=0 ouid=0
* [508796.395778] audit: type=1400 audit(1469067304.782:3233):
apparmor="DENIED" operation="open" profile="/usr/sbin/cupsd"
name="/etc/ld.so.preload" pid=5265 comm="dbus" requested_mask="r"
denied_mask="r" fsuid=7 ouid=0
* [508796.398616] audit: type=1400 audit(1469067304.782:3234):
apparmor="DENIED" operation="open" profile="/usr/sbin/cupsd"
name="/etc/ld.so.preload" pid=5266 comm="dbus" requested_mask="r"
denied_mask="r" fsuid=7 ouid=0
It's worth noting that the legacy LXC approach outlined in the first
link above still works on this host (so I have a legacy-style LXC
container which works). The legacy-style config is notably different in
its ID maps:
* lxc.id_map = u 0 100000 1000
* lxc.id_map = g 0 100000 1000
* lxc.id_map = u 1000 1000 1
* lxc.id_map = g 1000 1000 1
* lxc.id_map = u 1001 101001 64535
* lxc.id_map = g 1001 101001 64535
vs. the default.
If I try to use the above map, the container won't start. I can use the
following map (I created a new profile and then created the test
container using that profile plus default) and it will start, but it
doesn't address the access problem:
* lxc.id_map = u 400000 1000 1
* lxc.id_map = g 400000 1000 1
I also tried adding
* lxc.id_map = u 1001 401001 64535
* lxc.id_map = g 1001 401001 64535
But that didn't help, and the "1000 1000" mapping prevented the
container from starting.
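For what it's worth, an id_map line like "u 1000 1000 1" only works if root (which runs the LXD daemon) is allowed to delegate host uid/gid 1000, and that is controlled by /etc/subuid and /etc/subgid — a missing entry there is one common reason such a container refuses to start. A sketch of the check and fix (service name assumed to be "lxd"):

```shell
# Check whether root may delegate host uid/gid 1000
grep '^root:' /etc/subuid /etc/subgid

# If not, allow exactly one id starting at 1000, then restart LXD
echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid
sudo service lxd restart
```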
Does anyone have any insights or suggestions?