[lxc-users] Desktop Environment in LXD

Ron Kelley rkelleyrtp at gmail.com
Sat Jun 18 11:18:57 UTC 2016


Perhaps your best option is to open a support ticket with Canonical? I am sure someone (Stéphane, etc.) would be happy to help you over the phone.



On Jun 18, 2016, at 12:05 AM, Rahul Rawail <rhlrawail at gmail.com> wrote:

Thanks for your answer, but again, could someone who knows this inside out help us understand the options and concepts over the phone, please?

On Sat, Jun 18, 2016 at 1:04 PM, Saint Michael <venefax at gmail.com> wrote:
I did this long ago, but only using Xvnc on the containers. It works, but performance is bad, since you end up with many X servers and many Xvnc servers.
I don't think you can share the same graphics hardware from multiple containers. That would be possible only with a very powerful card made by Nvidia, designed specifically for 3D computing on virtual machines.


On Fri, Jun 17, 2016 at 9:32 PM, Rahul Rawail <rhlrawail at gmail.com> wrote:
Thanks Fajar for your answers, I still have some questions, please help:

> Thanks Simos for your answer. Just a few questions, and they may be dumb
> questions: if LXD is running on top of a host OS and the host machine has a
> graphics card, I thought the containers would be able to access it. I
> understand that since LXD still uses the core functions of the host OS, if I
> create 100 containers then all of them will have access to all of the host
> hardware, including video and audio.


Yes and no, depending on what you want and how you set up the containers.

One way to do it is to give containers access to the hardware
directly, often meaning that only one of them can use the hardware at
the same time.
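
A rough sketch of this direct approach with LXD, assuming a hypothetical container named c1 and a GPU node at /dev/dri/card0 (the device path and names are illustrative, not from this thread):

  # Pass the host's GPU character device straight into the container.
  # Only one container should drive the card at a time this way, and an
  # unprivileged container may additionally need uid/gid mapping tweaks.
  lxc config device add c1 card0 unix-char path=/dev/dri/card0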


----> The reason I asked this question: in the LXD presentation they said that LXD has the capability to replace all existing VMs, since containers run a complete OS. But if you can't put a DE on one, then it's not of much use.
Sorry for this, but when you said "Yes and no", what did you mean? I guess "Yes" means, as you explained, "to give containers access to the hardware directly, often meaning that only one of them can use the hardware at the same time." I understand that, but as with a VM, do we have the capability to install drivers again inside the container, or to have virtual drivers, so that all containers can use the hardware in parallel rather than only one using it at any one point in time? If they are a replacement for VMs, then they should work like VMs; am I wrong in my expectation?


>
> I have tried https://www.stgraber.org/2014/02/09/lxc-1-0-gui-in-containers/


That's another way to do it: give containers access to host resources
(e.g. X, audio) via unix sockets, by setting the bind mounts manually.
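
Translated to LXD (the blog post above uses raw LXC config), a rough sketch, assuming a container named c1 and the host display running as :0; X authorization (xhost or a shared .Xauthority) still has to be handled separately:

  # Bind the host's X socket into the container:
  lxc config device add c1 X0 disk source=/tmp/.X11-unix/X0 path=/tmp/.X11-unix/X0
  # Inside the container, clients then talk to the shared socket:
  lxc exec c1 -- env DISPLAY=:0 xterm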

---------> What will happen in this case? Will they all work in parallel and have access to all the hardware of the host machine at the same time?


> Also, I thought that the container should be able to use the host OS's X
> server, and there should not be any need for another X server.

Correct, for the second way.

-------> I am assuming all containers then have access to the same X server, and to the hardware, in parallel.



> Can the container's desktop environment be called from another remote machine
> for remote access?


... and that's the third way: treat the container like any other headless
server, and set up remote GUI access to it appropriately. My favorite
is xrdp, but vnc or x2go should work as well (I haven't tested sound,
though; I didn't need it).
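
As a sketch of that third way, assuming an Ubuntu container named c1 (the package names and the .xsession step are typical for an Xfce-plus-xrdp setup, not something confirmed in this thread):

  # Inside the container: a desktop environment plus the xrdp server.
  lxc exec c1 -- apt-get install -y xfce4 xrdp
  # Tell xrdp which session to start for the (assumed) ubuntu user:
  lxc exec c1 -- sh -c 'echo xfce4-session > /home/ubuntu/.xsession'
  # Then point any RDP client at the container's IP on port 3389.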

Note that if you need copy/paste and file transfer support in xrdp,
your containers need to be privileged, with access to /dev/fuse
enabled. If you don't need those features, the default unprivileged
container is fine.
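
In LXD terms that would look roughly like this (the container name c1 is hypothetical):

  # xrdp's clipboard/drive redirection runs through FUSE, so expose it:
  lxc config set c1 security.privileged true
  lxc config device add c1 fuse unix-char path=/dev/fuse
  lxc restart c1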

-------------> I have not tried this, but I am assuming that if I am not able to get the DE up in the container due to the "xf86OpenConsole: Cannot open /dev/tty0 (no such file found)" error, then xrdp or x2go are just going to show me a terminal on the client side and not a desktop. Am I right with my assumption?

All I want to do in the first stage is bring up an LXD container, then bring up a new window on my current desktop, like any other VM, with another desktop environment in it for the container. I should be able to do this for every container, and hence have multiple desktops on my current desktop, all with access to the host hardware in parallel, while maintaining bare-metal performance without adding any overhead. Will xrdp or x2go still reap the same benefits as LXD, or will they add performance overhead? The next stage is to take this to a remote client, which you already explained; is there any other option? I ask because I read somewhere that LXD by default has the ability to connect to another LXD server.

One last request, to you or to anyone: if possible, could someone please give half an hour to an hour over the phone (we will call), just to help us out? We have been struggling for weeks and asking for help everywhere, and the most help has come out of this forum. Your expertise in this area and one hour of your time could save us weeks of effort and help us decide whether this is the right way to go for us. We would be highly indebted. Please share your number; you can send a direct email with it.


_______________________________________________
lxc-users mailing list
lxc-users at lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
