[Lxc-users] How to share a dual nvidia card between two LXCs

Guillaume Thouvenin guillaume.thouvenin at polymtl.ca
Sun Mar 24 11:08:20 UTC 2013


Hello,

  I have a card with two nvidia GPUs. Currently I'm using it in one 
LXC container. I compiled the nvidia drivers from the official nvidia 
web site inside the container, and I created the /dev/nvidia0, 
/dev/nvidia1 and /dev/nvidiactl device nodes inside the container. 
From the container I can start an X server on :0. Then I use TurboVNC 
and VirtualGL to access the 3D graphics capabilities of the card.
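
For reference, I create the device nodes by hand, roughly like this 
(assuming the standard nvidia device numbers: major 195, minors 0 and 
1 for the GPUs and 255 for nvidiactl):

  # inside the container, as root
  mknod -m 666 /dev/nvidia0   c 195 0
  mknod -m 666 /dev/nvidia1   c 195 1
  mknod -m 666 /dev/nvidiactl c 195 255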

  As I have two GPUs I'd like to dedicate one GPU to one container 
and the other GPU to the other container. My approach is to compile 
the nvidia drivers in both containers, create /dev/nvidia0 and 
/dev/nvidiactl in one container and /dev/nvidia1 and /dev/nvidiactl 
in the other container. Then I should be able to start an X server 
in both containers. The main problem I have is that both containers 
try to use display :0 even when I start one with xinit -display :2.
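
In case the exact invocation matters: as far as I understand, with 
xinit the display is normally given after the "--" separator, so for 
the second container I would expect something along these lines (just 
a sketch; the vt number matches the tty I allow in the lxc2 config 
below):

  xinit -- :2 vt2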

So I'd like to know whether this approach seems doable, and whether 
people who have already achieved this can share their configuration 
regarding cgroups, ttys and the nvidia devices.

Currently I'm using:

lxc1 config:
lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0 used for X
lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1 used for TurboVNC
lxc.cgroup.devices.allow = c 195:* rwm # nvidia device

Xorg is configured to use nvidia0
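
(By "configured to use nvidia0" I mean a Device section along these 
lines; the BusID is only a placeholder, the real value comes from 
lspci on my machine:)

Section "Device"
    Identifier "nvidia0"
    Driver     "nvidia"
    BusID      "PCI:3:0:0"    # example only, adjust to lspci output
EndSection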

lxc2 config:
lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2 used for X (not working yet)
lxc.cgroup.devices.allow = c 4:3 rwm # /dev/tty3 used for TurboVNC
lxc.cgroup.devices.allow = c 195:* rwm # nvidia device

Xorg is configured to use nvidia1
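
If it helps the discussion, I'm also wondering whether the 195:* line 
should be narrowed so that each container can only open its own GPU, 
something like this (minor numbers again assume the standard nvidia 
numbering):

lxc1:
lxc.cgroup.devices.allow = c 195:0 rwm   # /dev/nvidia0 only
lxc.cgroup.devices.allow = c 195:255 rwm # /dev/nvidiactl

lxc2:
lxc.cgroup.devices.allow = c 195:1 rwm   # /dev/nvidia1 only
lxc.cgroup.devices.allow = c 195:255 rwm # /dev/nvidiactl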


Regards,
Guillaume





