Turbovnc and VirtualGL inside of a nvidia-docker-container


Joscha Knobloch

Nov 3, 2017, 9:30:23 AM11/3/17
to virtual...@googlegroups.com

Hey,

we need to run a web browser inside of an nvidia-docker container.
For that, GL support is needed (otherwise it is going to be very slow).

The solution with VirtualGL looks like it could work, but I have not yet been able to get GL support working inside the Docker container.

The newest NVIDIA driver is installed on the host system. "nvidia-smi" tells me the Docker container has also recognized the GPU and has the newest driver installed.
Inside the container I installed the MATE desktop environment, VirtualGL, and TurboVNC. I can connect to the container via VNC and use it, but GL is missing. Typing "vglrun glxgears" returns:
Xlib: Extension "GLX" missing on display ":0". Error: couldn't get an RGB, double-buffered visual

"vglrun glxinfo" tells much the same story, printing the same message ~10 times.

After finding your opengl branch I built my setup on top of it, but I still can't get it to work.

Do I need to set up VirtualGL somehow?
Do I maybe need to install Xvfb? (I read about that somewhere, but I didn't understand why it was necessary or how it would be used.)

I hope you can help me with this.

Best Regards
Joscha Knobloch

DRC

Nov 3, 2017, 12:31:08 PM11/3/17
to virtual...@googlegroups.com
It sounds as if the 3D X server is not using the nVidia drivers.

/opt/VirtualGL/bin/glxinfo -display :0 -c

should show "NVIDIA Corporation" as the client and server GLX vendor, as
well as the OpenGL vendor. Double check /etc/X11/xorg.conf and make
sure that "Driver" is set to "nvidia" (in the "Device" section.) If
not, or if xorg.conf doesn't exist, then refer to nVidia's instructions
(/usr/share/doc/NVIDIA_GLX-1.0/README.txt) for more info on how to
configure it. If xorg.conf is configured properly, then check
/var/log/Xorg.0.log for any error messages from the nVidia driver that
may indicate why the GLX extension isn't being initialized.

In short, this isn't a VirtualGL issue. It's a problem with the nVidia
driver configuration. Once that is working properly, VirtualGL should
work. In general, you should always verify that OpenGL is working
properly (and is accelerated) on the 3D X server before attempting to
use VirtualGL.
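Those checks can be scripted roughly as follows. This is just a sketch: the awk filter is illustrative, and it is demonstrated on a sample file so it runs anywhere; on a real system, point XORG_CONF at /etc/X11/xorg.conf.

```shell
# Create a sample xorg.conf for demonstration purposes only.
cat > /tmp/xorg.conf.sample <<'EOF'
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
EndSection
EOF
XORG_CONF="${XORG_CONF:-/tmp/xorg.conf.sample}"   # use /etc/X11/xorg.conf on a real system

# Extract the Driver value from the "Device" section.
driver=$(awk '/^Section "Device"/,/^EndSection/ {
    if ($1 == "Driver") { gsub(/"/, "", $2); print $2 }
}' "$XORG_CONF")

if [ "$driver" = "nvidia" ]; then
    echo "OK: Device section uses the nvidia driver"
else
    echo "WARNING: Driver is \"$driver\", not \"nvidia\"" >&2
fi
```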

Joscha Knobloch

Nov 3, 2017, 2:16:44 PM11/3/17
to virtual...@googlegroups.com

Hey DRC,


thank you for your help. I think you are right.

The problem (I guess) is that I am running headless inside of Docker.

I don't have an xorg.conf yet, but I also don't know what I should write in there, because I don't have a display connected to the machine.

The NVIDIA README file is HUGE.
Can you tell me where I should look to find out whether I have access to the GPU inside my container, and what I need to set up to let VirtualGL use the GPU?



Best Regards
Joscha Knobloch



DRC

Nov 3, 2017, 2:43:16 PM11/3/17
to virtual...@googlegroups.com
Headless operation should be possible as long as the drivers are working
correctly. The first thing I would check is the output of 'lspci'.
That will tell you whether you can see the GPU inside of the Docker
container. For instance, this is how my nVidia GPU appears in the lspci
output:

02:00.0 VGA compatible controller: NVIDIA Corporation GK104GL [Quadro
K5000] (rev a1)

If you aren't seeing the GPU inside of the Docker container, then you're
dead in the water until you can fix that issue. Unfortunately I have no
advice regarding how to access a GPU from within a Docker container, as
I wasn't even aware that that was possible.

If and when you can see the GPU, then follow the instructions here:
https://virtualgl.org/Documentation/HeadlessNV
to create xorg.conf and enable it for headless operation.
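For reference, the headless-relevant parts of the xorg.conf those instructions produce look roughly like this (the BusID shown is only an example; use the one lspci reports for your GPU):

```
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Option     "UseDisplayDevice" "None"
EndSection
```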


Joscha Knobloch

Nov 10, 2017, 9:29:12 AM11/10/17
to virtual...@googlegroups.com

Hey everyone,

I think I need a little bit of help configuring VirtualGL and Xorg.

In my setup I have a PC with an NVIDIA GPU. On the Debian 9 host system I have installed Docker, nvidia-docker, and the latest NVIDIA driver.

I have built a Docker container that should later run a TurboVNC server using VirtualGL in split-rendering mode with the GPU.

The GPU is shown in the container. This is the output of lspci:
root@d4e2e94fa0b7:/# lspci
00:00.0 Host bridge: Intel Corporation 82Q33 Express DRAM Controller (rev 02)
00:01.0 PCI bridge: Intel Corporation 82Q33 Express PCI Express Root Port (rev 02)
00:03.0 Communication controller: Intel Corporation 82Q33 Express MEI Controller (rev 02)
00:19.0 Ethernet controller: Intel Corporation 82566DM-2 Gigabit Network Connection (rev 02)
00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02)
00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02)
00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02)
00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02)
00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02)
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02)
00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02)
00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02)
00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02)
00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02)
00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92)
00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
00:1f.2 IDE interface: Intel Corporation 82801IB (ICH9) 2 port SATA Controller [IDE mode] (rev 02)
00:1f.5 IDE interface: Intel Corporation 82801I (ICH9 Family) 2 port SATA Controller [IDE mode] (rev 02)
01:00.0 VGA compatible controller: NVIDIA Corporation Device 128b (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK208 HDMI/DP Audio Controller (rev a1)



And this is the output of nvidia-smi:

root@d4e2e94fa0b7:/# nvidia-smi
Fri Nov 10 14:02:45 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 710      Off  | 00000000:01:00.0 N/A |                  N/A |
| 50%   42C    P8    N/A /  N/A |     62MiB /   980MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+



I can start up the VNC server with "/opt/Turbovnc/bin/vncserver :1". This works, and I can connect to it. Running "vglrun glxgears" within the VNC session unfortunately returns:
[VGL] ERROR: could not open display :0.



To configure Xorg I used nvidia-xconfig and vglserver_config. My xorg.conf now looks like this:
(Unfortunately, I was not able to get nvidia-xconfig inside the container, so I ran it on the host system and copied it into the container after changing the BusID.)
root@d4e2e94fa0b7:/# cat /etc/X11/xorg.conf
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig:  version 384.90  (buildmeister@swio-display-x86-rhel47-05)  Tue Sep 19 18:13:03 PDT 2017
Section "DRI"
    Mode 0666
EndSection


Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

Section "Files"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"

    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Unknown"
    HorizSync       28.0 - 33.0
    VertRefresh     43.0 - 72.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GT 710"
    BusID          "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "UseDisplayDevice" "None"
    SubSection     "Display"
        Virtual     1920 1080
        Depth       24
    EndSubSection
EndSection




I have to admit, I am kind of clueless.
This page (https://virtualgl.org/Documentation/HeadlessNV) suggests installing VirtualGL after the fact. I am trying this now, but I will have to rebuild the Docker image for that, which can take a while.



I hope you can point me in the right direction.

Best Regards
Joscha Knobloch



DRC

Nov 10, 2017, 10:48:53 AM11/10/17
to virtual...@googlegroups.com
Continuing where we left off on the discussion forums
(https://sourceforge.net/p/virtualgl/discussion/401860/thread/32008777/?limit=25)
...

As mentioned in the User's Guide:
https://cdn.rawgit.com/VirtualGL/virtualgl/master/doc/index.html#hd006002
(under "Sanity Check"), you can use glxinfo to verify whether OpenGL is
working on the 3D X server. glxinfo works just as well on headless X
servers, since the program isn't actually displaying anything.

You can check whether the X server is running by logging into the Docker
container interactively and doing `ps -e | grep X`. If it isn't
running, then look at /var/log/Xorg.0.log, which will contain any error
messages generated during X startup.
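That check can be combined into one small sketch (the log path assumes the stock Xorg location):

```shell
# Is an X server process running? If not, show recent log output,
# which usually contains the reason startup failed.
if ps -e | grep -q 'Xorg'; then
    echo "Xorg is running"
else
    echo "Xorg is not running; recent log output:"
    tail -n 20 /var/log/Xorg.0.log 2>/dev/null || true
fi
```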

Ubuntu 14.04 doesn't suffer from the GDM bug, because Ubuntu uses
LightDM, so you should be OK there.

For a Docker container, which is inherently a single-user environment,
you probably want to configure 3D X server access using
vglserver_config +s +f +t
This will open up 3D X server access to all users of the machine, thus
avoiding the hassle of restricting access to the vglusers group (which
would require adding the Docker root user to that group.)

If you are only running one TurboVNC Server instance, then passing :1 to
vncserver is innocuous. But if you tried to do that twice, it would
fail the second time. It would also fail if, for whatever reason,
something else was using port 5901 or if a previous instance did not
shut down properly.
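The underlying mapping is simply TCP port = 5900 + display number, and whether a display is in use can be approximated via the standard X11 lock-file convention. A rough sketch (this is a simplification of what vncserver actually does when picking a display automatically):

```shell
# VNC display :N listens on TCP port 5900 + N, so :1 -> 5901.
# A running X server on display :N normally creates /tmp/.XN-lock;
# scanning for the first missing lock file finds a likely-free display.
find_free_display() {
    n=1
    while [ -e "/tmp/.X${n}-lock" ]; do
        n=$((n + 1))
    done
    echo "$n"
}

d=$(find_free_display)
echo "Next free display: :$d (VNC port $((5900 + d)))"
```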
