Specify which card to use with VirtualGL

trumee

Dec 8, 2019, 3:42:52 AM
to VirtualGL User Discussion/Support
Hello,

I have an NVIDIA GT 1030 and an NVIDIA Quadro P2000 in a headless server. I only want to use the GT 1030 with VirtualGL. Unfortunately, running vglserver_config changes the permissions for both cards, and this causes CUDA to stop working for me:

# nvidia-container-cli --load-kmods info
nvidia-container-cli: initialization error: cuda error: no cuda-capable device is detected

I want to use VirtualGL with only the first card. How do I do that?


# nvidia-smi
Sun Dec  8 14:10:37 2019      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.36       Driver Version: 440.36       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 1030     Off  | 00000000:82:00.0 Off |                  N/A |
| 20%   43C    P8    N/A /  30W |     33MiB /  2001MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Quadro P2000        Off  | 00000000:83:00.0 Off |                  N/A |
| 50%   41C    P8     5W /  75W |      2MiB /  5059MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                              
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     27154      G   /usr/lib/Xorg                                 32MiB |
+-----------------------------------------------------------------------------+

Thanks

trumee

Dec 8, 2019, 5:02:36 AM
to VirtualGL User Discussion/Support
Just to clarify, I want to pass the second card (the P2000) to an LXD container, so I don't want VirtualGL to touch that card.

DRC

Dec 8, 2019, 11:06:46 AM
to virtual...@googlegroups.com
vglserver_config gives you the option of granting GPU device permissions to only members of the vglusers group or to all users of the system. It sounds as if you might have chosen to grant device permissions only to vglusers, but the account from which you’re attempting to use CUDA is not a member of that group. For that use case, you should probably grant device permissions to all users of the system. I don’t have any other explanation. Nothing else that vglserver_config does should have any effect on CUDA.
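
For example, assuming the devices were restricted to the vglusers group, a quick check would look something like this (illustrative output; your major/minor numbers and timestamps will differ):

# ls -l /dev/nvidia*
crw-rw---- 1 root vglusers 195,   0 Dec  8 14:10 /dev/nvidia0
crw-rw---- 1 root vglusers 195,   1 Dec  8 14:10 /dev/nvidia1
crw-rw---- 1 root vglusers 195, 255 Dec  8 14:10 /dev/nvidiactl

# gpasswd -a someuser vglusers

(someuser is a placeholder for the account that needs CUDA; that account has to log out and back in before the new group membership takes effect.)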


trumee

Dec 9, 2019, 8:02:28 AM
to VirtualGL User Discussion/Support
Sorry, I wasn't clear before. I have two devices, so I have /dev/nvidia0 and /dev/nvidia1. I want vglserver_config to assign /dev/nvidia0 to the vglusers group but leave /dev/nvidia1 alone.

DRC

Jun 26, 2020, 12:11:34 PM
to virtual...@googlegroups.com
Sorry for the delay. I looked into this, and unfortunately, I don't see a way to accomplish what you want. It would be possible for vglserver_config to be more selective about DRI devices, but with NVIDIA devices, the permissions are assigned based on rules that VirtualGL specifies in /etc/modprobe.d/virtualgl.conf, using NVIDIA-specific driver directives. Those driver directives don't appear to allow permissions to be set per-device.
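
For reference, the rules in question look something like this (the GID is illustrative; vglserver_config fills in the actual GID of the vglusers group on your system):

# cat /etc/modprobe.d/virtualgl.conf
options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=1001 NVreg_DeviceFileMode=0660

NVreg_DeviceFileGID and NVreg_DeviceFileMode apply to every /dev/nvidia* node the driver creates, so there is no way to express "0660 for nvidia0 but 0666 for nvidia1" through that mechanism.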

Jason Edgecombe

Jun 29, 2020, 8:42:29 AM
to virtual...@googlegroups.com
Hello,

Can you set the device permissions after boot in a systemd service, a startup script, or an @boot cron job and have it work?
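
Something along these lines, maybe (a rough sketch; the unit name and the per-device split are assumptions based on the setup described earlier in the thread):

# /etc/systemd/system/nvidia-device-perms.service
[Unit]
Description=Restrict /dev/nvidia0 to vglusers; leave /dev/nvidia1 open
After=display-manager.service

[Service]
# Type=oneshot permits multiple ExecStart= lines, run in order.
Type=oneshot
ExecStart=/bin/sh -c 'chown root:vglusers /dev/nvidia0 && chmod 0660 /dev/nvidia0'
ExecStart=/bin/sh -c 'chmod 0666 /dev/nvidia1'

[Install]
WantedBy=multi-user.target

Enable it with "systemctl enable nvidia-device-perms.service".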

Sincerely,
Jason
---------------------------------------------------------------------------
Jason Edgecombe | Linux Administrator
UNC Charlotte | The William States Lee College of Engineering
9201 University City Blvd. | Charlotte, NC 28223-0001
Phone: 704-687-1943
jwed...@uncc.edu | http://engr.uncc.edu
---------------------------------------------------------------------------

DRC

Jun 30, 2020, 5:19:13 PM
to virtual...@googlegroups.com

The problem is that the device permissions are ultimately controlled by the kernel module.  That means that they'll be reset whenever the module is loaded or unloaded, which can happen whenever the 3D X server starts or someone logs in/out locally.  I'm not sure if there is a way to reliably override that.  The only reasonable approach seems to be to grant system-wide permission to the devices while restricting the 3D X server to vglusers only.  That's a pretty common way of configuring VirtualGL, actually.
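
Concretely, that corresponds to answering the vglserver_config prompts along these lines (wording from memory, so it may differ slightly between versions):

Restrict 3D X server access to vglusers group (recommended)?
[Y/n] y

Restrict framebuffer device access to vglusers group (recommended)?
[Y/n] n

Answering "n" to the second prompt leaves the NVIDIA device nodes world-accessible (mode 0666), so CUDA works from any account, while access to the 3D X server itself is still limited to members of vglusers.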

Jason Edgecombe

Jul 1, 2020, 8:19:47 AM
to virtual...@googlegroups.com
Hmmm, that is a problem. My only other idea is a custom udev rule.
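
Something like this, perhaps (untested; the rule file name and group are assumptions for the two-card setup in this thread):

# /etc/udev/rules.d/99-nvidia-vgl.rules
KERNEL=="nvidia0", OWNER="root", GROUP="vglusers", MODE="0660"
KERNEL=="nvidia1", MODE="0666"

Though I'm not sure udev ever sees these nodes, since the NVIDIA driver can create /dev/nvidia* itself rather than going through the usual kernel device mechanism.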

DRC

Jul 1, 2020, 10:11:26 AM
to virtual...@googlegroups.com

Try it, and let me know if you are able to make it work.  I was not.

DRC
