Containerization of NICE-DCV...


Richard Powell

Feb 19, 2018, 3:30:48 PM
to singularity
Hello all, I've hit a roadblock in my attempts to containerize NICE-DCV alongside my NVIDIA drivers. Because my cluster is "strategically stuck" at RHEL 6.4, I'm hoping to use a RHEL 6.9 container to offer ANSYS v18.1 with 3D NVIDIA/NICE-enabled graphics. I was at least successful in centralizing the matching version of our K2 NVIDIA driver on an NFS mount point, and I get successful output from nvidia-smi, as follows:
Singularity rhel69_ansys182:/scratch/sandboxes_temp> nvidia-smi
Mon Feb 19 15:07:56 2018      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.45.18              Driver Version: 361.45.18                 |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K2             Off  | 0000:05:00.0     Off |                  Off |
| N/A   29C    P8    17W / 117W |     28MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K2             Off  | 0000:06:00.0     Off |                  Off |
| N/A   27C    P8    17W / 117W |     98MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K2             Off  | 0000:84:00.0     Off |                  Off |
| N/A   30C    P8    17W / 117W |     33MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K2             Off  | 0000:85:00.0     Off |                  Off |
| N/A   28C    P8    17W / 117W |     33MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                              
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
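(For context, that is with the driver files sitting on the NFS mount, bound into the container, and added to the search paths by hand; the sketch below is only illustrative, with made-up paths rather than my real layout:)

# in singularity.conf (hypothetical staging path)
bind path = /nfs/system/nvidia-361.45.18

# inside the container
Singularity rhel69_ansys182:/scratch/sandboxes_temp> export PATH=/nfs/system/nvidia-361.45.18/bin:$PATH
Singularity rhel69_ansys182:/scratch/sandboxes_temp> export LD_LIBRARY_PATH=/nfs/system/nvidia-361.45.18/lib64:$LD_LIBRARY_PATH
Singularity rhel69_ansys182:/scratch/sandboxes_temp> nvidia-smi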

I then attempted to install the NICE-DCV server inside this same RHEL 6.9 container. The container is hosted on a RHEL 6.4 interactive node which has a working installation of this same NVIDIA driver, plus a working installation of the NICE-DCV server. After installing NICE in the container, I set some bind points in singularity.conf as follows, including the directories associated with NICE-DCV, in hopes of capturing my NICE license server and other files associated with NICE:

Section from my singularity.conf file ....
# BIND PATH: [STRING]
# DEFAULT: Undefined
# Define a list of files/directories that should be made available from within
# the container. The file or directory must exist within the container on
# which to attach to. you can specify a different source and destination
# path (respectively) with a colon; otherwise source and dest are the same.
#bind path = /etc/singularity/default-nsswitch.conf:/etc/nsswitch.conf
bind path = /opt/nice
bind path = /etc/vnc
bind path = /var/lib/dcv
bind path = /usr/lib64
bind path = /etc/localtime
bind path = /etc/hosts
bind path = /scratch
bind path = /nfs/system
bind path = /nfs/prod/users
bind path = /nfs/home
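For reference, the colon syntax mentioned in the comment above lets the host path differ from the path seen inside the container; a hypothetical example (made-up source path):

bind path = /nfs/system/nice-2016:/opt/nice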

When I enter my sandbox as root and attempt to enable DCV as follows, it fails because it cannot find the 32-bit OpenGL library...
Singularity rhel69_ansys182:/scratch/sandboxes_temp> dcvadmin enable
ERROR: cannot find system 32 bit OpenGL library.

Since dcv must be enabled by root, am I chasing a false hope that NICE can work within a container for non-root container users?
Non-root user in container...
Singularity rhel69_ansys182:/scratch/sandboxes_temp> dcvadmin enable
ERROR: Only root can enable DCV.

The glxinfo comparison below, inside the container on the left and outside the container on the right, shows that my NICE OpenGL is not working properly inside the container...


Has anyone in this Singularity user group had success at getting NVIDIA/NICE-enabled graphics to work in a container?

Thanks for any input. I've also reached out to the vendor of NICE, but they haven't provided any helpful information yet.

Richard
[Inline image: glxinfo output inside the container (left) vs. outside the container (right)]

Will Furnass

Feb 19, 2018, 3:52:26 PM
to singu...@lbl.gov
Hi,

Do you have 32-bit mesa/opengl libs installed inside your container? 
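(A quick way to check from inside the container; these commands assume RHEL 6 package naming, with the 32-bit libraries living under /usr/lib:)

rpm -qa | grep -i mesa-lib          # look for the .i686 variants
ls -l /usr/lib/libGL.so* /usr/lib/libGLU.so* 2>/dev/null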

Not sure if it's helpful but here are some notes on how I installed Abaqus + VirtualGL in a Singularity container:

http://learningpatterns.me/posts-output/2018-01-30-abaqus-singularity/

Cheers,

Will





--
Dr Will Furnass | Research Software Engineer
Dept of Computer Science | University of Sheffield
+44 (0) 114 22 21903 | http://rse.shef.ac.uk
@willfurnass | http://learningpatterns.me
Works for Insigneo.org: Mon, Tues, Fri

Richard Powell

Feb 19, 2018, 4:04:21 PM
to singularity
Many thanks Will...I'll check this out.

Richard

Gabe Turner

Feb 19, 2018, 5:56:17 PM
to singu...@lbl.gov
Yes, NICE-DCV can work in a Singularity container, but, as Will noted, you will need all of the requisite libraries installed within the container. I'm pretty sure that binding /usr/lib64 won't work, as I think that /usr is masked. And even if it did work, you wouldn't want your RHEL 6.9 container using the /usr/lib64 from RHEL6.4, as that could potentially cause all manner of problems.

Try these in the Include: parameter of your bootstrap file:

mesa-libGL mesa-libGL.i686 mesa-libGLU mesa-libGLU.i686 mesa-libEGL.i686 mesa-libEGL mesa-libGL-devel mesa-libGLU-devel mesa-dri-drivers mesa-dri-drivers.i686 mesa-dri1-drivers mesa-dri1-drivers.i686 libjpeg-turbo openssh-clients openssl-libs.x86_64 openssl-libs.i686 xorg-x11-drv-nvidia libffi libffi.i686

That's what I've got for my Singularity image in which I need to run an app that can use DCV.
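For completeness, here is a minimal sketch of how that list might sit in a yum-based bootstrap definition file. The MirrorURL is a placeholder for whatever RHEL 6.9 / CentOS 6.9 repository you actually use, yum is added so the package manager exists inside the container, and a couple of the packages (e.g. xorg-x11-drv-nvidia) may need extra repositories enabled:

BootStrap: yum
OSVersion: 6.9
MirrorURL: http://your-rhel69-mirror.example.com/6.9/os/x86_64/
Include: yum mesa-libGL mesa-libGL.i686 mesa-libGLU mesa-libGLU.i686 mesa-libEGL mesa-libEGL.i686 mesa-libGL-devel mesa-libGLU-devel mesa-dri-drivers mesa-dri-drivers.i686 mesa-dri1-drivers mesa-dri1-drivers.i686 libjpeg-turbo openssh-clients openssl-libs.x86_64 openssl-libs.i686 xorg-x11-drv-nvidia libffi libffi.i686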

Gabe



Richard Powell

Feb 19, 2018, 7:16:10 PM
to singu...@lbl.gov
Thanks Gabe, I agree that binding lib64 is not a proper strategy. I will check on those Include packages and also await information from the vendor. Initially, I tried a NICE install in a container without bindings, and then started adding binds to reverse-engineer the NICE install. I'm hopeful the vendor can give details that'll allow hardware acceleration using the NICE OpenGL libraries. Thanks for the input.


John Hearns

Feb 20, 2018, 2:13:44 AM
to singu...@lbl.gov
Richard,
   I have done a bit of work with NICE DCV in the past. Not in containers I must admit. Regarding bindings,
remember that DCV 'works' by substituting the OpenGL library. So you will have to have the DCV library inside your container.
That's what the dcv on / dcv off command does - it 'swaps in' the library.
I apologise if this remark does not add anything to the discussion.
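(If it helps, here is an illustrative way to see which libGL an application inside the container actually resolves, so you can tell whether the DCV copy is the one being picked up; it assumes glxinfo is installed:)

$ /sbin/ldconfig -p | grep 'libGL.so.1'
$ ldd $(which glxinfo) | grep libGL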

ps. Containerized DCV - awesome!

pps. A completely off-the-wall request, and I guess I should ask NICE about this one!
Would DCV work on a system without an NVIDIA card?  I guess not, as it needs H.264 encoding, and the instructions are very firm about having the NVIDIA drivers!

Richard Powell

Feb 20, 2018, 3:31:58 PM
to singularity
John, thus far I'm not having success with my attempts to get NICE containerized. I am hampered by an inability to use the --nv switch in Singularity, because the kernel of my host OS (RHEL 6.4) does not support PR_SET_NO_NEW_PRIVS. Therefore, I'm forced to attempt extraction of the NVIDIA drivers inside the container. On my attempts to do so, something is amiss with my 32-bit NVIDIA drivers, because attempts to install NICE inside the container report:

Step no: 2 of 5 | System Check

--------------------------------------------------------------------------------

Checking Operating System... Ok

Checking Xorg server........ Ok

Checking NVIDIA card........ Skipping NVIDIA check because lspci is missing.

Checking NVIDIA driver...... Failed


I desperately need some help on how to replicate what the Singularity "--nv" switch is doing, so I can manually supply the driver within my RHEL 6.9 container, hosted on my RHEL 6.4 server. Inability to containerize NICE-DCV may push me away from Singularity. Very frustrated on this topic...
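My current understanding (possibly incomplete) is that --nv essentially bind-mounts the host's NVIDIA userspace libraries and device files into the container, so a manual equivalent would look roughly like the sketch below; the driver version and staging paths are examples only, not my exact setup:

# On the host: stage the NVIDIA userspace libraries and tools somewhere bindable
mkdir -p /nfs/system/nvidia/lib64 /nfs/system/nvidia/bin
cp -a /usr/lib64/libGL.so.361.45.18 \
      /usr/lib64/libnvidia-glcore.so.361.45.18 \
      /usr/lib64/libnvidia-tls.so.361.45.18 \
      /usr/lib64/libnvidia-ml.so.361.45.18 \
      /usr/lib64/libcuda.so.361.45.18 /nfs/system/nvidia/lib64/
cp -a /usr/bin/nvidia-smi /nfs/system/nvidia/bin/
/sbin/ldconfig -n /nfs/system/nvidia/lib64    # create the .so.1 links from the SONAMEs

# In singularity.conf: bind the staging area into the container
# (/dev is normally available, so the /dev/nvidia* device nodes come along for free)
bind path = /nfs/system/nvidia

# Inside the container: put the staged files on the search paths
export PATH=/nfs/system/nvidia/bin:$PATH
export LD_LIBRARY_PATH=/nfs/system/nvidia/lib64:$LD_LIBRARY_PATH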

Richard Powell

Feb 21, 2018, 4:26:46 PM
to singularity
Success on this effort now! If you have a need to containerize NICE-DCV, perhaps the attached information can help out. If I left out something huge, or you find a smoother path, feel free to let me know.

Thanks Everyone
Richard
Singularity_Google_group.pdf

Richard Powell

Feb 22, 2018, 1:08:43 PM
to singularity
An omission from my document in the previous post: the step 3 item in the post-recipe section mentions make_links.sh and make_links_32bit.sh; both require you to first export NVID_VER= to match your NVIDIA driver version, and I left out this export command. Attached is rev1 with this addition. In case anyone's interested, I'll be keeping this thread up to date over time, with application testing next on my list.
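For anyone reading along without the PDF, the gist of that step is creating versioned-library symlinks keyed off NVID_VER; a hypothetical reconstruction (not the literal contents of the attached scripts) might look like:

# hypothetical make_links.sh-style step for the 64-bit libraries;
# make_links_32bit.sh would do the same under the 32-bit library directory
export NVID_VER=361.45.18          # must match your installed NVIDIA driver
cd /usr/lib64
for lib in libGL libnvidia-glcore libnvidia-tls libnvidia-ml libcuda; do
    [ -e ${lib}.so.${NVID_VER} ] || continue
    ln -sf ${lib}.so.${NVID_VER} ${lib}.so.1
    ln -sf ${lib}.so.1 ${lib}.so
done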
Singularity_Google_group_rev1.pdf

Richard Powell

Feb 22, 2018, 1:12:42 PM
to singularity
A quick shout-out to Dario La Porta at AWS Professional Services for his excellent vendor support in helping me containerize NICE-DCV 2016! It's a great thing to see vendors take on support tasks that further the Singularity cause...

Richard Powell

