Hello,
Thanks for the suggestion on using Nvidia. An Nvidia card worked much better.
In case anyone needs them, here are my notes on getting VirtualGL to work:
Thanks,
Jason
==============================================================================
## These instructions are my rough notes. I still need to do a wipe and rebuild to verify and clean up the instructions.
## These instructions are tailored for installing VirtualGL on a server running RHEL 7.6 with two graphics cards. Xorg DISPLAY :0 runs on an integrated Matrox graphics card that is not suitable for use with VirtualGL. The second video card is an Nvidia M4000. We set up a second Xorg instance running on DISPLAY :1 and use that for VirtualGL. DISPLAY :1 is running NOTHING, not even lightdm/gdm.
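# A quick way to see both cards before starting (exact model strings will vary by box):
lspci | grep -Ei 'vga|3d|display'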
# My efforts with an AMD card failed. Version 18.50 of the AMD proprietary driver caused the kernel to crash 5 minutes after boot, and the Red Hat-supplied drivers would not work with VirtualGL.
# All commands should be run as root. For the most part, these can be copied and pasted straight into a command line (bash).
# lightdm is probably not needed. The server can run the multi-user target without a GUI.
yum -y install lightdm
systemctl stop gdm
systemctl disable gdm
systemctl enable lightdm
systemctl start lightdm
systemctl isolate multi-user.target
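# If the box should boot straight to the console from now on, the usual way to make that permanent is:
systemctl set-default multi-user.target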
# We need both the 32-bit and 64-bit VirtualGL packages if the application is 32-bit.
rpm -e VirtualGL --allmatches
yum -y localinstall /tmp/VirtualGL-2.6.1.x86_64.rpm
yum -y localinstall /tmp/VirtualGL-2.6.1.i386.rpm
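# To check whether an application actually needs the 32-bit package (replace the path with your binary):
file /path/to/application | grep '32-bit'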
##
# install the nvidia driver from elrepo
# Remove ocl-icd because its OpenCL files conflict with the nvidia driver.
yum -y remove ocl-icd.x86_64
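# nvidia-detect itself comes from elrepo; if it isn't already on the box, install it first:
yum -y install nvidia-detect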
nvidia-detect | grep -qs kmod && yum -y install $(nvidia-detect)
# lammps will be reinstalled by puppet on the next puppet run
# Reboot so the nvidia driver can detect everything.
reboot
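# After the reboot, nvidia-smi (installed with the driver) should list the M4000 if the driver loaded cleanly:
nvidia-smi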
# Grab the bus ID for the nvidia card. This should look similar to "PCI:1:0:0". lspci can provide the same info, but the format is slightly different; nvidia-xconfig should provide a format that is compatible with "Xorg -isolateDevice".
bus=$(nvidia-xconfig --query-gpu-info|grep PCI|awk '{print $4}')
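echo "$bus"   # sanity check; expect something like PCI:1:0:0
# lspci shows the same device with a hex bus ID, e.g. "01:00.0":
lspci | grep -i nvidia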
####
### Configure a new Xorg server on DISPLAY :1. Configuration is done in two stages, because nvidia-xconfig assumes DISPLAY :0 and doesn't allow you to override it on the command line.
Xorg :1 -configure -isolateDevice $bus
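# When run as root, "Xorg -configure" writes the generated config to /root/xorg.conf.new, which is presumably the starting point for the second stage.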