I pass the argument --use-gl=egl to Chromium so that it gets GPU acceleration in headless mode.
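For reference, this is roughly how Chromium gets launched from Node (a minimal sketch assuming Puppeteer; the actual index.js may differ):

    // Minimal sketch (assumption: Puppeteer drives Chromium; the real launch code may differ).
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({
        headless: true,
        args: ['--use-gl=egl'],  // request EGL-based GPU acceleration in headless mode
      });
      const page = await browser.newPage();
      await page.goto('https://example.com');
      // ... render and measure fps here ...
      await browser.close();
    })();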
I have run it in various environments that have a GPU and it worked fine with high fps everywhere (e.g. 60 on macOS, 50~55 on Ubuntu on an EC2 g4dn instance, ...).
But it doesn't work in the AWS CodeBuild LINUX_GPU_CONTAINER environment. It seems to fall back to the default software renderer, SwiftShader, judging by its poor fps of 1~2, which is the same number I get in other environments when I force SwiftShader.
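(For reference, an fps number like the ones above can be obtained by counting requestAnimationFrame ticks in the page; this is only a sketch, not necessarily how index.js measures it.)

    // Hypothetical fps probe: count requestAnimationFrame callbacks over one second.
    // On SwiftShader this comes out around 1~2, on a working GPU path around 50~60.
    async function measureFps(page) {
      return page.evaluate(() => new Promise((resolve) => {
        let frames = 0;
        const tick = () => { frames += 1; requestAnimationFrame(tick); };
        requestAnimationFrame(tick);
        setTimeout(() => resolve(frames), 1000);  // frames rendered in 1 s ≈ fps
      }));
    }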
CodeBuild's LINUX_GPU_CONTAINER has Tesla GPUs with the following driver:
[Container] 2020/11/27 11:01:20 Running command nvidia-smi
Fri Nov 27 11:01:20 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.87.00    Driver Version: 418.87.00    CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:1B.0 Off |                    0 |
| N/A   37C    P0    37W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000000:00:1C.0 Off |                    0 |
| N/A   39C    P0    36W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00000000:00:1D.0 Off |                    0 |
| N/A   39C    P0    38W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla V100-SXM2...  On   | 00000000:00:1E.0 Off |                    0 |
| N/A   40C    P0    42W / 300W |      0MiB / 16130MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
I suspect that Chromium should be linking against NVIDIA's EGL implementation that ships with the NVIDIA driver, instead of Mesa's or ANGLE's... but I don't know how to test that.
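One way I can think of to check which implementation actually ended up in use (a sketch, not verified on CodeBuild) is to read the unmasked WebGL renderer string from a page: a "SwiftShader" string means the software fallback, while a Tesla/NVIDIA (or ANGLE-on-NVIDIA) string would mean the driver's EGL is really being used.

    // Sketch: query the unmasked renderer string via WEBGL_debug_renderer_info.
    async function getRendererString(page) {
      return page.evaluate(() => {
        const gl = document.createElement('canvas').getContext('webgl');
        if (!gl) return 'no webgl context';
        const info = gl.getExtension('WEBGL_debug_renderer_info');
        return info
          ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL)
          : gl.getParameter(gl.RENDERER);
      });
    }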
How can I enable GPU acceleration?

For reference, here is the Chromium log from a CodeBuild run:
[Container] 2020/11/30 16:14:19 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2020/11/30 16:14:19 Phase context status code: Message:
[Container] 2020/11/30 16:14:19 Entering phase BUILD
[Container] 2020/11/30 16:14:19 Running command dbus-run-session -- bash -c 'DBUS_SYSTEM_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" node index.js'
[1130/161419.945622:INFO:cpu_info.cc(53)] Available number of cores: 32
[1130/161419.945660:INFO:cpu_info.cc(53)] Available number of cores: 32
[1130/161419.945692:VERBOSE1:zygote_main_linux.cc(217)] ZygoteMain: initializing 0 fork delegates
[1130/161419.945718:VERBOSE1:zygote_main_linux.cc(217)] ZygoteMain: initializing 0 fork delegates
[1130/161419.994646:VERBOSE1:webrtc_internals.cc(118)] Could not get the download directory.
[1130/161419.999335:ERROR:gl_surface_egl.cc(772)] EGL Driver message (Critical) eglInitialize: xcb_connect failed
[1130/161419.999454:ERROR:gl_surface_egl.cc(772)] EGL Driver message (Critical) eglInitialize: xcb_connect failed
[1130/161419.999517:ERROR:gl_surface_egl.cc(772)] EGL Driver message (Critical) eglInitialize: xcb_connect failed
[1130/161419.999580:ERROR:gl_surface_egl.cc(772)] EGL Driver message (Critical) eglInitialize: xcb_connect failed
[1130/161419.999636:ERROR:gl_surface_egl.cc(772)] EGL Driver message (Error) eglInitialize: eglInitialize
[1130/161419.999692:ERROR:gl_surface_egl.cc(1313)] eglInitialize Default failed with error EGL_NOT_INITIALIZED
[1130/161419.999752:ERROR:gl_initializer_linux_x11.cc(160)] GLSurfaceEGL::InitializeOneOff failed.
[1130/161419.999818:VERBOSE1:gpu_init.cc(361)] gl::init::InitializeGLNoExtensionsOneOff failed
[1130/161420.000941:VERBOSE1:device_data_manager_x11.cc(216)] X Input extension not available
[1130/161420.001793:VERBOSE2:vaapi_video_encode_accelerator.cc(240)] VaapiVideoEncodeAccelerator():
[1130/161420.001886:ERROR:viz_main_impl.cc(150)] Exiting GPU process due to errors during initialization
[1130/161420.001904:VERBOSE2:vaapi_video_encode_accelerator.cc(966)] DestroyTask():
[1130/161420.002048:VERBOSE2:vaapi_video_encode_accelerator.cc(256)] ~VaapiVideoEncodeAccelerator():
[1130/161420.004241:VERBOSE2:thread_state.cc(415)] [state:0x55d4ad9a2100] ScheduleGCIfNeeded
[1130/161420.010979:VERBOSE2:thread_state.cc(415)] [state:0x55d4ad9a2100] ScheduleGCIfNeeded
[1130/161420.011165:VERBOSE2:thread_state.cc(415)] [state:0x55d4ad9a2100] ScheduleGCIfNeeded
[1130/161420.011326:VERBOSE1:va_stubs.cc(779)] dlopen(libva.so.2) failed.
[1130/161420.011416:VERBOSE1:va_stubs.cc(780)] dlerror() says:
libva.so.2: cannot open shared object file: No such file or directory