Hi all,
Thank you for the work you're all doing to make ML compilers more accessible and composable. It means I can add support for all sorts of devices and compilers to
my project with minimal effort.
I'm currently adding CUDA support via
this PJRT plugin target.
Are there docs anywhere that explain what setup is required for CUDA devices? For example, I'm uncertain what to use for `create_options`. The tests suggest `visible_devices: {0}`, but with that I'm seeing
> "no supported devices found for platform CUDA"
Without that argument, I see
> "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR"
I'd like to figure out as much as I can myself, so if there are docs, please do send them my way. BTW, my setup works for the corresponding CPU target.
Thanks,
Joel