I'm considering building a Linux ML workstation. If I bought a CUDA-supported GPU, would I be able to use that extra compute power for AutoML? I see GPU compatibility mentioned online for H2O4GPU, Driverless AI, and Deep Water, but not much for AutoML. If anyone has had success with this, or knows the limitations, I'd appreciate the advice.
Thanks!