I created a virtual machine with PyTorch pre-installed for this purpose, and I see it's possible to
manually add GPUs or change the number of CPUs and the amount of RAM later.
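For context, the VM was created roughly along these lines (a sketch only; the instance name, zone, machine type, and accelerator type are placeholders for whatever you pick):

```shell
# Create a Compute Engine VM from a Deep Learning VM image with PyTorch
# pre-installed. Machine type, GPU type/count, and zone are examples.
gcloud compute instances create my-pytorch-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --image-family=pytorch-latest-gpu \
    --image-project=deeplearning-platform-release \
    --maintenance-policy=TERMINATE \
    --boot-disk-size=100GB
```

Resizing this later (more CPUs/RAM, extra GPUs) is a manual operation on a stopped instance, which is exactly what prompts the question below.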
But what is the intended way for a cloud ML solution to
scale automatically?
Is this (only) possible with the solutions listed under "ARTIFICIAL INTELLIGENCE", such as "AI Platform (Unified)"?
Or are there yet other solutions I've overlooked?
I don't suppose this should be done with Cloud Run...? (I can't imagine how one would specify via a Dockerfile that GPUs should be attached.)
Do I understand correctly that AI Platform and the other "ARTIFICIAL INTELLIGENCE" solutions are the way to go for scalable, standard tasks like image classification, but custom pre-/post-processing code is currently only a beta feature? And if that doesn't work, should one set up a virtual machine instance on Compute Engine instead?
Thanks!
Sorry if this is not quite the right group - AI doesn't have its own general-purpose group.