Hi, I'm curious whether it's possible to alter https://github.com/tensorflow/models/blob/master/swivel/swivel.py to run distributed. I understand it's meant to be used with a GPU, but I'm wondering whether it's practical to use a cluster of machines with many vCPUs instead.
I notice that these two locations pin ops to the CPU explicitly; presumably this assumes a single machine with a GPU:
Would it be straightforward to reuse the existing code and just swap the tf.Session instance for one connected to a distributed cluster via tf.train.ClusterSpec, or is there a better way?
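To make the question concrete, here's roughly the setup I have in mind — just a sketch with placeholder host names and ports, following the usual between-graph replication pattern, not something I've actually run against swivel.py:

```python
# Hypothetical cluster layout (host names and ports are placeholders).
cluster_def = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
}

# On each machine I would then do something like:
#
#   cluster = tf.train.ClusterSpec(cluster_def)
#   server = tf.train.Server(cluster, job_name="worker", task_index=0)
#   with tf.Session(server.target) as sess:
#       ...  # run swivel.py's existing training loop here
#
# i.e. replacing the plain tf.Session() that swivel.py currently creates,
# and presumably moving variables onto the "ps" job with tf.device().

print(sorted(cluster_def.keys()))  # → ['ps', 'worker']
```

Is that the right shape of change, or does the explicit CPU pinning in swivel.py conflict with device placement in a cluster like this?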