Hi,
As far as I know, Caffe supports training on multiple GPUs.
After some research, unfortunately, I couldn't find a way to select multiple GPUs through the pycaffe interface for training.
I know we can use multiple GPUs from the command line:
caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 0,1
But my data comes from a memory_data_layer, so this isn't a feasible solution for me.
Does anyone know how to enable multiple GPUs through the pycaffe API during the training phase?
Thanks a lot!