Hi,
I am running Caffe on a Tesla K80. My understanding is that the K80 contains two GPUs, each with 12GB of memory. This seems consistent with the output I get from caffe/build/tools/caffe device_query, which reports two GPU IDs.
My question is: is there any way to get Caffe to utilise the memory on both GPUs (i.e. the full 24GB) whilst training one model, so that I can increase the batch size?
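For context, the closest thing I have found so far is Caffe's built-in multi-GPU data parallelism, which splits each batch across the GPUs rather than pooling their memory into a single 24GB space. A rough sketch of the invocation (the solver path here is just a placeholder for my actual solver file):

```shell
# Train on both GPUs of the K80 (device IDs 0 and 1).
# Each GPU runs the per-GPU batch_size from the train prototxt,
# so the effective batch size is batch_size x 2 -- but each
# replica of the model must still fit within a single GPU's 12GB.
caffe train --solver=solver.prototxt --gpu 0,1

# Equivalently, use all visible GPUs:
caffe train --solver=solver.prototxt --gpu all
```

If I understand correctly, this lets me double the effective batch size, but it does not help if a single batch's activations already exceed 12GB on one GPU. Is that right, or is there some way to actually share the memory across both devices?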