How are multiple gpus utilized in Caffe?


Shisho Sama

Dec 21, 2016, 2:16:28 PM
to Caffe Users
I want to know how Caffe utilizes multiple GPUs, so I can decide whether to upgrade to a new, more powerful card or just buy a second identical card and run them in SLI.
For example, am I better off buying one Titan X 12 GB, or two GTX 1080 8 GB cards?
If I run the 1080s in SLI, will my effective memory be doubled? That is, can I run a network that needs 12 GB or more of VRAM on them, or am I still limited to 8 GB? How is memory utilized in such scenarios? And what happens if two different cards are installed (both NVIDIA)? Does Caffe utilize the available memory the same way (say, one 980 and one 970)?

Patrick McNeil

Dec 22, 2016, 9:19:59 AM
to Caffe Users
I don't believe Caffe supports SLI mode.  The two GPUs are treated as separate cards.

When you run Caffe and add the '-gpu' flag (assuming you are using the command line), you can specify which GPU to use (-gpu 0 or -gpu 1, for example). You can also specify multiple GPUs (-gpu 0,1,3), including all GPUs (-gpu all).
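For instance (a minimal sketch; the solver file name solver.prototxt is just a placeholder):

    # train on one specific GPU
    caffe train -solver solver.prototxt -gpu 0

    # train on a chosen subset of GPUs
    caffe train -solver solver.prototxt -gpu 0,1,3

    # train on all visible GPUs
    caffe train -solver solver.prototxt -gpu all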

When you execute using multiple GPUs, Caffe runs the training across all of the GPUs and then merges the training updates across the models. This effectively doubles the batch size (or more, if you have more than 2 GPUs) for each iteration.
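To make the arithmetic concrete (a rough illustration; the batch size and file names are hypothetical):

    # train_val.prototxt sets batch_size: 32 in its TRAIN data layer
    caffe train -solver solver.prototxt -gpu 0,1
    # each GPU processes its own batch of 32, so one iteration
    # covers 32 * 2 = 64 images before the merged update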

In my case, I started with an NVIDIA GTX 970 (4 GB card) and then upgraded to an NVIDIA GTX Titan X (Maxwell version with 12 GB) because my models were too large to fit in the GTX 970. I can run some of the smaller models across both cards (even though they are not the same) as long as the model fully fits into the 4 GB of the smaller card. Using the standard ImageNet model, I could execute across both cards and cut my training time in half.
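If you want to check how close a model comes to the smaller card's limit, you can watch per-card memory while training runs (nvidia-smi ships with the NVIDIA driver):

    # full per-GPU status table
    nvidia-smi

    # just the name and memory columns, refreshed every 5 seconds
    nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv -l 5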

If I recall correctly, other frameworks (TensorFlow and maybe Microsoft's CNTK) support splitting a model among different nodes, effectively increasing the available GPU memory the way you are describing. Although I haven't personally tried either one, I understand you can define on a per-layer basis where each layer executes.

Patrick

Shisho Sama

Dec 22, 2016, 9:34:31 AM
to Caffe Users
Thanks a lot @Patrick, this is very helpful and it also cleared things up for me!