How are multiple gpus utilized in tensorflow?


Shisho Sama

Dec 22, 2016, 9:45:00 AM12/22/16
to Discuss
I want to know how TensorFlow utilizes multiple GPUs so that I can decide whether to upgrade to a single more powerful card or to buy a second copy of my current card and run them in SLI.
For example, am I better off buying one Titan X 12 GB, or two GTX 1080 8 GB cards?
If I run the 1080s in SLI, does my effective memory double? That is, can I run a network that needs 12 GB or more of VRAM on them, or am I limited to 8 GB?
More generally, how is memory utilized in such scenarios? What happens if two different cards are installed (both NVIDIA)? Does TensorFlow utilize the available memory the same way? (Suppose one 980 and one 970!)

kan.l...@gmail.com

Dec 24, 2016, 8:19:24 PM12/24/16
to Discuss
Basically, you need to assign devices yourself if you want to fully utilize all your resources. By default, TF will pick up gpu:0 if possible. Here are the details: https://www.tensorflow.org/how_tos/using_gpu/#using_a_single_gpu_on_a_multi-gpu_system

Shisho Sama

Dec 25, 2016, 6:47:00 AM12/25/16
to Discuss, kan.l...@gmail.com
Thanks, I'll have a look at it.