Use only one GPU pycaffe


mau

May 28, 2017, 2:22:27 PM
to Caffe Users
Hi all! :)

I wrote my code intending to use only one of the four available GPUs, but when I run the Python code, my process shows up on all of them.

Here are the relevant lines of code:

caffe.set_mode_gpu()
caffe.set_device(1)

How can I set up my code to use only one GPU?

Thanks

Jonathan R. Williford

May 29, 2017, 3:22:22 AM
to mau, Caffe Users
What does nvidia-smi show when your code is training, and when it is not running? Does specifying a different GPU (e.g. 0) change anything?

Jonathan

--
You received this message because you are subscribed to the Google Groups "Caffe Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to caffe-users+unsubscribe@googlegroups.com.
To post to this group, send email to caffe...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/caffe-users/1a2071b6-23ac-4603-badf-d97a749d7593%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

mau

May 29, 2017, 4:21:06 AM
to Caffe Users
nvidia-smi shows the process's PID on all GPUs, even if I specify another GPU (like 0, 2, ...). Also, when training has finished, the process is still active.

mau

May 29, 2017, 12:16:36 PM
to Caffe Users
Below, output of nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.93.02              Driver Version: 361.93.02                 |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M40           On   | 0000:03:00.0     Off |                    0 |
|  0%   40C    P0    66W / 250W |    299MiB / 11448MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla M40           On   | 0000:04:00.0     Off |                    0 |
|  0%   35C    P0   143W / 250W |    468MiB / 11448MiB |     73%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla M40           On   | 0000:82:00.0     Off |                    0 |
|  0%   26C    P0    65W / 250W |    114MiB / 11448MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla M40           On   | 0000:83:00.0     Off |                    0 |
|  0%   26C    P0    65W / 250W |    114MiB / 11448MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1811    C   /usr/bin/python                                106MiB |
|    0     98306    C   /usr/bin/python                                188MiB |
|    1     98306    C   /usr/bin/python                                466MiB |
|    2     98306    C   /usr/bin/python                                112MiB |
|    3     98306    C   /usr/bin/python                                112MiB |
+-----------------------------------------------------------------------------+

where process 98306 is the IPython session running my Caffe method.
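The pattern in the processes table becomes obvious when the rows are grouped by PID; a small sketch, using the rows above as a hard-coded sample, that tallies which GPUs each PID holds memory on:

```python
import re

# Process rows copied from the nvidia-smi output above.
rows = """\
|    0      1811    C   /usr/bin/python                                106MiB |
|    0     98306    C   /usr/bin/python                                188MiB |
|    1     98306    C   /usr/bin/python                                466MiB |
|    2     98306    C   /usr/bin/python                                112MiB |
|    3     98306    C   /usr/bin/python                                112MiB |
"""

# Map each PID to the list of GPU indices it has memory allocated on.
gpus_by_pid = {}
for line in rows.splitlines():
    m = re.match(r"\|\s+(\d+)\s+(\d+)\s+\w+\s+\S+\s+\d+MiB", line)
    if m:
        gpu, pid = int(m.group(1)), int(m.group(2))
        gpus_by_pid.setdefault(pid, []).append(gpu)

print(gpus_by_pid[98306])  # -> [0, 1, 2, 3]: one process, memory on every GPU
print(gpus_by_pid[1811])   # -> [0]
```

So a single training process (98306) has allocations on all four devices, even though only GPU 1 shows real utilization.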


Jonathan R. Williford

May 29, 2017, 2:30:30 PM
to mau, Caffe Users
All of the computations are being performed on the selected GPU. I don't believe the Python interface allows utilizing multiple GPUs yet. It seems to be a bug that it loads data into the memory of all the GPUs. Can you post an issue on GitHub with your model definitions and files (or a smaller version that reproduces the problem)? I would include the nvidia-smi output.

Jonathan


mau

May 29, 2017, 2:41:46 PM
to Caffe Users
I thought the same thing!
In fact, I've posted my problem on Caffe's issue page:
https://github.com/BVLC/caffe/issues/5661

The strange thing is that I can reproduce the bug with only four lines of code:

import caffe
def foo():
    caffe.set_mode_gpu()
    caffe.set_device(1)

No matter what I do, the problem comes down to caffe.set_mode_gpu()...
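[A common workaround for pinning a process to a single physical GPU, though not one mentioned in this thread, is to mask the other devices with the CUDA_VISIBLE_DEVICES environment variable before caffe (and with it CUDA) is loaded. A minimal sketch, with the caffe calls shown as comments since they require a GPU build:]

```python
import os

# Hide every physical GPU except GPU 1 from this process. This must happen
# before any CUDA library is initialised, i.e. before `import caffe`.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(len(visible))  # -> 1: the process now sees exactly one device

# With the mask in place, the remaining device is renumbered to 0:
#   import caffe
#   caffe.set_mode_gpu()
#   caffe.set_device(0)  # 0 == physical GPU 1 after masking
```

Because the driver never exposes the other devices to the process, nothing can be allocated on them, regardless of what set_device() is later called with.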



Przemek D

May 30, 2017, 2:05:31 AM
to Caffe Users
Isn't it a known thing that, as long as you didn't compile in CPU-only mode, Caffe always loads some libraries and uses a bit of GPU memory (on all of them?) every time, even if you're using set_mode_cpu()? That's how I understand Evan's post from the other day.

mau

May 30, 2017, 5:05:27 AM
to Caffe Users
That seems reasonable, but I would expect to read at least a warning in the documentation...
If that's the case, why doesn't it happen with command-line calls?

