Using only one GPU among several


bob jones

Aug 27, 2015, 12:25:44 PM
to torch7
I have two GPUs and two programs I want to run, and I would like each program to run on its own GPU. If I call cutorch.setDevice(N1) for program 1 and cutorch.setDevice(N2) for program 2, will that ensure each program runs on its own GPU? If not, how do I do this?

alban desmaison

Aug 27, 2015, 12:50:16 PM
to torch7
Yes, that will work, but the libraries will be initialized on both devices.
You can use the environment variable CUDA_VISIBLE_DEVICES so that each process only sees one GPU:

In one shell:

export CUDA_VISIBLE_DEVICES=0

th script1.lua


In another shell:

export CUDA_VISIBLE_DEVICES=1

th script2.lua


To use both GPUs, you can set:

export CUDA_VISIBLE_DEVICES=0,1
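
Inside each script, picking the device is then straightforward. Below is a minimal Lua sketch of what script1.lua might contain, assuming it is launched with CUDA_VISIBLE_DEVICES=0 as above so that only one GPU is visible (note that cutorch device indices are 1-based, unlike the 0-based IDs in CUDA_VISIBLE_DEVICES):

-- script1.lua: a minimal sketch, assuming only one GPU is visible
require 'cutorch'

print('visible GPUs: ' .. cutorch.getDeviceCount())  -- prints 1 under the mask
cutorch.setDevice(1)                                 -- the single visible device

-- any CUDA tensor created from here on lives on that GPU
local x = torch.CudaTensor(1000, 1000):uniform()
print('running on device ' .. cutorch.getDevice())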

Greg Heinrich

Oct 1, 2015, 10:55:27 AM
to torch7
Hi Alban,
I am curious: why are the libraries initialised on all devices? I noticed that a fair amount of memory is allocated on all GPUs, even if only one is used to run the main computations (through a call to cutorch.setDevice()). Is it possible to refrain from allocating memory on all GPUs programmatically, without changing the environment variables?

Thanks!

alban desmaison

Oct 2, 2015, 1:29:46 PM
to torch7
It's due to the design of the CUDA runtime, I think: when you initialize CUDA, it initializes on all visible devices.
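
If you want to see how much each device has reserved, one quick way (a sketch using cutorch's built-in helpers; the exact numbers will vary with driver and GPU model) is to query every visible device from a Torch session:

require 'cutorch'

-- report free/total memory on every device cutorch can see
for dev = 1, cutorch.getDeviceCount() do
  local freeMem, totalMem = cutorch.getMemoryUsage(dev)
  print(string.format('GPU %d: %.1f MB free of %.1f MB',
                      dev, freeMem / 2^20, totalMem / 2^20))
end

Running this with and without CUDA_VISIBLE_DEVICES set shows which devices cutorch has touched.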