Utilization of GPU for training on ImageNet


harman

Jun 1, 2017, 12:56:42 PM
to Caffe Users
I have created an LMDB of ImageNet following the tutorial given here. I have a Tesla K20c GPU with 4 GB of memory, and I am using the SqueezeNet architecture as given here. The problem I am facing is as follows:
1. When I train the model with a batch_size of 4, 8, 64, 128, or 256, the GPU memory used for the computation stays constant at 267 MB (although my GPU has 4 GB). The time taken for one iteration is very high (~46 seconds for a batch size of 128). Is there any way to accelerate training?

The output of nvidia-smi is as follows (as I said before, the memory usage is constant for any batch size; the process I'm running is './caffe'):
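For reference, a minimal sketch of where the relevant settings live and how a run is usually launched; the file names and paths below are placeholders, not necessarily the ones from this setup:

    # solver.prototxt should request the GPU explicitly
    grep solver_mode solver.prototxt          # expect: solver_mode: GPU

    # batch_size is set in the data layers of the net definition
    grep batch_size train_val.prototxt

    # launch training pinned to device 0 (the K20c)
    ./build/tools/caffe train --solver=solver.prototxt --gpu 0

If Caffe was built CPU-only (CPU_ONLY := 1 in Makefile.config) or solver_mode is set to CPU, training falls back to the CPU, which would be consistent with the small, constant memory footprint and the slow iterations.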

p.Paul

Jun 2, 2017, 4:13:48 AM
to Caffe Users

What about using multiple GPUs?
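For example, a reasonably recent BVLC Caffe build can spread one training run across several devices through the --gpu flag; a sketch, assuming the same placeholder solver path as above:

    # train on devices 0 and 1
    ./build/tools/caffe train --solver=solver.prototxt --gpu 0,1

    # or use every visible device
    ./build/tools/caffe train --solver=solver.prototxt --gpu all

Note that with multiple GPUs the batch_size in the prototxt is per device, so the effective batch size grows with the number of GPUs.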

Porschen Hund

Jun 4, 2017, 9:53:00 PM
to Caffe Users
If you check, the Volatile GPU-Util column is 0%, meaning you are not using the GPU at all. I have a similar problem. Volatile GPU-Util should be close to 100% if your GPU is being used properly.
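A quick way to confirm this while training is running (plain nvidia-smi, nothing Caffe-specific):

    # refresh the full nvidia-smi table every second
    watch -n 1 nvidia-smi

    # or log just utilization and memory once per second
    nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1

If Volatile GPU-Util stays at 0% throughout training, the job is almost certainly running on the CPU; checking solver_mode and the --gpu flag (see the sketch earlier in the thread) is the first thing to try.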