[faster-rcnn] Out of memory when using default net(VGG16)

Rimphy Darmanegara

May 14, 2016, 1:21:56 AM
to Caffe Users
Hello everyone, sorry if this is a repeated question.

I'm trying to run tools/demo.py from faster-rcnn. It works fine with the options --net=zf and --cpu, but when I run it without any options (default net VGG16) I get the following error message:

F0514 12:10:08.283615 16873 syncedmem.cpp:56] Check failed: error == cudaSuccess (2 vs. 0)  out of memory
*** Check failure stack trace: ***
Aborted (core dumped)

I'm using a single NVIDIA GTX-980 with 4GB of memory. CMIIW, but the paper says it was run on a K40, which (I've checked) has 12GB of memory.
Is there any way/config to get this program to work on my GTX-980?

Any hint is greatly appreciated.

Thank you.

dejian...@gmail.com

May 15, 2016, 10:23:56 AM
to Caffe Users
This is quite easy: just decrease the batch_size in both your train.prototxt and test.prototxt. BTW, it seems newer versions of Caffe merge those two files into a single file named train_val.prototxt.
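
For example, in a typical Caffe data layer the field to lower is batch_size (this is a generic sketch; the layer name, LMDB path, and values here are illustrative, not taken from faster-rcnn):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  data_param {
    source: "examples/my_dataset_lmdb"  # hypothetical LMDB path
    batch_size: 16   # halve this (e.g. 16 -> 8) until the net fits in GPU memory
    backend: LMDB
  }
}
```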

Rimphy Darmanegara

May 19, 2016, 12:52:46 PM
to Caffe Users
Looks like the configuration is in a *.pt file, not a *.prototxt.
The file py-faster-rcnn/models/pascal_voc/VGG16/faster_rcnn_alt_opt/faster_rcnn_test.pt looks like this:

name: "VGG_ILSVRC_16_layers"
input: "data"
input_shape {
dim: 1
dim: 3
dim: 224
dim: 224
}
...

The dim: 1 is the batch_size, right?
So, any suggestions on which of these values to change?
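
For reference, Caffe blob dimensions are ordered (N, C, H, W) — batch size, channels, height, width — so the excerpt above reads as (annotations mine):

```
input: "data"
input_shape {
  dim: 1    # N: batch size (already 1 here)
  dim: 3    # C: color channels
  dim: 224  # H: input height in pixels
  dim: 224  # W: input width in pixels
}
```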