Hi all,
I am trying to train a fully convolutional network for semantic segmentation on the Pascal VOC dataset. To do so, I use the provided Python data layer to feed the images and labels into the network. When I try to start training from Python with the following commands:
import caffe
caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.get_solver("solver.prototxt")
solver.solve()
I get this error:
F0727 23:27:14.297534 6628 net.cpp:141] Check failed: param_size <= num_param_blobs (0 vs. -805306368) Too many params specified for layer data
If I try to start training again, the same error message appears with a different negative number. If I keep relaunching the Python commands over and over, the training process eventually starts, but I have my doubts that I can trust the results the solver returns. The problem appears in CPU as well as GPU mode. Starting the training process directly from the command line ('caffe train -solver ...') does not work at all when there is a Python layer in the network.
I'm using Ubuntu 17.10 with CUDA 9.0 + cuDNN v7 and Python 3.6.5. Caffe is installed from the Ubuntu repositories.
Any help would be much appreciated. Thanks.