I am using the latest master-branch Caffe.
However, when I try this same data layer with a different network (AlexNet, also from shelhamer's FCN), I hit an error:
$ /root/caffe/build/tools/caffe train -solver solver.prototxt -weights fcn-alexnet-pascal.caffemodel -gpu 0
...
[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.NetParameter: 375:7: Message type "caffe.LayerParameter" has no field named "layer".
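For what it's worth, a text-format protobuf error this deep in the file (375:7) can be produced by an unbalanced brace much earlier in the prototxt: if a layer block is missing its closing "}", the parser treats the next "layer {" as a field of the still-open caffe.LayerParameter, which has no field named "layer". A minimal illustration (hypothetical fragment, not the actual net):

layer {
  name: "data"
  type: "Python"
  # missing closing "}" here
layer {            # parsed as a field of the previous LayerParameter:
  name: "conv1"    # "has no field named 'layer'"
  type: "Convolution"
}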
My Python layer (which, again, works with the VOC net) looks like:
layer {
  name: "data"
  type: "Python"
  top: "data"
  top: "label"
  python_param {
    module: "jrlayers"
    layer: "JrDataLayer"
    param_str: "{\'images_dir\': \'/root/imgdbs/train/\', \'labels_dir\': \'/root/imgdbs/labels/\', \'seed\': 1337, \'split\': \'train\', \'imagesfile\': \'trainimages.txt\', \'mean\': (120, 120, 120)}"
  }
}
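Separately from the parse error, param_str has to survive two levels of quoting and come out as a valid Python literal, since a Python data layer typically evaluates it in setup(). A quick standalone sanity check (the dict here mirrors the one above; ast.literal_eval stands in for whatever parsing jrlayers actually does):

```python
import ast

# Hypothetical check: the param_str handed to a Python layer must parse
# as a Python literal once the prototxt quoting is stripped off.
param_str = ("{'images_dir': '/root/imgdbs/train/', "
             "'labels_dir': '/root/imgdbs/labels/', "
             "'seed': 1337, 'split': 'train', "
             "'imagesfile': 'trainimages.txt', "
             "'mean': (120, 120, 120)}")

params = ast.literal_eval(param_str)
print(params['split'])  # -> train
print(params['mean'])   # -> (120, 120, 120)
```

If the string has a stray or unescaped quote, ast.literal_eval raises a SyntaxError, which is an easy way to catch quoting mistakes before Caffe does.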
I took out the phase: TRAIN and phase: TEST directives and split the original train_val.prototxt into train.proto and val.proto, in an attempt to make this all look as much like the working net as possible, to no avail.
If I try to avoid the Python layer by using LMDB instead, I hit a different issue involving the size of the labels vs. the size of the output.
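That label/output size mismatch is probably the usual FCN one: a pixelwise loss needs per-pixel labels, while a classification-style LMDB stores one label per image. A shape sketch (hypothetical dimensions, assuming a 21-class PASCAL-style score map):

```python
# For a pixelwise loss (as in an FCN), the label blob must match the score
# blob spatially: score (N, C, H, W) against label (N, 1, H, W).
# A classification-style LMDB gives one label per image, shape (N,),
# so the spatial dimensions cannot line up.
score_shape = (1, 21, 500, 500)  # N, C, H, W from the net's score layer
label_fcn = (1, 1, 500, 500)     # per-pixel labels: spatial dims match
label_cls = (1,)                 # one label per image: no spatial dims

print(score_shape[2:] == label_fcn[2:])  # True
print(score_shape[2:] == label_cls[2:])  # False
```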
Anyway, since the only thing I am changing is the weights file, is there information in the weights file about what types of layers it expects? That seems unlikely, but I can't think of any other reason this wouldn't work.
A somewhat related question: is there any difference between 'finetuning', i.e. starting from a weights file, as in
caffe train -solver solver.prototxt -weights fcn-alexnet-pascal.caffemodel
and 'resuming training', as in
caffe train --solver=solver.prototxt --snapshot=train_iter_10000.solverstate