Hmm... not exactly.
I understand your point that caffe test does not actually "test" the model; it just runs the model as is. I also understand that caffe test is meant for train_val.prototxt, not deploy.prototxt.
However, running the model always requires some input data, otherwise the forward propagation cannot be computed. For train_val.prototxt, the input data is the LMDB dataset, specified in the Data layer. Something like:
layer {
  type: "Data"
  ...
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    ...
  }
}
There is no question here so far.
A deploy.prototxt, by contrast, typically looks like:
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
}
There is no parameter pointing to an LMDB dataset or any other source that could be fed into the network.
The question comes when I intentionally use deploy.prototxt instead of train_val.prototxt.
Even though I understand from your previous explanation that caffe test should use train_val.prototxt and has nothing to do with deploy.prototxt, that does not explain why caffe test is able to produce some mysterious loss values when I intentionally pass it deploy.prototxt, which specifies no data source at all. How can the forward propagation compute a loss without any input data? If there actually is some input, where does it come from?
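My current guess is that the Input layer simply allocates a data blob without filling it from any source, so the forward pass runs on whatever the blob holds (in practice, zeros), and the "mysterious" loss is just the network's response to that. A numpy-only sketch (no Caffe involved; the flatten-plus-fully-connected "network", its weights, and the label are all made up for illustration) shows that a softmax cross-entropy loss on an all-zero input is still a perfectly finite number:

```python
import numpy as np

# Stand-in for a deploy-time forward pass: assume the Input layer only
# allocates a zero-filled blob with the shape from deploy.prototxt.
x = np.zeros((1, 1, 28, 28))

# Made-up network: flatten -> fully connected to 10 classes, zero bias.
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 28 * 28)) * 0.01
b = np.zeros(10)
logits = W @ x.reshape(-1) + b        # zeros in, so logits are all zero

# Softmax cross-entropy against an arbitrary label (class 0).
p = np.exp(logits - logits.max())
p /= p.sum()
loss = -np.log(p[0])
print(round(loss, 4))                 # equal logits -> loss = ln(10) ≈ 2.3026
```

So a loss value appearing does not by itself prove that real data was loaded; a constant loss near ln(number_of_classes) would in fact be a hint that the input is just zeros.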
On Thursday, March 1, 2018 at 7:52:31 PM UTC+8, Przemek D wrote: