http://caffe.berkeleyvision.org/tutorial/loss.html states that any layer can contribute to the overall loss by adding the field "loss_weight: 1" to the layer definition. However, when I add this field to the layers in the train_val.prototxt file for the BVLC Reference CaffeNet, the network initializes layers such as conv1_conv1_0_split (which does not appear during normal initialization) and then fails with the error "Duplicate blobs produced by multiple sources." How is the overall loss computed? How do I make each layer of the network contribute to the loss? Is there a better way to accomplish this without using the loss_weight parameter?
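For reference, here is roughly the kind of edit I made (a minimal sketch using the conv1 layer from the standard train_val.prototxt; all other fields left as shipped):

```prototxt
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  loss_weight: 1   # the field I added per the tutorial; this is when the
                   # *_split layers and the duplicate-blob error appear
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
```

I made the same one-line addition to the other convolution and fully-connected layers as well.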
Joe