Multiple losses and tasks, training accuracy much less than individual


dusa

Mar 21, 2017, 4:57:29 PM3/21/17
to Caffe Users
I have trained two separate models on different modalities of the same sequence of image frames: one on RGB, the other on a second modality of the same frames (e.g. depth images). One network is trained with 3 labels, the other with 14 labels (roughly subcategories of the first set).

Then, using net surgery, I pull the weights from these models into a new prototxt that takes both data inputs and both label sets. The body is the same as the two previously trained models, now running in parallel, and I have frozen their learning since I use net surgery to transfer the final activation layers. On top of that I simply define one task loss and accuracy for the RGB input with the 3-label set, and another task loss and accuracy for the modal image with the 14-label set. I have read that when there are multiple losses, Caffe simply averages them.

However, while training, the accuracy on the 14-label set is terrible: it never goes above 0.4, stays mostly below it, and keeps fluctuating. The task with 3 labels is fine. This is strange because when trained individually, the 14-label model reached around 75%. I have checked whether the labels were being passed wrongly, since I modified the data layer to support multiple inputs and labels, but that seems fine. What is wrong?
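For reference, here is a minimal sketch of how the two task heads are wired up in the merged prototxt (layer and blob names here are made up for illustration; note that Caffe actually sums each loss scaled by its `loss_weight`, which defaults to 1 for loss layers):

```
# Hypothetical fragment: two parallel task heads on top of a shared, frozen body.
layer {
  name: "fc_rgb"                 # head for the RGB stream, 3 classes
  type: "InnerProduct"
  bottom: "feat_rgb"
  top: "fc_rgb"
  inner_product_param { num_output: 3 }
}
layer {
  name: "loss_rgb"
  type: "SoftmaxWithLoss"
  bottom: "fc_rgb"
  bottom: "label_rgb"
  top: "loss_rgb"
  loss_weight: 1                 # contribution to the total (summed) loss
}
layer {
  name: "fc_modal"               # head for the modal stream, 14 classes
  type: "InnerProduct"
  bottom: "feat_modal"
  top: "fc_modal"
  inner_product_param { num_output: 14 }
}
layer {
  name: "loss_modal"
  type: "SoftmaxWithLoss"
  bottom: "fc_modal"
  bottom: "label_modal"
  top: "loss_modal"
  loss_weight: 1                 # raising this would emphasize the 14-class task
}
layer {
  name: "acc_modal"
  type: "Accuracy"
  bottom: "fc_modal"
  bottom: "label_modal"
  top: "acc_modal"
  include { phase: TEST }
}
```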