to Caffe Users
What is the best way to train a network to solve a classification and regression task simultaneously and deal with missing labels in some training data?
Miquel Martí
Nov 30, 2016, 10:21:33 PM
On Thursday, 1 December 2016 at 2:29:20 UTC+9, Filipe Trocado Ferreira wrote:
Filipe Trocado Ferreira
Dec 1, 2016, 6:04:39 PM
to Caffe Users
Thanks for the response. I'm aware of the concept. My question is specifically about how to deal with missing labels. For example, how can I ignore some labels in the Euclidean Loss layer?
Miquel Martí
Dec 6, 2016, 1:04:50 AM
to Caffe Users
I think you will have to modify the Euclidean Loss layer yourself so that it contributes zero loss when no label is available; I don't think the current implementation of the loss layers supports this. You might also want to implement the asynchronous weight update idea from the paper, to avoid problems when a mini-batch contains few or no samples for a given task.
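For reference, here is a minimal NumPy sketch of the masking idea (not Caffe C++ code, and the NaN-as-missing encoding is my own assumption — you could equally use a separate mask blob): entries with a missing label contribute zero to both the loss and the gradient, matching Caffe's EuclideanLoss normalization of 1/(2N).

```python
import numpy as np

def masked_euclidean_loss(pred, label):
    """Euclidean (L2) loss that ignores missing labels.

    Missing labels are assumed to be encoded as NaN (an illustrative
    convention, not part of Caffe). Masked entries contribute zero loss
    and zero gradient. `pred` and `label` have shape (N, D).
    """
    mask = ~np.isnan(label)                   # True where a label exists
    diff = np.where(mask, pred - label, 0.0)  # zero out missing entries
    n = pred.shape[0]
    loss = np.sum(diff ** 2) / (2.0 * n)      # Caffe EuclideanLoss convention
    grad = diff / n                           # gradient w.r.t. pred
    return loss, grad

# Example: the second sample's first label is present, its partner is not.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
label = np.array([[1.0, np.nan], [0.0, 4.0]])
loss, grad = masked_euclidean_loss(pred, label)  # loss = 3^2 / (2*2) = 2.25
```

In a real Caffe layer you would apply the same mask in both `Forward_cpu` and `Backward_cpu`, so the missing entries are invisible to the solver.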
On Friday, 2 December 2016 at 8:04:39 UTC+9, Filipe Trocado Ferreira wrote: