Hamming distance loss layer


ido ariav

Nov 12, 2017, 4:36:40 AM
to Caffe Users
Hi all,
I'm currently working on an FCN to perform semantic segmentation of an input image into 256 different categories.
These categories are actually the 8-bit binary representations of values from the problem we are dealing with.
Currently I use a softmax loss, but the problem is that the loss is the same whether I predict '200' or '101' instead of '100', for instance. Instead, I want the loss to take into account the Hamming distance between the binary representations of the class numbers. That is, the loss for predicting '1' (00000001) when the true label is '0' (00000000) should be lower than the loss for predicting '255' (11111111) when the label is '0' (00000000). This is much like how the Euclidean loss captures the distance between labels and predictions, except that I want to replace the L2 norm with the Hamming distance.
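To make the distance concrete, here is a quick illustration (plain Python, nothing Caffe-specific):

def hamming(a, b):
    # number of differing bits between two 8-bit class indices
    return bin(a ^ b).count('1')

print(hamming(0, 1))    # -> 1, should incur a small loss
print(hamming(0, 255))  # -> 8, should incur the largest possible loss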
Does anyone have an idea how to quickly implement this in Caffe?
Or, even better, can I adapt some existing loss layer for this purpose?
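
One direction I was considering is a Python loss layer that uses the expected Hamming distance under the softmax distribution, which is at least differentiable with respect to the probabilities. Below is a rough sketch; the module and layer names are my own, and I assume the layer sits on top of a plain Softmax layer whose output has been flattened to (N, 256) probabilities with integer labels of shape (N,) (I haven't worked out the reshaping for the full (N, C, H, W) FCN output, and I haven't verified the gradient):

import caffe
import numpy as np

def hamming_matrix(num_classes=256, num_bits=8):
    # H[i, j] = number of differing bits between class indices i and j
    bits = np.unpackbits(np.arange(num_classes, dtype=np.uint8)).reshape(num_classes, num_bits)
    return (bits[:, None, :] != bits[None, :, :]).sum(axis=2).astype(np.float32)

class ExpectedHammingLossLayer(caffe.Layer):
    # loss = mean over samples of sum_k p_k * hamming(label, k)
    def setup(self, bottom, top):
        self.H = hamming_matrix()

    def reshape(self, bottom, top):
        top[0].reshape(1)  # scalar loss

    def forward(self, bottom, top):
        probs = bottom[0].data                            # (N, 256) probabilities
        labels = bottom[1].data.astype(np.int32).ravel()  # (N,) integer labels
        self.weights = self.H[labels]                     # (N, 256) distances to label
        top[0].data[0] = (probs * self.weights).sum(axis=1).mean()

    def backward(self, top, propagate_down, bottom):
        # d loss / d p_k = hamming(label, k) / N; the Softmax layer below
        # this one propagates the gradient back to the logits
        if propagate_down[0]:
            n = bottom[0].data.shape[0]
            bottom[0].diff[...] = top[0].diff[0] * self.weights / n

It would be hooked up with something like this in the prototxt (this requires building Caffe with WITH_PYTHON_LAYER := 1; the blob names are just placeholders):

layer {
  name: "hamming_loss"
  type: "Python"
  bottom: "probs"
  bottom: "label"
  top: "loss"
  python_param { module: "hamming_loss_layer" layer: "ExpectedHammingLossLayer" }
  loss_weight: 1
}

Does that look like a reasonable way to do it, or is there a cleaner option?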

Thanks!