Classification accuracy for floating point vector classes


Brent Komer

Mar 10, 2016, 3:13:46 PM3/10/16
to Caffe Users
Hi,

I'm trying to do classification with a convnet where my output labels are floating-point vectors (roughly 200 dimensions). The net is considered correct if the vector it returns is closer in distance to the correct label vector than to any other label vector. I'm fairly new to Caffe, so I'm not sure how to set this up to produce an accuracy score. For computing the loss I am using a EuclideanLoss layer between the network output and the label vector, which I think is correct. Accuracy seems more difficult; I haven't been able to find a layer type that does what I want (does one exist?). The simplest approach I can think of is a classifier that checks which label vector is closest, outputs 'correct' or 'incorrect', and reports an accuracy from that somehow. How would I do this in Caffe?

Thanks!

Bharat Bhusan Sau

Mar 10, 2016, 6:17:24 PM3/10/16
to Caffe Users
It seems to me you are trying to do regression with convnets. Your target value is not a class label but a vector, so using an Accuracy layer here does not make sense. You can use the same structure for the train and test prototxt; the output of the convnet can be a Euclidean loss (layer type: "EuclideanLoss").

Jan

Mar 11, 2016, 3:40:50 AM3/11/16
to Caffe Users
Mhm, implementing that with Caffe will be cumbersome at best: there is currently no layer that computes such an accuracy, and implementing one won't really work, since at runtime a layer only has access to the labels in the current batch, not to all labels, so it cannot compute your accuracy. The "easiest" way would probably be a wrapper script in Python that feeds all test samples through the (trained) net, saves the predictions, then loads all available label vectors and computes your accuracy score. As I said, not very nice.
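A minimal sketch of the accuracy computation such a wrapper script would do, assuming the predictions have already been collected (e.g. via pycaffe's net.forward()); the function name and data layout here are my own invention, not part of Caffe:

```python
import math

def nearest_label_accuracy(predictions, true_indices, label_vectors):
    """Fraction of predictions whose nearest label vector (by Euclidean
    distance) is the correct one.

    predictions   -- list of output vectors from the net
    true_indices  -- index into label_vectors of the correct label for
                     each prediction
    label_vectors -- all candidate label vectors
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    correct = 0
    for pred, true_idx in zip(predictions, true_indices):
        # Find the index of the closest label vector to this prediction.
        nearest = min(range(len(label_vectors)),
                      key=lambda i: dist(pred, label_vectors[i]))
        if nearest == true_idx:
            correct += 1
    return correct / float(len(predictions))
```

Collecting the predictions themselves would be a loop of net.forward() calls over the test set, extracting the output blob each time.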

On the other hand, this concept of accuracy seems very strange to me: since the training and test sets are only samples of a theoretically very large population, it does not really make sense to compare the individuals in these subsets with each other, ignoring that there might be other individuals with even better-fitting labels which just happen not to be in the train/test sets.

Jan

henok sahilu

Mar 17, 2016, 9:41:14 PM3/17/16
to Caffe Users
Use EuclideanLoss for the test phase as well, i.e. replace the accuracy layer with this:

layer {
  name: "losstest"
  type: "EuclideanLoss"
  bottom: "ip2"
  bottom: "label"
  top: "losstest"
  include {
    phase: TEST
  }
}