Is there a mean I/U layer?

Eran Swears

Jan 8, 2016, 11:02:37 AM
to Caffe Users
I'm currently using the accuracy layer to assess the performance of my model. Is there a similar layer to get the mean I/U and other metrics that are reported in papers, such as top-1 and top-5 error rates?

Evan Shelhamer

Apr 17, 2016, 1:20:53 AM
to Eran Swears, Caffe Users
For intersection-over-union and other semantic segmentation metrics you can try the `score.py` module in the reference FCN code: http://fcn.berkeleyvision.org

Evan Shelhamer







Eran Swears

Apr 17, 2016, 8:43:37 AM
to Caffe Users
Thanks. I'll take a look at it when I get back to working on the FCN. I'm currently working on the Faster-RCNN.

Jeremy Rutman

May 19, 2016, 2:46:30 PM
to Caffe Users

You can run this straight from a caffemodel (without a solverstate) like this:

import caffe
import score  # score.py from the reference FCN repo (fcn.berkeleyvision.org)

# Load the trained weights directly into the test-time network definition.
net = caffe.Net('val.prototxt', 'voc8.5_snapshots/train_iter_57025.caffemodel', caffe.TEST)

# n_test_examples: set this to the number of images in your validation set
score.do_seg_tests(net, 666, None, range(1, n_test_examples))

(The 2nd argument is the iteration number. The third is save_format: pass None to skip saving, or set it to something with a .format attribute if you want the results saved.)
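For example, something like the following might work for saving. This is a guess on my part: I'm only sure that .format gets called on whatever you pass; I'm assuming it is treated as an output path template and the {} gets filled with the iteration number, and 'seg_results_iter_{}' is a made-up name:

# Assumption: save_format is an output path template, filled in via .format()
score.do_seg_tests(net, 666, 'seg_results_iter_{}', range(1, n_test_examples))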


You get output like:

>>> 2016-05-19 11:23:06.143133 Iteration 666 loss 0.949618636966
>>> 2016-05-19 11:42:36.737028 Iteration 666 overall accuracy 0.7707446375
>>> 2016-05-19 11:42:36.737095 Iteration 666 mean accuracy 0.0616112569969
>>> 2016-05-19 11:42:36.737365 Iteration 666 mean IU 0.0472266612742
>>> 2016-05-19 11:42:36.737458 Iteration 666 fwavacc 0.605455103405

overall acc - correct pixels / total pixels
mean acc - average of the per-class accuracies (correct pixels of class / total pixels of class)
mean IU - average of the per-class IU
fwavacc - frequency-weighted average accuracy
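In case it's useful, here is roughly how those four numbers fall out of the confusion matrix. This is a sketch from the definitions above, not a copy of score.py; hist is the matrix built by fast_hist (shown below), and the function name seg_metrics is mine:

import numpy as np

def seg_metrics(hist):
    # hist[i, j] = number of pixels of true class i predicted as class j
    hist = hist.astype(float)                      # avoid integer division
    overall_acc = np.diag(hist).sum() / hist.sum()
    per_class_acc = np.diag(hist) / hist.sum(1)    # nan for classes absent from the ground truth
    mean_acc = np.nanmean(per_class_acc)
    iu = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist))
    mean_iu = np.nanmean(iu)
    freq = hist.sum(1) / hist.sum()                # class frequencies in the ground truth
    fwavacc = (freq[freq > 0] * iu[freq > 0]).sum()
    return overall_acc, mean_acc, mean_iu, fwavacc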

These metrics are all generated from a confusion matrix, which is incredibly elegantly calculated with:

import numpy as np

def fast_hist(a, b, n):
    # a = flattened ground-truth labels, b = flattened predictions, n = number of classes
    k = (a >= 0) & (a < n)
    return np.bincount(n * a[k].astype(int) + b[k], minlength=n**2).reshape(n, n)
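A quick toy check of what fast_hist returns (made-up labels, three classes; over a whole validation set you just sum these per-image histograms):

import numpy as np

gt   = np.array([0, 0, 1, 1, 2, 2])   # flattened ground-truth labels
pred = np.array([0, 1, 1, 1, 2, 0])   # flattened predicted labels
hist = fast_hist(gt, pred, 3)
# hist[i, j] counts pixels of true class i predicted as class j:
# [[1 1 0]
#  [0 2 0]
#  [1 0 1]]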