I want to use a trained convnet to predict labels for every pixel of an image. I have already seen this notebook example:
http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/net_surgery.ipynb
The problem with this notebook example is that it doesn't generate an output for every window. It just gives an 8x8 prediction map instead of a 451x451 prediction map.
I am also aware of FCN in the model zoo:
https://github.com/longjon/caffe/tree/future
But I was looking for a simpler solution than figuring out how to use that branch. Besides, I don't want to train anything; I just want to do inference.
Is there a way to run a trained convnet densely over an image, faster than the naive sliding-window approach, ideally using only the Python interface and the Caffe master branch?
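For context, the kind of densification I mean is the shift-and-stitch trick: run the coarse-output net on every shifted copy of the input and interleave the coarse maps back into a dense one. Below is a minimal NumPy sketch of just the stitching logic; `toy_net` is a hypothetical stand-in (it simply subsamples with the net's total stride) for the actual Caffe forward pass, not real pycaffe code:

```python
import numpy as np

def toy_net(img, stride=4):
    # Hypothetical stand-in for a convnet whose total stride is `stride`:
    # it emits one "prediction" per stride x stride window (here, the
    # window's top-left pixel), giving a coarse output map.
    return img[::stride, ::stride]

def shift_and_stitch(img, net, stride=4):
    """Run `net` on all stride*stride shifted copies of `img` and
    interleave the coarse outputs into one dense prediction map."""
    H, W = img.shape
    dense = np.zeros((H, W), dtype=img.dtype)
    for dy in range(stride):
        for dx in range(stride):
            coarse = net(img[dy:, dx:], stride)   # coarse map for this shift
            h, w = coarse.shape
            # Scatter the coarse predictions into their dense positions.
            dense[dy:dy + h * stride:stride,
                  dx:dx + w * stride:stride] = coarse
    return dense
```

With the toy subsampling "net", stitching all shifts exactly reconstructs a per-pixel map; with a real net you would replace `toy_net` by a `net.forward()` call on the shifted input. Note this still costs stride^2 forward passes, so it is only a constant-factor win over the naive per-window approach, unlike a true fully-convolutional rewrite.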