Asking advice for deconvnet


Dong Ki Kim

unread,
Nov 26, 2014, 7:54:33 PM11/26/14
to caffe...@googlegroups.com

Hello. 
To understand what features a CNN has learned, I am trying to implement the "deconvnet" described in the following paper: http://www.matthewzeiler.com/pubs/arxive2013/arxive2013.pdf.
The paper visualizes the learned features via the process shown in the image below:

[Image: the deconvnet visualization pipeline from the paper; not included here]
Is there any function that already does deconvnet in Caffe? Or could you give me some advice for implementing it?
Thank you for your help!

Jason Yosinski

unread,
Dec 4, 2014, 2:27:24 PM12/4/14
to Dong Ki Kim, caffe...@googlegroups.com
Hi Dong Ki,

I don't believe any deconv functionality is currently implemented in Caffe. However, if you just want to visualize features as in [1] (that is, using the forward filters in reverse instead of learning the backwards filters), you basically just want to do backprop, except at the relu layers you should apply the relu again instead of multiplying by the relu derivative (0 or 1). For more info, see Section 4 of [2].

I think the easiest way to implement this would be to add a Layer::Deconv_cpu method that simply calls Layer::Backward_cpu for the conv and pooling layers. For the relu layers it should apply the relu instead of calling Layer::Backward_cpu.
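A toy sketch of the difference between the two relu backward rules, in plain NumPy (the function names here are illustrative, not Caffe APIs):

```python
import numpy as np

def relu_backward(bottom, top_grad):
    """Ordinary backprop through ReLU: gate the incoming gradient by
    where the *forward input* was positive (derivative is 0 or 1)."""
    return top_grad * (bottom > 0)

def relu_deconv(top_grad):
    """'Deconvnet' rule from Zeiler & Fergus: ignore the forward input
    and simply apply the ReLU to the signal propagated backwards."""
    return np.maximum(top_grad, 0.0)

x = np.array([-1.0, 2.0, 3.0])   # forward input to the relu
g = np.array([0.5, -0.5, 1.0])   # signal arriving from the layer above

print(relu_backward(x, g))  # masked where x <= 0: [0., -0.5, 1.]
print(relu_deconv(g))       # negatives clipped:   [0.5, 0., 1.]
```

The rest of the backward pass (conv and pooling layers) is unchanged; only this relu rule differs between true backprop and the deconvnet visualization.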

jason

[1] Zeiler and Fergus, 2013, "Visualizing and Understanding Convolutional Networks"
[2] Simonyan et al, 2014, "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps"



---------------------------
Jason Yosinski, Cornell Computer Science Ph.D. student
http://yosinski.com/    +1.719.440.1357


Caffe User

unread,
Jan 15, 2015, 11:51:47 PM1/15/15
to caffe...@googlegroups.com
Dear Jason Yosinski,

I appreciate your valuable advice!
I am currently working on it, and I will post results!

Respectfully yours,

Dong Ki Kim 

Bjarke Felbo

unread,
Feb 9, 2015, 3:58:55 PM2/9/15
to caffe...@googlegroups.com
Hi Dong Ki Kim,

Have you had any luck with your implementation?

Cheers,
Bjarke

Caffe User

unread,
Feb 9, 2015, 5:06:38 PM2/9/15
to caffe...@googlegroups.com
Hello, Bjarke.
No, I have not been able to implement the deconvnet yet.
I am still working on it!

Best, 

Dong Ki Kim

D Rossi

unread,
Feb 12, 2015, 9:24:44 PM2/12/15
to caffe...@googlegroups.com
If the only difference between backprop and deconvnets occurs at the relu layers, couldn't we just add a parameter to the relu layer? Would adding a boolean parameter to the relu layer, and then creating a new .prototxt file that sets this parameter for the relu layers, accomplish the same thing? The boolean parameter could be checked inside ReLULayer::Backward_cpu() and, if set, ReLULayer::Forward_cpu() would be applied instead. To visualize a network, one could then run the network forward and do a backward pass to generate all the gradients.
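A rough Python sketch of that idea (the class and its `deconv` flag are hypothetical stand-ins, not real Caffe code):

```python
import numpy as np

class ReLULayer:
    """Minimal stand-in for a relu layer with the proposed boolean
    parameter; in Caffe the flag would live in the .prototxt."""

    def __init__(self, deconv=False):
        self.deconv = deconv  # the proposed boolean parameter
        self.bottom = None

    def forward(self, bottom):
        self.bottom = bottom
        return np.maximum(bottom, 0.0)

    def backward(self, top_grad):
        if self.deconv:
            # Deconvnet mode: re-apply the relu to the backward signal,
            # exactly as the forward pass would.
            return np.maximum(top_grad, 0.0)
        # Normal backprop: multiply by the relu derivative (0 or 1).
        return top_grad * (self.bottom > 0)

layer = ReLULayer(deconv=True)
layer.forward(np.array([-2.0, 1.0]))
print(layer.backward(np.array([-1.0, -3.0])))  # [0. 0.]
```

Every other layer would keep its ordinary Backward_cpu, so a single flag on the relu layers would indeed switch the whole backward pass between backprop and the deconvnet visualization.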

vpestret

unread,
Feb 13, 2015, 6:37:59 AM2/13/15
to caffe...@googlegroups.com
There is a deconvolution layer implemented and it seems to have been merged (https://github.com/BVLC/caffe/pull/1615), but it is only useful for things like generating masks, not for reconstructing what the CNN really sees.

Caffe User

unread,
Feb 18, 2015, 9:21:20 AM2/18/15
to caffe...@googlegroups.com
I noticed that the author of the above paper offers a free MATLAB toolbox for deconvnet on his homepage (http://www.matthewzeiler.com/software/).
Has anyone imported CNN features learned with Caffe into the MATLAB toolbox for visualization?

Sincerely, 

Dong Ki Kim 

Sudeep Reddy Gaddam

unread,
Aug 20, 2015, 9:07:20 PM8/20/15
to Caffe Users, dk...@cornell.edu
Is there any other tool on top of Caffe that can show something like the top-9 patches that maximally activate each feature map at each layer, as Matt Zeiler showed in his video?
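To make concrete what I mean, here is a rough NumPy sketch of the patch selection step (the array shapes are made up; in practice the activations would come from a forward pass through the net):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend activations of one feature map over a batch of 100 images
# (shape: images x height x width).
acts = rng.standard_normal((100, 13, 13))

# Strongest activation of this feature map within each image...
per_image_max = acts.reshape(100, -1).max(axis=1)

# ...and the 9 images that activate it most strongly.
top9 = np.argsort(per_image_max)[::-1][:9]
print(top9)
```

The remaining (harder) step is mapping each maximal unit back to its receptive field in the input image to crop the actual patch.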


vpestret

unread,
Aug 21, 2015, 4:51:09 AM8/21/15
to Caffe Users, dk...@cornell.edu
The more interesting question here: has this (at least the patch) visualization actually helped anybody? I mean producing real results, not just material for a report.

shruti sneha

unread,
Dec 16, 2016, 12:29:52 AM12/16/16
to Caffe Users, dk...@cornell.edu, vladimir....@itseez.com
Hey, I have actually been working on this same concept as well. Besides some of the links mentioned above, I found https://github.com/guruucsd/CNN_visualization/blob/master/urban_tribe/caffe/deconv_demo.ipynb, where the visualization is implemented using the deconvolution idea. You can use it as a reference for visualizing your own dataset (I am working on this too), but various modifications are needed, and I am currently stuck on some of them. I hope this helps you all. If you succeed, please do ping me :)

Shruti