Upconvolution / Deconvolution in Keras?


ben.ma...@gmail.com

unread,
Apr 5, 2016, 6:16:36 PM4/5/16
to Keras-users
Different papers describe deconvolution / upconvolution used for segmentation tasks. As far as I understand: instead of mapping multiple activations to one output, upconvolution maps one input activation to multiple outputs, so it's basically convolution run backwards. I'm wondering how I can make use of this or implement such a layer in Keras. Right now, as a substitute, I use a combination of UpSampling2D(size=(4, 4)) followed by Convolution2D(K, 3, 3, subsample=(2, 2)), but I'm not sure whether this is as good as the deconvolution from the papers.


kai...@gmail.com

unread,
May 11, 2016, 11:45:38 AM5/11/16
to Keras-users, ben.ma...@gmail.com
Any news on that issue?

François Chollet

unread,
May 11, 2016, 12:11:33 PM5/11/16
to kai...@gmail.com, Keras-users, ben.ma...@gmail.com
Convolution2D can act as a deconvolution layer. I don't understand what you are asking.

On 11 May 2016 at 08:45, <kai...@gmail.com> wrote:
Any news on that issue?

--
You received this message because you are subscribed to the Google Groups "Keras-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to keras-users...@googlegroups.com.
To view this discussion on the web, visit https://groups.google.com/d/msgid/keras-users/34a34656-6c21-4ebe-a5ac-42fd644625c3%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.

ben.ma...@gmail.com

unread,
May 11, 2016, 1:00:18 PM5/11/16
to Keras-users, kai...@gmail.com, ben.ma...@gmail.com
You gave that same response in the past, I think, but people are still asking. Can you clarify your answer with an example? How can Convolution2D increase the output size instead of decreasing it (due to subsampling)?

François Chollet

unread,
May 11, 2016, 1:56:23 PM5/11/16
to ben.ma...@gmail.com, Keras-users, kai...@gmail.com
Just use combinations of UpSampling2D and Convolution2D as appropriate. That's how you would build a convolutional autoencoder, for instance.
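(For anyone reading later: a minimal sketch of that decoder pattern, written with the newer tf.keras layer names — Conv2D rather than Convolution2D; the filter counts and feature-map sizes here are made up for illustration.)

```python
import numpy as np
from tensorflow.keras import layers, models

# Decoder half of a convolutional autoencoder: alternate nearest-neighbour
# upsampling with learned convolutions to grow 8x8 feature maps to 32x32.
inp = layers.Input(shape=(8, 8, 16))
x = layers.UpSampling2D(size=(2, 2))(inp)                      # 8x8  -> 16x16
x = layers.Conv2D(8, (3, 3), padding="same", activation="relu")(x)
x = layers.UpSampling2D(size=(2, 2))(x)                        # 16x16 -> 32x32
out = layers.Conv2D(1, (3, 3), padding="same", activation="sigmoid")(x)
decoder = models.Model(inp, out)

y = decoder(np.zeros((1, 8, 8, 16), dtype="float32"))
print(y.shape)  # (1, 32, 32, 1)
```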

Christian S. Perone

unread,
May 11, 2016, 2:31:37 PM5/11/16
to François Chollet, ben.ma...@gmail.com, Keras-users, kai...@gmail.com
If by deconvolution we mean transposed convolution (also called fractionally strided convolution), then using Convolution2D to emulate a transposed convolution carries a performance penalty [1]. There are functions implemented in Theano and used by Lasagne [2], and also in TensorFlow [3], that can be used instead. However, it is not clear to me how UpSampling2D can be used to emulate transposed convolution; to my understanding, emulating it requires padding only. For instance, the transpose of convolving a 3x3 kernel over a 4x4 input with stride 1 is equivalent to convolving a 3x3 kernel over a 2x2 input padded with a 2x2 border of zeros, using unit strides (see [1] for this example). However, I'm no expert on transposed convolutions.
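(That equivalence from [1] is easy to check numerically in plain NumPy — nothing Keras-specific here, and the helper name is mine:)

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' cross-correlation: no padding, stride 1."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
k = rng.standard_normal((3, 3))
x = rng.standard_normal((4, 4))

# Build the (4, 16) matrix C so that the forward conv is y = C @ x.ravel():
# each row is the 3x3 kernel placed at one of the four valid positions.
C = np.zeros((4, 16))
for r, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    patch = np.zeros((4, 4))
    patch[i:i + 3, j:j + 3] = k
    C[r] = patch.ravel()

y = conv2d_valid(x, k)                      # forward: 4x4 -> 2x2
assert np.allclose(y.ravel(), C @ x.ravel())

# Transposed convolution: 2x2 -> 4x4, defined via C^T ...
xt = (C.T @ y.ravel()).reshape(4, 4)

# ... equals convolving the *flipped* kernel over y zero-padded
# with a 2-pixel border (the example from [1]).
xt_direct = conv2d_valid(np.pad(y, 2), k[::-1, ::-1])
assert np.allclose(xt, xt_direct)
```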






--
"Forgive, O Lord, my little jokes on Thee, and I'll forgive Thy great big joke on me."

Ernst

unread,
May 12, 2016, 7:34:02 PM5/12/16
to Keras-users, francois...@gmail.com, ben.ma...@gmail.com, kai...@gmail.com

TensorFlow has a 2D convolution transpose op you can use to build a custom layer for Keras.
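(For reference, the TensorFlow op in question is tf.nn.conv2d_transpose; a minimal shape check, with arbitrary sizes and zero weights just to illustrate the layout:)

```python
import tensorflow as tf

# A stride-2 transposed convolution upsamples: 8x8 -> 16x16 here.
# Filter layout is [height, width, out_channels, in_channels].
x = tf.zeros((1, 8, 8, 16))
w = tf.zeros((3, 3, 8, 16))
y = tf.nn.conv2d_transpose(x, w, output_shape=(1, 16, 16, 8),
                           strides=(1, 2, 2, 1), padding="SAME")
print(y.shape)  # (1, 16, 16, 8)
```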

kai...@gmail.com

unread,
May 17, 2016, 1:00:46 PM5/17/16
to Keras-users, francois...@gmail.com, ben.ma...@gmail.com, kai...@gmail.com
As far as I understood [1], whether zero-padding alone suffices to implement the transposed convolution actually depends on which kind of transposed convolution you need: you described it for a transposed convolution over an unpadded input with unit stride. UpSampling2D and ZeroPadding2D can be used jointly, depending on the particular case, to produce the input map that then gets convolved in the next step. But instead of introducing zeros between the original units (pixels), UpSampling2D just replicates them, i.e. does nearest-neighbour upsampling. The padding can then be handled with ZeroPadding2D, and finally the convolution filters can be learned on that input.
The performance penalty referred to in [1] results from the extra computation caused by enlarging the input with zeros, which can be implemented more efficiently. However, since UpSampling2D does not produce these zero patterns, the output will most likely differ from the 'performant' way of doing it, and the equivalence from [1] no longer holds. I guess TensorFlow implements it more efficiently, but I haven't looked at the code yet.
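(The difference between the two input maps is easy to see on a toy 2x2 example in NumPy — variable names are mine:)

```python
import numpy as np

x = np.array([[1., 2.],
              [3., 4.]])

# Nearest-neighbour upsampling, as UpSampling2D does: replicate each pixel.
nn = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
# nn == [[1,1,2,2],[1,1,2,2],[3,3,4,4],[3,3,4,4]]

# Zero insertion, as a stride-2 transposed convolution implies:
# keep the original pixels, separated by zeros.
zi = np.zeros((4, 4))
zi[::2, ::2] = x
# zi == [[1,0,2,0],[0,0,0,0],[3,0,4,0],[0,0,0,0]]

# Convolving the same kernel over these two maps gives different results,
# so UpSampling2D + Conv2D is not numerically a transposed convolution.
assert not np.array_equal(nn, zi)
```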

shrutis...@gmail.com

unread,
Mar 13, 2017, 12:39:52 AM3/13/17
to Keras-users, ben.ma...@gmail.com
Keras already provides Deconvolution2D for the deconvolution purpose. Kindly refer to this link; it may help you out :)
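(For readers on newer versions: the layer was later renamed, and in Keras 2 / tf.keras it is Conv2DTranspose. A minimal shape check, with arbitrary sizes:)

```python
import numpy as np
from tensorflow.keras import layers

# A stride-2 transposed convolution doubles the spatial size: 8x8 -> 16x16,
# with 8 output channels.
layer = layers.Conv2DTranspose(8, (3, 3), strides=(2, 2), padding="same")
y = layer(np.zeros((1, 8, 8, 16), dtype="float32"))
print(y.shape)  # (1, 16, 16, 8)
```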

tharun...@gmail.com

unread,
Jun 26, 2017, 12:28:18 PM6/26/17
to Keras-users, ben.ma...@gmail.com