My question is about the relationship between the operation of convolution transpose and that of convolution. It is somewhat naïve, so sorry in advance, but it's also very simple.
Let's forget about padding and subsampling. Also, let's work with arrays of size N whose entries are written x(i), 0 <= i <= N-1, the index i being understood modulo N.
As I understand it, convolution transpose is defined as the transpose of the linear operator associated with a convolution. If we convolve, for example, an array x=(x(i)) with a filter f=(f(i)), then we get for the convolved array:
y(i) = (f * x)(i) = sum_j f(j) x(i-j) = sum_j f(i-j) x(j) = (M.x)(i), with M(i,j) = f(i-j).
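For concreteness, this identity between circular convolution and multiplication by the matrix M can be checked numerically; here is a minimal NumPy sketch (the array size and random data are illustrative):

```python
import numpy as np

N = 5
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
f = rng.standard_normal(N)

# Circular convolution: y(i) = sum_j f(j) x(i-j), indices taken mod N
y = np.array([sum(f[j] * x[(i - j) % N] for j in range(N)) for i in range(N)])

# Equivalent matrix form: M(i,j) = f(i-j) mod N, so that y = M.x
M = np.array([[f[(i - j) % N] for j in range(N)] for i in range(N)])

assert np.allclose(y, M @ x)
```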
It follows that the transpose of this linear operation is, denoting by Mt the transpose of M (i.e. Mt(i,j) = M(j,i)):
(Mt.y)(i) = sum_j M(j,i) y(j) = sum_j f(j-i) y(j) = (f^ * y)(i),
that is, a convolution with the filter f^ obtained from f by mirror symmetry, i.e. f^(i) = f(-i) = f(N-i) (modulo N). Of course this is well known.
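This transpose identity can likewise be verified numerically; a self-contained NumPy check (again with illustrative sizes and random data):

```python
import numpy as np

N = 5
rng = np.random.default_rng(1)
f = rng.standard_normal(N)
y = rng.standard_normal(N)

# Circular-convolution matrix: M(i,j) = f(i-j) mod N
M = np.array([[f[(i - j) % N] for j in range(N)] for i in range(N)])

# Flipped filter: f^(i) = f(-i) = f(N-i) mod N
f_flip = np.array([f[(-i) % N] for i in range(N)])

# Transpose of convolution = convolution with the flipped filter:
# (Mt.y)(i) should equal (f^ * y)(i) for all i
lhs = M.T @ y
rhs = np.array([sum(f_flip[j] * y[(i - j) % N] for j in range(N)) for i in range(N)])

assert np.allclose(lhs, rhs)
```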
My question is simply: why is a specific module needed for convolution transpose, rather than implementing it as another convolution (with the flipped filter)?
Thanks in advance for your answers.
Jacques Boutet de Monvel