Hi all,
For my application I'm trying to implement a convolution layer (say "X") in caffe that uses the same filter as another convolution layer (say "Y"), but rotated 180 degrees. X should not learn its own weights; its kernel should always be Y's kernel rotated by 180 degrees. I've tried doing something like
solver.net.params["deconv1"][0].data[...] = solver.net.params["conv1"][0].data[:, :, ::-1, ::-1]
as one would do for deconvnets, but since the full network (including both X and Y) still needs to be trained, this will not work here.
I've managed to get layer X to share its parameters with Y, but I cannot figure out how to apply the rotation to the shared kernel. Any ideas on how to do this without rewriting the conv layer code?
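For clarity, here is a small numpy sketch of the rotation I have in mind, assuming caffe's usual (num_output, channels, height, width) weight layout; the kernel values here are made up for illustration:

```python
import numpy as np

# Hypothetical weight blob in caffe's (num_output, channels, h, w) layout
kernel = np.arange(2 * 3 * 3 * 3, dtype=np.float32).reshape(2, 3, 3, 3)

# Rotating each 2D filter by 180 degrees = flipping both spatial axes
rotated = kernel[:, :, ::-1, ::-1]

# What I would like to happen automatically inside the net, rather than
# having to re-copy it manually after every solver step:
# solver.net.params["deconv1"][0].data[...] = \
#     solver.net.params["conv1"][0].data[:, :, ::-1, ::-1]
```

Copying like this after each `solver.step(1)` would keep the two blobs in sync numerically, but the gradients flowing through X would not be accumulated into Y's weights, which is exactly the problem.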
Thanks,
Niek