If I have a 4 x 100 RGB image, the input in Lasagne would be 3 x 4 x 100 (channels x rows x columns, ignoring the batch dimension). Usually, a convolutional network on an image applies filters that move across both spatial dimensions (e.g. 3 x 2 x 10 filters, i.e. `filter_size=(2, 10)` spanning all 3 channels) using `Conv2DLayer`.
However, suppose I wanted to use filters of dimension 3 x 4 x 10, spanning the full height of the image, so they can only move along the long dimension. Would this be a 1D convolution, since the filter moves in only one direction, or would it still count as a 2D convolution? (I suppose in that case the filter fits the height exactly, so there is only one valid position vertically anyway.) Should I use `Conv1DLayer` or `Conv2DLayer`? It feels like having a 1D input (a time series/signal) with two channel-like dimensions.
If I instead flattened the input by stacking the RGB channels into a 12 x 100 array and applied a 1D convolution with 12 x 10 filters, would that be equivalent in some way?
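To check my intuition, here is a minimal NumPy sketch (not Lasagne itself; shapes and the cross-correlation convention are my assumptions) comparing a full-height 2D "convolution" on the 3 x 4 x 100 input against a 1D convolution on the flattened 12 x 100 input:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 100))   # channels x height x width
w = rng.standard_normal((3, 4, 10))    # one filter spanning the full height

# 2D cross-correlation with a full-height filter: it can only
# slide along the width, giving 100 - 10 + 1 = 91 output positions.
out2d = np.array([np.sum(x[:, :, i:i + 10] * w) for i in range(91)])

# Flatten channels and height into 12 "channels", then convolve in 1D.
x1d = x.reshape(12, 100)
w1d = w.reshape(12, 10)
out1d = np.array([np.sum(x1d[:, i:i + 10] * w1d) for i in range(91)])

# Both formulations multiply and sum exactly the same numbers.
assert np.allclose(out2d, out1d)
```

Since the reshape pairs each (channel, row) with the matching filter entry, both loops compute identical dot products, which suggests the two formulations are mathematically the same operation.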