Hi,
I'm interested in working with 1D data in Keras (waveforms and frequency spectra), similar to work I've done with Lasagne [1].
When I use the convolution and pooling layers following the mnist_cnn example, but in 1D instead of 2D, I get unexpected layer output sizes [2].
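Here's roughly the 1D stack I'm building (just a minimal sketch; border_mode='same' and the 32 filters / length-3 kernels are my stand-ins for the mnist_cnn values, so take the exact arguments loosely):

from keras.models import Sequential
from keras.layers.convolutional import Convolution1D, MaxPooling1D

model = Sequential()
# 1D analogue of the mnist_cnn conv/pool stack: 32 filters, filter length 3,
# with the input declared as (1, 28) to mirror the 2D (1, 28, 28) case
model.add(Convolution1D(32, 3, border_mode='same', input_shape=(1, 28)))
model.add(MaxPooling1D(pool_length=2))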
With both, the input shape looks correct:
2D: Initial input shape: (None, 1, 28, 28)
1D: Initial input shape: (None, 1, 28)
But after the same convolution, they look very different:
2D: Convolution2D (convolution2d) (None, 32, 28, 28)
1D: Convolution1D (convolution1d) (None, 1, 32)
While the 2D convolution results in (batch_size, nb_filters, img_rows, img_cols), the 1D convolution results in (batch_size, ?, nb_filters).
Then after a max-pooling operation it gets even stranger:
2D: MaxPooling2D (maxpooling2d) (None, 32, 14, 14)
1D: MaxPooling1D (maxpooling1d) (None, 0, 32)
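If I'm reading the shapes right, the 0 presumably comes from pooling a length-1 axis with a pool length of 2, i.e. floor(1/2):

steps, pool_length = 1, 2
print(steps // pool_length)  # 0 -- matches the (None, 0, 32) above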
Since it seems like the 2D implementation is correct, should I stick with 2D convolutions for now and just use 1 for the row size? Or would that be inefficient?
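In other words, something like this (just a sketch of the workaround I have in mind: treating each length-28 signal as a 1x28 "image" with 1xN kernels and 1x2 pooling; the specific arguments are again placeholders):

from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D

model = Sequential()
# each length-28 signal becomes a 1x28 "image": (channels, rows, cols)
model.add(Convolution2D(32, 1, 3, border_mode='same', input_shape=(1, 1, 28)))
model.add(MaxPooling2D(pool_size=(1, 2)))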
Thanks!
Kyle