Let me jump into a minimal demonstration of the problem.
Let's start with a simple time series as input and try to build an autoencoder that simply Fourier transforms and then inverse-transforms our data.
If we try to do this:
inputs = Input(shape=(MAXLEN,1), name='main_input')
x = tf.spectral.rfft(inputs)
decoded = Lambda(tf.spectral.irfft)(x)
Then the third line throws an error when entered:
>> ValueError: Tensor conversion requested dtype complex64 for Tensor with dtype float32
The output of tf.spectral.irfft is float32, but Lambda seems to think it is complex64 (complex64 being the dtype of the input x from the previous step).
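As a sanity check on the dtypes involved, the same round trip can be reproduced with NumPy's FFT routines (used here as a stand-in for tf.spectral; the exact bit widths differ, but the real-to-complex-to-real behaviour is the same):

```python
import numpy as np

signal = np.zeros(8, dtype=np.float32)  # a dummy real-valued time series

spectrum = np.fft.rfft(signal)    # forward transform -> complex dtype
recovered = np.fft.irfft(spectrum)  # inverse transform -> real dtype again

print(spectrum.dtype.kind)   # 'c' (complex)
print(recovered.dtype.kind)  # 'f' (real float)
```

So the inverse transform really does hand back a real-valued tensor; the complex64 in the error message comes from the input to irfft, not its output.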
We can get past that error at definition time with:
inputs = Input(shape=(MAXLEN,1), name='main_input')
x = tf.spectral.rfft(inputs)
decoded = Lambda(tf.cast(tf.spectral.irfft(x), dtype=tf.float32))
This is accepted when entered, but when we then try to build the model:
autoencoder = Model(inputs, decoded)
It generates the error:
TypeError: Output tensors to a Model must be Keras tensors. Found: <keras.layers.core.Lambda object at 0x7f24f0f7bbe0>
Which I guess is reasonable, and is the reason I didn't want to cast it in the first place.
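For context on why Keras complains: in the snippet above, Lambda(...) is constructed with a tensor as its argument but never called on one, so decoded ends up being the layer object itself rather than the layer's output. A toy stand-in (plain Python, no Keras, purely for illustration) of that distinction:

```python
class ToyLambda:
    """Minimal stand-in for keras.layers.Lambda: it wraps a function,
    and calling the layer on an input applies that function."""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, x):
        return self.fn(x)

layer = ToyLambda(lambda v: v * 2)

# What the broken code handed to Model: the layer object itself.
print(type(layer).__name__)  # ToyLambda

# What Model expects: the result of calling the layer on a tensor.
print(layer(3))  # 6
```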
Main question: how do I successfully wrap the tf.spectral.irfft function, which outputs float32?
More general question, for learning:
Suppose I actually want to do something between the rfft and the irfft: how can I convert those complex numbers into absolute values (magnitudes) without breaking Keras, so that I can apply various convolutions and the like?
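For what it's worth, the magnitude operation itself is just an element-wise absolute value, which maps complex values back to real ones (in TF this would be tf.abs on the complex spectrum); sketched here with NumPy as a stand-in:

```python
import numpy as np

signal = np.array([0.0, 1.0, 0.0, -1.0], dtype=np.float32)
spectrum = np.fft.rfft(signal)  # complex spectrum

magnitudes = np.abs(spectrum)   # |z| = sqrt(re^2 + im^2), real-valued
print(magnitudes)               # [0. 2. 0.]
print(magnitudes.dtype.kind)    # 'f' -> real, so fine to feed into convolutions
```

Note that taking magnitudes discards phase, so a subsequent irfft will no longer reconstruct the original signal exactly.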