How to use tf.spectral Fourier functions in Keras


zkru...@gmail.com

Apr 24, 2018, 5:23:32 PM
to Keras-users
Let me jump into a minimum problem demonstration.

Let us start with an input that is a simple time series and try to build an autoencoder that simply Fourier-transforms and then inverse-transforms our data.

If we try to do this:

inputs = Input(shape=(MAXLEN,1), name='main_input')
x = tf.spectral.rfft(inputs)
decoded = Lambda(tf.spectral.irfft)(x)

Then the third line throws an error when entered:
>> ValueError: Tensor conversion requested dtype complex64 for Tensor with dtype float32

You see, the output of tf.spectral.irfft is float32, but it looks like Lambda thinks it is complex64?? (complex64 is the dtype of the input x from the previous step.)
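For what it's worth, the NumPy equivalents show the same dtype handoff (np.fft standing in for tf.spectral here, purely as a sanity check): rfft maps a real signal to a complex spectrum, and irfft maps it back to a real signal.

```python
import numpy as np

x = np.zeros(8, dtype=np.float32)
spec = np.fft.rfft(x)       # real signal in -> complex spectrum out
back = np.fft.irfft(spec)   # complex spectrum in -> real signal out

# spec is complex, back is real -- exactly the dtype boundary Keras trips over
```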

We can fix that error at model-entry time with:
inputs = Input(shape=(MAXLEN,1), name='main_input')
x = tf.spectral.rfft(inputs)
decoded = Lambda(tf.cast(tf.spectral.irfft(x), dtype=tf.float32))

This is accepted at input time but then when we try to build the model:
autoencoder = Model(inputs, decoded)

It generates the error:
TypeError: Output tensors to a Model must be Keras tensors. Found: <keras.layers.core.Lambda object at 0x7f24f0f7bbe0>

Which I guess is reasonable and was the reason I didn't want to cast it in the first place.

Main question: how do I successfully wrap the tf.spectral.irfft function, which outputs float32?

More general question, for learning:
Let's assume I actually want to do something between the rfft and the irfft: how can I turn those complex numbers into absolute values without breaking Keras, so that I can apply various convolutions and the like?
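To sketch what I mean (NumPy stand-ins again; in Keras this would presumably be something like Lambda(lambda v: tf.abs(v)) applied to the rfft output, since abs of a complex tensor is real-valued):

```python
import numpy as np

sig = np.random.randn(10, 8)
spec = np.fft.rfft(sig, axis=-1)  # complex spectrum, shape (10, 5)
mag = np.abs(spec)                # real-valued magnitudes, fine for Conv1D-style layers
# Caveat: magnitudes throw away phase, so an irfft afterwards can't
# reconstruct the original signal exactly.
```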

leah....@neteera.com

Apr 30, 2018, 10:11:02 AM
to Keras-users
This line works for me:
Lambda(lambda v: tf.cast(tf.spectral.fft(tf.cast(v,dtype=tf.complex64)),tf.float32))(x)

zkru...@gmail.com

Apr 30, 2018, 7:50:10 PM
to Keras-users
Interesting. I get an AttributeError: 'Node' object has no attribute 'output_masks'.

Before trying to debug that, can you actually put layers (e.g. a Conv1D layer) afterwards and have it work? I have been shown another path (using Lambdas) that works, but where I can't put any trainable layers in between (from Stack Overflow):
import tensorflow as tf
import keras as K

inputs = K.Input(shape=(10, 8), name='main_input')
x = K.layers.Lambda(tf.spectral.rfft)(inputs)
decoded = K.layers.Lambda(tf.spectral.irfft)(x)
model = K.Model(inputs, decoded)
output = model(tf.ones([10, 8]))
with tf.Session():
  print(output.eval())

That will run, but if you try to put any trainable layers between the rfft and the irfft, it complains that the type is complex (yes, even though it is coming out of rfft, and even if you tf.cast it).

If your technique allows the use of trainable layers, I'd love to hear more about what surrounds that line of yours and which version you're using. If not, then we have a couple of leads, including the above, but so far I haven't seen anything that allows an fft followed by trainable layers before an ifft.

Thanks for looking!

leah....@neteera.com

May 3, 2018, 2:49:20 AM
to Keras-users
I have a Conv1D layer before the fft line and a Dense layer after...
I think your error is connected to your Keras version; maybe try upgrading.
I will add my code later.

leah....@neteera.com

May 3, 2018, 3:32:24 AM
to Keras-users
This is a model that uses your lines; I hope it helps:

import tensorflow as tf
from keras.layers import Input, Conv1D, Dense, Flatten, Lambda
from keras.models import Model

inputs = Input(shape=(10, 8), name='main_input')
# rfft along the last axis; tf.to_float keeps only the real part of the spectrum
x = Lambda(lambda v: tf.to_float(tf.spectral.rfft(v)))(inputs)
x = Conv1D(filters=5, kernel_size=3, activation='relu', padding='same')(x)
# cast back to complex64 before the inverse transform
x = Lambda(lambda v: tf.to_float(tf.spectral.irfft(tf.cast(v, dtype=tf.complex64))))(x)
x = Flatten()(x)
output = Dense(1)(x)
model = Model(inputs, output)
model.summary()
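One thing worth noting about the first Lambda: as far as I can tell, casting a complex tensor to float (which is what tf.to_float does) keeps only the real part and drops the imaginary half, so the transform is lossy. The NumPy analogue of that step:

```python
import numpy as np

v = np.random.randn(10, 8)
spec = np.fft.rfft(v, axis=-1)    # complex spectrum, shape (10, 5)
real_part = spec.real             # what a complex -> float cast keeps
# Reconstructing from the real part alone does not recover v:
roundtrip = np.fft.irfft(real_part.astype(np.complex128), axis=-1)
```

If you need an invertible pipeline, one option is to carry the real and imaginary parts as two separate real channels instead of discarding one.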

zkru...@gmail.com

May 3, 2018, 2:29:01 PM
to Keras-users
That runs and is training! Thank you! I guess I need to be more aggressive with lambdas inside Lambdas in the future :)