Hello everyone,
I tried to implement a minimal example of an autoencoder:
https://gist.github.com/see--/a6e6aba893786234344dac15a3459cc4
The hidden layer is as big as the input layer, so I expected the
weights to converge to the identity and the biases to zero, in both
the encoder and the decoder. However,
I don't get these results. I get almost zero loss for input data in
[-5, 5], but a higher loss if I test on a different interval. Do you
have any ideas why this happens?
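To illustrate what I mean, here is a stripped-down sketch in plain NumPy (purely linear encoder/decoder, biases omitted; my actual gist may differ, e.g. it may use activations). One thing it shows: gradient descent drives the reconstruction loss to zero whenever the *product* of the two weight matrices is the identity, so neither factor individually has to become the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 3                   # hidden layer as big as the input
X = rng.uniform(-5, 5, size=(n, d))

# Small random init for encoder and decoder weights.
W1 = 0.1 * rng.standard_normal((d, d))  # encoder
W2 = 0.1 * rng.standard_normal((d, d))  # decoder

lr = 0.02
for _ in range(2000):
    E = X @ W1 @ W2 - X                     # reconstruction error
    loss = np.mean(E ** 2)
    # Gradients of the mean-squared reconstruction loss.
    gW2 = 2 / E.size * (X @ W1).T @ E
    gW1 = 2 / E.size * X.T @ E @ W2.T
    W1 -= lr * gW1
    W2 -= lr * gW2

print(loss)
# The loss goes to ~0, and the *product* is (close to) the identity,
# but any factorization W1 @ W2 == I is an equally good minimum, so
# W1 and W2 themselves generally end up far from the identity.
print(np.allclose(W1 @ W2, np.eye(d), atol=1e-2))
```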
Furthermore, if I manually set
the weights to the expected values (i.e. identity for the weights and
zero for the biases), training doesn't affect them much (the loss is
already 0).
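For reference, this is the manual setting I mean, again as a hedged NumPy sketch (linear layers, no activation assumed). With identity weights and zero biases the reconstruction is exact on any interval, so the gradient is zero and training has nothing left to change:

```python
import numpy as np

d = 4  # hidden layer is as big as the input layer

# The "expected" solution: identity weights, zero biases,
# in both the encoder and the decoder.
W_enc, b_enc = np.eye(d), np.zeros(d)
W_dec, b_dec = np.eye(d), np.zeros(d)

def reconstruct(x):
    h = x @ W_enc + b_enc        # encoder
    return h @ W_dec + b_dec     # decoder

# Zero reconstruction loss, even far outside [-5, 5].
x = np.random.default_rng(0).uniform(-100, 100, size=(8, d))
loss = np.mean((reconstruct(x) - x) ** 2)
print(loss)  # 0.0
```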
Kind regards
Steffen