I'm trying to implement a contractive autoencoder using examples from Keras and https://github.com/wiseodd/hipsternet/blob/master/adhoc/autoencoders.py. I'm not sure whether I am referencing the layer name correctly for a stacked autoencoder when defining the loss function. Below is an example similar to what I am trying to do; an answer on whether I am doing this right or wrong would suffice. Thanks so much for your time.
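For reference, my understanding of the objective that example implements, for a single sigmoid hidden layer h = sigma(Wx + b) with penalty weight lambda, is

\mathcal{L}(x,\hat{x}) = \|x-\hat{x}\|^2 + \lambda\,\|J_h(x)\|_F^2,
\qquad
\|J_h(x)\|_F^2 = \sum_j \bigl(h_j(1-h_j)\bigr)^2 \sum_i W_{ji}^2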
import keras.backend as K
from keras.layers import Input, Dense
from keras.models import Model

lam = 1e-4  # contraction weight; value assumed, tune as needed

input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu', name='encoded')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
autoencoder = Model(inputs=input_img, outputs=decoded)

def contractive_loss(y_true, y_pred):
    # per-sample reconstruction error
    mse = K.mean(K.square(y_true - y_pred), axis=1)

    # live kernel tensor of the layer named 'encoded' (N x N_hidden),
    # so the penalty tracks the weights as they train
    W = autoencoder.get_layer('encoded').kernel
    W = K.transpose(W)  # N_hidden x N
    h = autoencoder.get_layer('encoded').output

    # h * (1 - h) is the derivative of a sigmoid activation;
    # note the 'encoded' layer above uses relu
    dh = h * (1 - h)  # N_batch x N_hidden

    # (N_batch x N_hidden) * (N_hidden,) -> sum over hidden units -> (N_batch,)
    contractive = lam * K.sum(dh ** 2 * K.sum(W ** 2, axis=1), axis=1)
    return mse + contractive

# the loss must be passed as the function itself, not the string
# 'contractive_loss'; strings only resolve to Keras built-in losses
autoencoder.compile(optimizer='adadelta', loss=contractive_loss)

autoencoder.fit(x_train, x_train,
                epochs=100,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
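In case it helps frame the question, here is a small standalone NumPy check of just the penalty term (dummy shapes and values, nothing from my real data) that I used to convince myself the broadcasting gives one penalty value per sample:

import numpy as np

np.random.seed(0)
N, N_hidden, N_batch = 784, 128, 4  # dummy sizes matching the 'encoded' layer
W = np.random.randn(N, N_hidden)    # kernel: N x N_hidden
h = 1.0 / (1.0 + np.exp(-np.random.randn(N_batch, N_hidden)))  # sigmoid outputs

dh = h * (1 - h)                          # N_batch x N_hidden
w_sq = np.sum(W.T ** 2, axis=1)           # sum of squared weights per hidden unit: (N_hidden,)
penalty = np.sum(dh ** 2 * w_sq, axis=1)  # one value per sample: (N_batch,)
print(penalty.shape)                      # (4,)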