I am trying to use an autoencoder for dimensionality reduction of small images I have (34x34). The images are binary, so after scaling to the [0, 1] range the only values are 0 and 1. After training I want to take the middle layer with the smallest number of neurons and treat its activations as my dimensionally reduced representation.
I have tried almost every combination of activation function, loss, and optimizer, and none of them converge, which makes me think I am doing something fundamentally wrong here. The loss stays the same from epoch 1 until the end.
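For context, data is built roughly like this (just a sketch; images is a stand-in name for my list of 34x34 binary numpy arrays), so X below ends up with shape (n_samples, 1156):

# Flatten each 34x34 binary image into a length-1156 float vector.
# `images` is a placeholder name, not the actual variable in my script.
data = [img.reshape(-1).astype('float32') for img in images]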
import numpy as np
from keras.models import Sequential
from keras.layers import containers
from keras.layers.core import Dense, AutoEncoder
from keras.optimizers import SGD

X = np.asarray(data)

# Encoder: 1156 -> 512 -> 300 -> 100; the 100-unit bottleneck is the
# representation I want to extract afterwards.
encoder = containers.Sequential([Dense(512, input_dim=1156, activation='sigmoid'),
                                 Dense(300, input_dim=512, activation='sigmoid'),
                                 Dense(100, input_dim=300, activation='sigmoid')])

# Decoder: 100 -> 300 -> 512 -> 1156 (linear activations, the Dense default).
decoder = containers.Sequential([Dense(300, input_dim=100),
                                 Dense(512, input_dim=300),
                                 Dense(1156, input_dim=512)])

autoencoder = Sequential()
autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder,
                            output_reconstruction=False))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.0, nesterov=True)
autoencoder.compile(loss='mse', optimizer=sgd)

autoencoder.fit(X, X, batch_size=20, nb_epoch=1000,
                show_accuracy=False, verbose=2)

(trainscore, trainaccuracy) = autoencoder.evaluate(X, X, batch_size=500,
                                                   show_accuracy=True)
print("Training Score: " + str(trainscore))
print("Training Accuracy: " + str(trainaccuracy))