Hi all,
I am trying to perform time series classification. It works fine when I classify something as simple as a sin() function, i.e. predicting whether the next step's value will be positive or negative based on the previous steps. However, when I try to do the same on the S&P 500 time series (I tried both normalized absolute values and rdiff), val_acc settles on a value after the first or second epoch and never changes again. All predictions then fall within the same very narrow range regardless of the input data, essentially the same value, e.g. 0.46##, differing only in the third or fourth decimal.

You'd think there is something wrong with the input S&P 500 data (e.g. history length, etc.), but when I implemented a regression version of this model, it worked fine: training proceeded as expected, the model gave seemingly reasonable predictions, and so on. Given that val_acc gets stuck on one value and everything is then classified with the same output, I suspect I am doing something wrong (though it does work for the simple sin() case!). Any thoughts? My model is below.
Thanks!
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Activation

model = Sequential()
# h hidden units; each sample is `ex` timesteps of `features` features
model.add(LSTM(h, input_shape=(ex, features), return_sequences=False,
               activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dropout(dropout))
model.add(Dense(1))
model.add(Activation('sigmoid'))  # single binary output in [0, 1]
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
hist = model.fit(X_train_mat, Y_train_mat, nb_epoch=e, batch_size=b, validation_split=0.1)
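For reference, this is roughly how I build the windows and labels. A minimal sketch only: `make_windows` is an illustrative helper, and the sin() series stands in for the real data; my actual S&P 500 preprocessing differs in its details.

```python
import numpy as np

def make_windows(series, ex):
    """Build (samples, timesteps, features) inputs and binary labels:
    label = 1 if the step after the window is positive, else 0."""
    X, Y = [], []
    for i in range(len(series) - ex):
        X.append(series[i:i + ex])
        Y.append(1.0 if series[i + ex] > 0 else 0.0)
    X = np.array(X)[..., np.newaxis]  # add a features axis (features = 1)
    return X, np.array(Y)

# Illustrative series; the real input is the (normalized or rdiff) S&P 500.
series = np.sin(np.linspace(0, 20 * np.pi, 2000))
X_train_mat, Y_train_mat = make_windows(series, ex=10)
print(X_train_mat.shape)  # (1990, 10, 1)
```

The shape matches what the LSTM layer above expects for `input_shape=(ex, features)` with `features` equal to 1.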