val_acc does not change in LSTM time series classification


DSA

May 26, 2016, 9:53:12 PM
to Keras-users
Hi all,

I am trying to perform time series classification. It works fine if I classify something as simple as a sin() function, i.e. whether the next step's value will be positive or negative based on the previous steps. However, when I try the same thing on the S&P 500 time series (I tried both normalized absolute values and rdiff), val_acc settles on a value after the first or second epoch and never changes again. All predictions, regardless of the input data, then stay within the same very narrow range, essentially identical, e.g. 0.46##, differing only in the third or fourth decimal.

You'd think there is something wrong with the input S&P 500 data (e.g. history length, etc.), but when I implemented a regression version of this model it worked fine: training proceeded as expected, the model gave seemingly reasonable predictions, and so on. Given that val_acc gets stuck at the same value and everything is then classified with the same value, I suspect I'm not doing something right (even though it does work for the simple sin() case!). Any thoughts? My model is below.

Thanks!

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Activation

# h (LSTM units), ex (window length), features, dropout, e (epochs) and b (batch size)
# are hyperparameters defined elsewhere; X_train_mat has shape (samples, ex, features).
model = Sequential()
model.add(LSTM(h, input_shape=(ex, features), return_sequences=False,
               activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dropout(dropout))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
hist = model.fit(X_train_mat, Y_train_mat, nb_epoch=e, batch_size=b, validation_split=0.1)
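
For reference, the input windows are built roughly like this (a minimal sketch, not my exact preprocessing; the window length, the single return feature, and the positive/negative labeling are assumptions):

import numpy as np

def make_windows(series, ex):
    # series: 1-D array of returns (e.g. rdiff of the S&P 500 closes)
    # X[i] holds ex consecutive values; y[i] is 1 if the next value is positive
    X, y = [], []
    for i in range(len(series) - ex):
        X.append(series[i:i + ex])
        y.append(1 if series[i + ex] > 0 else 0)
    X = np.array(X)[:, :, np.newaxis]   # shape (samples, ex, features=1)
    return X, np.array(y)

returns = np.random.randn(600) * 0.01   # stand-in for the real rdiff series
X_train_mat, Y_train_mat = make_windows(returns, ex=20)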

DSA

Jun 8, 2016, 7:59:03 PM
to Keras-users
Does anybody have any idea why val_acc doesn't change during training? The other training metrics change as expected (example below). Any advice is much appreciated. Thanks!

Epoch 2816/10000
 50/472 [==>...........................] - ETA: 0s - loss: 0.6281 - acc: 0.6800Epoch 02815: val_acc did not improve
472/472 [==============================] - 0s - loss: 0.5151 - acc: 0.7648 - val_loss: 1.2978 - val_acc: 0.4151
Epoch 2817/10000
 50/472 [==>...........................] - ETA: 0s - loss: 0.4406 - acc: 0.8600Epoch 02816: val_acc did not improve
472/472 [==============================] - 0s - loss: 0.5179 - acc: 0.7479 - val_loss: 1.2844 - val_acc: 0.4151
Epoch 2818/10000
 50/472 [==>...........................] - ETA: 0s - loss: 0.5385 - acc: 0.7400Epoch 02817: val_acc did not improve
472/472 [==============================] - 0s - loss: 0.5100 - acc: 0.7585 - val_loss: 1.2699 - val_acc: 0.4151
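
The same numbers are also available programmatically from the History object returned by fit(), which makes the divergence between training and validation easier to see than in the console log (a small sketch, reusing the hist variable from the first post):

import matplotlib.pyplot as plt

# hist.history is a dict of per-epoch lists: 'loss', 'acc', 'val_loss', 'val_acc'
for key in ['loss', 'val_loss', 'acc', 'val_acc']:
    plt.plot(hist.history[key], label=key)
plt.xlabel('epoch')
plt.legend()
plt.show()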
 

DSA

Jun 9, 2016, 1:16:10 PM
to Keras-users
OK, I've figured it out. I had too little training data (about 330 sequences). Once I moved to a dataset where I could generate 550+ sequences, val_acc started to move. That said, I still don't understand why loss, acc and val_loss would change while val_acc stayed constant. Is this a bug in Keras? Also, is there a better discussion forum for general questions like this?
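
For anyone hitting the same thing, one plausible explanation rather than a bug: with binary_crossentropy, the accuracy metric simply thresholds the sigmoid output at 0.5, so if every prediction sits in a narrow band below 0.5 (like the 0.46## outputs above), every validation sample is predicted as class 0 and val_acc gets frozen at the class-0 fraction of the validation split, even while loss, acc and val_loss keep moving. A rough illustration with made-up numbers:

import numpy as np

y_val  = np.array([0, 1, 1, 0, 1, 1, 0, 1, 1, 1])   # made-up validation labels
y_pred = np.full(10, 0.46)                           # every prediction stuck near 0.46

val_acc = np.mean(np.round(y_pred) == y_val)   # 0.5 threshold -> everything predicted as 0
print(val_acc)   # 0.3 here; it cannot change while no prediction crosses 0.5

It is also worth remembering that validation_split takes the last 10% of the samples without shuffling, so with only ~330 time-ordered sequences the validation set is both small and easily unbalanced.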