Epoch 1/10
3320/3320 [==============================] - 76s - loss: 0.3359 - acc: 0.8611 - val_loss: 0.2075 - val_acc: 0.9233
Epoch 2/10
3320/3320 [==============================] - 69s - loss: 0.1972 - acc: 0.9211 - val_loss: 0.1289 - val_acc: 0.9433
Epoch 3/10
3320/3320 [==============================] - 69s - loss: 0.1320 - acc: 0.9500 - val_loss: 0.0745 - val_acc: 0.9833
Epoch 4/10
3320/3320 [==============================] - 69s - loss: 0.0938 - acc: 0.9675 - val_loss: 0.0661 - val_acc: 0.9800
Epoch 5/10
3320/3320 [==============================] - 69s - loss: 0.0613 - acc: 0.9771 - val_loss: 0.0403 - val_acc: 0.9933
Epoch 6/10
3320/3320 [==============================] - 69s - loss: 0.0453 - acc: 0.9840 - val_loss: 0.0353 - val_acc: 0.9900
Your help is appreciated. Thanks.
Hi,
We cannot really tell you why, as we don't have your data or network configuration. One possible explanation is that the train/test split is not random, so the training set happens to be harder than the test set.
Try re-splitting your data into train/test sets at random and see whether this odd pattern persists.
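For example, here is a minimal sketch of a shuffled, stratified re-split using scikit-learn; the arrays x and y and their shapes are placeholders for your own data, not something from your setup:

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for your dataset; substitute your own arrays.
x = np.random.rand(4000, 64, 64, 3)        # e.g. images
y = np.random.randint(0, 2, size=4000)     # e.g. binary labels

# Shuffle before splitting and stratify on the labels so both sets
# get a similar class balance and difficulty.
x_train, x_val, y_train, y_val = train_test_split(
    x, y, test_size=0.2, shuffle=True, stratify=y, random_state=42
)

# Then train as usual, e.g.:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)

If the gap between training and validation metrics disappears after a random re-split, the original split was the likely culprit.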
Also note that the difference is quite small, so a handful of hard training samples could create this bias.