Hello!
I'm fine-tuning a VGG16 model on my custom dataset, and I'm getting different values for val_loss and val_acc during training vs. after training. I'm wondering if this is expected behaviour?
My code is very similar to fchollet's fine-tuning example.
During training I save the weights from the epoch with the best val_loss, which in this case was:
Epoch 12/20
7940/7940 [==============================] - 1062s - loss: 0.0662 - acc: 0.9874 - val_loss: 0.0640 - val_acc: 0.9804
Afterwards I load those weights and run evaluate_generator on the same validation data, and I get these results:
[0.081565297748011883, 0.97784491440080568]
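One thing I've been wondering about (this is just a guess, not something I've confirmed in the Keras source): if the validation set doesn't divide evenly into batches, averaging the per-batch mean losses is not the same as a true per-sample mean, so two evaluation paths could report slightly different numbers. A minimal sketch with made-up batch losses:

```python
# Hypothetical illustration: unweighted per-batch averaging vs. a true
# per-sample mean. The numbers below are invented, not from my run.
batch_sizes = [32, 32, 32, 10]                 # last batch is partial
batch_mean_losses = [0.05, 0.06, 0.07, 0.20]   # mean loss of each batch

# Unweighted mean over batches
unweighted = sum(batch_mean_losses) / len(batch_mean_losses)

# Per-sample mean: weight each batch mean by its batch size
total = sum(m * n for m, n in zip(batch_mean_losses, batch_sizes))
weighted = total / sum(batch_sizes)

print(round(unweighted, 4))  # 0.095
print(round(weighted, 4))    # 0.0732
```

The two values differ whenever batch means vary and the batches aren't all the same size, so a discrepancy of this rough magnitude wouldn't necessarily mean the weights changed.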
Has anyone experienced the same problem, or does anyone have an idea what might cause this?