Validation loss of a model is not stable compared to training loss


Mohammad Yahya

unread,
Dec 2, 2022, 11:21:10 AM12/2/22
to Keras-users

Below I have a model trained in Keras; the loss on the training dataset (blue) and the validation dataset (orange) is shown. From my understanding, the ideal case is that both the training and validation loss converge and stabilize, which would indicate that the model neither underfits nor overfits. But I am not sure about the model below. What can you tell from its loss curves, please?

[Figure: training loss (blue) and validation loss (orange) per epoch]

In addition, this is the accuracy of the model:

[Figure: model accuracy per epoch]
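
For reference, a minimal sketch of how such curves are typically plotted from a Keras History object (model, x_train and y_train are placeholders, and the history keys assume the model was compiled with metrics=['accuracy']):

import matplotlib.pyplot as plt

history = model.fit(x_train, y_train, validation_split=0.2, epochs=50)

# Loss curves: training (blue) vs. validation (orange)
plt.figure()
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()

# Accuracy curves
plt.figure()
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()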

geantbrun

unread,
Dec 2, 2022, 11:30:31 AM12/2/22
to Keras-users
I don't know what you call accuracy here, but in theory it should decrease with the epochs. Maybe try playing with the learning rate?
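
For example, lowering the learning rate in Keras might look roughly like the sketch below (the optimizer choice, loss and metrics are only placeholders):

from tensorflow.keras.optimizers import Adam

model.compile(
    optimizer=Adam(learning_rate=1e-4),  # smaller than the default 1e-3
    loss='binary_crossentropy',
    metrics=['accuracy'],
)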

geantbrun

unread,
Dec 2, 2022, 11:30:53 AM12/2/22
to Keras-users
*increase* sorry

Lance Norskog

unread,
Dec 2, 2022, 9:13:55 PM12/2/22
to geantbrun, Keras-users
The accuracy graph looks like it shows 'validation loss' and 'validation accuracy'. Are you sure you have the right numbers in these plots?

Also, I see this kind of wildly changing accuracy when I use a plain Dropout layer in a CNN. SpatialDropout is "smart" about CNNs (it drops entire feature maps rather than individual units) and might work better for you.
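
For a CNN, that swap might look roughly like the sketch below (layer sizes and input shape are illustrative only):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    layers.SpatialDropout2D(0.2),  # drops whole feature maps instead of single units
    layers.Conv2D(64, 3, activation='relu'),
    layers.SpatialDropout2D(0.2),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])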


--
Lance Norskog
lance....@gmail.com
Redwood City, CA