Hi kishen,
Thank you for your reply.
I also think there is something wrong with the training. What I don't understand is why the accuracy didn't decrease while the validation loss increased, since a high loss should mean poor performance on the validation dataset.
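For what it's worth, accuracy and cross-entropy loss can diverge because accuracy only looks at the argmax, while the loss also penalizes low confidence. Here is a minimal sketch (with made-up logits, not your actual network outputs) showing how the softmax cross-entropy loss can rise while accuracy stays at 100%:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def accuracy(logits, labels):
    return (logits.argmax(axis=1) == labels).mean()

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels]).mean()

labels = np.array([0, 1])

# Earlier checkpoint: both samples correct with moderate confidence.
logits_early = np.array([[2.0, 0.0, 0.0],
                         [0.0, 2.0, 0.0]])

# Later checkpoint: argmax is still correct for both samples, but the
# first prediction has become barely confident, which inflates the loss.
logits_late = np.array([[0.1, 0.0, 0.0],
                        [0.0, 8.0, 0.0]])

print(accuracy(logits_early, labels), accuracy(logits_late, labels))  # 1.0 1.0
print(cross_entropy(logits_early, labels) < cross_entropy(logits_late, labels))  # True
```

So validation loss creeping up with flat accuracy often just means the network is becoming overconfident or miscalibrated on borderline examples, not that it is misclassifying more of them.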
I tried changing the validation dataset, but still got similar results.
When I decreased the learning rate, the loss instability became less pronounced. It disappeared entirely when the learning rate was lowered to 10^-11 or 10^-12, but the accuracy dropped correspondingly.
I am using the softmax loss layer.
On Sunday, February 12, 2017 at 2:41:19 AM UTC+9, kishen suraj P wrote: