Dear Prof. Dr. Marquardt and all,
I'm currently working on an ML task (for signals from detectors) that contains an autoencoder part. Here is the autoencoder diagram (plotted using this nice online tool:
https://alexlenail.me/NN-SVG/LeNet.html):
Nothing special, just Conv/ConvTranspose1D and AvgPooling/UpSampling layers.
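For concreteness, the structure is roughly like this (a simplified PyTorch sketch just for illustration; the actual channel counts, kernel sizes, and depths are placeholders, not my real model):

import torch
import torch.nn as nn

class Autoencoder1D(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AvgPool1d(2),              # downsample by 2
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AvgPool1d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),  # upsample by 2
            nn.ConvTranspose1d(32, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.ConvTranspose1d(16, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder1D()
x = torch.randn(8, 1, 256)  # batch of 8 single-channel signals, length 256
print(model(x).shape)       # torch.Size([8, 1, 256]) -- reconstruction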
And here is the model training history (600 epochs in total):
Note that the y-axes are on a log scale (in both cases), so the validation loss fluctuations aren't extremely large in relative terms.
But still, are such fluctuations really significant? Do I need to worry about them?
Also, how can the best moment to stop the training process be found in such a case? And should I be looking at the losses on a log scale at all?
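To make the stopping question concrete: would a simple checkpoint-plus-patience rule on the validation loss be appropriate here? A rough sketch of what I mean (it reuses `model` from the sketch above; `train_epoch`, `val_loss_fn`, and the patience value are hypothetical placeholders, not my actual code):

import copy

best_val, best_state = float("inf"), None
patience, bad_epochs = 30, 0              # placeholder patience value
for epoch in range(600):
    train_epoch(model)                    # hypothetical: one pass over the training set
    val = val_loss_fn(model)              # hypothetical: loss on the validation set
    if val < best_val:
        best_val, bad_epochs = val, 0
        best_state = copy.deepcopy(model.state_dict())  # remember best weights
    else:
        bad_epochs += 1
        if bad_epochs >= patience:        # no improvement for `patience` epochs
            break
model.load_state_dict(best_state)         # restore the best checkpoint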
I will be grateful for any comments/hints :)
Regards,
Michail.