I have a problem with training: after 20,000 iterations the loss function starts to rise.
I tried shuffling the images and decreasing the learning rate to 0.002 and then 0.001. I also tried setting the batch size to 512 and 256, and I am now trying the bvlc_alexnet model instead of the bvlc_reference_caffenet model, but so far it looks like the loss function begins to rise again.
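For reference, the solver parameters I have been varying are roughly of this form (a simplified sketch built with pycaffe's protobuf interface; the file names, the step-decay schedule and the exact values are illustrative placeholders, not my actual configuration):

from caffe.proto import caffe_pb2
from google.protobuf import text_format

solver = caffe_pb2.SolverParameter()
solver.net = 'train_val.prototxt'      # placeholder network definition
solver.base_lr = 0.001                 # one of the learning rates I tried
solver.lr_policy = 'step'              # decaying schedule instead of a fixed rate
solver.gamma = 0.1                     # multiply the rate by 0.1 ...
solver.stepsize = 20000                # ... every 20,000 iterations
solver.momentum = 0.9
solver.weight_decay = 0.0005
solver.max_iter = 100000
solver.snapshot = 8000                 # the snapshots I evaluate below
solver.snapshot_prefix = 'snapshots/nudity'

with open('solver.prototxt', 'w') as f:
    f.write(text_format.MessageToString(solver))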
Any idea how I can solve this issue? Is there a problem with the size of the source images? Or is the bvlc_reference_caffenet model simply not a good fit for this type of classification?
I am using images from the test set to evaluate the network's classification, and I am comparing the results for each snapshot.
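Roughly, the per-snapshot evaluation looks like this (a simplified sketch; the deploy file, mean file, snapshot path and test listing are placeholders, not my exact script):

import caffe
import numpy as np

caffe.set_mode_gpu()

# Load one snapshot into the standard pycaffe classifier.
net = caffe.Classifier(
    'deploy.prototxt',                          # placeholder deploy definition
    'snapshots/nudity_iter_64000.caffemodel',   # snapshot under test
    mean=np.load('mean.npy').mean(1).mean(1),   # placeholder mean file
    channel_swap=(2, 1, 0),
    raw_scale=255,
    image_dims=(256, 256))

# test.txt: placeholder listing with one "image_path label" pair per line.
test_samples = []
with open('test.txt') as f:
    for line in f:
        path, label = line.split()
        test_samples.append((path, int(label)))

correct = wrong = 0
for path, label in test_samples:
    probs = net.predict([caffe.io.load_image(path)])[0]
    if int(probs.argmax()) == label:
        correct += 1
    else:
        wrong += 1   # in my statistics these are further bucketed by probs.max()

print('Correct: %d  Wrong: %d  Total: %d' % (correct, wrong, correct + wrong))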
For example, I get results like this:
64000 Iterations, lr_rate = 0.001, shuffled images, bvlc_reference_caffenet model
Wrong
Wrong False: 146 // false negative
Wrong True: 573 // false positive
Wrong < 0.6: 125 // accuracy 0.5 - 0.6
Wrong < 0.7: 88 // accuracy 0.6 - 0.7
Wrong < 0.8: 85 // accuracy 0.7 - 0.8
Wrong < 0.9: 102 // accuracy 0.8 - 0.9
Wrong < 1.0: 319 // accuracy 0.9 - 1.0
----------------------------------------
Average values
Correct: 0.958699976074
Wrong: 0.816941141419
Wrong True: 0.827005588571
Wrong False: 0.777441633074
----------------------------------------
Final results
Failed: 0
Correct: 5026 // classified correctly
Wrong: 719 // wrong classification
Total: 5745
This means that 146 nudity pictures are classified as "ok" and 573 "ok" images are classified as nudity. More than 12% of the images are classified incorrectly. I would like to get the error below 5%, or at least below 10%. Is that possible?
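For clarity, the 12% figure follows directly from the counts above; a quick check of the implied rates:

total = 5745
false_negative = 146                      # nudity classified as "ok"
false_positive = 573                      # "ok" classified as nudity
wrong = false_negative + false_positive   # 719

print('overall error:        %.1f%%' % (100.0 * wrong / total))           # ~12.5%
print('false negative share: %.1f%%' % (100.0 * false_negative / total))  # ~2.5%
print('false positive share: %.1f%%' % (100.0 * false_positive / total))  # ~10.0%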
I would appreciate any information or recommendations about this problem.