Hello all,
My objective is "Classification of medical images using deep convolutional networks".
I'm quite puzzled by the results I obtained when classifying medical images from my own dataset, after a long run on CPU (no GPU). In detail: I have 50 medical images, which I split into two sets: 1. training set (25 images with labels), 2. validation set (25 images with labels). I chose the Caffe framework for this.
Step 1 : created my own LMDB databases from the training and validation images
Step 2 : modified train_*.sh, *_solver.prototxt and *_train_test.prototxt (learning rate, switched to CPU mode, added layers (Convolution, ReLU, loss layer), 10000 iterations)
Step 3 : ran train_*.sh
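For context, my solver settings look roughly like this (a sketch only; the file names and values here are illustrative, not my exact configuration):

```
# *_solver.prototxt (sketch; names and values are illustrative)
net: "train_val.prototxt"   # the *_train_test network definition
test_iter: 25               # test batches per test pass (should cover the 25 validation images)
test_interval: 500          # run a test pass every 500 training iterations
base_lr: 0.01               # learning rate
lr_policy: "step"
stepsize: 2500
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
max_iter: 10000             # total training iterations
snapshot: 5000
snapshot_prefix: "snapshots/med"
solver_mode: CPU            # switched from GPU to CPU
```

Note that test_iter times the test batch size should equal the validation set size, otherwise the reported test accuracy is computed over the wrong number of images.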
In CPU mode, it took around 35 hours to complete.
My doubts are as follows:
1. From the start (iteration 0 through 10000), the test accuracy stays at 24% (no fluctuation at any point).
2. The training loss keeps changing, but the test loss stays steady with only very small changes.
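One thing I noticed while thinking about doubt 1: with only 25 validation images, the reported accuracy can only move in steps of 1/25 = 4%, and 24% is exactly 6/25. A perfectly flat 24% would be consistent with the network predicting the same class for every image, where that class happens to cover 6 of the 25 validation images (the class counts here are a hypothetical illustration, not my actual label distribution):

```python
# With 25 validation images, accuracy is quantized to multiples of 1/25 = 4%.
val_size = 25
step = 1 / val_size                              # smallest possible accuracy change
possible = [k / val_size for k in range(val_size + 1)]

print(f"accuracy resolution: {step:.0%}")        # 4%
print(0.24 in possible)                          # 24% is reachable: 24% == 6/25
```

So a constant 24% may simply mean the network collapsed to a single-class prediction very early, which with 25 training images would not be surprising.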
I'd appreciate any advice/help from the experts.
Thanks,
DeepLover