Why is the accuracy of my CNN like this?


jaba marwen

May 12, 2017, 12:01:22 PM
to Caffe Users
Hi everyone,

I have a dataset of 6,000 images of three objects (a Lamborghini, a cylinder head, and a piece of a plane); link to view the dataset. I split the dataset into 5,000 images for training and 1,000 for testing. The CNN architecture is a modified version of AlexNet: I changed num_output of fc6 and fc7 to 1000. The training batch size is 50 and the test batch size is 20.

I have the following solver parameters:
net: "/home/jaba/caffe/data/diota_model/train_val.prototxt"
test_iter: 100
test_interval: 500
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 500
display: 500
max_iter: 4000
momentum: 0.5
weight_decay: 0.0005
snapshot: 100
snapshot_prefix: "/home/jaba/caffe/data/diota_model/snap_shot_model"
solver_mode: GPU

But after training for 4000 iterations, I have some weird accuracy values:
- after 0 iterations, accuracy is 0
- after 500 iterations, accuracy is 1
- after 1000 iterations, accuracy is 0
- after 1500 iterations, accuracy is 0.489
- after 2000 iterations, accuracy is 0.4885
- after 2500 iterations, accuracy is 0.4885
- after 3000 iterations, accuracy is 0.489
- after 3500 iterations, accuracy is 0.489
- after 4000 iterations, accuracy is 0.489

So, why is the accuracy like that?
I wonder whether training the modified AlexNet from scratch is simply unsuitable for my use case (classifying the three objects). Should I use a ConvNet as a fixed feature extractor and train a linear classifier on my dataset? Any suggestions?

I have attached the training log file and train_val.prototxt.
Thanks
log1.log
train_val.prototxt

Hieu Do Trung

May 15, 2017, 6:56:04 AM
to Caffe Users
You should get much better results by fine-tuning the net rather than training from scratch.
For testing 1000 images with a batch size of 20, you should set test_iter (in the solver) to 50, not 100.
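
The arithmetic behind that suggestion: Caffe averages accuracy over test_iter batches, so test_iter times the test batch size should cover the test set exactly once. For 1000 test images with a batch size of 20, the solver lines would be:

```protobuf
# 1000 test images / 20 images per test batch = 50 batches per test pass
test_iter: 50       # was 100, which cycled through the test set twice
test_interval: 500  # unchanged: run a test pass every 500 training iterations
```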

jaba marwen

May 17, 2017, 9:03:20 AM
to Caffe Users
I have tried fine-tuning, but the accuracy has not increased much. In fact, I fine-tuned AlexNet: I changed the names of fc6, fc7, and fc8, and fixed the learning rate multiplier at 0.1 for the conv layers.
The highest accuracy I got was 51.4%.

I have attached solver.prototxt, train_val.prototxt, and the log file. What do you suggest for good fine-tuning?

log.log
solver.prototxt
train_val.prototxt

Hieu Do Trung

May 17, 2017, 11:29:19 PM
to Caffe Users
I usually take the train_val.prototxt from the Flickr fine-tuning example and then modify it to my needs.
For new layers you should set higher learning-rate multipliers (in your case, fc6 through fc8).
In the Flickr example, the renamed fc8 layer uses lr_mult values of 10 and 20 (yours are 0.1 and 2).
(Multiplying those by the base learning rate in the solver (0.001) yields 0.01, which is typical for learning from scratch.)

param {
  lr_mult: 10
  decay_mult: 1
}
param {
  lr_mult: 20
  decay_mult: 0
}
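
Applied to this thread's three-class problem, the renamed last layer might look like the sketch below (the layer name fc8_diota and the fillers are illustrative, not taken from the attached prototxt):

```protobuf
layer {
  name: "fc8_diota"        # new name, so Caffe re-initializes it instead of loading ImageNet weights
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_diota"
  param { lr_mult: 10 decay_mult: 1 }   # weights: 10x base_lr, as in the Flickr example
  param { lr_mult: 20 decay_mult: 0 }   # biases: 20x base_lr, no weight decay
  inner_product_param {
    num_output: 3          # three object classes
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
```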

jaba marwen

May 18, 2017, 10:06:28 AM
to Hieu Do Trung, Caffe Users
Thank you again for your answers. I changed train_val.prototxt like the Flickr fine-tuning example: I renamed only fc8 and set lr_mult as you said. But the accuracy is still 51.4%.

So I am wondering: do I get these results because my dataset (images of a Lamborghini, a piece of a plane, and a piece of an engine) is not very similar to the ImageNet dataset?

Should I try fine-tuning another CNN such as VGG or GoogLeNet?

Should I use a pre-trained model for feature extraction and train a linear classifier on top of it, since my dataset is very small and very different from the ImageNet dataset?

I'm happy to hear any suggestions. I have attached the log file, solver, and train_val.prototxt.
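
As a sketch of the fixed-feature-extractor route: extract fc7 activations once per image, then train a linear classifier on them. The snippet below leaves the Caffe extraction step out entirely and uses synthetic 4096-dimensional features as a stand-in (the shapes, class counts, and separability are made up for illustration):

```python
# Sketch: linear classifier on pre-extracted CNN features.
# Assumes features were already extracted (e.g. AlexNet fc7, 4096-d per
# image) and loaded as a NumPy array; random synthetic features with a
# per-class mean shift stand in for them here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
n_per_class, n_features, n_classes = 100, 4096, 3

# One block of features per object class, offset so classes are separable.
X = np.vstack([rng.randn(n_per_class, n_features) + c
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

With only ~6,000 images and three classes, a linear model on frozen features has far fewer parameters to fit than a full AlexNet, which is the usual argument for this route on small datasets.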



log.log
solver.prototxt
train_val.prototxt

PDV

May 20, 2017, 12:52:24 AM
to Caffe Users
According to your log, your training loss already dropped to zero after 100 iterations. You are overfitting. Get more sample images, or try augmentation.
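
For the augmentation suggestion: Caffe's built-in options are random mirroring and random cropping, set in the TRAIN data layer's transform_param. A sketch (the LMDB path is a placeholder; crop_size 227 matches AlexNet's input, and the mean_value triple is the usual ImageNet BGR mean):

```protobuf
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    mirror: true       # random horizontal flips at training time
    crop_size: 227     # random 227x227 crops (AlexNet input size)
    mean_value: 104    # per-channel BGR mean subtraction
    mean_value: 117
    mean_value: 123
  }
  data_param {
    source: "path/to/train_lmdb"  # placeholder path
    batch_size: 50
    backend: LMDB
  }
}
```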