Greetings,
Thanks for developing such an amazing tool for deep learning. I'm using Caffe and fine-tuning the ImageNet model for another task. However, I've run into a problem: fine-tuning does not improve performance. To diagnose it, I set blobs_lr = 0 in all layers except the softmax layer, since my program uses only pool5 features.
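For reference, this is roughly how I freeze a layer in the prototxt (a minimal sketch in the legacy Caffe layer format; the layer name and parameters follow the reference ImageNet model, and the zeroed multipliers apply to the weight and bias blobs respectively):

```
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  # freeze this layer: zero learning-rate multipliers
  # for the weight blob and the bias blob
  blobs_lr: 0
  blobs_lr: 0
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}
```

I applied the same `blobs_lr: 0` pattern to every layer up to and including pool5, leaving only the final classifier layer trainable.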
To my surprise, performance got worse rather than staying unchanged, compared with the pre-trained, untuned model. I don't understand what happens during fine-tuning that could cause this. I'd be very grateful for any suggestions.
Thank you.