Caffe giving same output for any input test image


Michael Davies

Feb 23, 2015, 11:17:12 PM2/23/15
to caffe...@googlegroups.com
Hello,

I am an undergraduate researcher at Iowa State and I am getting into machine learning for my research. I have been learning Caffe for a while now and have been working on a multi-label prediction problem for images. The problem I have run into is that although my network's loss decreases during training (i.e. it appears to train fine), when I test any image on it I receive the same output prediction.

A little more detail on my network: I am using a convolutional network with two convolution layers and two pooling layers, followed by one fully connected layer and one output layer (very similar in structure to LeNet):

Data --> Conv1 --> Pool1 --> Conv2 --> Pool2 --> IP1 --> Softmax --> output --> Euclidean Loss

N-Dimensional Label --> Softmax --> Euclidean Loss
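In prototxt, such a topology would look roughly like the sketch below (layer names and hyperparameters are made up for illustration, not the actual file):

```
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1"
        convolution_param { num_output: 20 kernel_size: 5 } }
layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1"
        pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "conv2" type: "Convolution" bottom: "pool1" top: "conv2"
        convolution_param { num_output: 50 kernel_size: 5 } }
layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2"
        pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
layer { name: "ip1" type: "InnerProduct" bottom: "pool2" top: "ip1"
        inner_product_param { num_output: 500 } }
layer { name: "loss" type: "EuclideanLoss"
        bottom: "ip1" bottom: "label" top: "loss" }
```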

I have tried two different methods of reading in label data: the first was writing the labels into a separate database and adding a separate data input layer for them.

After seeing no success with that, I modified Caffe to accept multiple labels (a small hack of the code base changing Datum to hold a string of bytes instead of a single value). After verifying that my modifications correctly read in the data for each image, I tested my net and am still seeing the same results (all predictions are the same).

In both cases I use a EuclideanLoss layer to compute the Euclidean distance between the output my net produces and the desired label, and, as I mentioned above, the loss does decrease appropriately during training.
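For reference, Caffe's EuclideanLoss computes 1/(2N) times the sum of squared differences, where N is the batch size. A minimal NumPy sketch of that formula (array shapes are illustrative):

```python
import numpy as np

def euclidean_loss(pred, label):
    """Caffe-style EuclideanLoss: 1/(2N) * sum of squared differences,
    where N is the batch size (first dimension)."""
    n = pred.shape[0]
    diff = pred - label
    return np.sum(diff ** 2) / (2.0 * n)

pred = np.array([[1.0, 2.0], [3.0, 4.0]])   # batch of 2, 2-D outputs
label = np.array([[1.0, 1.0], [2.0, 2.0]])  # desired labels
print(euclidean_loss(pred, label))          # (0+1+1+4) / (2*2) = 1.5
```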

My question to the community is: is there something I am not doing correctly? Or do you have any suggestions I could try to get different results?

Please keep in mind I am still on the learning curve for Machine Learning and Caffe, so I don't pretend to know all the things that could be causing this. I appreciate your feedback!

Thanks,
Mike

Axel Angel

Feb 24, 2015, 2:50:48 PM2/24/15
to caffe...@googlegroups.com
Hello Mike,

I cannot say for sure, since I don't have enough experience to pinpoint your problem with certainty, but can you verify the following:
 * Is your input correctly scaled? If not, your network will be "under-sensitive" or "over-sensitive" to your data, and you will see the same output for every input.
 * Are you sure your deploy and training prototxt files match? Different architectures in these files can cause your problem (it did for me), or you can get random output for the same input every time you start your program.
 * Are your images in the correct format: dimensions, channels, preprocessing? You can also feed the test set directly from the LMDB and see whether the problem persists.
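One quick way to act on the scaling point is to print the statistics of the exact array fed to the net at deploy time and compare them with what the training data layer produced. A sketch (the scale value 1/255, i.e. 0.00390625, is just the common LeNet setting, not necessarily yours):

```python
import numpy as np

def input_stats(arr, scale=1.0 / 255.0):
    """Return (min, max, mean) of a preprocessed input blob.
    If training used 'scale: 0.00390625' but deploy feeds raw 0-255
    pixels, the ranges differ by a factor of 255 and a saturated
    network can produce the same output for every image."""
    scaled = arr.astype(np.float64) * scale
    return scaled.min(), scaled.max(), scaled.mean()

raw = np.array([[0, 128, 255]], dtype=np.uint8)  # toy "image"
print(input_stats(raw))  # roughly (0.0, 1.0, 0.5) after scaling
```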

Good luck.

Luwei Yang

Mar 5, 2015, 9:17:22 AM3/5/15
to caffe...@googlegroups.com
Hi, I have also run into the same problem. Have you tried decreasing the learning rate? What learning rate are you using?
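Lowering the learning rate is a one-line change in the solver prototxt; the values below are illustrative, not a recommendation:

```
# solver.prototxt (illustrative values)
base_lr: 0.001      # try 10x smaller than the current setting
lr_policy: "step"   # drop the rate by gamma every stepsize iterations
gamma: 0.1
stepsize: 10000
momentum: 0.9
```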

shawn sun

Jun 23, 2016, 12:08:33 PM6/23/16
to Caffe Users
Hello, I have recently begun doing multi-label regression with Caffe. I changed the code to accept multiple labels, and I have checked it many times to make sure the label is 4-D (the coordinates of two points). But when I train the net, the loss (EuclideanLoss) is very high and keeps increasing until it becomes NaN. Can you give me some advice? Thank you.
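A loss that keeps increasing until NaN is the classic signature of a learning rate that is too large for the loss scale (and an unnormalized multi-label Euclidean loss makes the effective gradients larger). A toy gradient-descent sketch of the effect:

```python
def descend(lr, steps=20):
    """Gradient descent on f(w) = w^2, whose gradient is 2w.
    The update w -= lr * 2w converges for lr < 1.0 but overshoots
    the minimum by more on every step once lr > 1.0."""
    w = 1.0
    for _ in range(steps):
        w -= lr * 2.0 * w
    return w

print(abs(descend(0.1)))  # shrinks toward 0 (0.8^20, about 0.0115)
print(abs(descend(1.5)))  # blows up: |w| doubles every step (2^20)
```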



On Tuesday, February 24, 2015 at 12:17:12 PM UTC+8, Michael Davies wrote:

zhi chai

Mar 27, 2017, 10:55:32 AM3/27/17
to Caffe Users
That is because you need to modify the EuclideanLoss layer code: it divides only by the number of images, not by the total number of label dimensions.
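To illustrate the point: dividing by 2N (N = batch size) only means that with a D-dimensional label the loss and its gradients are D times larger than a per-element average, which can push a high learning rate into divergence. A NumPy comparison (the per-element variant is a possible modification, not the stock Caffe code):

```python
import numpy as np

def loss_per_image(pred, label):
    """Stock Caffe behavior: divide by 2 * batch size only."""
    return np.sum((pred - label) ** 2) / (2.0 * pred.shape[0])

def loss_per_element(pred, label):
    """Modified variant: also divide by the label dimensionality D."""
    return np.sum((pred - label) ** 2) / (2.0 * pred.size)

pred = np.ones((2, 4))   # batch of 2, 4-D labels
label = np.zeros((2, 4))
print(loss_per_image(pred, label))    # 8 / 4  = 2.0
print(loss_per_element(pred, label))  # 8 / 16 = 0.5
```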