wrong results using trained caffe net from c++

unitready

Jul 25, 2016, 7:05:11 AM7/25/16
to Caffe Users
Hello,

I am trying to use my trained Caffe net on my own data from C++. I adapted the standard Caffe example classification.cpp for deployment. During the train/test phase with the Python scripts the net achieved accuracy = 0.93, but now that I have moved to deployment I get strange results. I have two classes:
  • environment
  • object

and I need the probability of the object class. I expected the Softmax output blob to hold two probabilities, one per output of the FC layer, with prob1 + prob2 == 1.0f (see the small softmax sketch after the layer definitions), but the result is puzzling: in the output vector I get two identical values for every image. Here are the input and output layers:


layer {
  name: "data"
  top: "data"
  type: "Input"
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}
layer {
  name: "fc6"
  top: "fc6"
  type: "InnerProduct"
  bottom: "drop5"
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
      std: 0.1
    }
  }
}
layer {
  name: "prob"
  top: "prob"
  type: "Softmax"
  bottom: "fc6"
}
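
As a quick reference for the expectation above, here is a minimal sketch (plain C++, independent of Caffe, with made-up logit values) of what a two-way softmax computes from the two FC outputs:

#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical FC-layer outputs (logits) for one image.
    float fc6[2] = {1.3f, -0.7f};

    // Softmax: subtract the max for numerical stability, exponentiate, normalize.
    float m = std::max(fc6[0], fc6[1]);
    float e0 = std::exp(fc6[0] - m);
    float e1 = std::exp(fc6[1] - m);
    float prob0 = e0 / (e0 + e1);
    float prob1 = e1 / (e0 + e1);

    // The two probabilities are positive and sum to 1.
    std::printf("%f %f sum=%f\n", prob0, prob1, prob0 + prob1);
    return 0;
}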

My C++ code for running the net:

// Reshape the input blob to a single image and propagate the new shape.
Blob<float>* input_layer = m_net->input_blobs()[0];
input_layer->Reshape(1, m_numChannels, m_inputGeometry.height, m_inputGeometry.width);
m_net->Reshape();

// Wrap the input blob's memory in one cv::Mat per channel.
std::vector<cv::Mat> input_channels;
int width = input_layer->width();
int height = input_layer->height();
float* input_data = input_layer->mutable_cpu_data();
for (int i = 0; i < input_layer->channels(); ++i) {
    cv::Mat channel(height, width, CV_32FC1, input_data);
    input_channels.push_back(channel);
    input_data += width * height;
}

// Split the preprocessed image into the wrapped channels and run the net.
cv::split(image_float, input_channels);
m_net->Forward();

// Read the output blob.
Blob<float>* output_layer = m_net->output_blobs()[0];
const float* begin = output_layer->cpu_data();
const float* end = begin + output_layer->channels();
QVector<float> output = QVector<float>(end - begin, *begin);

In addition, the results look almost random (and are duplicated for both classes); the smallest probability value is the magic 0.443142, which shows up in the output vector again and again. What am I doing wrong?
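
For completeness, a minimal sketch of reading the class probabilities for a single input image from the output blob into a plain std::vector, assuming the same m_net and variable names as above:

#include <vector>

// One probability per class for the first (and only) image in the batch.
Blob<float>* output_layer = m_net->output_blobs()[0];
const float* begin = output_layer->cpu_data();
const float* end = begin + output_layer->channels();
std::vector<float> probs(begin, end);   // iterator-range copy of the blob values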

mprl

unread,
Jul 25, 2016, 11:05:51 AM7/25/16
to Caffe Users
Are you sure you are doing the same preprocessing during training and classification?
I had the same problem because I forgot to scale my images between 0 and 1 in my classification code.
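
A minimal sketch of that kind of scaling with OpenCV, assuming the input is an 8-bit 3-channel cv::Mat named image:

// Convert to float and scale pixel values from [0, 255] to [0, 1] in one step.
cv::Mat image_float;
image.convertTo(image_float, CV_32FC3, 1.0 / 255.0);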

unitready

unread,
Jul 27, 2016, 10:03:07 AM7/27/16
to Caffe Users
Yes, I scale images with:

cv::Mat image_float;
image.convertTo(image_float, CV_32FC3);



On Monday, 25 July 2016 at 18:05:51 UTC+3, mprl wrote: