I am using Caffe's classification.cpp as the basis for deployment. In the train/test phase with the Python scripts the net achieved accuracy = 0.93, but now that I have moved to deployment I get strange results. I have two classes, and I need the probability of object detection. I expected the result to be two probabilities in the Softmax output blob, since the net has two outputs in the FC layer (prob1 + prob2 == 1.0f), but the result is puzzling: in the output vector I get two identical values for every image. Here are the input and output layers:
layer {
  name: "data"
  top: "data"
  type: "Input"
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
}
layer {
  name: "fc6"
  top: "fc6"
  type: "InnerProduct"
  bottom: "drop5"
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
      std: 0.1
    }
  }
}
layer {
  name: "prob"
  top: "prob"
  type: "Softmax"
  bottom: "fc6"
}
My C++ code for the regular use:

Blob<float>* input_layer = m_net->input_blobs()[0];
input_layer->Reshape(1, m_numChannels, m_inputGeometry.height, m_inputGeometry.width);
m_net->Reshape();

std::vector<cv::Mat> input_channels;
int width = input_layer->width();
int height = input_layer->height();
float* input_data = input_layer->mutable_cpu_data();
// Wrap each channel of the input blob in a cv::Mat header (no copy).
for(int i = 0; i < input_layer->channels(); ++i){
    cv::Mat channel(height, width, CV_32FC1, input_data);
    input_channels.push_back(channel);
    input_data += width * height;
}
cv::split(image_float, input_channels);

m_net->Forward();

Blob<float>* output_layer = m_net->output_blobs()[0];
const float* begin = output_layer->cpu_data();
const float* end = begin + output_layer->channels();
QVector<float> output = QVector<float>(end - begin, *begin);
In addition, the results look random (and are duplicated for each class), and the smallest probability value is a magic 0.443142 that keeps reappearing in the output vector. What am I doing wrong?
For completeness, image_float is prepared like this:

cv::Mat image_float;
image.convertTo(image_float, CV_32FC3);