Hi everyone,
I need a special loss layer that can exclude some data based on the label. For example, when the label is (-1,-1), I want to set the predicted result to (-1,-1) so that samples with the (-1,-1) label do not contribute to the loss. Since I train on the GPU, I thought I would need to modify both euclidean_loss_layer.cpp and euclidean_loss_layer.cu.
But after some debugging, I simply added some code to Reshape in euclidean_loss_layer.cpp, like this:
template <typename Dtype>
void EuclideanLossLayer<Dtype>::Reshape(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  LossLayer<Dtype>::Reshape(bottom, top);
  CHECK_EQ(bottom[0]->count(1), bottom[1]->count(1))
      << "Inputs must have the same dimension.";
  diff_.ReshapeLike(*bottom[0]);
  ///////////////////////////////////////////////
  ////// Added @201603: mask out (-1,-1) labels //////
  int outer_num_ = bottom[0]->count(0, 1);  // batch size N
  int inner_num_ = bottom[0]->count(1);     // elements per sample (C*H*W), 2 here
  CHECK_EQ(outer_num_ * inner_num_, bottom[1]->count())
      << "(N,C,H,W)";
  const Dtype* label = bottom[1]->cpu_data();  // read the labels
  int label_value[2];  // assumes inner_num_ == 2
  for (int i = 0; i < outer_num_; ++i) {
    for (int j = 0; j < inner_num_; ++j) {
      label_value[j] = static_cast<int>(label[i * inner_num_ + j]);
    }
    if ((label_value[0] == -1) && (label_value[1] == -1)) {
      // If the label is (-1,-1), force the prediction to (-1,-1) as well,
      // so that diff_ for this sample becomes zero.
      bottom[0]->mutable_cpu_data()[i * inner_num_]     = -1.0;  // I want to change bottom[0]'s value;
      bottom[0]->mutable_cpu_data()[i * inner_num_ + 1] = -1.0;  // maybe wrong here.
    }
  }
  /////////////////// end of added code ///////////////////
}
bottom[0] is the regression layer's output (256 × 2), and bottom[1] is the label (256 × 2 × 1 × 1).
I train the net in GPU mode, and I think that on every iteration Reshape is called first and then the .cu code runs, so it seems enough to modify only Reshape in the .cpp file.
Am I wrong? Any suggestions would be appreciated.