Write a new wrapper for the convolutional layer


Huynh Vu

Jul 20, 2017, 5:56:45 AM
to Caffe Users
Hi everyone,

I'm writing a new layer (MsgPassLayer) that functions like the convolutional (conv.) layer, and I am testing it with the MNIST example. The new layer just initializes the convolution parameters and calls the convolution forward and backward passes.
However, when it is run on the MNIST example, the accuracy is quite low. I don't know how to break the code down to find the error.
Hope to get your support.

The following is the implementation of the new layer:

template <typename Dtype>
void MsgPassLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  CHECK_EQ(4, bottom[0]->num_axes()) << "Input must have 4 axes, "
      << "corresponding to (num, channels, height, width)";

  // Conv layer configuration
  conv_bottom_vec_.clear();
  conv_bottom_vec_.push_back(bottom[0]);
  conv_top_vec_.clear();
  //conv_top_vec_.push_back(&out_conv_);
  conv_top_vec_.push_back(top[0]);

  LayerParameter conv_param;
  conv_param.mutable_convolution_param()->set_num_output(50);
  conv_param.mutable_convolution_param()->set_kernel_size(5);
  conv_param.mutable_convolution_param()->set_stride(1);

  FillerParameter* weight_filler_ptr = new FillerParameter();
  weight_filler_ptr->set_type("xavier");
  conv_param.mutable_convolution_param()->set_allocated_weight_filler(weight_filler_ptr);

  FillerParameter* bias_filler_ptr = new FillerParameter();
  bias_filler_ptr->set_type("constant");
  conv_param.mutable_convolution_param()->set_allocated_bias_filler(bias_filler_ptr);

  conv_layer_.reset(new ConvolutionLayer<Dtype>(conv_param));
  conv_layer_->SetUp(conv_bottom_vec_, conv_top_vec_);
}

template <typename Dtype>
void MsgPassLayer<Dtype>::Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  conv_layer_->Forward(conv_bottom_vec_, conv_top_vec_);
}

template <typename Dtype>
void MsgPassLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {}

template <typename Dtype>
void MsgPassLayer<Dtype>::Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  conv_layer_->Backward(conv_top_vec_, propagate_down, conv_bottom_vec_);
}

INSTANTIATE_CLASS(MsgPassLayer);
REGISTER_LAYER_CLASS(MsgPass);



mohsen zarrindel

Aug 27, 2017, 9:54:24 AM
to Caffe Users
You can insert the code below:
import pdb
pdb.set_trace()
Then, when you run your Python code (for example, from the command line), it will break at that point.

On Thursday, July 20, 2017 at 14:26:45 (UTC+4:30), Huynh Vu wrote:

Jonathan R. Williford

Aug 27, 2017, 3:18:18 PM
to Caffe Users
Have you been able to solve this, Huynh?

In addition to what Mohsen said:

To test the forward pass, you can also use a Python notebook and the PyCaffe interface to inspect the output blob. I would use the DummyData layer to feed it really simple test cases (for example, 3x3 inputs).

Automatic numeric gradient checking doesn't seem to be trivially available for Python layers; see https://github.com/BVLC/caffe/issues/3903. You should, however, check the gradient, either manually or via a method like the one used in that issue. I tend to assume the backward pass is implemented incorrectly until proven otherwise.
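As a self-contained illustration of what a numeric gradient check does (no Caffe dependency; the toy `conv1d` layer and all the function names here are made up for the example), you can compare a hand-derived analytic gradient against central finite differences:

```python
# Toy numeric gradient check in plain Python: a 1-D "layer" with a
# hand-derived backward pass, verified against finite differences.
# This is only a sketch of the technique, not Caffe's GradientChecker.

def conv1d(x, w):
    """Valid 1-D convolution (cross-correlation), stride 1."""
    k = len(w)
    return [sum(x[i + j] * w[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def loss(x, w):
    """Sum-of-outputs loss, so d(loss)/d(out_i) = 1 for every output."""
    return sum(conv1d(x, w))

def analytic_grad_w(x, w):
    """Hand-derived backward pass: d(loss)/d(w_j) = sum_i x[i + j]."""
    k = len(w)
    n_out = len(x) - k + 1
    return [sum(x[i + j] for i in range(n_out)) for j in range(k)]

def numeric_grad_w(x, w, eps=1e-6):
    """Central finite differences: (L(w+eps) - L(w-eps)) / (2*eps)."""
    grads = []
    for j in range(len(w)):
        w_plus, w_minus = list(w), list(w)
        w_plus[j] += eps
        w_minus[j] -= eps
        grads.append((loss(x, w_plus) - loss(x, w_minus)) / (2 * eps))
    return grads

x = [0.5, -1.0, 2.0, 0.25, 1.5]
w = [0.1, -0.3, 0.7]
ana = analytic_grad_w(x, w)
num = numeric_grad_w(x, w)
# If the backward pass is wrong, the two gradients diverge here.
assert all(abs(a - n) < 1e-4 for a, n in zip(ana, num)), (ana, num)
print("gradient check passed:", ana)
```

For a C++ layer like MsgPassLayer, the same idea is what Caffe's own unit tests do with the GradientChecker utility (see the existing layer tests under src/caffe/test/), so writing a small test there may be the most direct route.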

Best,
Jonathan


