newLayer does not name a type


João Abrantes

Jun 7, 2016, 7:37:49 AM
to Caffe Users
I am trying to create a new ConvolutionLayer called MaskConvolutionLayer. 
These were my steps:

I went to caffe/src/caffe/proto/caffe.proto and added a new Engine value called MASK:

message ConvolutionParameter {
  [...]
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
    MASK = 3;
  }
}
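For reference, a layer in the net .prototxt would then select the new engine roughly like this (the layer name and the other parameters below are just placeholders):

layer {
  name: "conv1"                # placeholder name
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 3
    engine: MASK               # the new enum value added above
  }
}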



Then I modified the `caffe/src/caffe/layer_factory.cpp`:

#include "caffe/layers/conv_layer_mask.hpp"

[...]


template <typename Dtype>
shared_ptr
<Layer<Dtype> > GetConvolutionLayer(const LayerParameter& param) {
[...]
 
if (engine == ConvolutionParameter_Engine_CAFFE) {
   
return shared_ptr<Layer<Dtype> >(new ConvolutionLayer<Dtype>(param));
 
} else if (engine == ConvolutionParameter_Engine_MASK) {
   
return shared_ptr<Layer<Dtype> >(new MaskConvolutionLayer<Dtype>(param));
#ifdef USE_CUDNN
 
}
[...]
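(For context, the part elided with [...] at the top of GetConvolutionLayer is the stock engine-selection logic; from memory it is roughly the following, so treat it as approximate rather than a verbatim copy:)

  ConvolutionParameter conv_param = param.convolution_param();
  ConvolutionParameter_Engine engine = conv_param.engine();
  if (engine == ConvolutionParameter_Engine_DEFAULT) {
    engine = ConvolutionParameter_Engine_CAFFE;
#ifdef USE_CUDNN
    engine = ConvolutionParameter_Engine_CUDNN;  // CuDNN preferred when Caffe is built with it
#endif
  }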

I created the files conv_layer_mask.hpp and conv_layer_mask.cpp. When I try to compile Caffe I get the following error:



src/caffe/layer_factory.cpp: In function ‘boost::shared_ptr<caffe::Layer<Dtype> > caffe::GetConvolutionLayer(const caffe::LayerParameter&)’:
src/caffe/layer_factory.cpp:62:37: error: expected primary-expression before ‘(’ token
     return shared_ptr<Layer<Dtype> >(new MaskConvolutionLayer<Dtype>(param));
                                     ^
src/caffe/layer_factory.cpp:62:42: error: ‘MaskConvolutionLayer’ does not name a type
     return shared_ptr<Layer<Dtype> >(new MaskConvolutionLayer<Dtype>(param));
                                          ^
src/caffe/layer_factory.cpp:62:68: error: expected primary-expression before ‘>’ token
     return shared_ptr<Layer<Dtype> >(new MaskConvolutionLayer<Dtype>(param));
                                                                    ^
src/caffe/layer_factory.cpp: In function ‘boost::shared_ptr<caffe::Layer<Dtype> > caffe::GetTanHLayer(const caffe::LayerParameter&) [with Dtype = double]’:
src/caffe/layer_factory.cpp:240:1: warning: control reaches end of non-void function [-Wreturn-type]
[... the same -Wreturn-type warning is repeated for GetTanHLayer, GetSoftmaxLayer, GetSigmoidLayer, GetReLULayer, GetLRNLayer and GetPoolingLayer, each for Dtype = float and Dtype = double ...]
Makefile:572: recipe for target '.build_release/src/caffe/layer_factory.o' failed
make: *** [.build_release/src/caffe/layer_factory.o] Error 1

Any help?

Clément Fuji Tsang

Jun 7, 2016, 9:44:47 AM
to Caffe Users
Well, since we can't see your implementation of the layer it's hard to say, but my guess is that the problem is in your constructor. Maybe check whether MaskConvolutionLayer inherits from Layer?

João Abrantes

Jun 7, 2016, 10:20:08 AM
to Caffe Users
My conv_layer_mask.hpp and conv_layer_mask.cpp are basically just a copy-paste of conv_layer.hpp and conv_layer.cpp, with some slight changes to the Backward_cpu function:

cpp file:

#include <vector>

#include "caffe/layers/conv_layer_mask.hpp"

namespace caffe {

template <typename Dtype>
void MaskConvolutionLayer<Dtype>::compute_output_shape() {
  const int* kernel_shape_data = this->kernel_shape_.cpu_data();
  const int* stride_data = this->stride_.cpu_data();
  const int* pad_data = this->pad_.cpu_data();
  const int* dilation_data = this->dilation_.cpu_data();
  this->output_shape_.clear();
  for (int i = 0; i < this->num_spatial_axes_; ++i) {
    // i + 1 to skip channel axis
    const int input_dim = this->input_shape(i + 1);
    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)
        / stride_data[i] + 1;
    this->output_shape_.push_back(output_dim);
  }
}

template <typename Dtype>
void MaskConvolutionLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  const Dtype* weight = this->blobs_[0]->cpu_data();
  for (int i = 0; i < bottom.size(); ++i) {
    const Dtype* bottom_data = bottom[i]->cpu_data();
    Dtype* top_data = top[i]->mutable_cpu_data();
    for (int n = 0; n < this->num_; ++n) {
      this->forward_cpu_gemm(bottom_data + n * this->bottom_dim_, weight,
          top_data + n * this->top_dim_);
      if (this->bias_term_) {
        const Dtype* bias = this->blobs_[1]->cpu_data();
        this->forward_cpu_bias(top_data + n * this->top_dim_, bias);
      }
    }
  }
}

template <typename Dtype>
void MaskConvolutionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  const Dtype* weight = this->blobs_[0]->cpu_data();
  // Mask blob; use cpu_data() here (the original had gpu_data(), which would
  // hand a device pointer to the CPU routine caffe_mul below).
  const Dtype* mask = this->blobs_[2]->cpu_data();
  Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();
  for (int i = 0; i < top.size(); ++i) {
    const Dtype* top_diff = top[i]->cpu_diff();
    const Dtype* bottom_data = bottom[i]->cpu_data();
    Dtype* bottom_diff = bottom[i]->mutable_cpu_diff();
    // Bias gradient, if necessary.
    if (this->bias_term_ && this->param_propagate_down_[1]) {
      Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();
      for (int n = 0; n < this->num_; ++n) {
        this->backward_cpu_bias(bias_diff, top_diff + n * this->top_dim_);
      }
    }
    if (this->param_propagate_down_[0] || propagate_down[i]) {
      for (int n = 0; n < this->num_; ++n) {
        // gradient w.r.t. weight. Note that we will accumulate diffs.
        if (this->param_propagate_down_[0]) {
          this->weight_cpu_gemm(bottom_data + n * this->bottom_dim_,
              top_diff + n * this->top_dim_, weight_diff);
        }
        // gradient w.r.t. bottom data, if necessary.
        if (propagate_down[i]) {
          this->backward_cpu_gemm(top_diff + n * this->top_dim_, weight,
              bottom_diff + n * this->bottom_dim_);
        }
      }
    }
  }
  // Zero out the gradient of masked (pruned) weights by multiplying the
  // accumulated weight diff with the mask, element-wise.
  const int count = this->blobs_[2]->count();
  caffe_mul(count, mask, weight_diff, weight_diff);
}

#ifdef CPU_ONLY
STUB_GPU(MaskConvolutionLayer);
#endif

INSTANTIATE_CLASS(MaskConvolutionLayer);
}  // namespace caffe
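Side note on the mask blob: BaseConvolutionLayer::LayerSetUp only creates blobs_[0] (the weights) and, when bias_term is set, blobs_[1] (the bias), so the blobs_[2] mask used in Backward_cpu has to be created somewhere. A minimal sketch of how that could look, assuming a LayerSetUp override that is not part of the code posted above (the matching declaration would also go in the .hpp):

template <typename Dtype>
void MaskConvolutionLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  BaseConvolutionLayer<Dtype>::LayerSetUp(bottom, top);
  // Hypothetical addition: assumes bias_term is true, so blobs_[0] = weights
  // and blobs_[1] = bias already exist and the mask lands at index 2.
  if (this->blobs_.size() == 2) {
    this->blobs_.push_back(shared_ptr<Blob<Dtype> >(
        new Blob<Dtype>(this->blobs_[0]->shape())));
    // Initialize the mask to all ones, i.e. no weights pruned yet.
    caffe_set(this->blobs_[2]->count(), Dtype(1),
              this->blobs_[2]->mutable_cpu_data());
  }
}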

Clément Fuji Tsang

Jun 7, 2016, 10:36:04 AM
to Caffe Users
I think the mistake is in the .hpp

On Tuesday, June 7, 2016 at 1:37:49 PM UTC+2, João Abrantes wrote:

João Abrantes

Jun 7, 2016, 10:37:46 AM
to Caffe Users
My hpp:

#ifndef CAFFE_CONV_LAYER_HPP_
#define CAFFE_CONV_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

#include "caffe/layers/base_conv_layer.hpp"

namespace caffe {

/**
 * @brief Convolves the input image with a bank of learned filters,
 *        and (optionally) adds biases.
 *
 *   Caffe convolves by reduction to matrix multiplication. This achieves
 *   high-throughput and generality of input and filter dimensions but comes at
 *   the cost of memory for matrices. This makes use of efficiency in BLAS.
 *
 *   The input is "im2col" transformed to a channel K' x H x W data matrix
 *   for multiplication with the N x K' x H x W filter matrix to yield a
 *   N' x H x W output matrix that is then "col2im" restored. K' is the
 *   input channel * kernel height * kernel width dimension of the unrolled
 *   inputs so that the im2col matrix has a column for each input region to
 *   be filtered. col2im restores the output spatial structure by rolling up
 *   the output channel N' columns of the output matrix.
 */
template <typename Dtype>
class MaskConvolutionLayer : public BaseConvolutionLayer<Dtype> {
 public:
  /**
   * @param param provides ConvolutionParameter convolution_param,
   *    with ConvolutionLayer options:
   *  - num_output. The number of filters.
   *  - kernel_size / kernel_h / kernel_w. The filter dimensions, given by
   *  kernel_size for square filters or kernel_h and kernel_w for rectangular
   *  filters.
   *  - stride / stride_h / stride_w (\b optional, default 1). The filter
   *  stride, given by stride_size for equal dimensions or stride_h and stride_w
   *  for different strides. By default the convolution is dense with stride 1.
   *  - pad / pad_h / pad_w (\b optional, default 0). The zero-padding for
   *  convolution, given by pad for equal dimensions or pad_h and pad_w for
   *  different padding. Input padding is computed implicitly instead of
   *  actually padding.
   *  - dilation (\b optional, default 1). The filter
   *  dilation, given by dilation_size for equal dimensions for different
   *  dilation. By default the convolution has dilation 1.
   *  - group (\b optional, default 1). The number of filter groups. Group
   *  convolution is a method for reducing parameterization by selectively
   *  connecting input and output channels. The input and output channel
   *  dimensions must be divisible by the number of groups. For group
   *  @f$ \geq 1 @f$, the convolutional filters' input and output channels are
   *  separated s.t. each group takes 1 / group of the input channels and makes
   *  1 / group of the output channels. Concretely 4 input channels, 8 output
   *  channels, and 2 groups separate input channels 1-2 and output channels 1-4
   *  into the first group and input channels 3-4 and output channels 5-8 into
   *  the second group.
   *  - bias_term (\b optional, default true). Whether to have a bias.
   *  - engine: convolution has CAFFE (matrix multiplication) and CUDNN (library
   *    kernels + stream parallelism) engines.
   */
  explicit MaskConvolutionLayer(const LayerParameter& param)
      : BaseConvolutionLayer<Dtype>(param) {}

  virtual inline const char* type() const { return "ConvolutionMask"; }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  // virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
  //     const vector<Blob<Dtype>*>& top);
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
  // virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
  //     const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
  virtual inline bool reverse_dimensions() { return false; }
  virtual void compute_output_shape();
};

}  // namespace caffe

#endif  // CAFFE_CONV_LAYER_HPP_

Clément Fuji Tsang

Jun 7, 2016, 10:50:38 AM
to Caffe Users
Yup, I spotted one: it's supposed to be a ConvolutionLayer, isn't it? (if you choose to register it from GetConvolutionLayer)

So why did you implement the following?
virtual inline const char* type() const { return "ConvolutionMask"; }

replace it by:
virtual inline const char* type() const { return "Convolution"; }

and it should be alright. Or don't use the layer_factory and instead register the layer directly in the layer's .cpp.
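A rough sketch of that second option, in case it helps (the "MaskConvolution" type string here is just an example):

// In conv_layer_mask.cpp: register the class under its own type string instead
// of hooking into GetConvolutionLayer. REGISTER_LAYER_CLASS(MaskConvolution)
// instantiates MaskConvolutionLayer<Dtype> for layers whose prototxt type is
// "MaskConvolution", so type() should then return "MaskConvolution".
INSTANTIATE_CLASS(MaskConvolutionLayer);
REGISTER_LAYER_CLASS(MaskConvolution);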

happy deep compression ;)


On Tuesday, June 7, 2016 at 1:37:49 PM UTC+2, João Abrantes wrote:

João Abrantes

Jun 7, 2016, 10:56:13 AM
to Caffe Users
Yes! That was indeed an error. At first I was trying to register the layer directly in the layer's .cpp, then ended up switching to the layer_factory but forgot to change that part. Anyway, I changed it, did make clean and make all, and the exact same error appeared!

P.S. (any code for working layers for deep compression would be super helpful :) :) )

Jan

Jun 9, 2016, 5:58:28 AM
to Caffe Users
My bet is on the include guards: you did not change them, so they are the same as in the standard conv_layer.hpp, which is probably also included in the layer factory. That makes the preprocessor skip one of the two headers (apparently yours is the one dropped). Change the include guards to something unique like MASK_CONV_LAYER_HPP_ and try to recompile.
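Something like this, just a sketch (any unique guard name works):

#ifndef MASK_CONV_LAYER_HPP_   // was CAFFE_CONV_LAYER_HPP_, which collides with conv_layer.hpp
#define MASK_CONV_LAYER_HPP_

// ... rest of conv_layer_mask.hpp unchanged ...

#endif  // MASK_CONV_LAYER_HPP_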

Jan

João Abrantes

Jun 9, 2016, 6:39:00 AM
to Caffe Users
Yes, Jan! That was it... thank you very much.

Clément Fuji Tsang

Jun 11, 2016, 11:41:06 PM
to Caffe Users
Sorry for the late answer,

check my repo here (branch: deep compression), just pushed it ;)

I may do it for convolution as well, but I'm not sure it's an interesting part...

On Tuesday, June 7, 2016 at 1:37:49 PM UTC+2, João Abrantes wrote:

wuyilengbi

Aug 10, 2016, 2:43:54 AM
to Caffe Users
I'm using your deep compression code; does it only do pruning? Could I ask you some questions? My QQ is 980087473.

On Sunday, June 12, 2016 at 11:41:06 AM UTC+8, Clément Fuji Tsang wrote:
