target_blobs.size() == source_layer_blobs.size() (2 vs. 0) - Incompatible number of blobs for layer

Белый Охотник

May 22, 2015, 6:27:59 AM
to caffe...@googlegroups.com

Hey, I am currently trying to train and test my own model on my own data with the latest version of Caffe (C++). Training goes as expected with the following prototxt (a sketch of the training command follows it):

name: "FACES"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }

  data_param {
    source: "examples/_faces/trainldb"
    batch_size: 155
    backend: LEVELDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }

  data_param {
    source: "examples/_faces/testldb"
    batch_size: 45
    backend: LEVELDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 12
    kernel_size: 13
    stride: 2
    weight_filler {
      type: "gaussian" # initialize the filters from a Gaussian
      std: 0.01        # distribution with stdev 0.01 (default mean: 0)
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

....

layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
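For reference, training is launched the usual way through a solver that points at the net above (just a sketch; the solver file name here is a placeholder, not the one I actually use):

    caffe train --solver=examples/_faces/solver.prototxt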

But when I try to load the model for making predictions with this prototxt:

name: "FACES"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 150
input_dim: 150
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 12
    kernel_size: 13
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
....
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}

It breaks with "target_blobs.size() == source_layer_blobs.size() (2 vs. 0) - Incompatible number of blobs for layer 1",
but I can't see a mistake on my side, and the setup also matches the similar one for CaffeNet.

Maybe I'm not supposed to load it like this?

    Caffe::set_mode(Caffe::CPU);

    // Build the net from the deploy prototxt in TEST phase
    Net<float>* net = new Net<float>("azf_faces.prototxt", Phase::TEST);
    cout << "Num inputs in Net: " << net->num_inputs() << endl;   // prints 1
    cout << "Num outputs in Net: " << net->num_outputs() << endl; // prints 1

    // this is where the error occurs
    net->CopyTrainedLayersFrom("azf3_iter_10000.caffemodel");
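A quick way to see which layer index the message refers to is to dump the deploy net's layers and their parameter-blob counts before calling CopyTrainedLayersFrom (a sketch of my own on top of the code above, not part of the original setup):

    // List every layer the deploy net created and how many learned blobs it owns.
    // A Convolution layer should report 2 blobs (weights + bias).
    for (size_t i = 0; i < net->layers().size(); ++i) {
      cout << i << ": " << net->layer_names()[i]
           << " has " << net->layers()[i]->blobs().size()
           << " blobs" << endl;
    }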

Белый Охотник

May 25, 2015, 1:39:05 AM
to caffe...@googlegroups.com
The same error also occurs with the MNIST example. It trained successfully, but when I try to load the resulting model with lenet.prototxt for recognition, it throws the exception "target_blobs.size() == source_layer_blobs.size() (2 vs. 0) - Incompatible number of blobs for layer 1". So either my initialization routine is wrong, or I screwed up the framework compilation (it was hell; I use it under Windows 7 and VS2013), guided by this post: https://initialneil.wordpress.com/2015/01/11/build-caffe-in-windows-with-visual-studio-2013-cuda-6-5-opencv-2-4-9/
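One way to narrow it down (my own suggestion; it assumes the stock file names from examples/mnist) is to load the same weights with the standard command-line test tool. If that works, the build is probably fine and the problem is in the custom loading code:

    caffe test --model=examples/mnist/lenet_train_test.prototxt --weights=examples/mnist/lenet_iter_10000.caffemodel --iterations=100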

fractal

Jun 15, 2017, 6:05:39 PM
to Caffe Users, cocut...@gmail.com
I have the same issue but with a different topology (Inception-BN). How did this get resolved for you? What was the issue?

Thanks