[DeepLab][V2] Protobuf parsing error - IMAGE_SEG_DATA

2,367 views

Ruud

Jun 15, 2016, 6:58:09 AM
to Caffe Users
Hi everyone,

I am trying to run my previously working DeepLab v1 prototxt under the newly released DeepLab v2.

However, some small things seem to have changed. I get the following error:


I0615 12:54:01.154366 21986 solver.cpp:81] Creating training net from train_net file: /home/ruud/DeepLab/exper-sweeper-2/sweeper/config/deeplab_vanilla/train_train.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 21:3: Unknown enumeration value of "IMAGE_SEG_DATA" for field "type".
F0615 12:54:01.154516 21986 upgrade_proto.cpp:68] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/ruud/DeepLab/exper-sweeper-2/sweeper/config/deeplab_vanilla/train_train.prototxt
*** Check failure stack trace: ***
    @     0x7f3ccdc95daa  (unknown)
    @     0x7f3ccdc95ce4  (unknown)
    @     0x7f3ccdc956e6  (unknown)
    @     0x7f3ccdc98687  (unknown)
    @     0x7f3cce405b0e  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7f3cce43e6a7  caffe::Solver<>::InitTrainNet()
    @     0x7f3cce43f70c  caffe::Solver<>::Init()
    @     0x7f3cce43fa3a  caffe::Solver<>::Solver()
    @     0x7f3cce435493  caffe::Creator_SGDSolver<>()
    @           0x40ea7e  caffe::SolverRegistry<>::CreateSolver()
    @           0x407bb2  train()
    @           0x4059dc  main
    @     0x7f3cccfa3f45  (unknown)
    @           0x406111  (unknown)
    @              (nil)  (unknown)



When I look into the new prototxt files of V2, I spot a difference in the layer type parameter.

This is the old V1 formatting:

layers {
  name: "data"
  type: IMAGE_SEG_DATA
  top: "data"
...

And this is the new V2 formatting:

layers {
  name: "data"
  type: "ImageSegData"
  top: "data"
...

However, when I adopt the new formatting and run my prototxt again, I get a different error that I cannot seem to solve:

I0615 12:57:02.523533 22016 solver.cpp:81] Creating training net from train_net file: /home/ruud/DeepLab/exper-sweeper-2/sweeper/config/deeplab_vanilla/train_train.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 20:9: Expected integer or identifier.
F0615 12:57:02.523633 22016 upgrade_proto.cpp:68] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/ruud/DeepLab/exper-sweeper-2/sweeper/config/deeplab_vanilla/train_train.prototxt
*** Check failure stack trace: ***
    @     0x7fbc2ea93daa  (unknown)
    @     0x7fbc2ea93ce4  (unknown)
    @     0x7fbc2ea936e6  (unknown)
    @     0x7fbc2ea96687  (unknown)
    @     0x7fbc2f203b0e  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7fbc2f23c6a7  caffe::Solver<>::InitTrainNet()
    @     0x7fbc2f23d70c  caffe::Solver<>::Init()
    @     0x7fbc2f23da3a  caffe::Solver<>::Solver()
    @     0x7fbc2f233493  caffe::Creator_SGDSolver<>()
    @           0x40ea7e  caffe::SolverRegistry<>::CreateSolver()
    @           0x407bb2  train()
    @           0x4059dc  main
    @     0x7fbc2dda1f45  (unknown)
    @           0x406111  (unknown)
    @              (nil)  (unknown)
Aborted (core dumped)

Any insights would be highly helpful!

Best,
Ruud



Ruud

Jun 15, 2016, 8:51:50 AM
to Caffe Users
Okay, one step closer: "layers" should be renamed to "layer" in the prototxt files. "layers" was probably left over from an old / outdated example.

However, now it does not seem to recognize another DeepLab parameter:
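For bulk migration, the block-keyword rename can be scripted. A minimal sketch (the file path and contents here are examples only; keep a backup of your real prototxt files):

```shell
# Write a tiny V1-style fragment to a scratch file (example content only).
printf 'layers {\n  name: "data"\n}\n' > /tmp/old.prototxt

# Rename top-level V1 "layers" blocks to the V2 "layer" keyword;
# -i.bak edits the file in place and keeps a .bak backup.
sed -i.bak 's/^layers {/layer {/' /tmp/old.prototxt

head -n1 /tmp/old.prototxt   # layer {
```

Note this only fixes the block keyword; the enum-style `type:` values and old parameters such as `blobs_lr` still need to be converted by hand.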

[libprotobuf ERROR google/protobuf/text_format.cc:296] Error parsing text-format caffe.NetParameter: 70:11: Message type "caffe.LayerParameter" has no field named "blobs_lr".
F0615 14:48:06.274693  5537 upgrade_proto.cpp:68] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/ruud/DeepLab/exper-sweeper-2/sweeper/config/deeplab_vanilla/train_train.prototxt
*** Check failure stack trace: ***
    @     0x7f28a541adaa  (unknown)
    @     0x7f28a541ace4  (unknown)
    @     0x7f28a541a6e6  (unknown)
    @     0x7f28a541d687  (unknown)
    @     0x7f28a5b94a2e  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7f28a5bcd7e7  caffe::Solver<>::InitTrainNet()
    @     0x7f28a5bce82c  caffe::Solver<>::Init()
    @     0x7f28a5bceb5a  caffe::Solver<>::Solver()
    @     0x7f28a5bc4643  caffe::Creator_SGDSolver<>()
    @           0x40ea6e  caffe::SolverRegistry<>::CreateSolver()
    @           0x407bb2  train()
    @           0x4059dc  main
    @     0x7f28a4728f45  (unknown)

zxd...@163.com

Jun 19, 2016, 3:54:49 AM
to Caffe Users
Hi, Ruud!
blobs_lr is deprecated; it has been replaced by lr_mult. I think you should modify all layers to the new style.
Here is an example of a convolution layer.

layer { 
  bottom: "conv3_1" 
  top: "conv3_2" 
  name: "conv3_2" 
  type: "Convolution"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param { 
    engine: CAFFE 
    num_output: 256 
    pad: 1 
    kernel_size: 3 
  } 
}

Best wishes!

On Wednesday, June 15, 2016 at 6:58:09 PM UTC+8, Ruud wrote:

xuany...@gmail.com

Jun 28, 2016, 9:08:59 AM
to Caffe Users
Hi, Ruud,

when I try to run my previously working DeepLab v1 prototxt under the newly released DeepLab v2, I meet the same problem. I tried to fix it as you said, but failed several times.

Can you show me the modified train.prototxt and email a copy to me? Thank you for your help. My email address is xuany...@163.com




On Wednesday, June 15, 2016 at 6:58:09 PM UTC+8, Ruud wrote:

Ruud

Aug 26, 2016, 11:01:20 AM
to Caffe Users
Sure! I translated deeplab_vanilla to the latest Caffe build:


#==============TRAIN.PROTOTXT=======================
# VGG 16-layer network convolutional finetuning
# Network modified to have smaller receptive field (128 pixels)
# and smaller stride (8 pixels) when run in convolutional mode.
#
# For alignment to work, we set:
# (1) input dimension equal to
# $n = 8 * k + 2$, e.g., 306 (for k = 38)
# (2) dimension after 3rd max-pooling (centered at -3.5)
# $m = k + 2$ (40 if k = 38)
# (3) dimension after 4th max-pooling (centered at -1.5)
# $m = k + 1$ (39 if k = 38)
# (4) Crop 1 pixel at the beginning of the label map and shrink by 8
# to produce the expected $m$

name: "${NET_ID}"

layer {
  name: "data"
  type: "ImageSegData"
  top: "data"
  top: "label"
  image_data_param {
    root_folder: "${DATA_ROOT}"
    source: "${EXP}/list/${TRAIN_SET}.txt"
    label_type: PIXEL
    batch_size: 24
    shuffle: true
  }
  transform_param {
    # Use BGR as order!
    # Use matlab script : calc_bgr_image_set_mean.m
    mean_value: 34.7887
    mean_value: 27.7252
    mean_value: 38.9483
    # mean_file: "/home/ruud/DeepLab/exper-sweeper-2/sweeper/data/sweeper_plant_1-7_synthetic_v2/mean_image.binaryproto"
    crop_size: 306
    mirror: true
  }
  include: { phase: TRAIN }
}
layer {
  name: "data"
  type: "ImageSegData"
  top: "data"
  top: "label"
  image_data_param {
    root_folder: "${DATA_ROOT}"
    source: "${EXP}/list/${TEST_SET}.txt"
    batch_size: 1
  }
  transform_param {
    # Use BGR as order!
    # Use matlab script : calc_bgr_image_set_mean.m
    mean_value: 34.7887
    mean_value: 27.7252
    mean_value: 38.9483
    #mean_file: "/home/ruud/DeepLab/exper-sweeper-2/sweeper/data/sweeper_plant_1-7_synthetic_v2/mean_image.binaryproto"
    crop_size: 514 # = 64 * 8 + 2
    mirror: false
  }
  include: { phase: TEST }
}

### NETWORK ###

layer {
  bottom: "data"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}
layer {
  bottom: "conv1_1"
  top: "conv1_2"
  name: "conv1_2"
  type: "Convolution"

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1_2"
  top: "conv1_2"
  name: "relu1_2"
  type: "ReLU"
}
layer {
  bottom: "conv1_2"
  top: "pool1"
  name: "pool1"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool1"
  top: "conv2_1"
  name: "conv2_1"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2_1"
  top: "conv2_1"
  name: "relu2_1"
  type: "ReLU"
}
layer {
  bottom: "conv2_1"
  top: "conv2_2"
  name: "conv2_2"
  type: "Convolution"
 
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2_2"
  top: "conv2_2"
  name: "relu2_2"
  type: "ReLU"
}
layer {
  bottom: "conv2_2"
  top: "pool2"
  name: "pool2"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool2"
  top: "conv3_1"
  name: "conv3_1"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_1"
  top: "conv3_1"
  name: "relu3_1"
  type: "ReLU"
}
layer {
  bottom: "conv3_1"
  top: "conv3_2"
  name: "conv3_2"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_2"
  top: "conv3_2"
  name: "relu3_2"
  type: "ReLU"
}
layer {
  bottom: "conv3_2"
  top: "conv3_3"
  name: "conv3_3"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_3"
  top: "conv3_3"
  name: "relu3_3"
  type: "ReLU"
}
layer {
  bottom: "conv3_3"
  top: "pool3"
  name: "pool3"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool3"
  top: "conv4_1"
  name: "conv4_1"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_1"
  top: "conv4_1"
  name: "relu4_1"
  type: "ReLU"
}
layer {
  bottom: "conv4_1"
  top: "conv4_2"
  name: "conv4_2"
  type: "Convolution"
    
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_2"
  top: "conv4_2"
  name: "relu4_2"
  type: "ReLU"
}
layer {
  bottom: "conv4_2"
  top: "conv4_3"
  name: "conv4_3"
  type: "Convolution"
      
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_3"
  top: "conv4_3"
  name: "relu4_3"
  type: "ReLU"
}
layer {
  bottom: "conv4_3"
  top: "pool4"
  name: "pool4"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    #pad: 1
    #stride: 2
    stride: 1
  }
}
layer {
  bottom: "pool4"
  top: "conv5_1"
  name: "conv5_1"
  type: "Convolution"
      
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead
    #hole: 2
    dilation: 2
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_1"
  top: "conv5_1"
  name: "relu5_1"
  type: "ReLU"
}
layer {
  bottom: "conv5_1"
  top: "conv5_2"
  name: "conv5_2"
  type: "Convolution"
      
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead for V2
    #hole: 2
    dilation: 2
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_2"
  top: "conv5_2"
  name: "relu5_2"
  type: "ReLU"
}
layer {
  bottom: "conv5_2"
  top: "conv5_3"
  name: "conv5_3"
  type: "Convolution"

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead for V2
    #hole: 2
    dilation: 2
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_3"
  top: "conv5_3"
  name: "relu5_3"
  type: "ReLU"
}
layer {
  bottom: "conv5_3"
  top: "pool5"
  name: "pool5"
  type: "Pooling"
  pooling_param {
    pool: MAX
    #kernel_size: 2
    #stride: 2
    kernel_size: 3
    stride: 1
    pad: 1
  }
}

layer {
  bottom: "pool5"
  top: "fc6"
  name: "fc6"
  type: "Convolution"
  
  # Works in V1, does not in V2?
  # strict_dim: false

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 4096
    pad: 6
    #hole is for V1, use 'dilation' instead for V2
    #hole: 4
    dilation: 4
    kernel_size: 4
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "relu6"
  type: "ReLU"
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "drop6"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc6"
  top: "fc7"
  name: "fc7"
  type: "Convolution"
  
  # This parameter seems deprecated in V2
  # strict_dim: false
 
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 4096
    kernel_size: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "relu7"
  type: "ReLU"
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "drop7"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc7"
  top: "fc8_synth_to_real"
  name: "fc8_synth_to_real"
  type: "Convolution"
  
  # This parameter seems deprecated in V2
  #strict_dim: false

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 10
  #blobs_lr: 20
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }

  convolution_param {
    num_output: ${NUM_LABELS}
    kernel_size: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  bottom: "label"
  top: "label_shrink"
  name: "label_shrink"
  type: "Interp"
  interp_param {
    shrink_factor: 8
    pad_beg: -1
    pad_end: 0
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8_synth_to_real"
  bottom: "label_shrink"
  
  # This parameter seems deprecated in V2
  # softmaxwithloss_param {
  #  #weight_source: "voc12/loss_weight/loss_weight_train.txt"
  #  ignore_label: 255
  #}

  # Use this instead for V2
  loss_param {
     ignore_label: 255
  }

  include: { phase: TRAIN }
}
layer {
  name: "accuracy"
  type: "SegAccuracy"
  bottom: "fc8_synth_to_real"
  bottom: "label_shrink"
  top: "accuracy"
  seg_accuracy_param {
    ignore_label: 255
  } 
}

# layer {
#   name: "im_data"
#   type: IMSHOW
#   bottom: "data"
# }
# layer {
#   name: "im_scores"
#   type: IMSHOW
#   bottom: "fc8_pascal"
# }

layer {
  name: "fc8_mat"
  type: "MatWrite"
  bottom: "fc8_synth_to_real"
  mat_write_param {
    #prefix: "voc12/features/${NET_ID}/${TEST_SET}/fc8/"
    #source: "voc12/list/${TEST_SET}_id.txt"
    prefix: "${EXP}/features/${NET_ID}/${TEST_SET}/fc8/"
    source: "${EXP}/list/${TEST_SET}_id.txt"
    strip: 0
    period: 1
  }
  include: { phase: TEST }
}







#===============TEST.PROTOTXT====================
# VGG 16-layer network convolutional finetuning
# Network modified to have smaller receptive field (128 pixels)
# and smaller stride (8 pixels) when run in convolutional mode.
#
# For alignment to work, we set:
# (1) input dimension equal to
# $n = 16 * k + 2$, e.g., 306 (for k = 19)
# (2) dimension after 4th max-pooling
# $m = 2 * k + 3$ (41 if k = 19)
# (3) interp dimension equal to
# $m + (m-1) * 7 = 8 * m - 7 = n + 15$, (321 if k = 19)
# (4) Crop 7 pixels at the beginning and 8 pixels at the
# end of the interpolated signal to produce the expected $n$

name: "${NET_ID}"

layer {
  name: "data"
  type: "ImageSegData"
  top: "data"
  image_data_param {
    #root_folder: "/rmt/data/pascal/VOCdevkit/VOC2012"
    #source: "voc12/list/${TEST_SET}.txt"
    root_folder: "${DATA_ROOT}"
    source: "${EXP}/list/${TEST_SET}.txt"
    batch_size: 1
    
    # Probably deprecated in V2
    #has_label: false
  }
  transform_param {
    # Use BGR as order!
    # Use matlab script : calc_bgr_image_set_mean.m
    mean_value: 34.7887
    mean_value: 27.7252
    mean_value: 38.9483
    #crop_size: 514 #  = 32 * 16 + 2
    crop_size: 786 # 49 * 16 + 2
    mirror: false
  }
  include: { phase: TEST }
}

### NETWORK ###

layer {
  bottom: "data"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
 
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

 
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}
layer {
  bottom: "conv1_1"
  top: "conv1_2"
  name: "conv1_2"
  type: "Convolution"
 
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv1_2"
  top: "conv1_2"
  name: "relu1_2"
  type: "ReLU"
}
layer {
  bottom: "conv1_2"
  top: "pool1"
  name: "pool1"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool1"
  top: "conv2_1"
  name: "conv2_1"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv2_1"
  top: "conv2_1"
  name: "relu2_1"
  type: "ReLU"
}
layer {
  bottom: "conv2_1"
  top: "conv2_2"
  name: "conv2_2"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv2_2"
  top: "conv2_2"
  name: "relu2_2"
  type: "ReLU"
}
layer {
  bottom: "conv2_2"
  top: "pool2"
  name: "pool2"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool2"
  top: "conv3_1"
  name: "conv3_1"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv3_1"
  top: "conv3_1"
  name: "relu3_1"
  type: "ReLU"
}
layer {
  bottom: "conv3_1"
  top: "conv3_2"
  name: "conv3_2"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv3_2"
  top: "conv3_2"
  name: "relu3_2"
  type: "ReLU"
}
layer {
  bottom: "conv3_2"
  top: "conv3_3"
  name: "conv3_3"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv3_3"
  top: "conv3_3"
  name: "relu3_3"
  type: "ReLU"
}
layer {
  bottom: "conv3_3"
  top: "pool3"
  name: "pool3"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool3"
  top: "conv4_1"
  name: "conv4_1"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv4_1"
  top: "conv4_1"
  name: "relu4_1"
  type: "ReLU"
}
layer {
  bottom: "conv4_1"
  top: "conv4_2"
  name: "conv4_2"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv4_2"
  top: "conv4_2"
  name: "relu4_2"
  type: "ReLU"
}
layer {
  bottom: "conv4_2"
  top: "conv4_3"
  name: "conv4_3"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
  }
}
layer {
  bottom: "conv4_3"
  top: "conv4_3"
  name: "relu4_3"
  type: "ReLU"
}
layer {
  bottom: "conv4_3"
  top: "pool4"
  name: "pool4"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    pad: 1
    #stride: 2
    stride: 1
  }
}
layer {
  bottom: "pool4"
  top: "conv5_1"
  name: "conv5_1"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead
    #hole: 2
    dilation: 2
    kernel_size: 3
  }
}
layer {
  bottom: "conv5_1"
  top: "conv5_1"
  name: "relu5_1"
  type: "ReLU"
}
layer {
  bottom: "conv5_1"
  top: "conv5_2"
  name: "conv5_2"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead
    #hole: 2
    dilation: 2
    kernel_size: 3
  }
}
layer {
  bottom: "conv5_2"
  top: "conv5_2"
  name: "relu5_2"
  type: "ReLU"
}
layer {
  bottom: "conv5_2"
  top: "conv5_3"
  name: "conv5_3"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead
    #hole: 2
    dilation: 2
    kernel_size: 3
  }
}
layer {
  bottom: "conv5_3"
  top: "conv5_3"
  name: "relu5_3"
  type: "ReLU"
}
layer {
  bottom: "conv5_3"
  top: "pool5"
  name: "pool5"
  type: "Pooling"
  pooling_param {
    pool: MAX
    #kernel_size: 2
    #stride: 2
    kernel_size: 3
    stride: 1
    pad: 1
  }
}

layer {
  bottom: "pool5"
  top: "fc6"
  name: "fc6"
  type: "Convolution"
  # Works in V1, does not in V2?
  # strict_dim: false
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4096
    pad: 6
    #hole is for V1, use 'dilation' instead
    #hole: 4
    dilation: 4
    kernel_size: 4
  }
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "relu6"
  type: "ReLU"
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "drop6"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc6"
  top: "fc7"
  name: "fc7"
  type: "Convolution"
  # Works in V1, does not in V2?
  # strict_dim: false
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4096
    kernel_size: 1
  }
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "relu7"
  type: "ReLU"
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "drop7"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc7"
  top: "fc8_synth_to_real"
  name: "fc8_synth_to_real"
  type: "Convolution"
  
  # Works in V1, does not in V2?
  # strict_dim: false

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 10
  #blobs_lr: 20
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }

  convolution_param {
    num_output: ${NUM_LABELS}
    kernel_size: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  bottom: "fc8_synth_to_real"
  top: "fc8_interp"
  name: "fc8_interp"
  type: "Interp"
  interp_param {
    zoom_factor: 8
  }
}

# The Padding layer is deprecated in V2. However, cropping is probably still required for correct alignment with the input. Either do the cropping afterwards (e.g. in Matlab) or find a layer that can do it here.
#layer {
#  bottom: "fc8_interp"
#  top: "fc8_crop"
#  name: "fc8_crop"
#  type: "Padding"
#  padding_param {
#    pad_beg: -7
#    pad_end: -8
#  }
#}

layer {
  name: "loss"
  type: "SoftmaxWithLoss"
#  bottom: "fc8_crop"
  bottom: "fc8_interp"
  bottom: "label"
  loss_param {
    #weight_source: "voc12/loss_weight/loss_weight_train.txt"
  }
  include: { phase: TRAIN }
}
layer {
  name: "accuracy"
  type: "SegAccuracy"
#  bottom: "fc8_crop"
  bottom: "fc8_interp"
  bottom: "label"
  top: "accuracy"
  include: { phase: TRAIN }
}

# layer {
#   name: "im_data"
#   type: IMSHOW
#   bottom: "data"
# }
# layer {
#   name: "im_scores"
#   type: IMSHOW
#   bottom: "fc8_synth_to_real"
# }

layer {
  name: "fc8_crop_mat"
  type: "MatWrite"
#  bottom: "fc8_crop"
  bottom: "fc8_interp"
  mat_write_param {
    prefix: "${FEATURE_DIR}/${TEST_SET}/fc8/"
    #source: "voc12/list/${TEST_SET}_id.txt"
    source: "${EXP}/list/${TEST_SET}_id.txt"
    strip: 0
    period: 1
  }
  include: { phase: TEST }
}

dxi...@gmail.com

Jan 13, 2017, 10:21:37 PM
to Caffe Users
Hi, I tried your configuration and got this error:

I0114 11:09:57.631790  9020 solver.cpp:81] Creating training net from train_net file: exper/voc12/config/DeepLab-LargeFOV/train_train_aug.prototxt
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format caffe.NetParameter: 29:15: Message type "caffe.ImageDataParameter" has no field named "label_type".
F0114 11:09:57.632035  9020 upgrade_proto.cpp:68] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: exper/voc12/config/DeepLab-LargeFOV/train_train_aug.prototxt
*** Check failure stack trace: ***
    @     0x7f7b9b371daa  (unknown)
    @     0x7f7b9b371ce4  (unknown)
    @     0x7f7b9b3716e6  (unknown)
    @     0x7f7b9b374687  (unknown)
    @     0x7f7b9ba3609e  caffe::ReadNetParamsFromTextFileOrDie()
    @     0x7f7b9ba6dba7  caffe::Solver<>::InitTrainNet()
    @     0x7f7b9ba6ebfc  caffe::Solver<>::Init()
    @     0x7f7b9ba6ef09  caffe::Solver<>::Solver()
    @     0x7f7b9ba69423  caffe::Creator_SGDSolver<>()
    @           0x40eabe  caffe::SolverRegistry<>::CreateSolver()
    @           0x407d0b  train()
    @           0x405b61  main
    @     0x7f7b9a04bf45  (unknown)
    @           0x40631d  (unknown)

陈泓

Jul 16, 2017, 2:57:25 AM
to Caffe Users
Have you solved this problem? I face the same problem, and I would be grateful if you could show me the method.

On Saturday, January 14, 2017 at 11:21:37 AM UTC+8, dxi...@gmail.com wrote:

AMARESH KUMAR

Sep 18, 2017, 1:26:37 PM
to Caffe Users
I got the same problem. Have you found the solution? If you have, could you email me the test.prototxt file at amarku...@gmail.com?

Tsaku Nelson

Jun 15, 2018, 11:08:27 AM
to Caffe Users
Hello Ruud, did you finally solve the problem? I had the same error as shown in the screenshot below. I applied the fix by modifying the train_train_aug.prototxt file, but the error seems to reappear, and I have the feeling that train_train_aug.prototxt is automatically created with parameters like "IMAGE_SEG_DATA" already inside.


Is there a way to prevent the automatic creation, so that the correct version of the prototxt you just displayed is used? Feedback most appreciated.
