Unknown caffe.NetParameter type IMAGE_SEG_DATA ?

Ruud

Sep 25, 2015, 8:13:22 AM
to Caffe Users
Hi all,

I am trying to run the example of DeepLab on top of Caffe. However, it appears the field type in my prototext is not recognized by Caffe!

[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 24:3: Unknown enumeration value of "IMAGE_SEG_DATA" for field "type".

Any ideas what could cause this?

Best,
Ruud

Ali Mousavi

Sep 26, 2015, 3:51:13 PM
to Caffe Users
"You need to save the png annotations without the colormap; otherwise OpenCV will read the wrong values. See the script at matlab/my_script/SavePngAsRawPng.m for reference."

http://ccvl.stat.ucla.edu/deeplab_faq/
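For reference, the same conversion the Matlab script does can be sketched in Python. This is an assumption-laden sketch, not the authors' code: `save_png_as_raw_png` is a hypothetical name, and Pillow plus NumPy are assumed to be available.

```python
import numpy as np
from PIL import Image

def save_png_as_raw_png(in_path, out_path):
    """Re-save a palettised (color-mapped) annotation PNG so that each pixel
    stores the raw class index rather than an RGB colour; OpenCV otherwise
    reads the colour values instead of the labels."""
    img = Image.open(in_path)
    if img.mode != "P":
        raise ValueError("expected a palettised PNG, got mode %r" % img.mode)
    # In "P" mode, the array values are the palette indices, i.e. the labels.
    labels = np.array(img, dtype=np.uint8)
    Image.fromarray(labels, mode="L").save(out_path)
```

Running this over every annotation before training should give DeepLab the label values it expects.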

Ruud

Sep 27, 2015, 10:36:21 AM
to Caffe Users
Hi Ali,

Thank you for your response. I also read that FAQ, but I didn't link my problem to it at first; the error is a bit cryptic.

I will try to convert the annotations and come back here to report.

Best,
Ruud

Evan Shelhamer

Sep 27, 2015, 2:34:06 PM
to Ruud, Caffe Users
You have to compile and run the DeepLab authors' fork of Caffe -- IMAGE_SEG_DATA is not a layer type in BVLC/caffe.

Ruud

Sep 28, 2015, 2:11:17 AM
to Caffe Users, rud...@gmail.com
Hi Evan,

That makes sense! It was not apparent from the instructions, so I assumed it was built on top of the regular Caffe release.

Best,
Ruud

Dan Shulman

Jul 26, 2016, 8:47:26 AM
to Caffe Users, rud...@gmail.com
Where can I find the DeepLab authors' fork of Caffe? I'm using https://bitbucket.org/aquariusjay/deeplab-public-ver2

Thanks,
Dan

Ruud

Jul 26, 2016, 8:51:03 AM
to Caffe Users, rud...@gmail.com
Yes, that is the correct link. 

Dan Shulman

Jul 26, 2016, 12:41:28 PM
to Caffe Users, rud...@gmail.com
But this is the link to the deeplab project; it has no fork of Caffe and uses the original one.

Dan Shulman

Jul 27, 2016, 5:36:29 AM
to Caffe Users, rud...@gmail.com
OK, I compiled it, but I still receive:

Unknown enumeration value of "IMAGE_SEG_DATA" for field "type"

What should I do?

Dan

Ruud

Jul 27, 2016, 1:21:25 PM
to Caffe Users, rud...@gmail.com
Are you sure CAFFE_BIN=${CAFFE_DIR}/.build_release/tools/caffe.bin refers to the compiled Deeplab directory?
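One quick sanity check is to confirm that the path the run scripts expand actually points at a built binary inside the DeepLab tree. A small sketch (`resolve_caffe_bin` is a hypothetical helper, not part of the DeepLab scripts):

```python
import os

def resolve_caffe_bin(caffe_dir):
    """Return the absolute path of .build_release/tools/caffe.bin under
    caffe_dir if it exists, otherwise None (meaning that tree was never
    compiled and some other caffe binary may be getting used)."""
    cand = os.path.join(caffe_dir, ".build_release", "tools", "caffe.bin")
    return os.path.abspath(cand) if os.path.isfile(cand) else None
```

If this returns None for your CAFFE_DIR, the DeepLab fork was not built in that directory.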

Dan Shulman

Jul 27, 2016, 1:32:34 PM
to Caffe Users, rud...@gmail.com
yes, it is:

CAFFE_DIR=.

CAFFE_BIN=${CAFFE_DIR}/.build_release/tools/caffe.bin


where CAFFE_DIR (.) is the deeplab directory.

Ruud

Jul 28, 2016, 1:37:32 PM
to Caffe Users, rud...@gmail.com
So you are still getting the error?

Which prototext are you using for training? With V2 you might be using an outdated V1-era Caffe prototext definition, which is no longer supported without some adjustments.
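As a rough illustration of the kind of adjustment involved, the old enum-style layer types can be rewritten into the newer quoted string form. This is a sketch, not a complete converter; `upgrade_prototxt` is a hypothetical helper and the mapping only covers types that appear in this thread plus a couple of common ones.

```python
import re

# Assumed partial mapping from old enum-style types to new string types.
V1_TO_V2_TYPES = {
    "IMAGE_SEG_DATA": '"ImageSegData"',
    "CONVOLUTION": '"Convolution"',
    "SOFTMAX_LOSS": '"SoftmaxWithLoss"',
}

def upgrade_prototxt(text):
    """Rewrite two V1-era prototxt constructs to the V2-era syntax."""
    # 'layers { ... }' became 'layer { ... }' in the newer proto definition.
    text = re.sub(r"\blayers\b(\s*\{)", r"layer\1", text)
    # Enum-style types became quoted CamelCase strings.
    for old, new in V1_TO_V2_TYPES.items():
        text = re.sub(r"type:\s*" + old + r"\b", "type: " + new, text)
    return text
```

Other V1 fields (e.g. blobs_lr / weight_decay, which became param { lr_mult / decay_mult }) still have to be migrated by hand.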

Dan Shulman

Jul 29, 2016, 5:26:24 AM
to Caffe Users, rud...@gmail.com
Yes, that was the problem, thanks!

Do you know how I can run and get results for my own images?

Tao Liu

Aug 6, 2016, 1:24:34 PM
to Caffe Users, rud...@gmail.com

Did you encounter the following error when you ran make runtest on the Caffe from the DeepLab authors?

[ RUN      ] SoftmaxWithLossLayerTest/2.TestGradientWeights
F0806 13:21:43.072187  3521 softmax_loss_layer.cpp:29] Check failed: infile.is_open()
*** Check failure stack trace: ***
    @     0x2b0d1998adaa  (unknown)
    @     0x2b0d1998ace4  (unknown)
    @     0x2b0d1998a6e6  (unknown)
    @     0x2b0d1998d687  (unknown)
    @           0x82db08  caffe::SoftmaxWithLossLayer<>::LayerSetUp()
    @           0x4518c3  caffe::GradientChecker<>::CheckGradientExhaustive()
    @           0x6be271  caffe::SoftmaxWithLossLayerTest_TestGradientWeights_Test<>::TestBody()
    @           0x726a43  testing::internal::HandleExceptionsInMethodIfSupported<>()
    @           0x71d587  testing::Test::Run()
    @           0x71d62e  testing::TestInfo::Run()
    @           0x71d735  testing::TestCase::Run()
    @           0x720a78  testing::internal::UnitTestImpl::RunAllTests()
    @           0x720d07  testing::UnitTest::Run()
    @           0x4248aa  main
    @     0x2b0d1d036f45  (unknown)
    @           0x42eb67  (unknown)
    @              (nil)  (unknown)
Aborted (core dumped)
make: *** [runtest] Error 134

Alessandro Musumeci

Nov 14, 2016, 4:08:49 PM
to Caffe Users, rud...@gmail.com
Hi Dan, I now have the same problem you had before: "Unknown enumeration value of "IMAGE_SEG_DATA" for field "type"". I'm using DeepLab v2 and changed the prototext file, but now I get:
"Message type "caffe.ImageDataParameter" has no field named "label_type".", which sounds like the same issue. Can you please tell me how you solved the problem?

Thanks
Alessandro

Alessandro Musumeci

Nov 14, 2016, 6:04:54 PM
to Caffe Users

Hi, can you please share what you did to solve this problem?
Thanks

Ruud

Nov 15, 2016, 10:43:14 AM
to Caffe Users
Hi Alessandro,

I used the following prototext file for Deeplab V2, using the vanilla model:

# VGG 16-layer network convolutional finetuning
# Network modified to have smaller receptive field (128 pixels)
# and smaller stride (8 pixels) when run in convolutional mode.
#
# For alignment to work, we set:
# (1) input dimension equal to
# $n = 8 * k + 2$, e.g., 306 (for k = 38)
# (2) dimension after 3rd max-pooling (centered at -3.5)
# $m = k + 2$ (40 if k = 38)
# (3) dimension after 4th max-pooling (centered at -1.5)
# $m = k + 1$ (39 if k = 38)
# (4) Crop 1 pixel at the beginning of the label map and shrink by 8
# to produce the expected $m$

name: "${NET_ID}"

layer {
  name: "data"
  type: "ImageSegData"
  top: "data"
  top: "label"
  image_data_param {
    root_folder: "${DATA_ROOT}"
    source: "${EXP}/list/${TRAIN_SET}.txt"
    label_type: PIXEL
    batch_size: 10
    shuffle: true
  }
  transform_param {
    # Use BGR as order!
    # Use matlab script : calc_bgr_image_set_mean.m
    mean_value: 34.7887
    mean_value: 27.7252
    mean_value: 38.9483
    crop_size: 306
    mirror: true
  }
  include: { phase: TRAIN }
}


### NETWORK ###

layer {
  bottom: "data"
  top: "conv1_1"
  name: "conv1_1"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "relu1_1"
  type: "ReLU"
}
layer {
  bottom: "conv1_1"
  top: "conv1_2"
  name: "conv1_2"
  type: "Convolution"

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv1_2"
  top: "conv1_2"
  name: "relu1_2"
  type: "ReLU"
}
layer {
  bottom: "conv1_2"
  top: "pool1"
  name: "pool1"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool1"
  top: "conv2_1"
  name: "conv2_1"
  type: "Convolution"
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2_1"
  top: "conv2_1"
  name: "relu2_1"
  type: "ReLU"
}
layer {
  bottom: "conv2_1"
  top: "conv2_2"
  name: "conv2_2"
  type: "Convolution"
 
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 128
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv2_2"
  top: "conv2_2"
  name: "relu2_2"
  type: "ReLU"
}
layer {
  bottom: "conv2_2"
  top: "pool2"
  name: "pool2"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool2"
  top: "conv3_1"
  name: "conv3_1"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_1"
  top: "conv3_1"
  name: "relu3_1"
  type: "ReLU"
}
layer {
  bottom: "conv3_1"
  top: "conv3_2"
  name: "conv3_2"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_2"
  top: "conv3_2"
  name: "relu3_2"
  type: "ReLU"
}
layer {
  bottom: "conv3_2"
  top: "conv3_3"
  name: "conv3_3"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv3_3"
  top: "conv3_3"
  name: "relu3_3"
  type: "ReLU"
}
layer {
  bottom: "conv3_3"
  top: "pool3"
  name: "pool3"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
    pad: 1
  }
}
layer {
  bottom: "pool3"
  top: "conv4_1"
  name: "conv4_1"
  type: "Convolution"
  
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_1"
  top: "conv4_1"
  name: "relu4_1"
  type: "ReLU"
}
layer {
  bottom: "conv4_1"
  top: "conv4_2"
  name: "conv4_2"
  type: "Convolution"
    
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_2"
  top: "conv4_2"
  name: "relu4_2"
  type: "ReLU"
}
layer {
  bottom: "conv4_2"
  top: "conv4_3"
  name: "conv4_3"
  type: "Convolution"
      
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv4_3"
  top: "conv4_3"
  name: "relu4_3"
  type: "ReLU"
}
layer {
  bottom: "conv4_3"
  top: "pool4"
  name: "pool4"
  type: "Pooling"
  pooling_param {
    pool: MAX
    kernel_size: 2
    #pad: 1
    #stride: 2
    stride: 1
  }
}
layer {
  bottom: "pool4"
  top: "conv5_1"
  name: "conv5_1"
  type: "Convolution"
      
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead
    #hole: 2
    dilation: 2
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_1"
  top: "conv5_1"
  name: "relu5_1"
  type: "ReLU"
}
layer {
  bottom: "conv5_1"
  top: "conv5_2"
  name: "conv5_2"
  type: "Convolution"
      
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead for V2
    #hole: 2
    dilation: 2
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_2"
  top: "conv5_2"
  name: "relu5_2"
  type: "ReLU"
}
layer {
  bottom: "conv5_2"
  top: "conv5_3"
  name: "conv5_3"
  type: "Convolution"

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 512
    #pad: 1
    pad: 2
    #hole is for V1, use 'dilation' instead for V2
    #hole: 2
    dilation: 2
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "conv5_3"
  top: "conv5_3"
  name: "relu5_3"
  type: "ReLU"
}
layer {
  bottom: "conv5_3"
  top: "pool5"
  name: "pool5"
  type: "Pooling"
  pooling_param {
    pool: MAX
    #kernel_size: 2
    #stride: 2
    kernel_size: 3
    stride: 1
    pad: 1
  }
}

layer {
  bottom: "pool5"
  top: "fc6"
  name: "fc6"
  type: "Convolution"
  
  # Works in V1, does not in V2?
  # strict_dim: false

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 4096
    pad: 6
    #hole is for V1, use 'dilation' instead for V2
    #hole: 4
    dilation: 4
    kernel_size: 4
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "relu6"
  type: "ReLU"
}
layer {
  bottom: "fc6"
  top: "fc6"
  name: "drop6"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc6"
  top: "fc7"
  name: "fc7"
  type: "Convolution"
  
  # This parameter seems deprecated in V2
  # strict_dim: false
 
  # These parameters do not seem to be parsed in V2
  #blobs_lr: 1
  #blobs_lr: 2
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }

  convolution_param {
    num_output: 4096
    kernel_size: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "relu7"
  type: "ReLU"
}
layer {
  bottom: "fc7"
  top: "fc7"
  name: "drop7"
  type: "Dropout"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  bottom: "fc7"
  top: "fc8_synth_to_real"
  name: "fc8_synth_to_real"
  type: "Convolution"
  
  # This parameter seems deprecated in V2
  #strict_dim: false

  # These parameters do not seem to be parsed in V2
  #blobs_lr: 10
  #blobs_lr: 20
  #weight_decay: 1
  #weight_decay: 0

  # For V2 use these instead
  param {
    lr_mult: 10
    decay_mult: 1
  }
  param {
    lr_mult: 20
    decay_mult: 0
  }

  convolution_param {
    num_output: ${NUM_LABELS}
    kernel_size: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  bottom: "label"
  top: "label_shrink"
  name: "label_shrink"
  type: "Interp"
  interp_param {
    shrink_factor: 8
    pad_beg: -1
    pad_end: 0
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8_synth_to_real"
  bottom: "label_shrink"
  top: "loss"
  
  # Use this instead for V2
  loss_param {
     ignore_label: 255
     normalize : true
  }

  #include: { phase: TRAIN }
}
layer {
  name: "accuracy"
  type: "SegAccuracy"
  bottom: "fc8_synth_to_real"
  bottom: "label_shrink"
  top: "accuracy"
  seg_accuracy_param {
    ignore_label: 255
  } 
}

# layer {
#   name: "im_data"
#   type: IMSHOW
#   bottom: "data"
# }
# layer {
#   name: "im_scores"
#   type: IMSHOW
#   bottom: "fc8_pascal"
# }

#layer {
#  name: "fc8_mat"
#  type: "MatWrite"
#  bottom: "fc8_synth_to_real"
#  mat_write_param {
#    #prefix: "voc12/features/${NET_ID}/${TEST_SET}/fc8/"
#    #source: "voc12/list/${TEST_SET}_id.txt"
#    prefix: "${EXP}/features/${NET_ID}/${TEST_SET}/fc8/"
#    source: "${EXP}/list/${TEST_SET}_id.txt"
#    strip: 0
#    period: 1
#  }
#  include: { phase: TEST }
#}
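The alignment arithmetic in the header comment of this prototxt (n = 8k + 2, and the feature-map sizes after the 3rd and 4th pooling layers) can be sanity-checked with a few lines; `deeplab_dims` is just an illustrative name:

```python
def deeplab_dims(k):
    """Dimensions from the prototxt header comment: input n = 8k + 2,
    size after the 3rd max-pooling m = k + 2, after the 4th m = k + 1."""
    return {"input": 8 * k + 2, "after_pool3": k + 2, "after_pool4": k + 1}

# k = 38 gives the 306-pixel crop_size used in the data layer above,
# with 40 and 39 after the 3rd and 4th pooling respectively.
```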

Tsaku Nelson

Jun 11, 2018, 10:55:52 AM
to Caffe Users

Hello, I applied all the proposed fixes to my code, but the same issue persists. What other steps do I have to follow to solve it?

Tsaku Nelson

Jun 15, 2018, 11:11:07 AM
to Caffe Users
Hello Alessandro, how did you finally solve the problem? I had the same error as shown in the screenshot below, and applied the fix by modifying the train_train_aug.prototxt file, but it seems to reappear. I have the feeling that train_train_aug.prototxt is created automatically, with parameters like "IMAGE_SEG_DATA" already inside.

Is there a way to prevent the automatic creation, so that I can use the correct version of the prototext you posted? Feedback most appreciated.

Tsaku Nelson

Jun 22, 2018, 11:35:35 AM
to Caffe Users
Hi Alessandro, I have the same error as you after solving the "IMAGE_SEG_DATA" issue:

"Message type "caffe.ImageDataParameter" has no field named "label_type"."

Can you please tell me how you solved it? Thanks
