FCN, when finetuning from "VGG16.caffemodel", there's a mismatch of blob shape at layer "fc6".


Ben Gee

Jul 20, 2015, 2:05:51 AM
to caffe...@googlegroups.com
I downloaded the VGG16 model from the Model Zoo, and in solve.py I load it as instructed. Then during training there is a blob shape mismatch at layer "fc6".

Is the VGG16 model I'm using the right one?

"fc6" in my train_val.prototxt:
layer {
  name: "fc6"
  type: "Convolution"
  bottom: "pool5"
  top: "fc6"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4096
    kernel_size: 7
    engine: CAFFE
  }
}
 
and "fc6" in vgg model:

layers {
  bottom: "pool5"
  top: "fc6"
  name: "fc6"
  type: INNER_PRODUCT
  inner_product_param {
    num_output: 4096
  }
}


Please tell me how to resolve this.

Carlos Treviño

Jul 20, 2015, 3:36:38 AM
to caffe...@googlegroups.com
You have to do net surgery, following the instructions at https://github.com/BVLC/caffe/blob/master/examples/net_surgery.ipynb:

# Load the original network and extract the fully connected layers' parameters.
net = caffe.Net('../models/bvlc_reference_caffenet/deploy.prototxt', 
                '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel', 
                caffe.TEST)
params = ['fc6', 'fc7', 'fc8']
# fc_params = {name: (weights, biases)}
fc_params = {pr: (net.params[pr][0].data, net.params[pr][1].data) for pr in params}

for fc in params:
    print('{} weights are {} dimensional and biases are {} dimensional'.format(fc, fc_params[fc][0].shape, fc_params[fc][1].shape))

# Load the fully convolutional network to transplant the parameters.
net_full_conv = caffe.Net('net_surgery/bvlc_caffenet_full_conv.prototxt', 
                          '../models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                          caffe.TEST)
params_full_conv = ['fc6-conv', 'fc7-conv', 'fc8-conv']
# conv_params = {name: (weights, biases)}
conv_params = {pr: (net_full_conv.params[pr][0].data, net_full_conv.params[pr][1].data) for pr in params_full_conv}

for conv in params_full_conv:
    print('{} weights are {} dimensional and biases are {} dimensional'.format(conv, conv_params[conv][0].shape, conv_params[conv][1].shape))

for pr, pr_conv in zip(params, params_full_conv):
    conv_params[pr_conv][0].flat = fc_params[pr][0].flat  # flat unrolls the arrays
    conv_params[pr_conv][1][...] = fc_params[pr][1]

net_full_conv.save('net_surgery/bvlc_caffenet_full_conv.caffemodel')


I hope this helps

Carlos

religi...@gmail.com

Jan 5, 2016, 7:36:36 AM
to Caffe Users
Hello, have you solved this problem?

When I load the fully convolutional network to transplant the parameters:

net_full_conv = caffe.Net('fcn/deploychf.prototxt', 'fcn/VGG_ILSVRC_16_layers.caffemodel', caffe.TEST)

it shows:
Cannot copy param 0 weights from layer 'fc6'; shape mismatch. Source param shape is 1 1 4096 25088 (102760448); target param shape is 4096 512 7 7 (102760448). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer. 

How can I solve this problem?

Thank you
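For what it's worth, the error message itself shows that both shapes hold exactly the same number of values (102760448), so the transplant in the net surgery snippet above is a lossless reshape rather than a retraining step. A minimal NumPy sketch of the idea (the second half uses toy sizes, not the real blobs):

```python
import numpy as np

fc_shape = (4096, 25088)        # InnerProduct fc6 in VGG_ILSVRC_16_layers
conv_shape = (4096, 512, 7, 7)  # Convolution fc6 in the FCN train_val

# Both layouts hold exactly the same number of weights, so a
# flat element-order copy (what `.flat = ...` does) loses nothing.
assert np.prod(fc_shape) == np.prod(conv_shape) == 102760448

# Demonstrate with a small dummy array standing in for the FC weights:
w_fc = np.random.rand(8, 2 * 2 * 2).astype(np.float32)  # toy (out, in) FC weights
w_conv = w_fc.reshape(8, 2, 2, 2)                        # toy (out, ch, h, w) conv weights
assert np.array_equal(w_conv.flatten(), w_fc.flatten())
```

This is why you must load the *original* InnerProduct prototxt to read the weights out, and a *separate* fully convolutional prototxt (with renamed layers like fc6-conv) to copy them into, as in Carlos's snippet, instead of pointing the convolutional prototxt directly at the original caffemodel.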

On Monday, July 20, 2015 at 2:05:51 PM UTC+8, Ben wrote:

Wong Fungtion

Feb 27, 2017, 11:13:06 PM
to Caffe Users
The VGG16 prototxt uses the old layer definition syntax ("layers" instead of "layer"), like this:

layers {
  name: "fc6"
  type: INNER_PRODUCT
  bottom: "pool5"
  top: "fc6"
  blobs_lr: 1
  blobs_lr: 2
  weight_decay: 1
  weight_decay: 0
  inner_product_param {
    num_output: 4096
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.0
    }
  }
}
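For comparison, a sketch of the same layer in the current prototxt syntax (mapping blobs_lr to lr_mult and weight_decay to decay_mult in order; the filler values are copied from the old definition above):

```protobuf
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param { lr_mult: 1 decay_mult: 1 }   # weights: first blobs_lr / weight_decay
  param { lr_mult: 2 decay_mult: 0 }   # biases: second blobs_lr / weight_decay
  inner_product_param {
    num_output: 4096
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0.0 }
  }
}
```

Note the old syntax only affects how the prototxt is parsed; the shape mismatch itself still comes from InnerProduct vs. Convolution and is fixed by the net surgery described above.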

On Monday, July 20, 2015 at 2:05:51 PM UTC+8, Ben wrote:
I dowload the vgg model from model zoo: vgg16. And in solve.py, load the vgg16 as instructed. And then training, there's a mismatch of blob shape at layer "fc6". 