CuDNNDeconvolutionLayer input must have 2 spatial axes (e.g., height and width).

Shrabani Ghosh

Jul 10, 2018, 3:10:05 PM
to Caffe Users
After running a few lines of the code, I am getting this error. How can I fix this problem?

I0710 14:50:51.146951 10878 layer_factory.hpp:77] Creating layer data
I0710 14:50:51.146968 10878 net.cpp:84] Creating Layer data
I0710 14:50:51.146982 10878 net.cpp:380] data -> data
I0710 14:50:51.147017 10878 net.cpp:122] Setting up data
I0710 14:50:51.147032 10878 net.cpp:129] Top shape: 1 1 20 20 20 (8000)
I0710 14:50:51.147038 10878 net.cpp:137] Memory required for data: 32000
I0710 14:50:51.147047 10878 layer_factory.hpp:77] Creating layer deconv1
I0710 14:50:51.147063 10878 net.cpp:84] Creating Layer deconv1
I0710 14:50:51.147070 10878 net.cpp:406] deconv1 <- data
I0710 14:50:51.147083 10878 net.cpp:380] deconv1 -> deconv1
F0710 14:50:51.499456 10878 cudnn_deconv_layer.cpp:96] Check failed: 2 == this->num_spatial_axes_ (2 vs. 3) CuDNNDeconvolutionLayer input must have 2 spatial axes (e.g., height and width). Use 'engine: CAFFE' for general ND convolution.
*** Check failure stack trace: ***
Aborted (core dumped)



layer {
  name: "data"
  type: "DummyData"
  top: "data"
  dummy_data_param {
    shape {
      dim: 1
      dim: 1
      dim: 20
      dim: 20
      dim: 20
    }
  }
}
layer {
  name: "deconv1"
  type: "Deconvolution"
  bottom: "data"
  top: "deconv1"
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 1
    pad: 7
    pad: 0
    pad: 0
    kernel_size: 19
    kernel_size: 1
    kernel_size: 1
    stride: 5
    stride: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "deconv1"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 0.1
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "bn1"
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "bn1"
  top: "bn1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "bn1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 0.1
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bn2"
  type: "BatchNorm"
  bottom: "conv2"
  top: "bn2"
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "bn2"
  top: "bn2"
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "bn2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 0.1
  }
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bn3"
  type: "BatchNorm"
  bottom: "conv3"
  top: "bn3"
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "bn3"
  top: "bn3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "bn3"
  top: "conv4"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 0.1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bn4"
  type: "BatchNorm"
  bottom: "conv4"
  top: "bn4"
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "bn4"
  top: "bn4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "bn4"
  top: "conv5"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 0.1
  }
  convolution_param {
    num_output: 16
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "bn5"
  type: "BatchNorm"
  bottom: "conv5"
  top: "bn5"
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
  param {
    lr_mult: 0
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "bn5"
  top: "bn5"
}
layer {
  name: "conv6"
  type: "Convolution"
  bottom: "bn5"
  top: "conv6"
  param {
    lr_mult: 0.1
  }
  param {
    lr_mult: 0.1
  }
  convolution_param {
    num_output: 1
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "recon"
  type: "Eltwise"
  bottom: "deconv1"
  bottom: "conv6"
  top: "recon"
  eltwise_param {
    operation: SUM
  }
}





Siyuan

Jul 10, 2018, 10:56:37 PM
to Shrabani Ghosh, Caffe Users
Hi,
Caffe's cuDNN deconvolution layer only supports 2 spatial axes, so it cannot handle your 3D (1 1 20 20 20) input. Try the Caffe engine instead, by setting engine: CAFFE inside convolution_param:
layer {
  name: "deconv1"
  type: "Deconvolution"
  bottom: "data"
  top: "deconv1"
  ......
  convolution_param {
    ......
    engine: CAFFE
  }
}
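
For example, applied to the deconv1 layer from the original post, the fix would look roughly like this (only the engine: CAFFE line is new; everything else is copied from the question):

layer {
  name: "deconv1"
  type: "Deconvolution"
  bottom: "data"
  top: "deconv1"
  param {
    lr_mult: 0.1
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 1
    pad: 7
    pad: 0
    pad: 0
    kernel_size: 19
    kernel_size: 1
    kernel_size: 1
    stride: 5
    stride: 1
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.001
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    # Use the Caffe (ND) implementation instead of cuDNN
    engine: CAFFE
  }
}

Since deconv1 outputs a 5D blob, the downstream Convolution layers (conv1 through conv6) may hit the same cuDNN check; if so, the same engine: CAFFE setting inside their convolution_param should work around it.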




Shrabani Ghosh

Jul 11, 2018, 10:56:50 AM
to Caffe Users
Where do I have to put this line, engine: CAFFE?

Shrabani Ghosh

Jul 11, 2018, 11:08:10 AM
to Caffe Users
Thank you so much, I solved it.

