How to reshape input data between data layer and convolution layer?


Hugo G

Jan 5, 2017, 3:07:24 PM
to Caffe Users
Hi,

What I want to do is to use a convolution layer to train a network on non-image data (1-dimensional PCM audio data). The dimension of the input LMDB data is therefore 256 (batch size) x 1 x 1960 x 1. In order to use convolutional filters whose kernel size is 2 or larger, I'll have to reshape the input data into a 2-dimensional, image-like format. So I defined the following right after the data layer:
layer {
    name: "reshape"
    type: "Reshape"
    bottom: "data"
    top: "conv1"
    reshape_param {
      shape {
        dim: 0    # copy this dimension from below (batch size)
        dim: 0    # copy this dimension from below (channels)
        dim: 7
        dim: 280  # or use -1 to infer it from the other dimensions
      }
    }
}
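
The first convolution layer is then supposed to pick up the reshaped blob, roughly like this (num_output and kernel_size here are just placeholders, not my actual settings):

layer {
    name: "conv1"
    type: "Convolution"
    bottom: "conv1"       # the reshaped 256 x 1 x 7 x 280 blob from above
    top: "conv1_out"
    convolution_param {
      num_output: 20      # placeholder
      kernel_size: 5      # placeholder; only possible once the input is 2-D
      stride: 1
    }
}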


The data layer looks like this:
name: "LeNet"
layer {
   name: "data"
   type: "Data"
   top: "data"
   top: "label"
   include {
     phase: TRAIN
   }
   data_param {
     source: "/path/to/lmdb_train"
     batch_size: 256
     backend: LMDB
   }
 }
 
 layer {
   name: "data"
   type: "Data"
   top: "data"
   top: "label"
   include {
     phase: TEST
   }
   data_param {
     source: "/path/to/lmdb_test"
     batch_size: 100
     backend: LMDB
   }
 }

However, Caffe complained: 
F0105 11:38:01.728590 32701 insert_splits.cpp:29] Unknown bottom blob 'data' (layer 'reshape', bottom index 0)

If I add one more output to the data layer, i.e.:
   top: "data"
   top: "label"
   top: "reshape"

It complained again that this layer cannot have more than 2 outputs: "Check failed: MaxTopBlobs() >= top.size() (2 vs. 3) Data Layer produces at most 2 top blob(s) as output."
 

Does anyone know how to reshape 1-dimensional data into 2-dimensional data between the data layer and the convolution layer so that LeNet works? I can find no resources on this issue.

Thanks
Hugo 




Przemek D

Jan 9, 2017, 4:54:13 AM
to Caffe Users
Will a 1x1960x1 blob reshaped to 1x7x280 still make sense? Will a convolution over it make sense?

I can't see why convolution over the original blob would be impossible... have you tried using a rectangular kernel? That is, one where you don't specify a single kernel_size but instead kernel_h and kernel_w - you could slide a 7x1 kernel (kernel_h: 7, kernel_w: 1) over that volume with no problems, though I haven't done that myself.
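
For example, a minimal sketch of such a layer, applied directly to the original 256 x 1 x 1960 x 1 blob (the layer name and num_output are just placeholders):

layer {
    name: "conv1"
    type: "Convolution"
    bottom: "data"
    top: "conv1"
    convolution_param {
      num_output: 20   # placeholder
      kernel_h: 7      # the 7x1 kernel slides along the 1960-long dimension
      kernel_w: 1
      stride: 1
    }
}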

For good documentation on layers, I recommend reading this one by Jonathan Williford.