What is self._inputs supposed to contain?


Franck Dernoncourt

Jul 10, 2015, 6:14:01 PM
to caffe...@googlegroups.com
I was surprised to find that `self._inputs` is an empty list for my neural network: when I use pycaffe, `net = caffe.Net(prototxt_filename, caffemodel_filename, caffe.TEST); print(len(net.inputs))` returns 0.

What is net.inputs supposed to contain? I read in https://github.com/BVLC/caffe/issues/2246 that "input in this case does not include a data layer; it's an unattached bottom." Does that mean len(self._inputs) > 0 iff there is at least one unattached bottom?


Here is the prototxt I used:

name: "IHGNet"
layer {
  name: "ihg"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  hdf5_data_param {
    source: "/media/sf_test/Archive/HDF5/txt/tr_0.txt"
    batch_size: 356
  }
}

layer {
  name: "ihg"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  hdf5_data_param {
    source: "/media/sf_test/Archive/HDF5/txt/ts_0.txt"
    batch_size: 1422
  }
}

layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 300
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "drop1"
  type: "Dropout"
  bottom: "ip1"
  top: "ip1"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 200
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "drop2"
  type: "Dropout"
  bottom: "ip2"
  top: "ip2"
  dropout_param {
    dropout_ratio: 0.4
  }
}

layer {
  name: "ip3"
  type: "InnerProduct"
  bottom: "ip2"
  top: "ip3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 200
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

layer {
  name: "drop3"
  type: "Dropout"
  bottom: "ip3"
  top: "ip3"
  dropout_param {
    dropout_ratio: 0.3
  }
}


layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip3"
  bottom: "label"
  top: "loss"
}

layer {
  name: "acc"
  type: "Accuracy"
  bottom: "ip3"
  bottom: "label"
  top: "acc"
  include {
    phase: TEST
  }
}

Franck Dernoncourt

Jul 10, 2015, 6:27:20 PM
to caffe...@googlegroups.com
Bonus question, which is actually what bothers me: if len(self._inputs)==0, how can I generate the predicted outputs as well as their probabilities using pycaffe? 

I was planning to use something along the lines of out = net.forward(data=my_input_data), but that won't work, since net.forward() requires every blob passed as a keyword argument to be listed in self._inputs (https://github.com/BVLC/caffe/blob/master/python/caffe/pycaffe.py#L86: raise Exception('Input blob arguments do not match net inputs.')).
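
For concreteness, here is roughly what I was attempting (just a sketch; prototxt_filename and caffemodel_filename are the same placeholders as in my first message, and my_input_data stands for an arbitrary numpy array whose shape I assume matches my HDF5 data, 1 x 1 x 1 x 250):

import numpy as np
import caffe

net = caffe.Net(prototxt_filename, caffemodel_filename, caffe.TEST)

# Hypothetical input: one sample shaped like one row of the HDF5 data.
my_input_data = np.zeros((1, 1, 1, 250), dtype=np.float32)

# This raises "Input blob arguments do not match net inputs." because the
# 'data' blob is produced by the HDF5Data layer, so it is not in net.inputs.
out = net.forward(data=my_input_data)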

Here is the network architecture corresponding to the prototxt mentioned in the previous message, if that's ANN-dependent: http://i.stack.imgur.com/6Y0L6.png


Franck Dernoncourt

Jul 11, 2015, 2:16:06 AM
to caffe...@googlegroups.com
After trying the MNIST and CIFAR10 tutorials, it looks like len(self._inputs) > 0 iff an input field is defined in the prototxt file. E.g. cifar10_quick.prototxt contains input: "data".

So I think that to predict outputs I should use another prototxt, similar to the one I use to define the trained network but with the input and output layers removed, and with the following lines added (see below). Is that what I am supposed to do?

New prototxt file used to predict outputs (but I still use the previous prototxt file to train the network):

name: "IHGNet"
input: "data"
input_dim: 1
input_dim: 1
input_dim: 1
input_dim: 250
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip3"
  top: "prob"
}
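
If that is the right approach, I assume the prediction step in pycaffe would then look roughly like this (a sketch only; deploy_prototxt_filename is a placeholder for the file above, and the sample shape just mirrors the input_dim values):

import numpy as np
import caffe

# deploy_prototxt_filename: the prediction prototxt above (with input: "data").
# caffemodel_filename: the model trained with the original train/test prototxt.
net = caffe.Net(deploy_prototxt_filename, caffemodel_filename, caffe.TEST)

print(net.inputs)  # with input: "data" defined, this should now be ['data']

# Hypothetical sample shaped to match input_dim: 1 x 1 x 1 x 250.
sample = np.random.rand(1, 1, 1, 250).astype(np.float32)

out = net.forward(data=sample)
probabilities = out['prob']               # output of the added Softmax layer
predicted_label = probabilities.argmax()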




Evan Shelhamer

Jul 11, 2015, 5:28:07 AM
to Franck Dernoncourt, caffe...@googlegroups.com
`self._inputs` is indeed for the manual or "deploy" inputs as defined by the input fields in a prototxt. To run a net with data layers through pycaffe, just call `net.forward()` without arguments. No need to change the definition of your train or test nets.

See for instance code cell [10] of the Python LeNet example: http://nbviewer.ipython.org/github/BVLC/caffe/blob/tutorial/examples/01-learning-lenet.ipynb
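
Something along these lines should work with the unchanged test net (a rough sketch; the blob names follow the prototxt posted earlier, and note that ip3 holds raw class scores rather than probabilities, since that net only has SoftmaxWithLoss):

import caffe

# The TEST-phase HDF5Data layer supplies 'data' and 'label' itself,
# so no input arguments are needed.
net = caffe.Net(prototxt_filename, caffemodel_filename, caffe.TEST)

out = net.forward()            # processes one test batch (batch_size: 1422)

scores = net.blobs['ip3'].data           # raw class scores for that batch
predicted_labels = scores.argmax(axis=1)
print(out['loss'], out['acc'])           # the net's output blobs

To get actual probabilities you could either add a Softmax layer on top of ip3 (as in the previous message) or apply a softmax to the ip3 scores in numpy.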

Evan Shelhamer
