mlp training


Marios-Evaggelos Kogias

unread,
Jun 7, 2017, 10:41:35 AM6/7/17
to bob-devel
Hi everyone,

I am completely new to bob and I am trying to use it for some neural net training.
I tried the example here http://pythonhosted.org/bob.learn.mlp/guide.html and I have the following issue. When I add an extra sample to the training set, so that d0 has shape (2, 3) and t0 has shape (2, 1), I get RuntimeError: array dimensions do not match 1 != 2 when I run the train function as described in the doc.

Am I doing something wrong? I think I'm following the API conventions properly.

Thanks in advance

Cheers,
Marios

Amir Mohammadi

unread,
Jun 7, 2017, 11:03:27 AM6/7/17
to bob-devel
Hi,

Could you please post a set of commands to reproduce this problem?

Thanks,
Amir

--
-- You received this message because you are subscribed to the Google Groups bob-devel group. To post to this group, send email to bob-...@googlegroups.com. To unsubscribe from this group, send email to bob-devel+...@googlegroups.com. For more options, visit this group at https://groups.google.com/d/forum/bob-devel or directly the project website at http://idiap.github.com/bob/
---
You received this message because you are subscribed to the Google Groups "bob-devel" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bob-devel+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Marios-Evaggelos Kogias

unread,
Jun 7, 2017, 12:09:07 PM6/7/17
to bob-...@googlegroups.com
It's exactly the example in the docs, just with more samples.
Here is the code:

In [1]: import numpy

In [2]: import bob.learn.mlp

In [3]: mlp = bob.learn.mlp.Machine((3, 3, 2, 1))

In [4]: input_to_hidden0 = numpy.ones((3,3), 'float64')

In [5]: hidden0_to_hidden1 = 0.5*numpy.ones((3,2), 'float64')

In [6]: hidden1_to_output = numpy.array([0.3, 0.2], 'float64').reshape(2,1)

In [7]: bias_hidden0 = numpy.array([-0.2, -0.3, -0.1], 'float64')

In [8]: bias_hidden1 = numpy.array([-0.7, 0.2], 'float64')

In [9]: bias_output = numpy.array([0.5], 'float64')

In [10]: mlp.weights = (input_to_hidden0, hidden0_to_hidden1, hidden1_to_output)

In [11]: mlp.biases = (bias_hidden0, bias_hidden1, bias_output)

In [12]: d0 = numpy.array([[.3, .7, .5], [.2, .1, .6]]) # input

In [13]: t0 = numpy.array([[.0], [1.0]]) # target

In [14]: trainer = bob.learn.mlp.BackProp(1,
bob.learn.mlp.SquareError(mlp.output_activation), mlp,
train_biases=False) # Creates a BackProp trainer with a batch size of 1

In [15]: trainer.train(mlp, d0, t0) # Performs the Back Propagation
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-15-24b52d1aed62> in <module>()
----> 1 trainer.train(mlp, d0, t0) # Performs the Back Propagation

RuntimeError: array dimensions do not match 1 != 2

André Anjos

unread,
Jun 8, 2017, 3:57:15 AM6/8/17
to bob-...@googlegroups.com

On Wed, Jun 7, 2017 at 6:09 PM, Marios-Evaggelos Kogias <marios...@gmail.com> wrote:
trainer.train(mlp, d0, t0) # Performs the Back Propagation

The reason it is failing is that you're setting the batch size to 1 and then passing 2 samples to it.

Bob's MLP infrastructure allows you to train your network in any way you deem necessary. You define the batch size (which makes it pre-allocate buffers for that many elements). Then, you iterate (for loop), passing exactly that number of elements at each iteration.

If you'd like to implement stochastic training (the default), pass 1 to the constructor, then inside a for loop, pass 1 element at a time to the trainer.
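The loop above can be sketched as follows. This is an illustration of the slicing, not a verified bob snippet: the `trainer.train` line (commented out) assumes the `mlp` and `trainer` objects from the earlier listing. The key point is to slice with `d0[i:i+1]` rather than `d0[i]`, so each sample stays a 2D array with exactly one row, matching the batch size of 1:

```python
import numpy

d0 = numpy.array([[.3, .7, .5], [.2, .1, .6]])  # two input samples, shape (2, 3)
t0 = numpy.array([[.0], [1.0]])                 # two targets, shape (2, 1)

for i in range(len(d0)):
    x = d0[i:i+1]  # slicing keeps 2D shape: (1, 3), one row per call
    y = t0[i:i+1]  # shape (1, 1)
    # trainer.train(mlp, x, y)  # one back-propagation step per sample
```

Note that `d0[i]` would instead return a 1D array of shape (3,), which the trainer would reject.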

I hope that clarifies,

Best, Andre

PS: I reckon the output message is not self-explanatory in this aspect. Will open a ticket for this.

--
Dr. André Anjos
Idiap Research Institute
Centre du Parc - rue Marconi 19
CH-1920 Martigny, Suisse
Phone: +41 27 721 7763
Fax: +41 27 721 7712
http://andreanjos.org

André Anjos

unread,
Jun 8, 2017, 4:00:57 AM6/8/17
to bob-...@googlegroups.com

On Thu, Jun 8, 2017 at 9:56 AM, André Anjos <andre...@idiap.ch> wrote:
PS: I reckon the output message is not self-explanatory in this aspect. Will open a ticket for this.

Actually - an old ticket already exists for this:


I'll try to have a look at it at some point next week. Please be aware of this behaviour though.

Cheers, A

Marios-Evaggelos Kogias

unread,
Jun 8, 2017, 4:04:30 AM6/8/17
to bob-...@googlegroups.com
That makes things much clearer. A hint in the documentation that
training has to be called iteratively over the samples or batches
would help.


Thanks a lot,
Marios

Amir Mohammadi

unread,
Jul 5, 2017, 10:24:09 AM7/5/17
to bob-...@googlegroups.com
Hi Marios,

If you'd like, you could open a merge request on bob.learn.mlp's mirror to fix its documentation:
https://github.com/bioidiap/bob.learn.mlp

Best,
Amir


Marios-Evaggelos Kogias

unread,
Jul 5, 2017, 11:54:15 AM7/5/17
to bob-...@googlegroups.com
Thanks Amir,

I'll make a pull request as soon as possible.

Best,
Marios

Amir Mohammadi

unread,
Jul 6, 2017, 7:57:23 AM7/6/17
to bob-...@googlegroups.com
Hi Marios,

Thank you for your contribution.
Your changes are now merged to master:
https://gitlab.idiap.ch/bob/bob.learn.mlp/merge_requests/6

Best,
Amir