Using MOSES to learn the parameters of a neural network


Cosmo Harrigan

Nov 10, 2015, 9:10:05 PM
to ope...@googlegroups.com
Hi,

How well should MOSES work for learning the parameters of a neural network (neuroevolution)? For instance, at AGI 2015, Schmidhuber talked about their work at IDSIA using genetic algorithms to choose the parameters of a convolutional neural network that processes images from a simulated environment; the agent then uses the output of that network as input to a controller that learns how to act to maximize reward:

Evolving Deep Unsupervised Convolutional Networks for Vision-Based Reinforcement Learning

I found this page from 2009 that discusses a similar application of MOSES:
http://wiki.opencog.org/wikihome/index.php/Extending_MOSES_to_evolve_Recurrent_Neural_Networks

but I am curious what a more current assessment would be.

Thanks,
Cosmo

Ben Goertzel

Nov 10, 2015, 9:25:52 PM
to opencog

Historically, MOSES has not been extremely good at optimizing floating-point values...

However, in his 2015 GSoC project, Arley Ristar implemented PSO (particle swarm optimization) as an alternative algorithm for intra-deme search in MOSES for the floating-point case.

And I note that PSO is a viable approach to training NNs, perhaps better than backprop:


Also, MOSES clearly *is* a good approach for learning the *structure* of neural nets (apart from the issue of optimizing the float parameters on the links)

So I would say that getting "MOSES w/ PSO inside" working effectively as an algorithm for learning NN "structures plus parameters" is a viable research project, but it would likely require a fair bit of tweaking and tuning...
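To make that concrete, here is a minimal sketch (toy Python/NumPy with arbitrary hyperparameters, not the MOSES or GSoC code) of the "PSO for the float parameters, given a fixed structure" half of the idea: plain global-best PSO tuning the weights of a tiny 2-2-1 feed-forward net on XOR.

import numpy as np

rng = np.random.default_rng(0)

# XOR data: the classic case a single linear unit cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # Unpack a flat 9-dim parameter vector into a 2-2-1 net (tanh hidden, sigmoid output).
    W1, b1 = w[0:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    # Mean squared error over the four XOR cases (lower is better).
    return np.mean([(forward(w, x) - t) ** 2 for x, t in zip(X, y)])

# Plain global-best PSO over the weight vector (inertia 0.7, both pulls 1.5).
n_particles, dim, iters = 30, 9, 300
pos = rng.normal(0.0, 1.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best MSE:", fitness(gbest))
print("outputs:", [round(float(forward(gbest, x)), 3) for x in X])

The structure-learning half (which MOSES itself would handle) is not shown here; the sketch only illustrates the intra-deme float optimization.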

-- Ben






--
Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." -- George Bernard Shaw

Cosmo Harrigan

Nov 10, 2015, 10:16:49 PM
to ope...@googlegroups.com
On Tue, Nov 10, 2015 at 6:25 PM, Ben Goertzel <b...@goertzel.org> wrote:

> Historically, MOSES has not been extremely good at optimizing floating-point values...
>
> However, in his 2015 GSoC project, Arley Ristar implemented PSO (particle swarm optimization) as an alternative algorithm for intra-deme search in MOSES for the floating-point case.
>
> And I note that PSO is a viable approach to training NNs, perhaps better than backprop:


Thanks, that's an interesting direction to read more about.
 
> Also, MOSES clearly *is* a good approach for learning the *structure* of neural nets (apart from the issue of optimizing the float parameters on the links)
>
> So I would say that getting "MOSES w/ PSO inside" working effectively as an algorithm for learning NN "structures plus parameters" is a viable research project, but it would likely require a fair bit of tweaking and tuning...

Maybe one possible step in that direction could be to modify the Pole Balancing code that Joel Lehman wrote to use the PSO algorithm?
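For what it's worth, here is a rough sketch (simplified, assumed dynamics in toy Python, not Joel's actual code or the MOSES internals) of what the pole-balancing fitness boils down to: from the optimizer's point of view, PSO only ever sees a flat weight vector going in and a "time steps survived" score coming out.

import numpy as np

def cartpole_fitness(w, max_steps=500, dt=0.02):
    # w: weights of a bang-bang linear controller; force is +/-10 N by the sign of w . state.
    x, x_dot, theta, theta_dot = 0.0, 0.0, 0.05, 0.0   # start slightly off balance
    g, m_cart, m_pole, length = 9.8, 1.0, 0.1, 0.5
    for step in range(max_steps):
        state = np.array([x, x_dot, theta, theta_dot])
        force = 10.0 if state @ w > 0 else -10.0
        # Standard cart-pole equations of motion, Euler-integrated.
        total_m = m_cart + m_pole
        temp = (force + m_pole * length * theta_dot ** 2 * np.sin(theta)) / total_m
        theta_acc = (g * np.sin(theta) - np.cos(theta) * temp) / (
            length * (4.0 / 3.0 - m_pole * np.cos(theta) ** 2 / total_m))
        x_acc = temp - m_pole * length * theta_acc * np.cos(theta) / total_m
        x, x_dot = x + dt * x_dot, x_dot + dt * x_acc
        theta, theta_dot = theta + dt * theta_dot, theta_dot + dt * theta_acc
        if abs(x) > 2.4 or abs(theta) > 0.21:   # ran off the track or fell past ~12 degrees
            return step
    return max_steps

# Example call on arbitrary weights; a PSO loop would simply keep the
# candidate vectors that make this number larger.
print(cartpole_fitness(np.array([0.1, 0.5, 5.0, 1.0])))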

Cosmo

Ben Goertzel

Nov 10, 2015, 10:34:47 PM
to opencog


>> So I would say that getting "MOSES w/ PSO inside" working effectively as an algorithm for learning NN "structures plus parameters" is a viable research project, but it would likely require a fair bit of tweaking and tuning...
>
> Maybe one possible step in that direction could be to modify the Pole Balancing code that Joel Lehman wrote to use the PSO algorithm?



Yes... 

Linas Vepstas

Nov 11, 2015, 10:04:19 AM
to opencog
Yes.

But we probably need some simpler PSO examples as well, e.g. learning a single linear classifying hyperplane (which is what support vector machines do extremely efficiently; it's also what single-layer feed-forward neurons do).
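The fitness function for that simplest demo could be as small as this (toy Python; the hidden "true" hyperplane and the use of raw accuracy as the score are just assumptions for illustration), with PSO left to maximize it over three numbers.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (200, 2))
labels = (X[:, 0] + 2.0 * X[:, 1] - 0.5 > 0)   # a hidden "true" hyperplane

def hyperplane_accuracy(params):
    # params = (w1, w2, b): classify by the sign of w . x + b.
    w, b = params[:2], params[2]
    return np.mean(((X @ w + b) > 0) == labels)

# Any PSO loop (e.g. the one sketched earlier in the thread) can maximize
# hyperplane_accuracy over the 3 parameters, or equivalently minimize 1 - accuracy.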

A more impressive demo would be one where MOSES learns two or three hyperplanes (to carve up some space into quadrants or octants), or even some kind of space with holes in it, or some alternating quadrants/octants (e.g. upper-left and lower-right are accepted, but lower-left and upper-right are rejected). Linear kernels can't learn these at all, and most other algos kind-of suck at this, although NNs can do it well. However, MOSES + PSO should be able to learn these quickly, without using/being/emulating a neural net. Or at least, that is the theory: no one has analyzed this, studied the bottlenecks, or optimized MOSES to work well for it.
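A sketch of what that alternating-quadrants test might look like (again toy Python, with an assumed parameterization): accept a point iff it falls on different sides of two learned hyperplanes, which is exactly the structure a single linear classifier cannot express.

import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, (400, 2))
# Accepted iff the signs of the two coordinates differ:
# upper-left and lower-right accepted, lower-left and upper-right rejected.
labels = (X[:, 0] > 0) ^ (X[:, 1] > 0)

def two_plane_xor_accuracy(params):
    # Two hyperplanes (w1, b1) and (w2, b2), 6 parameters in all;
    # a point is accepted iff it lies on different sides of the two planes.
    w1, b1 = params[0:2], params[2]
    w2, b2 = params[3:5], params[5]
    side1 = (X @ w1 + b1) > 0
    side2 = (X @ w2 + b2) > 0
    return np.mean((side1 ^ side2) == labels)

# PSO over the 6 parameters should be able to recover something like the two
# coordinate axes (w1 = (1, 0), w2 = (0, 1), zero offsets) and score near 100%,
# whereas any single hyperplane hovers around chance on this data.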

(I'm not sure where we would use this in practice ...)


--linas


Ben Goertzel

Nov 15, 2015, 4:55:50 AM
to opencog

Cosmo,

Regarding the pole-balancing example, if you haven't seen it already, please read


which explains the many interesting complexities here...

-- Ben




Ben Goertzel

Nov 15, 2015, 4:56:13 AM
to opencog

Oh, never mind, I see you already found that page...

Ben Goertzel

Nov 15, 2015, 4:57:31 AM
to opencog

Pretty much. Where he left off, we were unsure whether the crappy results were due to a bad optimizer or some subtler phenomenon. Trying with PSO would help resolve that...