>
> I think a big reason why most people avoid evolutionary algorithms is that our
> understanding and theory of them is not as mature as for gradient-based
> methods. There aren't any guarantees of convergence within some X timesteps, as
> there are with gradient descent. Also, most evolutionary algorithms require a
> lot of parameter tuning (there is a lot of active research on how to tune
> parameters automatically, both online and offline). However, evolutionary
> algorithms do have advantages over gradient-based methods: for example, if run
> for an infinitely long time, they will always find the global optimum.
So will brute-force search....
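To make that concrete, here is a minimal toy sketch (assumptions of mine: the
1-D multimodal objective f, the search bounds, and the mutation step sigma are
all arbitrary illustrative choices, not from any paper discussed here). It puts
a (1+1)-style evolutionary algorithm next to pure random search; both will hit
the global optimum given unlimited samples, but the EA's behaviour hinges on
tuning sigma, while random search has nothing to tune.

# Minimal sketch: (1+1)-EA vs. brute-force random search on a toy problem.
# The objective f and the step size sigma are illustrative assumptions.
import math
import random

def f(x):
    # 1-D Rastrigin-like objective to minimise; global minimum at x = 0.
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def one_plus_one_ea(steps=20000, sigma=0.5):
    # (1+1)-EA: keep a single parent, mutate it, accept if not worse.
    # sigma is exactly the kind of parameter that needs tuning.
    x = random.uniform(-5.0, 5.0)
    for _ in range(steps):
        child = x + random.gauss(0.0, sigma)
        if f(child) <= f(x):
            x = child
    return x, f(x)

def random_search(steps=20000):
    # Brute force: sample uniformly, keep the best point seen so far.
    # Given unlimited samples it, too, finds the global optimum.
    best = random.uniform(-5.0, 5.0)
    for _ in range(steps):
        cand = random.uniform(-5.0, 5.0)
        if f(cand) < f(best):
            best = cand
    return best, f(best)

x, fx = one_plus_one_ea()
print("(1+1)-EA:      x=%+.4f  f=%.4f" % (x, fx))
x, fx = random_search()
print("random search: x=%+.4f  f=%.4f" % (x, fx))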
Peter
> >> a possible future area of research.
> >>
> >>
> >> On Sunday, May 25, 2014 11:15:56 PM UTC-5, Matthew Hausknecht wrote:
> >>>
> >>> Thanks Craig for sharing this resource. It has a wealth of citations for
> >>> just about any type of NN related research. I'm not sure how I feel about
> >>> his new notation / CAP analysis.
> >>>
> >>> Jason - Check out the section at the end of the paper - 6.7. One of the
> >>> author's students evolved a rather large network for the TORCS racing game
> >>> that used visual-like input. I'm not aware of explicitly convolutional
> >>> networks being weight-evolved.
> >>>
> >>>
> >>> On Sat, May 24, 2014 at 10:58 PM, Jason Liang <jason...@gmail.com> wrote:
> >>>>
> >>>> Thanks for the paper, it was very informative. Just wondering, has there
> >>>> been any published work regarding evolutionary approaches to deep
> >>>> learning, for example evolving convolutional neural networks?
> >>>>
> >>>>
> >>>> On Friday, May 16, 2014 1:24:10 PM UTC-5, Craig Corcoran wrote:
> >>>>>
> >>>>> Hi Folks,
> >>>>>
> >>>>> This was just sent out on the RL listserv and may be of interest to the
> >>>>> group. It's a very high-level view of the history of neural networks and
> >>>>> deep learning, with lots of references (and no equations...).
> >>>>>
> >>>>> Craig
> >>>>>
> >>>>> ---------- Forwarded message ----------
> >>>>> From: Schmidhuber Juergen <jue...@idsia.ch>
> >>>>> Date: Fri, May 16, 2014 at 11:52 AM
> >>>>> Subject: [rl-list] Deep Learning Overview - Draft
> >>>>> To: rl-...@googlegroups.com
> >>>>>
> >>>>>
> >>>>> Here is the preliminary draft of an invited Deep Learning overview - it
> >>>>> also briefly discusses applications of deep (possibly recurrent) neural
> >>>>> networks to Reinforcement Learning:
> >>>>>
> >>>>>
> >>>>> http://www.idsia.ch/~juergen/DeepLearning15May2014.pdf
> >>>>>
> >>>>> It mostly consists of references (about 800 entries so far). Important
> >>>>> citations are still missing though. As a machine learning researcher, I am
> >>>>> obsessed with credit assignment. In case you know of references to add or
> >>>>> correct, please send them with brief explanations TO jue...@idsia.ch (NOT
> >>>>> THE ENTIRE LIST!), preferably together with URL links to PDFs for
> >>>>> verification. Please also do not hesitate to send me additional corrections
> >>>>> / improvements / suggestions / Deep Learning success stories with
> >>>>> feedforward and recurrent neural networks. I'll post a revised version
> >>>>> later. Thanks a lot!
> >>>>>
> >>>>>
> >>>>> Abstract. In recent years, deep artificial neural networks (including
> >>>>> recurrent ones) have won numerous contests in pattern recognition and
> >>>>> machine learning. This historical survey compactly summarises relevant
> >>>>> work, much of it from the previous millennium. Shallow and deep learners
> >>>>> are distinguished by the depth of their credit assignment paths, which are
> >>>>> chains of possibly learnable, causal links between actions and effects. I
> >>>>> review deep supervised learning (also recapitulating the history of
> >>>>> backpropagation), unsupervised learning, reinforcement learning &
> >>>>> evolutionary computation, and indirect search for short programs encoding
> >>>>> deep and large networks.