reimplementing MOSES atop the pattern miner: wiki page


Ben Goertzel

Dec 17, 2016, 11:38 PM
To: opencog
I have posted my design idea (discussed in a recent email thread) on
the wiki site here

http://wiki.opencog.org/w/Reimplementing_MOSES_in_the_Atomspace_Atop_the_Pattern_Miner

so it doesn't get lost

ben

--
Ben Goertzel, PhD
http://goertzel.org

“I tell my students, when you go to these meetings, see what direction
everyone is headed, so you can go in the opposite direction. Don’t
polish the brass on the bandwagon.” – V. S. Ramachandran

Nil Geisweiller

Jan 13, 2017, 8:55 AM
To: ope...@googlegroups.com
On 12/18/2016 06:38 AM, Ben Goertzel wrote:
> I have posted my design idea (discussed in a recent email thread) on
> the wiki site here
>
> http://wiki.opencog.org/w/Reimplementing_MOSES_in_the_Atomspace_Atop_the_Pattern_Miner

The Backward Chainer can also be framed that way. :-) Even more
directly than what you've sketched on the SampleLink page.

The BC as currently implemented evolves atomese programs, where each
program encodes a specific forward chaining strategy (FCS for short).
There is a difficulty though: there is no easy way to evaluate the
fitness of an FCS, since either it proves the target or it doesn't. Or
let's say that the fitness landscape is extremely chaotic: some FCS may
prove nothing at all, while a tiny variation of it may prove our target
completely.

BUT this can be overcome by meta-learning, i.e. learning a measure of
success for FCSs that are half-way there. So in this framework
meta-learning would be used to reshape the fitness function, from a
crisp chaotic one to a smooth regular one.
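The contrast between a crisp and a reshaped fitness can be sketched in
a few lines. This is only an illustrative toy, not OpenCog code: the
`prove` predicate, the rule-set representation of an FCS, and the
subgoal decomposition are all hypothetical stand-ins for whatever the
chainer and meta-learner would actually supply.

```python
def crisp_fitness(fcs, target, prove):
    """1.0 if the strategy proves the target, else 0.0.
    This is the chaotic landscape: almost everything scores 0."""
    return 1.0 if prove(fcs, target) else 0.0

def smooth_fitness(fcs, subgoals, prove):
    """Fraction of intermediate subgoals the strategy proves.
    Rewards strategies that are 'half-way there'."""
    if not subgoals:
        return 0.0
    return sum(prove(fcs, g) for g in subgoals) / len(subgoals)

# Toy model: an FCS is a set of rules; a goal is 'proved' when the
# strategy contains every rule that goal needs.
def prove(fcs, goal):
    return goal.issubset(fcs)

target = {"r1", "r2", "r3"}
subgoals = [{"r1"}, {"r1", "r2"}, {"r1", "r2", "r3"}]

fcs_a = {"r1", "r2"}  # proves 2 of 3 subgoals, but not the target
print(crisp_fitness(fcs_a, target, prove))     # 0.0 -- no gradient
print(smooth_fitness(fcs_a, subgoals, prove))  # ~0.667 -- partial credit
```

Under the crisp measure `fcs_a` is indistinguishable from a strategy
that proves nothing; under the smoothed one it visibly outscores it,
which is the gradient the evolutionary search needs.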

Nil