
Evolutionary Intelligence


Ayala

Apr 2, 2003, 11:21:05 AM
Hello. I've been working on a new way to evolve artificial intelligence
systems, and I would love to get some feedback. Here's the summary:

"These pages show how to construct a dynamic virtual world of complex and
interacting genes, genomes, organisms and populations that compete against
one another in a rigorous selection regime in which fitness is judged by
one criterion alone: intelligence. The genetic architecture underlying
this evolutionary system is versatile, creative and powerful enough to
engender a practically infinite variety of data processing and analysis
capabilities, adaptable to almost any conceivable intellectual task. This
virtual world could witness the emergence of our first learning, thinking
machines, and provide a rich opportunity to study the nature of
intelligence and foray into a vast, untapped technological market."

I know it sounds crazy, but I think it could work. You can read it at
http://www.neuroblast.net/EI/EI.html. I'd welcome any comments, criticisms,
suggestions or collaborations from anybody who is interested.

Cheers,
Francisco

ian glendinning

Apr 10, 2003, 7:35:08 PM
Now this is promising ...

Ayala is a French(?) wine.
Ayala is a suburb of Manila, Philippines.
I learned the former while in the latter, when I noticed a contractor
with a web server named Ayala and asked a stupid question.

This is evolution personified - nothing planned, but significant
(future-wise) when it happens.

Andreas Hüwel

May 7, 2003, 2:04:03 AM
I was looking over the paper, and at first glance it seems a
brilliant introduction.

It seems that it deserves more attention (than it gets here in
comp.ai.alife ;-)

So now I can spend some time reading it a bit more carefully:
how precisely could this be DONE, if someone had a team to
implement a first prototype?
Several questions arise.

Because the paper stays :-( theoretical,
I will only point out this kind of question.

(I am referring to the PDF pages.)
Page 11: Cumulative Input
You first introduce the processing genes to do any kind of processing
(produce any kind of numerical output). Then you define the neural
input (NI) as the current (weighted) sum of all input, and likewise
the cumulative input (CI) as the sum of NI since the "last firing"
(LF), which you define as the time since the last non-zero output.
But this CI would not make sense (it would be identical to NI)
- for your normal perceptron on [-1, 1],
- nor for any sigmoidal activation function on ]0, 1[ (which reaches
0 or 1 only for input of +/- infinity, so it practically never
outputs exactly 0 or 1),
- nor for any mutated/arbitrary output function of a processing gene.

It would make sense for binary output functions (e.g. of
McCulloch/Pitts), where an exact "0" indeed has the special meaning
of "inactivity".

You will have to generalize the term "time since last firing" (LF)
into something more useful/general... :-(
###
You spend much effort on making the terms of your strength and
processing functions maximally flexible - nice, but why?
I mean: common weight functions are mostly simple scalar multiplier
weights, and processing (activation) functions are also mostly simple
nonlinear functions (sigmoidal/Gaussian). The problem then is to find
the appropriate connections and weights. In your paper it seems upside
down: MUCH effort goes into elaborate activation/weight terms, but only
simple "number mutation" of the count/direction/range/target-type
subgenes adapts the network to the given problem!? How should the
"correct weights" ever be found when each single neuron has complex
weight functions, and "only" mutation to adapt the weights (i.e. to
learn)? I know: you are aiming at something beyond common neural
nets, but you will have to compete with them in the first step!

###
Then I have trouble understanding your network-construction plan
and your connection genes:
What does the target subgene encode? As I read it (p. 19), it
"determines what kind of neuron the connection should link up with."
Now: where do you describe different kinds (types) of neurons? Where
is this "type" of a given neuron encoded, so that a (number) match
with the target-type subgene would make sense? What do you mean by
this? On p. 20 you describe the brain build-up. I think some more
words on THIS aspect of your network construction could enlighten me ;-)

more coming (hopefully) soon ...

Andreas

Ayala

May 9, 2003, 4:27:00 PM
Andreas Hüwel <andreas...@web.de> wrote

Thank you very much for your comments. I'll address each one.

> (I am referring to the PDF pages.)
> Page 11: Cumulative Input
> You first introduce the processing genes to do any kind of processing
> (produce any kind of numerical output). Then you define the neural
> input (NI) as the current (weighted) sum of all input, and likewise
> the cumulative input (CI) as the sum of NI since the "last firing"
> (LF), which you define as the time since the last non-zero output.
> But this CI would not make sense (it would be identical to NI)
> - for your normal perceptron on [-1, 1],
> - nor for any sigmoidal activation function on ]0, 1[ (which reaches
> 0 or 1 only for input of +/- infinity, so it practically never
> outputs exactly 0 or 1),
> - nor for any mutated/arbitrary output function of a processing gene.
> It would make sense for binary output functions (e.g. of
> McCulloch/Pitts), where an exact "0" indeed has the special meaning
> of "inactivity".
> You will have to generalize the term "time since last firing" (LF)
> into something more useful/general... :-(

Your point is correct, but we have to realize that not every type of
function or terminal will make sense in every single context. In any
sufficiently complex genetic system, evolution will have to do its
best to optimize away extraneous or useless features (although Genetic
Programming is almost unique in that all mutants produce expressions
that are at least syntactically valid).

The [-1, 1] Perceptron "fires" every single time interval, so in this
case CI and LF are useless terminals. The same would be true for many
other types of neural processing as well, but for an infinite variety
of others they would be extremely useful.
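
To make the bookkeeping concrete, here is a minimal sketch in Python
(just an illustration of the semantics, not the actual implementation;
the class and the names are mine), assuming discrete time steps and an
evolved processing function f that may use any of the three quantities
as terminals:

    class Neuron:
        def __init__(self, f, weights):
            self.f = f                # evolved processing function
            self.weights = weights    # connection strengths
            self.ci = 0.0             # cumulative input since last firing
            self.last_fire = 0        # time step of last non-zero output

        def step(self, inputs, t):
            ni = sum(w * x for w, x in zip(self.weights, inputs))  # NI
            self.ci += ni                      # CI accumulates NI
            lf = t - self.last_fire            # LF
            out = self.f(ni, self.ci, lf)
            if out != 0:                       # "firing" = non-zero output
                self.last_fire = t
                self.ci = 0.0                  # reset on each firing
            return out

For a [-1, 1] perceptron, out is non-zero on every step, so CI is
reset each time and collapses to NI, and LF is always 1 -- which is
exactly your objection. For processing functions that output 0 over
long stretches, CI and LF carry real information.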


> You spend much effort on making the terms of your strength and
> processing functions maximally flexible - nice, but why?
> I mean: common weight functions are mostly simple scalar multiplier
> weights, and processing (activation) functions are also mostly simple
> nonlinear functions (sigmoidal/Gaussian). The problem then is to find
> the appropriate connections and weights. In your paper it seems upside
> down: MUCH effort goes into elaborate activation/weight terms, but only
> simple "number mutation" of the count/direction/range/target-type
> subgenes adapts the network to the given problem!? How should the
> "correct weights" ever be found when each single neuron has complex
> weight functions, and "only" mutation to adapt the weights (i.e. to
> learn)? I know: you are aiming at something beyond common neural
> nets, but you will have to compete with them in the first step!

I tried to make every aspect of the genome as versatile as possible,
so that evolution could create just about any possible kind of brain.
The mutational operators of the parameter subgenes (count/direction/
range/target) are relatively simple, but they are nevertheless quite
powerful in producing an infinite variety of physical layouts for the
neurons and their connections.
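
As a toy example of what "number mutation" means here (the integer
representation and the mutation step are just for illustration, not
the paper's actual encoding; only the subgene names are real):

    import random

    def mutate(gene):
        gene = dict(gene)
        subgene = random.choice(list(gene))      # pick one parameter subgene
        gene[subgene] += random.choice([-1, 1])  # small numeric perturbation
        return gene

    gene = {"count": 5, "direction": 1, "range": 3, "target": 0}
    print(mutate(gene))  # e.g. {'count': 4, 'direction': 1, 'range': 3, 'target': 0}

Each such perturbation changes how many connections a neuron makes,
where they point, or how far they reach, so even this simple operator
explores a very large space of network layouts.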

The dynamics of the connection weights are determined by the strength
subgenes, but the actual values they take on are not predetermined --
they will change during training (and in fact they may continue to
change throughout the entire lifetime of the brain). There may be
confusion with the term "adapt". The weight *subgenes* evolve and
adapt from one population to the next (through mutation and
selection), but the weight *values* "adapt" within an individual (in
response to environmental feedback).
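
A toy sketch of the distinction (the Hebbian-style rule below is just
for illustration, not the actual strength-subgene encoding): the
evolved parameters fix the *dynamics*, while the weight value itself
changes freely during the individual's lifetime.

    def make_update_rule(rate, decay):          # evolved: fixed per genome
        def update(w, pre, post):                # runtime weight dynamics
            return (1 - decay) * w + rate * pre * post
        return update

    update = make_update_rule(rate=0.1, decay=0.01)
    w = 0.5                                      # weight *value*, free to change
    for pre, post in [(1.0, 0.8), (0.2, -0.4), (0.9, 1.0)]:
        w = update(w, pre, post)                 # adapts to activity, not mutation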


> Then I have trouble understanding your network-construction plan
> and your connection genes:
> What does the target subgene encode? As I read it (p. 19), it
> "determines what kind of neuron the connection should link up with."
> Now: where do you describe different kinds (types) of neurons? Where
> is this "type" of a given neuron encoded, so that a (number) match
> with the target-type subgene would make sense? What do you mean by
> this? On p. 20 you describe the brain build-up. I think some more
> words on THIS aspect of your network construction could enlighten me ;-)

Yes, you're right that this section is the most confusing. On page
19, the neuron type number is a reference relative to the chromosomal
position of the neuron making the connection; it is not an absolute
reference. If the target allele of a given connection is 0, then
connections are made to neurons of the same type as itself. If the
allele is 1, then connections are made to neurons of the type located
one position to the right on the chromosome. And so on. Hopefully
the figures (of the post-synaptic connection targets, and of the
network on the next page) clarify this somewhat.
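
In code, the lookup might read like this (a sketch only; the
wrap-around at the chromosome ends is an arbitrary choice for the
example, not something the paper specifies):

    def resolve_target(chromosome, position, allele):
        """Return the neuron type a connection links to, given the
        connecting neuron's own chromosomal position."""
        return chromosome[(position + allele) % len(chromosome)]

    chromosome = ["type0", "type1", "type2", "type3"]
    print(resolve_target(chromosome, 1, 0))  # type1: allele 0 -> own type
    print(resolve_target(chromosome, 1, 2))  # type3: two positions right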


> more coming (hopefully) soon ...

I look forward to it!

Johnny Astro

May 10, 2003, 3:52:16 AM
Since the original message was posted over a month ago, on April 2, you
probably should have kept the original link:
http://www.neuroblast.net/EI/EI.html
For people reading with Outlook Express, the default time that messages
are held is 5 days (under Tools / Options / Maintenance), so many people
can't see the message you're responding to. I've disabled this, so
messages only disappear after my ISP deletes them (about 3 months).

And you're right. It is an interesting link.

"Andreas Hüwel" wrote in message news:b9a7jp$hpo$03$1...@news.t-online.com...


> I was looking over the paper, and at first glance it seems a
> brilliant introduction.
>
> It seems that it deserves more attention (than it gets here in
> comp.ai.alife ;-)
>

<remainder removed>

Daniel Månsson

May 11, 2003, 8:30:03 AM
...has any other report been done on "EI"?
It sounds very interesting and definitely worth looking into.