
simulating a neural network


Jonathan Marshall

Oct 20, 1986, 2:25:50 PM
In article <2...@eneevax.UUCP> iar...@eneevax.UUCP (Bill Dorsey) writes:
>
> Having recently read several interesting articles on the functioning of
>neurons within the brain, I thought it might be educational to write a program
>to simulate their functioning. Being somewhat of a newcomer to the field of
>artificial intelligence, my approach may be all wrong, but if it is, I'd
>certainly like to know how and why.
> The program simulates a network of 1000 neurons. Any more than 1000 slows
>the machine down excessively. Each neuron is connected to about 10 other
>neurons.
> .
> .
> .
> The initial results have been interesting, but indicate that more work
>needs to be done. The neuron network indeed shows continuous activity, with
>neurons changing state regularly (but not periodically). The robot (!) moves
>around the screen generally winding up in a corner somewhere where it
>occasionally wanders a short distance away before returning.
> I'm curious if anyone can think of a way for me to produce positive and
>negative feedback instead of just feedback. An analogy would be pleasure
>versus pain in humans. What I'd like to do is provide negative feedback
>when the robot hits a wall, and positive feedback when it doesn't. I'm
>hoping that the robot will eventually 'learn' to roam around the maze with-
>out hitting any of the walls (i.e. learn to use its senses).
> I'm sure there are more conventional ai programs which can accomplish this
>same task, but my purpose here is to try to successfully simulate a network
>of neurons and see if it can be applied to solve simple problems involving
>learning/intelligence. If anyone has any other ideas for which I may test
>it, I'd be happy to hear from you.
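
For concreteness, here is one way the positive/negative feedback Bill asks
about is often set up: a reward-modulated ("three-factor") Hebbian rule,
in which a scalar reward (+1 when the robot avoids the wall, -1 when it
hits one) scales the usual pre-times-post weight change.  This is only a
sketch; the network, names, and parameters below are made up for
illustration, not taken from Bill's program.

import random

N = 1000        # neurons, as in Bill's simulation
FANIN = 10      # each neuron listens to about 10 others
LRATE = 0.01    # learning rate

random.seed(0)
# inputs[i] lists the neurons feeding neuron i; w[i] holds their weights
inputs = [random.sample(range(N), FANIN) for _ in range(N)]
w = [[random.uniform(-1.0, 1.0) for _ in range(FANIN)] for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]

def step(state):
    """One synchronous update: a neuron fires if its weighted input > 0."""
    nxt = []
    for i in range(N):
        total = sum(w[i][k] * state[j] for k, j in enumerate(inputs[i]))
        nxt.append(1 if total > 0 else 0)
    return nxt

def reinforce(prev, curr, reward):
    """Reward-modulated Hebb rule: dw = LRATE * reward * pre * post.
    Positive reward strengthens the synapses that just helped a neuron
    fire; negative reward ('pain') weakens those same synapses."""
    for i in range(N):
        if curr[i]:
            for k, j in enumerate(inputs[i]):
                if prev[j]:
                    w[i][k] += LRATE * reward

for t in range(100):
    prev, state = state, step(state)
    hit_wall = random.random() < 0.2    # stand-in for the robot's bumper
    reinforce(prev, state, -1.0 if hit_wall else +1.0)

Since the reward for hitting a wall usually arrives a few steps after the
synapses that caused it fired, a fuller version would add an eligibility
trace, but the one-step rule above is the simplest place to start.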


Here is a reposting of some references from several months ago.
* For beginners, I especially recommend the articles marked with an asterisk.

Stephen Grossberg has been publishing on neural networks for 20 years.
He pays special attention to designing adaptive neural networks that
are self-organizing and mathematically stable. Some good recent
references are:

(Category Learning):----------
* G.A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for
a Self-Organizing Neural Pattern Recognition Machine." Computer
Vision, Graphics, and Image Processing. In press.
G.A. Carpenter and S. Grossberg, "Neural Dynamics of Category Learning
and Recognition: Structural Invariants, Reinforcement, and Evoked
Potentials." In M.L. Commons, S.M. Kosslyn, and R.J. Herrnstein (Eds),
Pattern Recognition in Animals, People, and Machines. Hillsdale, NJ:
Erlbaum, 1986.
(Learning):-------------------
* S. Grossberg, "How Does a Brain Build a Cognitive Code?" Psychological
Review, 1980 (87), p.1-51.
* S. Grossberg, "Processing of Expected and Unexpected Events During
Conditioning and Attention." Psychological Review, 1982 (89), p.529-572.
S. Grossberg, Studies of Mind and Brain: Neural Principles of Learning,
Perception, Development, Cognition, and Motor Control. Boston:
Reidel Press, 1982.
S. Grossberg, "Adaptive Pattern Classification and Universal Recoding:
I. Parallel Development and Coding of Neural Feature Detectors."
Biological Cybernetics, 1976 (23), p.121-134.
S. Grossberg, The Adaptive Brain: I. Learning, Reinforcement, Motivation,
and Rhythm. Amsterdam: North Holland, 1986.
* M.A. Cohen and S. Grossberg, "Masking Fields: A Massively Parallel Neural
Architecture for Learning, Recognizing, and Predicting Multiple
Groupings of Patterned Data." Applied Optics, In press, 1986.
(Vision):---------------------
S. Grossberg, The Adaptive Brain: II. Vision, Speech, Language, and Motor
Control. Amsterdam: North Holland, 1986.
S. Grossberg and E. Mingolla, "Neural Dynamics of Perceptual Grouping:
Textures, Boundaries, and Emergent Segmentations." Perception &
Psychophysics, 1985 (38), p.141-171.
S. Grossberg and E. Mingolla, "Neural Dynamics of Form Perception:
Boundary Completion, Illusory Figures, and Neon Color Spreading."
Psychological Review, 1985 (92), p.173-211.
(Motor Control):---------------
S. Grossberg and M. Kuperstein, Neural Dynamics of Adaptive Sensory-
Motor Control: Ballistic Eye Movements. Amsterdam: North-Holland, 1985.


If anyone's interested, I can supply more references.

--Jonathan Marshall

harvard!bu-cs!jam

we...@megaron.uucp

Oct 21, 1986, 1:22:54 PM
Anyone interested in neural modelling should know about the Parallel
Distributed Processing pair of books from MIT Press. They're
expensive (around $60 for the pair) but very good and quite recent.

A quote:

Relaxation is the dominant mode of computation. Although there
is no specific piece of neuroscience which compels the view that
brain-style computation involves relaxation, all of the features
we have just discussed have led us to believe that the primary
mode of computation in the brain is best understood as a kind of
relaxation system in which the computation proceeds by iteratively
seeking to satisfy a large number of weak constraints. Thus,
rather than playing the role of wires in an electric circuit, we
see the connections as representing constraints on the co-occurrence
of pairs of units. The system should be thought of more as "settling
into a solution" than "calculating a solution". Again, this is an
important perspective change which comes out of an interaction of
our understanding of how the brain must work and what kinds of processes
seem to be required to account for desired behavior.

(Rumelhart & McClelland, Chapter 4)
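
To make the quote concrete, here is a minimal sketch of "settling",
assuming a Hopfield-style net (one particular relaxation scheme; the PDP
volumes discuss several).  The symmetric weights encode weak constraints
on pairs of units, units flip asynchronously to satisfy them, and the
energy can only fall, so the net settles into a solution rather than
calculating one.

import random

# Symmetric weights, zero diagonal: w[i][j] > 0 is a weak constraint
# that units i and j should agree; w[i][j] < 0, that they should differ.
w = [[ 0,  2, -1],
     [ 2,  0, -1],
     [-1, -1,  0]]
s = [1, -1, 1]          # initial unit states, each -1 or +1

def energy(s):
    """Lower energy means fewer (weighted) violated constraints."""
    return -0.5 * sum(w[i][j] * s[i] * s[j]
                      for i in range(3) for j in range(3))

random.seed(1)
settled = False
while not settled:
    settled = True
    for i in random.sample(range(3), 3):    # asynchronous update order
        drive = sum(w[i][j] * s[j] for j in range(3))
        best = 1 if drive >= 0 else -1
        if best != s[i]:
            s[i], settled = best, False     # each flip lowers the energy
print("settled into:", s, " energy:", energy(s))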

Alan Wendt
U of Arizona

Brad Banko

Oct 22, 1986, 6:15:32 PM

Bill,
Your program sounds very interesting.  I have heard of related work
that uses matrices, and transforms on them, to produce the "learning";
a sketch of that idea follows below.  Your approach, though, points out
just what the "missing" link is in the learning mode: getting the
feedback in.
I suppose you have heard of the hardware devices demonstrated recently
(Bell Labs, I think) based on a neural network model, which find good
solutions to hard problems (the travelling salesman, for one) fast.  Not
the best solutions, but good solutions, fast.
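
If the matrix-and-transform work I'm thinking of is the usual kind, it
is a correlation-matrix (outer-product) associative memory.  A rough
sketch of the idea, with made-up patterns (illustrative code, nobody's
actual program):

# Store patterns by summing outer products; recall by multiplying a
# probe through the matrix and thresholding.
def learn(patterns, n):
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                W[i][j] += p[i] * p[j]
    return W

def recall(W, probe):
    n = len(probe)
    return [1 if sum(W[i][j] * probe[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

stored = [[1, 1, 1, 1, -1, -1, -1, -1],    # two orthogonal patterns
          [1, -1, 1, -1, 1, -1, 1, -1]]
W = learn(stored, 8)
probe = [-1, 1, 1, 1, -1, -1, -1, -1]      # first pattern, one bit flipped
print(recall(W, probe))                    # recovers stored[0] exactly

As I understand it, the travelling-salesman results build on a related
idea: let a net of this general family settle to a low-"energy" state
that encodes a good tour.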
I would like to have a look at your source, if you would post it, or
send it to me...

Brad Banko

...!decvax!cwruecmp!ncoast!btb

--
Bradley T. Banko

Peter Berke

Oct 27, 1986, 12:36:23 PM

Isn't 'computation' a technical term? Do R&Mc prove that PDP is
equivalent to computation? Would Turing agree that "settling into
a solution" is computation? Some people have tried to show that
symbols and symbol processing can be represented in neural nets,
but I don't think anyone has proved anything about the problems
they purportedly "solve," at least not to the extent that Turing
did for computers in 1936, or Church in the same year for lambda
calculus.

Or are R&Mc using 'computing' to mean 'any sort of machination whatever'?
And is that a good idea?

Church's Thesis, that computing and lambda-conversion (or whatever he
calls it) are both equivalent to what we might naturally consider
calculable, could be extended to say that neural nets "settle" into
the same solutions for the same class of problems. Or, one could
maintain, as neural netters tend to implicitly, that "settling" into
solutions IS what we might naturally consider calculable, rather than
being merely equivalent to it. These are different options.

The first adds "neural nets" to the class of formalisms which can
express solutions equivalent to each other in "power," and is thus
a variant on Church's thesis. The second actually refutes Church's
Thesis, by saying this "settling" process is clearly defined and
that it can realize a different (or non-comparable) class of problems,
in which case computation would not be (provably) equivalent to it.

Of course, if we could show BOTH that:
(1) "settling" is equivalent to "computing" as formally defined by Turing,
and (2) that "settling" IS how brains work,
then we'd have a PROOF of Church's Thesis.

Until that point it seems a bit misleading or misled to refer to
"settling" as "computation."

Peter Berke

a

Oct 28, 1986, 4:05:49 PM

I just read an interesting short blurb in the most recent BYTE issue
(the one with the graphics board on the cover)...it was in Bytelines or
something. Now, since I skimmed it, my info is probably a little sketchy,
but here's about what it said:

Apparently Bell Labs (I think) has been experimenting with neural
network-like chips, with resistors replacing bytes (I guess). They started
out with about 22 'neurons' and have gotten up to 256 or 512 (can't
remember which) 'neurons' on one chip now. Apparently these 'neurons' are
supposed to run much faster than human neurons...it'll be interesting to see
how all this works out in the end.

I figured that anyone interested in the neural network program might
be interested in the article...check Byte for actual info. Also, if anyone
knows more about this experiment, I would be interested, so please mail me
any information at the below address.

--
Chris Lishka                      lis...@uwslh.uucp
Wisconsin State Lab of Hygiene    lishka%uwslh...@rsch.wisc.edu
                                  {seismo,harvard,topaz,...}!uwvax!uwslh!lishka

Charles Daffinger

Oct 30, 1986, 10:20:24 AM
In article <1...@uwslh.UUCP> lis...@uwslh.UUCP [Chris Lishka] writes:
>
>...

> Apparently Bell Labs (I think) has been experimenting with neural
>network-like chips, with resistors replacing bytes (I guess). They started
>out with about 22 'neurons' and have gotten up to 256 or 512 (can't
>remember which) 'neurons' on one chip now. Apparently these 'neurons' are
>supposed to run much faster than human neurons...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

What bothers me is that the performance is rated on speed.  Unlike the
typical synchronous digital computer, neuronal networks are asynchronous,
communicating via a temporal discharge of 'spikes' through axons which vary
considerably in length as well as speed, and they exploit SLOW signals
just as they do FAST ones.  (Look at the neural mechanism for a reflex,
or the one for focusing the eye, as examples.)
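
As a toy illustration of that point (my own sketch, nothing to do with
the chips themselves), here is an event-driven fragment in which each
axon has its own conduction delay, so WHEN a spike arrives matters as
much as whether it arrives:

import heapq

# axons[src] -> list of (target, delay_ms, weight): a fast and a slow
# axon converging on a leaky neuron (id 2) that fires only if enough
# charge arrives close together in time.
axons = {0: [(2, 1.0, 0.6)],      # fast 1 ms axon
         1: [(2, 5.0, 0.6)]}      # slow 5 ms axon
THRESHOLD, DECAY = 1.0, 0.5       # potential halves every ms (leak)

# event queue of (time_ms, kind, neuron, weight); kind 0 = a neuron
# fires, kind 1 = a spike arrives somewhere
events = [(0.0, 0, 0, 0.0), (0.0, 0, 1, 0.0)]   # both inputs fire at t=0
heapq.heapify(events)
potential, last_t = 0.0, 0.0

while events:
    t, kind, neuron, weight = heapq.heappop(events)
    if kind == 0:                 # firing: put spikes in flight
        for target, delay, wgt in axons.get(neuron, []):
            heapq.heappush(events, (t + delay, 1, target, wgt))
    else:                         # arrival at the leaky neuron
        potential = potential * DECAY ** (t - last_t) + weight
        last_t = t
        print("t=%.1f ms  potential=%.2f%s"
              % (t, potential, "  -> fires" if potential >= THRESHOLD else ""))

With the delays as given, the early spike has mostly leaked away before
the slow one lands, and the neuron stays quiet; make the two delays equal
and it fires.  Speeding everything up uniformly changes nothing here;
changing the RELATIVE delays changes the behavior.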

I am curious how much of the essence of their namesakes was really
captured in these 'neurons'.


-charles

--
... You raise the blade, you make the change, you re-arrange me til I'm sane...
Pink Floyd
