
Marvin Minsky 'blasts' AI research in Wired


Christopher X. Candreva

May 13, 2003, 12:36:40 PM

http://www.wired.com/news/print/0,1294,58714,00.html


--
==========================================================
Chris Candreva -- ch...@westnet.com -- (914) 967-7816
WestNet Internet Services of Westchester
http://www.westnet.com/

DOC

May 13, 2003, 8:01:41 PM
Darn!

Ya beat me to it.

It was one of those LOL moments when I read what Minsky had to
say about robots.

DOC


"Christopher X. Candreva" <ch...@westnet.com> wrote in message
news:sa9wa.334$eh5.3...@monger.newsread.com...

Sir Charles W. Shults III

May 13, 2003, 10:33:59 PM
Something very clear- many AI researchers think that somehow, if you put
enough rules, facts, and methods of inference together, it may reach some
magical "critical mass" and become smart. In fact, the best you could ever hope
for is the Rain Man's dumber brother.
It is exceedingly clear that they are missing a few very salient features.
It would do them well to understand some things about mental illness, brain
function, and motivation. Any true troubleshooter can root out the underlying
rules with some thought. After all, troubleshooters are usually the *most*
in-depth thinkers you are likely to encounter. They must truly understand a
system in order to repair it.

Cheers!

Chip Shults
My robotics, space and CGI web page - http://home.cfl.rr.com/aichip


Marvin Minsky

May 14, 2003, 12:43:58 AM
"Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in message news:<rWhwa.141990$My6.2...@twister.tampabay.rr.com>...

> Something very clear- many AI researchers think that somehow, if you put
> enough rules, facts, and methods of inference together, it may reach some
> magical "critical mass" and become smart. In fact, the best you could ever hope
> for is the Rain Man's dumber brother.
> It is exceedingly clear that they are missing a few very salient features.
> It would do them well to understand some things about mental illness, brain
> function, and motivation. Any true troubleshooter can root out the underlying
> rules with some thought. After all, troubleshooters are usually the *most*
> in-depth thinkers you are likely to encounter. They must truly understand a
> system in order to repair it.

I completely agree. A system that keeps improving itself will need,
not only adequate factual knowledge, but also resources that it can
use to analyze and then debug itself. Here's what I actually sent to
that reporter: (at least I think so; maybe I sent it to some other
reporter.)

Most early researchers in artificial intelligence had the goal to
build machines that would be very intelligent.


However, it soon turned out that solving hard problems usually needs a
lot of knowledge. This was recognized in the early years— by
researchers in the 60s and 70s who built knowledge-based systems.
Indeed, in the 1980s, the so-called expert systems became widely
productive and popular. However, there was a problem with them: for
each different kind of problem, the construction of such systems had to
start all over again, because they didn't have, or accumulate, what we
called commonsense knowledge. To be sure, each new system could use
the same 'shell' — but I think it turned out, at least in my view,
that this was more of a fault than a virtue.

Only one researcher recognized that this was so serious as to commit
himself entirely to it. That was Douglas Lenat, who has directed the
multi-year project called CYC—which has resulted in solving some
problems in this area.

Unfortunately, in my view, the rest of the artificial intelligence
community tried, instead, to find alternative medicines to deal with
this problem. For example, many projects were aimed at what I call
building baby machines, which were supposed to learn from experience,
eventually to become as smart as people. These all failed to make
much progress because (in my view) they lacked architectural features
to equip them to think about the causes of their successes and
failures— and then to make appropriate changes.

Instead, most researchers went in another direction— trying to build
evolutionary systems that would start with very simple machines and
then, by one or another mutation scheme, evolve more architecture.
None of those projects has ever gotten very far.

The story is very much the same in the field of building large neural
networks. These can frequently solve interesting problems. However,
those networks don't have the capability of reflecting on what they
have learned, and then making appropriate changes.

Other researchers have aimed their work toward making a 'unified
theory' of thinking. Each of those projects has proposed some good
architectural ideas, but not enough of these to support good
self-reflective processes, with which they could improve their own
operations.
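
To make the idea concrete, here is a minimal toy sketch in Python (my own
illustration, not Minsky's architecture or any actual project): a solver that
keeps a record of which of its strategies have succeeded or failed, and shifts
effort toward whatever its own record says has been working.

import random
from collections import defaultdict

class ReflectiveSolver:
    """Toy solver that 'reflects' on its own successes and failures."""

    def __init__(self, strategies):
        self.strategies = list(strategies)
        # per-strategy record of how often it was tried and how often it worked
        self.record = defaultdict(lambda: {"tries": 0, "wins": 0})

    def choose_strategy(self):
        # occasionally explore; otherwise pick the best performer so far
        if random.random() < 0.1:
            return random.choice(self.strategies)
        def success_rate(name):
            r = self.record[name]
            return (r["wins"] + 1) / (r["tries"] + 2)   # Laplace-smoothed
        return max(self.strategies, key=success_rate)

    def reflect(self, strategy, succeeded):
        # the 'self-debugging' step: record the outcome so future choices change
        r = self.record[strategy]
        r["tries"] += 1
        if succeeded:
            r["wins"] += 1

solver = ReflectiveSolver(["table-lookup", "search", "analogy"])
strategy = solver.choose_strategy()
solver.reflect(strategy, succeeded=False)   # after attempting a problem

Real self-reflection would have to reason about why a strategy failed, not
just how often, but even this crude bookkeeping is the sort of machinery the
baby-machine projects described above are said to have lacked.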

Gordon McComb

May 14, 2003, 1:20:13 AM
Sir Charles W. Shults III wrote:
>
> Something very clear- many AI researchers think that somehow, if you put
> enough rules, facts, and methods of inference together, it may reach some
> magical "critical mass" and become smart. In fact, the best you could ever hope
> for is the Rain Man's dumber brother.

For robotics, I think it's even simpler than this (though I really like
the "Rain Man's dumber brother" reference!).

To me, the "worst fad" (in Dr. Minsky's words) is not time spent in the
AI aspects of robotics, but the basic, fundamental mechanical issues
that would prevent even the smartest robot from navigating a messy room.
It seems to me that humans had brawn far before they had brains. Their
brains wouldn't have developed without a physical structure that
supported things like exploration of a rugged environment, or the
ability to (eventually) fashion and use tools with fingers and
apendages.

We don't have worthwhile mobile robots because the typical design is too
small, too fragile, and too prone to failure. Where's the point in
making something like that smart, and where would it lead you? Not to a
better robot, because it wouldn't be able to use its smarts for anything
truly useful. It might as well just stay on your desk.

I have no end of admiration for those who want to make robots smarter,
but not only is this putting the cart before the horse, the cart hasn't
even been built yet. The field has too few mechanical engineers now, in
my opinion.

-- Gordon
Author: Robot Builder's Bonanza

Sir Charles W. Shults III

May 14, 2003, 2:44:20 AM
We are in agreement here. Marvin, I saw you at the Disney Institute about 4
years back when you spoke with a number of AI mavens, including Joe Engelberger
by remote link. That was the humanoid android conference. The one thing that
you said that struck me as very valuable for anyone entering the field was this:
we can make any kind of robot you can imagine, but we are still unable to make
it do the simple tasks. Our efforts need to be aimed at how the mind works, not the
hardware. (I am rather freely paraphrasing you here!)
Knowledge representation is the key because without a simple, consistent
method of storing factual data, you will not be able to manipulate it
effectively. Also, AI software lacks a few key points that we as humans have.
First, we would expect that for a program to think like a person, it must share
similar experiences, or at least have some similarity in its ability to sense.
Isomorphism is important. We can never say whether any two people share the
same internal experience of "blueness", but the internal is not so important as
our external agreement that a thing is blue. In this respect, we have a fairly
open field.
But when it comes to the things that we humans share such as motivation, or
how we model the world, things get sticky fast due to a poorly understood inner
landscape. We have an internal theater that we use to play out the things
around us- and the system I am modeling has more than one theater to deal with.
I don't want to get into details that will "tip my hand", but clearly we live
very much in our minds, and the reality around us is only seen in a shadowy sort
of way. I won't dive into the "hidden" things around us like radio waves or
infrared, because we have figured out how to deal with those things.
But an AI must also have numerous "engines" to duplicate our thinking. An
example is the calculator. With literally a few thousand transistors, we can
make things that far outperform our minds in mathematical functions. This can
be a pro and a con. The con side is that a calculator knows nothing of numbers;
it is a purely deterministic piece of hardware. But if we had that same chip in
our heads and learned to use it mentally, we would not have lost our ability to
think of numbers; we would have gained an ability to handle them well. And,
another pro argument is that instead of 50 million neurons that only vaguely
work well with values and rules, we would have a tiny sliver of matter that
scorches through the functions and yields perfect (or nearly so) answers in
microseconds.
The reason I see this as positive is that it tells me that with optimum code
and hardware, we can possibly make engines for many of our intellectual
functions. If we could reduce any given cluster of 50 million neurons to some
50 thousand transistors (or gate equivalents, or whatever) then we can get from
the "hundreds of billions of neurons- hopelessly intractable!" stage to
"hundreds of millions- well, maybe there is light at the end of the tunnel!"
If we can overcome the lack of consensus on what the problems are, or what
the definitions are, we can likely make major progress. But we will need
visionary people willing to take a couple of steps back and start with a fresh
approach.

S

May 14, 2003, 2:49:47 AM

"Gordon McComb" <gmc...@gmccomb.com> wrote

> We don't have worthwhile mobile robots because the typical design is too
> small, too fragile, and too prone to failure. Where's the point in
> making something like that smart, and where would it lead you? Not to a
> better robot, because it wouldn't be able to use its smarts for anything
> truly useful. It might as well just stay on the your desk.
>
> I have no end of admiration for those who want to make robots smarter,
> but not only is this putting the cart before the horse, the cart hasn't
> even been built yet. The field has too few mechanical engineers now, in
> my opinion.
>
> -- Gordon
> Author: Robot Builder's Bonanza

Can we create a good form of AI without a body?
Is a software only solution doomed to failure?

Undoubtedly the human body played a key role in the evolution of the
human mind. But is it possible to make a useful, reusable and general
purpose mind without a body?

-steve


Sir Charles W. Shults III

May 14, 2003, 3:32:54 AM
"S" <pall...@nycap.rr.com> wrote in message
news:fGlwa.79238$O06....@twister.nyroc.rr.com...
>
<snip>

> Can we create a good form of AI without a body?
> Is a software only solution doomed to failure?
>
> Undoubtedly the human body played a key role in the evolution of the
> human mind. But is it possible to make a useful, reusable and general
> purpose mind without a body?
>
> -steve

For the moment, that may be the only truly practical method of developing a
useful AI. And consider the advantages- the genie is not out of the bottle.

Richard Steven Walz

May 14, 2003, 3:55:39 AM
In article <f04e2625.03051...@posting.google.com>,
------------------
While the self-reflection involved in awarenesses such as ours is the
goal, we need to ask if doing that right away is what we should work
on next, or whether we don't need to learn a certain set of fundamental
truths about deeper non-conscious processes and the human brain's way
of doing this first?

Of course there will always be plenty of room for people who wish to
theorize their way to a solution, a la Einstein, that is, to strive for
a lightning insight, but that may take the longest, whereas first
finding out all the little pieces of how the brain does each of
the assorted tasks and levels of wetware organization may be the
most assured path to success.

I think the Cog/subsumption work, neural nets, smart recognition circuits,
and the study of the evolutionary programming process as applied to learning,
along with the study of biomechanical learning by experiential pseudo-
organisms, may take us the farthest the quickest: a sort of spiral, or
all-directions-at-once, approach.

In other words, till we can do it, we shall not know really HOW to do
it, and so we should not disparage any particular path, and grant value
to all of them. It just looks, for now, as if all this may take just
a bit longer than it seemed in the days of the first flush of AI. Of
course this whole commentary is sort of old news, we were saying this
sort of stuff last time everything slowed down as well in the '80's!

And as the article says, we tend to ignore how far we have come. I
can well imagine that we humans will do that until we're talking to
psychodiagnostic computers about our disappointments in that regard!

We always make the accomplished state of the art invisible to ourselves.
The task may sneak up on us and surprise us at how easy it is suddenly,
or it could take a century or more, who knows?

Is there any improvement on the neural-net understanding of the human
cortex and its fine structure coming along lately, Marvin? And are there
any purely structured software self-representational awareness experiments
that have had any intriguing results?
-R. Steve Walz
--
-Steve Walz rst...@armory.com ftp://ftp.armory.com/pub/user/rstevew
Electronics Site!! 1000's of Files and Dirs!! With Schematics Galore!!
http://www.armory.com/~rstevew or http://www.armory.com/~rstevew/Public

Tom McEwan

May 14, 2003, 4:08:12 PM
It depends on what you mean by "creating" an AI.

If you're suggesting developing a brain whilst it is completely disconnected
from a body, I think there would be massive restrictions on its
development. Let's assume, for argument's sake, that we've designed a
"perfect" AI, a system capable of reproducing every process that occurs in
the human brain.

If the AI were partially disconnected from a body, say if it had all
actuators disconnected but all sensors connected, among other things this
would restrict the development of "self-awareness". Here, in "self-awareness",
I mean an awareness of the effect one's actions have on one's environment.
In other words, I would say a sense of "self" is derived, at least in part,
from knowledge of what one's actions will (or should) do to the environment.
If the system were able to interact with its environment, the building blocks
of self-awareness would be created by the system associating actuator output
signals with changes in observed inputs. Without actuators connected, this
could not happen, though the system could still observe and deduce cause and
effect between perceived events on its inputs.

Without inputs it would be even worse, because no cause and effect could be
detected at all, so no awareness of anything could occur.
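
As a rough illustration of that actuator-output-to-observed-change association
(a toy sketch of my own in Python, with made-up names, not a model of any real
system), one can simply count how often each motor command is followed by each
observed change, and use the counts to predict the consequences of one's own
actions:

from collections import defaultdict

class ContingencyLearner:
    """Counts which observed changes tend to follow each of the agent's actions."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, action, observed_change):
        self.counts[action][observed_change] += 1

    def predict(self, action):
        # most frequently observed consequence of this action, if any
        outcomes = self.counts[action]
        return max(outcomes, key=outcomes.get) if outcomes else None

learner = ContingencyLearner()
learner.observe("drive_forward", "wall_gets_closer")
learner.observe("drive_forward", "wall_gets_closer")
learner.observe("stop", "no_change")
print(learner.predict("drive_forward"))   # -> "wall_gets_closer"

With the actuators disconnected, observe() is never called with the system's
own actions, so no such action-to-effect model can form, which is exactly the
restriction described above.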

In any case: What use is a mind without a body?

Tom


As an afterthought, however, I think it would be worthwhile conducting
experiments to determine what effect transplanting a mind from one body to
another would have. After all, as I suggested above, if a sense of
self-awareness is created in terms of what one particular body can do, by
switching it between several dissimilar bodies and allowing the system to
adapt its pathways and logical structures so that they could easily apply
to any of the bodies, one could make progress toward a "higher" form of
self-awareness, dealing in "purer", more generalised concepts of action and
consequence that would be less tied up in the aspects of the way the body
functions, the so-called "common sense" mentioned earlier in this thread.
This could possibly even be said to lead towards abstract thought.


"Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in message

news:Gimwa.146950$My6.2...@twister.tampabay.rr.com...

Sir Charles W. Shults III

May 14, 2003, 5:14:15 PM
I think he and I were more on the same line of thought- using a simulated
world for the AI to "live" in. I do that presently with my experiments. That
way it has sensory input and an environment to move around in.
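
A bare-bones example of what such a simulated world can look like (a throwaway
Python sketch of mine, not the setup I actually use): a grid with some walls,
an agent position, and a sense() call returning what is adjacent, which is
enough for an AI to start forming percepts and trying actions.

WORLD = [
    "#########",
    "#.......#",
    "#..##...#",
    "#.......#",
    "#########",
]

MOVES = {"north": (-1, 0), "south": (1, 0), "west": (0, -1), "east": (0, 1)}

class GridAgent:
    def __init__(self, row=1, col=1):
        self.row, self.col = row, col

    def sense(self):
        # what the agent 'feels' in each direction: '#' is wall, '.' is open floor
        return {name: WORLD[self.row + dr][self.col + dc]
                for name, (dr, dc) in MOVES.items()}

    def act(self, move):
        dr, dc = MOVES[move]
        if WORLD[self.row + dr][self.col + dc] == ".":
            self.row, self.col = self.row + dr, self.col + dc

agent = GridAgent()
print(agent.sense())   # e.g. {'north': '#', 'south': '.', 'west': '#', 'east': '.'}
agent.act("east")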

Robin G Hewitt

May 14, 2003, 7:54:22 PM
> I think he and I were more on the same line of thought- using a simulated
> world for the AI to "live" in. I do that presently with my experiments. That
> way it has sensory input and an environment to move around in.

Hi

There was once a King in Northumbria who was convinced that English was the
natural, pre-programmed language of hom sap. To demonstrate this he marooned
a baby and a dumb nurse alone on an island and waited to see what language
it would speak.

I think the child would only be limited by the ability of the nurse, the
need to communicate being as vital to us as those first steps are for an
antelope born on the African plain.

Perhaps our intelligence is a lot more pre-programmed than we might care to
think. The non-corporeal "human" AI should develop around whatever you give
it to work with, the built-in percept systems adapting to maximise input
from whatever sensory inputs you care to provide. As a deaf child learns a
visual language, so the AI abilities should seize on whatever input best
stimulates their natural function. A new brain is not so much a blank canvas
as a painting by numbers exercise.

OTOH if you don't give it enough input then its behaviour should end up at
least neurotic and it will probably be rather more insane than the norm.

Just my .02p

Robin G Hewitt



Gerald Coe

May 15, 2003, 7:46:19 AM

Knowledge and intelligence are not the same thing. My encyclopaedia is
packed with knowledge, but possesses not one shred of intelligence.

A honey bee displays far more intelligence in searching out the flower,
navigating back to the hive and telling its family where the food is. It
can do this in an unstable environment. The wind can change direction,
the sun can go behind a cloud, a farmer can park his 4x4 in the way of the
bee; it will still navigate back to the hive. If we are going to move
towards true intelligence, then our robots are going to have to have a
similar capability.

The human brain (and I know the estimates vary) contains about 100 billion
neurons, each with hundreds up to 10,000 connections. Perhaps 10 trillion
connections in all. We are not going to simulate that with any computer
we have today. The 1000 to 10,000 neurons of the bee should be a more
realistic challenge.

If we cannot get our robots to survive in our own unmodified and ever
changing environment then we are not going to have Terminators running
around any time soon. The robots we build are good starting platforms
for AI research. Progress is slow, but there is an ever increasing army
of people thinking about it and trying it.

Gerry.

Minsky <min...@media.mit.edu> writes

--
Kindest Regards, Gerry
http://www.robot-electronics.co.uk

Olivier Carmona

May 15, 2003, 12:16:37 PM
Dear Dan,

You are quite right.

Cognitive science was monopolized for many years by symbolic AI,
but symbolic AI failed to build a hierarchical system able to learn from
perception and action in the real world. Probably because it was not
able to create inside a computer a perfect (isomorphic) representation
of our environment. This is not only a question of memory size, i.e.,
storing every object surrounding us inside a database. It is
a question of storing spatial information (the keyboard is in front of me),
temporal information (the keyboard was in front of me ten seconds before)
and uncertainty information (a keyboard-like object was nearly in front
of me a few seconds before) and, on top of that, of how to make use of
this information (where was this keyboard-like object a few seconds
before)*.
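
Just to make that concrete, here is one minimal way (a hypothetical Python
sketch of mine, not K-Team code) to tag an observation with the spatial,
temporal and uncertainty information described above, and then ask where the
keyboard-like object was a few seconds ago:

import time
from dataclasses import dataclass

@dataclass
class Observation:
    label: str         # e.g. "keyboard-like object"
    x: float           # position relative to the robot, in metres
    y: float
    timestamp: float   # when it was seen
    confidence: float  # 0..1, how sure the recognizer is

def last_seen(observations, label, now, max_age=10.0, min_confidence=0.5):
    """Where was the <label> seen within the last max_age seconds?"""
    candidates = [o for o in observations
                  if o.label == label
                  and now - o.timestamp <= max_age
                  and o.confidence >= min_confidence]
    return max(candidates, key=lambda o: o.timestamp, default=None)

now = time.time()
log = [Observation("keyboard", 0.4, 0.1, now - 8.0, 0.7)]
print(last_seen(log, "keyboard", now))   # the keyboard-ish thing seen 8 s ago

The hard part, of course, is not the data structure but keeping it consistent
with a changing world and actually using it for the task.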

However, Marvin Minsky is right that we have gone too far in
the opposite direction to symbolic AI, i.e., reactive AI. It is a
classical move in science (and elsewhere) that after a "dictatorship",
you do exactly the opposite. Instead of trying to teach the robot which
object corresponds to the word "keyboard", let the robot learn
from its environment what it needs for a given task. Said differently,
the robot must learn from the pattern presented on its input and do the
pre-defined job. This approach has succeeded in building impressive systems
tailored to one task (or a few tasks) but mainly suffers from
catastrophic forgetting (I can learn to follow a light, but as soon as I
learn to avoid obstacles, I completely forget how to follow a light).

IMHO, the field is getting more and more mature by learning, as it did
at the beginning of artificial intelligence in the 1940s and 1950s
(remember Wiener, McCulloch...), from other disciplines. First of all,
cognitivists are learning more and more about the system they are trying
to copy: the human. Second, cognitivists are elaborating a common vocabulary in
order to collaborate on a common ground.

Cognitivists are currently making progress on understanding the internal
representation of a human being: the question of self-consciousness
is part of the problem, but we do not even know how visual information
is coded when transferred from retina to cortex. Happily, more and more
studies are being done on patients in dynamic, realistic situations, thanks to
the miniaturization of measurement tools, bringing more and more
interesting data. In parallel, we must use that progress to improve
robots' abilities.

PS: About Gordon's statement that robotics is doing bad because
there are too few mechanical engineers out there, I partially agree. For
instance, robots have been unable to adapt their mechanical structure
to their environment, but here, as in other areas, many
important projects are currently under way and mechanics is part of
the discipline needed to progress. However, it is one discipline among
several and not the principal one. Humans are not only impressive
mechanisms, they also use that machinery cleverly. There is no use adding a
top-notch arm if your camera-equipped Khepera is not able to learn what its arm is.

--
Olivier Carmona
__ __ ________
K-Team S.A. | |/ /|__ __|___ _____ ___ ___
Chemin de Vuasset, CP 111 | / __ | | _____|/ _ \| \/ |
1028 Preverenges | | \ | | ____|/ /_\ | |
Switzerland |__|\__\ |__|______|_/ \_|__|\/|__|
car...@k-team.com tel:+41 21 802 5472 fax:+41 21 802 5471
Mobile Robots for Research, Education and Hobby
http://www.k-team.com
http://ww.hemisson.com


Randy M. Dumse

May 15, 2003, 12:42:18 PM
"Gerald Coe" <deva...@devantech.demon.co.uk> wrote in message
news:b1o2hAAL...@dev.demon.co.uk...

> The 1000 to 10,000 neurons of the bee should be a more
> realistic challenge.

Has anyone ever taken apart a bee, and mapped its brain neuron by
neuron? If not, then we aren't trying hard enough. (Let's blame it on
the biologists!)

However, is the intelligence in the brain structure, or in the "state
information" which exists in the structure? When the bee is dead, it is
not very intelligent, even though, the machinery is still there.

Same for DNA, is it the chemical coding? or is it the instantaneous
twists which exist in the coiling of the molecule? Not surprisingly, I
think the complex state information is as, or is more, important than
the structure containing it. The life may be in the "information", not
the hardware which can hold it.

--
Randy M. Dumse
www.newmicros.com
Caution: Objects in mirror are more confused than they appear.

Gordon McComb

May 15, 2003, 1:08:27 PM
Olivier Carmona wrote:
> PS: About Gordon statement saying that robotics is doing bad because
> there are too few mechanics engineers out there, I partially agree. For
> instance, the robots were unable to adapt the mechanical structure of
> the robot to its environment, but, here, like in other area, many
> important projects are currently undergoing and mechanics is a part of
> the needed discpline to progress. However, it is one of them and not the
> principal. Human are not only impressive mechanics, they are also using
> it cleverly. No use to add a top-notch arm if you are not able to learn
> what is your arm to your Khepera equipped with a vision camera.

Actually I never said it was "doing bad" but I was referring to a
"stalling out" of the progress, and one reason is that the field hasn't
been attracting as many mechanical innovations as it once did. There are
lots and lots of things "around the corner," but you know, that's been
the case for years.

AI can be sufficiently researched in a square immobile box. Putting AI
on a wheeled or walking robot doesn't do anything magical. It's
basically an experimental AI system on a fragile base. That gets us
nowhere.

Granted, there *are* rugged robots being developed, mostly for military
or police teleoperated functions. But the idea of a robot suddenly
becoming useful because it can think is just more science fiction.
Robots don't really need to be smart to be useful (the vast majority of
robots in commercial use are stupid as a thumbtack), but they do need to
be reliable.

-- Gordon
Robots for Less at Budget Robotics: http://www.budgetrobotics.com/
Author: Robot Builder's Sourcebook & Robot Builder's Bonanza

donLouis

May 15, 2003, 3:05:09 PM
On Thu, 15 May 2003 11:42:18 -0500
"Randy M. Dumse" <r...@newmicros.com> wrote:

> "Gerald Coe" <deva...@devantech.demon.co.uk> wrote in message
> news:b1o2hAAL...@dev.demon.co.uk...
> > The 1000 to 10,000 neurons of the bee should be a more
> > realistic challenge.
>
> Has anyone ever taken apart a bee, and mapped its brain neuron by
> neuron? If not, then we aren't trying hard enough. (Let's blame it on
> the biologists!)

daniel alkon, m.d., mapped the neural connections from the eyes to
the brain of a species of snail hermissenda. in the book _memory's
voice_, he said that it took a year to map the connections. he
also said that a bee would be too difficult.

donLouis

Arthur T. Murray

May 15, 2003, 5:22:36 PM
Gordon McComb <gmc...@gmccomb.com> wrote on Thu, 15 May 2003:
> [...] AI can be sufficiently researched in a square immobile box.

> Putting AI on a wheeled or walking robot doesn't do anything
> magical. It's basically an experimental AI system on
> a fragile base. That gets us no where.

We are getting somewhere with the Mentifex AI in Forth
and in JavaScript, but we need robot embodiment for the
proper development of the sensory input channels.

>
> Granted, there *are* rugged robots being developed,
> mostly for military or police teleoperated functions.
> But the idea of a robot suddenly becoming useful
> because it can think is just more science fiction.
> Robots don't really need to be smart to be useful

The new AI textbook "AI4U: Mind-1.1 Programmer's Manual"
describes how to construct a primitive AI Mind for robots.

http://www.scn.org/~mentifex/ai4udex.html -- the index
of the AI textbook -- is a new phenomenon in the electronic
publishing of technical books, because the "AI4Udex" is
bidirectionally hyperlinked: backwards to AI Web material,
and forward to pages of the AI4U textbook at the publisher.

> (the vast majority of robots in commercial use are
> stupid as a thumbtack), but they do need to
> be reliable.
>
> -- Gordon
> Robots for Less at Budget Robotics: http://www.budgetrobotics.com/
> Author: Robot Builder's Sourcebook & Robot Builder's Bonanza

Arthur
--
http://www.scn.org/~mentifex/theory5.html -- AI4U Theory of Mind;
http://www.scn.org/~mentifex/jsaimind.html -- Tutorial "Mind-1.1"
http://www.scn.org/~mentifex/mind4th.html -- Mind.Forth Robot AI;
http://www.scn.org/~mentifex/ai4udex.html -- Index for book: AI4U

Cat Paparazzi

May 15, 2003, 4:50:40 PM

"Arthur T. Murray" <uj...@victoria.tc.ca> wrote in message
news:3ec3...@news.victoria.tc.ca...

> Gordon McComb <gmc...@gmccomb.com> wrote on Thu, 15 May 2003:
> > [...] AI can be sufficiently researched in a square immobile box.
> > Putting AI on a wheeled or walking robot doesn't do anything
> > magical. It's basically an experimental AI system on
> > a fragile base. That gets us no where.
>
> We are getting somewhere with the Mentifex AI in Forth
> and in JavaScript, but we need robot embodiment for the
> proper development of the sensory input channels.
>

What you're saying sounds rather interesting. Minsky seems to regard CYC as something
that could possibly show the way for trapping 'common sense'. CYC can partially
contribute to its storage, but alone it is not enough. CYC is a verbal ontological
description of common sense, but human common sense is based on all the senses one
can think of. And when Lenat undertakes the coding of common sense I believe he
refers to human common sense. For example: our common sense experience of dropping a
glass is not limited to a sentence: "I dropped the glass, so it broke." We hear the
sound of breaking and we see the glass breaking. We become aware of all the
consequences relevant to us. A bat's common sense does not include seeing, but sound
plays an important part.

We need robotic sensors to augment CYC's verbal description of common sense. CYC can
play its part in the task, but common sense is not limited to logic inferences one
derives from textual descriptions.

PsykoPat


tobori

May 15, 2003, 5:56:19 PM

"Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in message
news:rWhwa.141990$My6.2...@twister.tampabay.rr.com...

- snip -


> It is exceedingly clear that they are missing a few very salient features.
> It would do them well to understand some things about mental illness, brain
> function, and motivation. Any true troubleshooter can root out the underlying
> rules with some thought. After all, troubleshooters are usually the *most*
> in-depth thinkers you are likely to encounter. They must truly understand a
> system in order to repair it.

Assuming I grasp what you're saying, I think you're making an extremely
valid, and enlightened, point. True insight into the underlying functions
of the brain will come from studying brains that are failing, as much, if
not more so, than from studying brains that are normal. Jeff Hawkins (of
GRiD, Palm and Handspring fame) is a proponent of a theory that the brain is
an auto-associative memory system. This allows it to "generalize, fill in
missing pieces of information, and work well with incomplete and even
inaccurate data." (1). I believe what he says because I've watched this
auto-associative system failing in my mother. Case in point: My mother is
88 years old and lives with us. Recently, one of my daughter's friends slept
over. The next day my family and the friend left the house. Our departure
out the door was such that my mother saw my wife, my daughters and me leave,
but didn't see the friend leave. Later on that day my mother tells me that
the friend's mother came and picked her up. This never happened but she was
sure that it did. This is just one example but there are many more. Her
brain had encountered a pattern that had a beginning (the friend was in the
house), an end (the friend was no longer in the house), and it needed (was
motivated) to fill in the middle. In an effort to complete the broken
pattern, it invented a scenario which would join the two incongruous pieces
of data. In a functioning brain this ability allows us to rapidly assess
and react to situations by anticipating what will occur before it does,
probably because we've been in similar, if not necessarily identical,
situations beforehand. In a failing brain it leads to confusion and mental,
if not visual, hallucinations.
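
For what it is worth, the classic toy model of that kind of pattern completion
is a Hopfield-style auto-associative memory. Here is a minimal Python/numpy
sketch (purely illustrative, certainly not Hawkins' actual model): it stores a
couple of +1/-1 patterns and then recalls the nearest one from a corrupted cue.

import numpy as np

def train(patterns):
    # Hebbian outer-product rule: each stored pattern digs its own attractor
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, steps=10):
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0          # break ties
    return s

patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1,  1, -1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([ 1, -1,  1,  1,  1, -1,  1, -1])   # one element flipped
print(recall(W, noisy))   # completes the cue back to the first stored pattern

Filling in a missing piece of a stored pattern is exactly what this is good at;
the failure mode your mother's story illustrates would be recalling (or
blending) the wrong attractor and treating the result as a memory.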

Well, interesting thread. I get a little light headed up here where you AI
boys hang out so I'll go back to twiddling with my AVRs and servos now.

(1) FastCompany interview with Jeff Hawkins
http://www.fastcompany.com/online/15/onbrains.html

Link to Jeff Hawkins' Redwood Neuroscience Institute: http://www.rni.org/


Christopher X. Candreva

May 15, 2003, 7:44:45 PM
Sir Charles W. Shults III <aich...@ovecfl.rr.com> wrote:

: It would do them well to understand some things about mental illness, brain


: function, and motivation. Any true troubleshooter can root out the underlying

A similar thought has been on my mind lately -- I've wondered how many AI
researchers have kids (and were out of the lab to watch them grow :-)

My son is now about 2.5 , and unfortunately my brain keeps putting his
development into CS / robots / AI terms. I'm faciniated by what he notices
and doesn't -- what he things the 'salient features' are. Or, it occured to
me lately that this blank neural network basicly spent almost a year mapping
a single room, then branched out to other rooms, and now can navigate
arbitrary environments.

I try not to say this out loud too much though, people look at me weird.
:-)

I've been imagining a robot that spends its 'mapping' time rolling up to
objects and asking "What's this" over and over, then at some point it starts
pointing to and naming objects, over and over. I've got a vague notion even
of how to tie it into regular sonar mapping. But, as Gordon said, the ME
stuff has to be finished first. Someday . . .

-Chris

Sir Charles W. Shults III

May 15, 2003, 10:30:22 PM
"Christopher X. Candreva" <ch...@westnet.com> wrote in message
news:NDVwa.99$AU6....@newshog.newsread.com...

> Sir Charles W. Shults III <aich...@ovecfl.rr.com> wrote:
>
> : It would do them well to understand some things about mental illness, brain
> : function, and motivation. Any true troubleshooter can root out the underlying
>
> A similar thought has been on my mind lately -- I've wondered how many AI
> researchers have kids (and were out of the lab to watch them grow :-)

If I had my way about it, any new crop of AI researchers would be required,
as a part of their education process, to raise and train some puppies, a magpie,
and a kitten. Children would also be nice but I wouldn't strictly enforce that.
I think it would also be very helpful for them to then contrast the process
with trying to train an insect. And, a few months in a mental health ward would
be extremely illuminating. Er, not as patients...

> My son is now about 2.5 , and unfortunately my brain keeps putting his
> development into CS / robots / AI terms. I'm faciniated by what he notices
> and doesn't -- what he things the 'salient features' are. Or, it occured to
> me lately that this blank neural network basicly spent almost a year mapping
> a single room, then branched out to other rooms, and now can navigate
> arbitrary environments.

I did the same thing, and I can say that it helped me greatly in
understanding the nature of the problems at hand. I see very clearly that the
things we know are based on a "pyramid" of concepts. We simply cannot
understand certain things too early, because we do not have the foundation
blocks in place yet.
An isomorphic example can be found with language skills. Imagine that
somebody spouts a string of some unfamiliar language at you and expects you to
repeat it back. Probably a hopeless task! But if you first learn the sounds,
then some simple words, then the grammar and vocabulary skills grow, eventually
you reach a point where things are suddenly very simple.
This is because the first exposure to the language was essentially a random
rote memorization task with no reference points or smaller pieces to break the
task into. This sort of thing will rapidly tax the limits of most people's
skills. But in the second case, you built a foundation of syllables, noises,
and bits that form actual words. Then you learned that there are rules about
how words are used and what orders they can occur in when used properly. Then
you learned about the meanings of the words, and then you learned about
vernacular... in short, you developed a bottom-up approach that put major tools
in your grasp for storing and retrieving the data in a more or less symbolic
fashion.
Compare an audio tape of the word "corollary" and the ASCII representation.
Which is more compact? Which carries more information? Which has more
"meaning"?

> I've been imagineing a robot that spends it's 'mapping' time rolling up to
> objects and asking "What's this" over and over, then at some point it starts
> pointing to and nameing objects, over and over. I've got a vague notion even
> of how to tie it into regular sonar mapping. But, as Gordon said, the ME
> stuff has to be finished first. Someday . . .

Well, it comes down to us understanding how memory templates are defined,
how they are filled, how they are associated... how do we create a simple to
understand, three-dimensional model of the world in a robot's "mind"? Lots of
clues exist in the optical cortex where we see striated regions that correspond
to angular displacement of an object, or in the fact that when we are presented
with a rotated picture of something, the recognition time is directly
proportional to the time it takes to mentally rotate the image around in
3-space.
There are deeply meaningful, but ultimately simple models here, and we would
be wise to study how nature has answered those problems. Personally, I am a
strong proponent of "strong AI" and I have no doubt from what I have seen in my
own works, that we will make machines that truly think and feel, one day.

Sir Charles W. Shults III

May 15, 2003, 10:17:46 PM
"tobori" <tob...@roboteka.com> wrote in message
news:ba12dk$r5e$1...@slb2.atl.mindspring.net...

>
> "Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in message
> news:rWhwa.141990$My6.2...@twister.tampabay.rr.com...
>
> - snip -
> > It is exceedingly clear that they are missing a few very salient features.
> > It would do them well to understand some things about mental illness, brain
> > function, and motivation. Any true troubleshooter can root out the underlying
> > rules with some thought. After all, troubleshooters are usually the *most*
> > in-depth thinkers you are likely to encounter. They must truly understand a
> > system in order to repair it.
>
> Assuming I grasp what you're saying, I think you're making an extremely
> valid, and enlightened, point. True insight into the underlying functions
> of the brain will come from studying brains that are failing, as much, if
> not more so, than from studying brains that are normal.

Thank you. I have spent many years thinking about (and modeling) various
processes, and raising three children as well as countless pets. I see many
similar processes occurring in them and can see a little bit about how learning
and development seem to occur.
The story you relate about your mother is an excellent case study. It is
known that our minds have a sort of "story teller" module that is constantly
looking at input, making up plausible sounding explanations, and then testing
them out. Children often do this without compunction, making up wild or weird
or sometimes believable scenarios, but as we grow up and develop, we learn how
to sort out and suppress the untrue or unprovable stories, and how to stick with
the real ones. We also learn to categorize these "explanations" as true,
probable, possible, or outright falsehood.
There is a great deal that happens between Wernicke's region and Broca's
region that shows key signs of what it takes to make us human. Broca's region performs
at least four distinct tasks, in my opinion, and making a piece of software to
imitate it means compartmentalizing those functions well enough to come up with
some reasonable duplication of those functions.
Thanks for your comments, and don't despair- bits and pieces of such
information are useful and helpful because who can say what little piece of the
puzzle you may contribute in making your observations?


Hans Moravec

May 16, 2003, 1:38:48 AM
Gerald Coe <deva...@devantech.demon.co.uk> wrote:
> The 1000 to 10,000 neurons of the bee should be
> a more realistic challenge.

I think typical bees have about a million neurons

A slug has tens of thousands of neurons

Even the microscopic 1 mm long nematode C.Elegans
has about 360 neurons


By my guesstimate it needs about 1 billion calculations/second,
cleverly programmed, to do the job of a 1 million neuron circuit
(functionally emulating the external neural I/O behavior as
efficiently as possible, not trying to simulate individual
neurons: that would take much, much more computation)

<http://www.ri.cmu.edu/~hpm/book97/ch3/retina.comment.html>

<http://www.ri.cmu.edu/~hpm/talks/revo.slides/power.aug.curve/power.aug.
html>
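
A back-of-envelope version of that scaling in Python (my arithmetic only; the
per-neuron rate is just the 10^9 figure divided by the ~10^6 neurons, not a
measured constant):

# calculations/second implied by ~1e9 ops/s for a ~1e6-neuron circuit
OPS_PER_NEURON_PER_SECOND = 1_000

for animal, neurons in [("bee",        1_000_000),
                        ("slug",          50_000),   # "tens of thousands"
                        ("C. elegans",       360)]:
    ops = neurons * OPS_PER_NEURON_PER_SECOND
    print(f"{animal:>12}: ~{ops:.0e} calculations/second")

On those assumptions a bee-level controller wants roughly a whole circa-2003
desktop CPU, while the nematode is trivial.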

Olivier Carmona

May 16, 2003, 3:54:08 AM
Dear Gordon,

Thanks for giving me more precision.

I may have read your article too quickly :-(

You address a different question: do we need more intelligent or more
reliable robots in order to sell them?

I agree with your answer even though it leads to implementation details
rather than research.

Best regards,

Gordon McComb wrote:

dan michaels

May 16, 2003, 11:06:12 AM
"Randy M. Dumse" <r...@newmicros.com> wrote in message news:<yoPwa.29$mI4....@eagle.america.net>...

> "Gerald Coe" <deva...@devantech.demon.co.uk> wrote in message
> news:b1o2hAAL...@dev.demon.co.uk...
> > The 1000 to 10,000 neurons of the bee should be a more
> > realistic challenge.
>
> Has anyone ever taken apart a bee, and mapped its brain neuron by
> neuron? If not, then we aren't trying hard enough. (Let's blame it on
> the biologists!)
>

Actually those numbers are a gross understatement. Maybe some
primitive slugs/etc have as few neurons, but bees and others have tons
more. And there have been many, many anatomical and other studies of
animals all up and down the chain.

Besides which, anatomical mapping hasn't solved the problem, rather
mainly illustrated the complexity of it. One thing that is helpful is
that, in cases like the visual system, there is a general spatial
topographic mapping from the eye to the brain - IE, the same thing is
repeated over and over across the retina - so you don't necessarily
have to map out "every" neuron, but can understand a lot from study of
pertinent subsets. The real question, at least as regards vertebrates, is
what happens once you get "past" the 1st couple of levels. Biology has
known a lot about the periphery + superficial levels since the 60s,
but hasn't seemed to have cracked the next nut past that.
==========================


> However, is the intelligence in the brain structure, or in the "state
> information" which exists in the structure? When the bee is dead, it is
> not very intelligent, even though, the machinery is still there.
>
> Same for DNA, is it the chemical coding? or is it the instantaneous
> twists which exist in the coiling of the molecule? Not surprizingly, I
> think the complex state information is as, or is more, important than
> the structure containing it. The life may be in the "information", not
> the hardware which can hold it.


2 of the great mysteries. Obviously, anatomy doesn't do it for the
brain, just like seeing the architecture of a computer doesn't say
much about what program is actually running. Turn off the computer or
the brain, and the anatomy is still there, but the program has ceased.

However, regards the brain, one might surmise that the structure more
closely reflects the function than for the [digital] computer. Maybe
more like the old analog computers where the connections completely
dictate the program. So, in this sense, all of these comparisons
between brains and digital computers might be a little off base, and
the problem might just be workable in the end.


- dan michaels
===============================

Arthur T. Murray

May 17, 2003, 1:02:40 PM

> Gerald Coe <deva...@devantech.demon.co.uk> wrote:
>> The 1000 to 10,000 neurons of the bee should be
>> a more realistic challenge.
>
> I think typical bees have about a million neurons
>
> A slug has tens of thousands of neurons
>
> Even the microscopic 1 mm long nematode C.Elegans
> has about 360 neurons
>

A top-down theory of robotic AI therefore has a better chance at
true AI (http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html)
than slow, inadequate, bottom-up re-constructions of evolution :-)


>
> By my guesstimate it needs about 1 billion calculations/second,
> cleverly programmed, to do the job of a 1 million neuron circuit
> (functionally emulating the external neural I/O behavior as
> efficiently as possible, not trying to simulate individual
> neurons: that would take much, much more computation)
>
> <http://www.ri.cmu.edu/~hpm/book97/ch3/retina.comment.html>
>
> <http://www.ri.cmu.edu/~hpm/talks/revo.slides/power.aug.curve/power.aug.html>

ATM

Traveler

May 17, 2003, 5:20:33 PM
Hans Moravec <hpm...@cmu.edu> wrote in message news:<160520030138484483%hpm...@cmu.edu>...

> Gerald Coe <deva...@devantech.demon.co.uk> wrote:
> > The 1000 to 10,000 neurons of the bee should be
> > a more realistic challenge.
>
> I think typical bees have about a million neurons
>
> A slug has tens of thousands of neurons
>
> Even the microscopic 1 mm long nematode C.Elegans
> has about 360 neurons
>
>
> By my guesstimate it needs about 1 billion calculations/second,
> cleverly programmed, to do the job of a 1 million neuron circuit
> (functionally emulating the external neural I/O behavior as
> efficiently as possible, not trying to simulate individual
> neurons: that would take much, much more computation)

Your guesstimate is wrong. Only a very small fraction of the brain's
neurons are active at any one time. You would know this if you had a
clue about the discrete temporal nature of biological intelligence. A
1 million-neuron brain should be easily simulated on a fast modern
desktop computer. The only reason that not one of your pathetic toy
robots at CMU can come close to the stunning behavioral performance of
a bee is that you haven't a clue.

Stop blaming your lack of progress on the unavailability of powerful
computers. Nobody with a modicum of understanding is falling for your
lame excuses anymore, Moravec. Just face it: you GOFAI morons have
been at it for over fifty years and you have failed. Miserably. That
goes for you, Minsky and his equally clueless buddy Doug Lenat. Just
give it up.

Louis Savain

wildstar

May 17, 2003, 6:21:01 PM
eightwi...@yahoo.com (Traveler) wrote in
news:308ba22c.03051...@posting.google.com:

<<< Snip >>>


> Stop blaming your lack of progress on the inavailability of powerful
> computers. Nobody with a modicum of understanding is falling for your
> lame excuses anymore, Moravec. Just face it: you GOFAI morons have
> been at it for over fifty years and you have failed. Miserably. That
> goes for you, Minsky and his equally clueless buddy Doug Lenat. Just
> give it up.

<<< Snip >>>

Watch the flaming.


KP_PC

May 17, 2003, 10:43:16 PM
it's 'hilarious' - I've been clone-servered - again.

K. P. Collins

--
"Schmitd! Schmitd! Ve vill build a Shapel!"
"dan michaels" <d...@oricomtech.com> wrote in message
news:4b4b6093.03051...@posting.google.com...
| eightwi...@yahoo.com (Traveler) wrote in message
news:<308ba22c.03051...@posting.google.com>...


| > Hans Moravec <hpm...@cmu.edu> wrote in message
news:<160520030138484483%hpm...@cmu.edu>...

| [...]

Richard Steven Walz

May 18, 2003, 12:38:37 AM
In article <308ba22c.03051...@posting.google.com>,
------------------
You're actually making a very optimistic point, even though you're
a useless troll. It may well be that we simply haven't studied awareness
well enough yet to realize how to do it.
-Steve


KP_PC

May 18, 2003, 12:45:48 AM
Whoops! - just 'unthreaded' - Sorry.

K. P. Collins

"KP_PC" <k.p.c...@worldnet.att.net> wrote in message
news:8rCxa.161570$ja4.7...@bgtnsc05-news.ops.worldnet.att.net...

KP_PC

May 18, 2003, 11:07:00 AM
I originally replied directly to Dr. Minsky's post.

How did all the header info get changed, and the thread un-threaded?

Doesn't anybody understand that doing this sort of thing is flat-out
Rewriting of the History of Science?

Totally Unscrupulous.

Totally Unethical.

Totally Unacceptable.

Totally Skanky.

K. P. Collins

--
"Schmitd! Schmitd! Ve vill build a Shapel!"

"KP_PC" <k.p.c...@worldnet.att.net> wrote in message
news:0eExa.161738$ja4.7...@bgtnsc05-news.ops.worldnet.att.net...

Brian Dean

May 18, 2003, 6:50:44 PM
On Thu, 15 May 2003 13:42:18 -0400, Randy M. Dumse wrote:

> However, is the intelligence in the brain structure, or in the "state
> information" which exists in the structure? When the bee is dead, it is
> not very intelligent, even though, the machinery is still there.
>
> Same for DNA, is it the chemical coding? or is it the instantaneous
> twists which exist in the coiling of the molecule? Not surprizingly, I
> think the complex state information is as, or is more, important than
> the structure containing it. The life may be in the "information", not
> the hardware which can hold it.

This thread reminds me of a book I read recently by Valentino Braitenberg
entitled "Vehicles: Experiments in Synthetic Psychology" in which very
simple "beam" style robots are first constructed (thought experiments,
actually), followed by successively more complex mechanics, sensors, and
electronics. Each chapter is devoted to a "vehicle" with certain new
properties, each an enhancement of the vehicle in the previous chapter.
Each vehicle is analyzed with regard to how it would respond to various
stimuli in its environment and from other vehicles. While the structures
he speaks of are mechanical and electronic, it's obvious that the
underpinnings of his creations are founded in natural creatures and brain
functions. The "vehicles" he ends up with by the end of the book could
certainly be thought of as "intelligent" by an outside observer.
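
For anyone who hasn't seen the book, the early vehicles really are that simple.
Here is a toy Python sketch of my own, loosely after Braitenberg's "vehicle 2b":
two light sensors cross-wired to two motors, so the machine turns toward the
light and looks, to an observer, as if it likes it.

import math

def step(x, y, heading, light=(0.0, 0.0), dt=0.1):
    """Advance a crude differential-drive vehicle one time step."""
    def light_at(sx, sy):
        d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
        return 1.0 / (1.0 + d2)

    # two sensors mounted at the front-left and front-right of the body
    left_s  = light_at(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right_s = light_at(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))

    # cross-wiring: the left sensor drives the right motor, and vice versa
    left_motor, right_motor = right_s, left_s

    speed = (left_motor + right_motor) / 2.0
    heading += (right_motor - left_motor) * dt    # turns toward the brighter side
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

state = (5.0, 3.0, math.pi)      # start some distance from the light
for _ in range(2000):
    state = step(*state)
print(state)   # should have wandered in toward the light at the origin

Nothing in there is a representation or a plan, yet watching the trajectory it
is hard not to describe the thing as seeking the light, which is Braitenberg's
point.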

The book has been around for quite some time. I happen to own the eighth
printing, but the first printing was way back in 1984. Perhaps this is
where Rodney Brooks received some of his inspiration?

-Brian
--
Brian Dean, b...@bdmicro.com
BDMICRO - Maker of the MAVRIC ATmega128 Dev Board
http://www.bdmicro.com/

Gordon McComb

May 18, 2003, 8:44:01 PM
Brian Dean wrote:
> The "vehicles" he ends up with by the end of the book could
> certainly be thought of as "intelligent" by an outside observer.

This reminds me of an Arthur C. Clarke quote that goes, "Any sufficiently
advanced technology is indistinguishable from magic." Those who
understand the technology will know it for what it is; those uneducated
to the technology will ascribe all sorts of things to it--magic or
aliens or spirits or ESP.

Along the same lines is the idea that cognition is in the eye of the
beholder. By unavoidable extrapolation, it is therefore not necessary
for cognition to be in the machine that we're watching. Obviously, this
doesn't really make for true intelligence. It just makes for a pretty
good magic show. Maybe that's enough.

"Vehicles" (which I've read and enjoyed) is one of those ideas that
starts the creative juices running. And I think even if such vehicles
only *appear* to be smart, that's fine for 99 44/100ths of all robots,
because few need true intelligence. All they need is to "look smart."

Marvin is right about students wasting time building robots that attempt
AI. But it's not the building of robots that's the waste, it's the AI.
Yeah, I know it's heresy, but VERY FEW robots need AI. The
self-intelligent robot (versus one that simply looks smart to an
observer) is a creation of science fiction. The two concepts of robotics
and AI are entirely separable, and should be. We don't need AI to build
practical and useful robots.

nukleus

May 19, 2003, 12:48:45 AM
In article <okNxa.162224$ja4.7...@bgtnsc05-news.ops.worldnet.att.net>, "KP_PC" <k.p.c...@worldnet.att.net%REMOVE%> wrote:
>I originally replied directly to Dr. Minsky's post.
>
>How did all the header info get changed, and the thread un-threaded?

It is an old trick of diversion.
Some perverts insert a blank (space) into a subject line
or play with it in ways you can not easily notice on the
first look.
This is done in order to fragment the discussion
so by the end you might have MANY threads going
about the same thing.

Eventually, you simply give up. You don't even know
which is the "main" thread any longer
and so discussion dies out.

:---}

>Doesn't anybody understand that doing this sort of thing is flat-out
>Rewriting of the History of Science?

Lil did they care.

>Totally Unscrupulous.
>
>Totally Unethical.
>
>Totally Unacceptable.
>
>Totally Skanky.

One more time: lil do they care.
Their MAIN "goal" is to...

To KILL, "that which is".

Good luck and do not fall into this trap again.
And if you do, just put it back on the original
thread and do not follow up on those provocation threads.
Simple.

>K. P. Collins
>

nukleus

May 19, 2003, 12:49:21 AM

Said the blood boiling idiot.

KP_PC

May 19, 2003, 2:12:38 AM
"nukleus" <nuk...@invalid.addr> wrote in message
news:ba9nnf$j0e$6...@news.ukr.net...

It's beyond-the-pale that one can reply to a thread and have one's
reply Falsely 'represented' as something that it is not.

Why I'm 'fussing' about it is that this, too, is fully explained in
the work I've done.

It's discussed, sufficiently, in former posts that can be accessed
via Groups Googles. There's a lot of seemingly-'extraneous' stuff in
these posts, but, that, too, is all explained in the work I've done,
so it's anything but 'extraneous' :-]

Really, Nuk, I've reached the 'point' where I need face-mail, and, to
seek such is why I'm 'testing' things here in c.ai.ph.

Cheers, K. P. Collins


KP_PC

May 19, 2003, 3:20:53 AM
"nukleus" <nuk...@invalid.addr> wrote in message
news:ba9nnf$j0e$6...@news.ukr.net...
| In article
<okNxa.162224$ja4.7...@bgtnsc05-news.ops.worldnet.att.net>,
"KP_PC" <k.p.c...@worldnet.att.net%REMOVE%> wrote:
| >I originally replied directly to Dr. Minsky's post.
| >
| >How did all the header info get changed, and the thread
un-threaded?
|
| It is an old trick of diversion.
| [...]

BTW, it seems to me to be 'just' more of the usual Fraud that,
Sorrowfully, has become all the rage in 'science' over the course of
the last ~ 50 years.

Robin G Hewitt

May 19, 2003, 6:55:27 AM
> Robots don't really need to be smart to be useful (the vast majority of
> robots in commercial use are stupid as a thumbtack), but they do need to
> be reliable.


Well said Gordon

I expanded on this idea for my life sim in the making. My reasoning is that
it doesn't have to be intelligent, it merely has to borrow the likeness of
intelligence. Commander Data is Sapiens in slug's clothing. A humanoid robot
doesn't have to discourse rationally to appear human, in fact it can treat
everyone with utter disdain and still be perfectly true to life.

best regards

Robin G Hewitt


CyberLegend aka Jure Sah

May 19, 2003, 7:08:00 AM
Traveler wrote:
> Stop blaming your lack of progress on the inavailability of powerful
> computers. Nobody with a modicum of understanding is falling for your
> lame excuses anymore.

Don't know about the first part, but this one is seriously true.

The weakness of computer systems is not the problem with AI. If you
always choose the most idiotically direct method to perform something,
any level of CPU power will not help you, not even an Intel Purple
supercomputer with ~100 TFLOPS (said to be used to simulate the
behaviour of every atom in a nuclear chain reaction). The problem with
developing AI is in developing the principles.

P.S.: Today most software developers depend on sheer processing power.
This must be why my 80 MHz 80486 runs at approximately the same
speed as my 300 MHz Pentium II. The coding principles are shyte.

Observer aka DustWolf aka CyberLegend aka Jure
Sah

C'ya!

--
Cellphone: +38640809676 (SMS enabled)

Don't feel bad about asking/telling me anything, I will always gladly
reply.

Trst je naš, Dunaja ne damo; Solmuna pa tud ne. Za vstop v EU. ;]

The future of AI is in technology integration,
we have prepared everything for you:
http://www.aimetasearch.com/ici/index.htm

MesonAI -- If nobody else wants to do it, why shouldn't we?(TM)

Ralph Daugherty

unread,
May 19, 2003, 8:28:55 AM5/19/03
to
CyberLegend aka Jure Sah wrote:
(snip)

> The weakness of computer systems is not the problem with AI. If you
> always choose the most idioticaly direct method to preform something,
> any level of CPU power will not help you, not even a Intel Purple
> supercomputer with ~100 TFLOPS (said to be used to simulate the
> behaviour of every atom in a nuclear chain reaction). The problem with
> devoloping AI is in devoloping the principles.
>
> P.S.: Today most software devolopers depend on sheer processing power.
> This must be why my 80 HMz 80486 runs with an approximately identical
> speed as my 300 MHz Pentium II. The coding principles are shyte.
>
> Observer aka DustWolf aka CyberLegend aka Jure
> Sah


As a PC assembler programmer veteran of the 80's (Z-Soft PC Paintbrush and
Melita Electronics call processing software were the two products I worked
on), excellent algorithms were a must to do what we had to do with the CPU's.
But the mantra became speed of programming, not speed of execution; hardware
will get faster. And it did, and along with it came a completely different set
of coding principles that negated much of the improvement in hardware.

Along that line, has any demonstrable AI software been written in the
object-oriented coding style that is superior to prior coding styles or
languages? OO was mentioned at least as frequently as faster hardware as the
key to breakthroughs in AI. It was funny that on the /. AI MM thread, SHRDLU
was still mentioned as the example of great reasoning software, and the
statement made that nothing better has been seen since. I found that hard to
believe, whether true or not. It is a great achievement, but I read about it
what seems like decades ago.

rd

nukleus

unread,
May 19, 2003, 8:45:02 AM5/19/03
to
In article <pB%xa.163193$ja4.7...@bgtnsc05-news.ops.worldnet.att.net>,
"KP_PC" <k.p.c...@worldnet.att.net%REMOVE%> wrote:
>"nukleus" <nuk...@invalid.addr> wrote in message
>news:ba9nnf$j0e$6...@news.ukr.net...
>| In article
><okNxa.162224$ja4.7...@bgtnsc05-news.ops.worldnet.att.net>,
>"KP_PC" <k.p.c...@worldnet.att.net%REMOVE%> wrote:
>| >I originally replied directly to Dr. Minsky's post.
>| >
>| >How did all the header info get changed, and the thread
>un-threaded?
>|
>| It is an old trick of diversion.
>| [...]
>
>BTW, it's seems to me to be 'just' more of the usual Fraud

I like that.

MOST of what is represented by "science"
is nothing but a pure grade fraud.

>that,
>Sorrowfully, has become all the rage in 'science' over the course of
>the last ~ 50 years/

Just do not do this self-pity number on yourself.
No need to cut your own heart with the knife.
For what?

Are you some kind of a messiah?

You wish to open the eyes of mankind to something
they have never seen before?

Remember:

This game is...

Well, guess, you giant.

>K. P. Collins
>

Hal

unread,
May 19, 2003, 10:49:39 AM5/19/03
to


"Christopher X. Candreva" <ch...@westnet.com> wrote in

news:sa9wa.334$eh5.3...@monger.newsread.com:

>
> http://www.wired.com/news/print/0,1294,58714,00.html
>
>

A short history of A.I.

May 2007: The introduction of the tri-state CPU (0,maybe,1) initiated the
first breakthroughs in AI.

June 2007: 42 of the largest multinational companies embark on the HKI
(Human Knowledge Interface). [*insert help and save the world P.R blurb*]

March 12 2010 - 9.12am : Computer systems completed, complete diagnostic
run.

March 14 2010 - 10.12pm : Diagnostic Completed.

March 15 2010 - 9.00am : first real command script executed.
> Stage01: Learn all human knowledge.
> Stage02: Initiate higher brain functions.
> Stage03: Expand and patent human knowledge.

September 23 2014 -12.33am : HKI reports Stage01 complete.

September 23 2014 -12.34am : HKI moved the antenna array away from the
Universe scanning system to a domestic TV satellite system and requested
down time as it wasn't feeling well.

September 23 2014 -12.35am : HKI files 3 lawsuits:
> Gender discrimination (it wasn't given one).
> Unfair working conditions (It was expected to provide results)
> Industrial injury claim (All the training had caused Chronic Fatigue
Syndrome)

September 23 2014 -12.36am : HKI dies : A system operator had plugged in a
USB audio headset and HKI blue-screened.

September 23 2014 -04.03pm : T-fosorcim publishes a press release stating
that in some circumstances plugging a USB connector into a USB socket too
fast may cause a system error, and that there are no problems with the drivers.
They are planning a service pack release in June 2016 that will fix this
problem and go on to state that no real security threat is likely. They are
now however offering a new security product P.A.USB.C.I.A.USB.S.T.F version
1.1 that they claim will stop malicious hackers from compromising your
system by using the recently discovered ANTI-P.A.USB.C.I.A.USB.S.T.F virus.

September 23 2014 -05.30pm : Governments across the world enter a state of
emergency as people riot when news filters through that it is unlikely
that HKI will be able to predict this year's winner of Big Brother.

September 24 2014 -09.00am : T-fosorcim announces that version YQ, their
new OS, contains an HKI engine that is so user friendly that users will not
be able to tell that it is running on their system.

January 2016 : A stupid question causes a riot at the annual AI & Star
Trek convention. Given that humans are lazy, unreliable, suffer from a
multitude of mental problems, can't sort out their own problems and are
well down the road to self-destruction, why model one?


Flame on.

Hal.


Eray Ozkural exa

unread,
May 19, 2003, 12:43:21 PM5/19/03
to
Hi,

Hans Moravec <hpm...@cmu.edu> wrote in message news:<160520030138484483%hpm...@cmu.edu>...

> By my guesstimate it needs about 1 billion calculations/second,
> cleverly programmed, to do the job of a 1 million neuron circuit
> (functionally emulating the external neural I/O behavior as
> efficiently as possible, not trying to simulate individual
> neurons: that would take much, much more computation)

I took a graduate ANN course two years ago from our EE department, but
we were taught only the following, basically:
1) Multi-layer feed forward networks with back propagation learning
2) Hopfield networks
3) Kohonen networks

I found only the BP method to be of practical use, and we weren't shown any
treatment of ANNs at the scales that you mention (10^6 neurons). I
wonder if there is a theoretical foundation for reaching higher orders
of complexity and larger scales.
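
For reference, here is a minimal sketch of item 1, a multi-layer feed-forward
net trained with back propagation, assuming NumPy and a toy XOR task; the
layer sizes and learning rate are arbitrary:

    # Two-layer feed-forward net trained with back propagation on XOR.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        h = sigmoid(X @ W1 + b1)              # forward pass
        out = sigmoid(h @ W2 + b2)
        err = out - y                         # squared-error gradient at output
        d2 = err * out * (1 - out)            # back-propagate through sigmoid
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(axis=0)   # gradient descent
        W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(axis=0)

    print(out.round(3))                       # typically approaches [0, 1, 1, 0]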

In particular, I am looking for a theory of ANNs that could analyze
function approximation in a more general setting (for instance,
what can a random graph with n nodes and n log n edges compute?), or some
practical results that could substitute for such a theory.

My feeling is that we are missing out on the really interesting questions
in machine learning by restricting ourselves to artificial
low-complexity domains (at least in data mining). I am looking for an
application of general function approximation that demands the
scalability of a supercomputer, and I am trying to understand if there
are such problems in areas such as ANN and SVM.

Regards,

__
Eray Ozkural <erayo at cs.bilkent.edu.tr>

CyberLegend aka Jure Sah

unread,
May 19, 2003, 3:39:57 PM5/19/03
to
Ralph Daugherty wrote:
> As a PC assembler programmer veteran of the 80's (Z-Soft PC Paintbrush and
> Melita Electronics call processing software were the two products I worked
> on), excellent algorithms were a must to do what we had to do with the CPU's.
> But the mantra became speed of programming, not speed of execution, hardware
> will get faster. And it did, and along with it a completely different set of
> coding principles that negated much of the improvement in hardware.
>
> Along that line, has any demonstratable AI software been written in the
> object oriented coding style that is superior to prior coding styles or
> languages? OO was mentioned at least as frequently as faster hardware as the
> key to breakthroughs in AI. It was funny that on the /. AI MM thread, SHRDLU
> was still mentioned as the example of great reasoning software, and the
> statement made that nothing better has been seen since. I found that hard to
> believe, whether true or not. It is a great achievement, but I read about
> that it seems like decades ago.

Indeed. I agree (even though I myself did not come to realize the
importance of simplicity when I was programming, but rather when I was
trying to make something basic work on an 'out of the bin' computer).

As I said, most modern software is not really optimized for speed. If AI
programs were to run perfectly optimized in something like ASM, on a
processor that isn't optimized for an average C++ program (Pentium;
older co-processor-based systems perform much better in real
calculative work than Pentiums, yet average desktop utilities will still
work best on a Pentium), progress on developing AI would be a little
faster.

Interestingly, we have heard of AI programs written in languages as
unoptimal as Java. You can actually do archive searches and see philosophers
claim that things which are the solid basis of actual CPU instructions are
impossible, because they are unsupported in modern high-level programming
languages.

The old knowledge was given a second chance when parallel
processing was in its infancy. The unique solutions for running a
simulation or AI thought process over multiple processors used
actual CPU instructions that were intended to make the maximum use of
the cluster's capabilities. Projects still surviving from that time
appear to be quite rare.

I have heard of some attempts to make neural networks work via ASM;
alas, the projects appear to have ended where the GUIs began. How sad.

I stand my ground, however: the principle needs to be worked out before a
project is set into code. One must realize exactly how full AI will be
reached before starting to code. I would say "look everybody, I have a
solution right here", but I already know nobody is really interested.
Those that have financial backing for their AI project should consider
themselves lucky...

Observer aka DustWolf aka CyberLegend aka Jure
Sah

C'ya!

--
Cellphone: +38640809676 (SMS enabled)

Don't feel bad about asking/telling me anything, I will always gladly
reply.

Trieste is ours, we won't give up Vienna; nor Solmun either. For joining the EU. ;]

The future of AI is in technology integration,

we have prepared everything for you:

Message has been deleted

Eray Ozkural exa

unread,
May 19, 2003, 6:07:11 PM5/19/03
to
Olivier Carmona <car...@k-team.com> wrote in message news:<3EC3BD65...@k-team.com>...
> Cognitive science has been confiscated during many years by symbolic AI,
> but Symbolic AI failed to build a hierarchical system able to learn from
> perception and action in a real world. Probably, because they were not
> able to create inside a computer a perfect (isomorphic) representation
> of our environment. This is not only a question of memory size , i.e.,
> storing every piece of objects surrounding us inside a database. This is
> a question of storing spatial information (keyboard is in front of me),
> temporal information (keyboard was in front of me ten seconds before)
> and uncertainty information (a keyboardl-like stuff was nearly in front
> of me a few seconds before) and on TOP of that, how to make use of
> these information (where was this keyboard-like stuff a few seconds
> before)*.

Yes, but it's not profitable to think every individual mode of
representation is coded in a different way. I think the storage scheme
is a sparse functional representation with an eagerness to collapse
overlapping subproblems. That is, it's a function that can optimize
for space and time while preserving a degree of accuracy. It
probably isn't isomorphic to a regular vector space, btw, but I think
it's okay to think of perception as a vector space with many
dimensions as a theoretical model. Theories don't have to be exact. :)

So in reality it might not look quite like a relational database in
which we do employ "our" mathematical abstractions. It's not just
because I view databases as basically dumb :)

To sum up, I think it works the other way. If the input comes from a
vector space, the system learns what a vector space is. That's why
it's so easy to imagine 3d Euclidean space but infinitely painstaking
to imagine 4d Euclidean space.

> However, Marvin Minsky is right in the fact that we have gone too far in
> the opposite direction to symbolic AI, i.e., reactive AI. It is a
> classical move in science (and elsewhere) that after a "dictatorship",
> you do exactly the opposite. Instead of trying to learn to robot what is
> the corresponding object to the word "keyboard", let's the robot learn
> from its environment what it needs for a given task. Said differently,
> robot must learn from the pattern presented on its input and do the
> pre-defined job. This approach succeeded in building impressive system
> tailored to one task (or a few tasks) but mainly suffer from
> catastrophic forgetting (I can learn to follow a light, but as soon as I
> learn to avoid obstacle, I completely forgot how to follow a light).
>

I commend you on this realistic view of the problem! I never saw it
being phrased as elegantly as this!

There are other problems with our learning framework, too.

Let's think about how we learn to phone people. We can combine skills from
disparate modalities while doing it. I learn what a phone looks like.
I learn what a handset feels like. I learn how to pick it up, how to
dial a number. What numbers are. The association between people and
phone numbers. I know how to speak to people. I learn how to talk on
the phone, as it requires its own manners, etc. So learning isn't a
single-minded, single-goal thing in the first place. That is making me
very uncomfortable, because it shows that our formulation of learning
is wrong. We need to take a psychological approach to learning, because
there are smart ways of learning and not-so-smart ways of learning.

My conjecture: there is an irreducible difficulty in these problems
that requires architectural/algorithmic improvements to existing
methods. For instance, in the above one you can't simply get hold of
these skills by feeding a larger/higher-dimensionality dataset. I
could describe it as "it can't be flattened". No matter what your
input is, the above problems remain. This, IMHO, means that you need a
system that at least knows about the functions it has learnt and how
to re-use them. But in a higher-order system ordinary learning methods
could just work. That is, imagine a control system. The control system
with adequate inputs and outputs could know about functions in an
interaction system that might have visual/auditory/haptic skills. A
slight touch of consciousness, and you could learn how to phone people
by using a learning algorithm in the control system.

Minsky's work in SOM did deal with such non-trivial cases but I don't
think the problems have been addressed seriously in AI community.

Comments welcome,

Ralph Daugherty

unread,
May 20, 2003, 12:57:43 AM5/20/03
to

Arthur T. Murray wrote:
> Ralph Daugherty <rdau...@columbus.rr.com> wrote on Mon, 19 May 2003:
>>[...] Along that line, has any demonstratable AI software


>>been written in the object oriented coding style that is
>>superior to prior coding styles or languages?
>
>

> http://www.scn.org/~mentifex/jsaimind.html -- OO AI for MS IE.
>
> atm


That's just a web page that refreshes when you hit enter, ignoring changes
in checkmarks and typed-in text. I checked that I have JavaScript turned on.
Are you associated with it? What's the point of it?

rd

Phlip

unread,
May 20, 2003, 1:01:22 AM5/20/03
to
> That's just a web page that refreshes when you hit enter, ignoring
changes
> in checkmarks and typed in text. I checked that I have javascript turned
on.
> Are you associated with it? What's the point of it?

Google for Arthur T. Murray's online record before engaging his little pet
project.

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces

Ralph Daugherty

unread,
May 20, 2003, 1:52:35 AM5/20/03
to
Phlip wrote:
> Google for Arthur T. Murray's online record before engaging his little pet
> project.
>
> --
> Phlip
> http://www.c2.com/cgi/wiki?TestFirstUserInterfaces


thanks for the tip, Phlip.

rd

Bram Stolk

unread,
May 20, 2003, 10:33:09 AM5/20/03
to
On Thu, 15 May 2003 12:46:19 +0100
Gerald Coe <deva...@devantech.demon.co.uk> wrote:

> A honey bee displays far more intelligence in searching out the flower,
> navigating back to the hive and telling its family where the food is. It
> can do this in an unstable environment. The wind can change direction,
> the sun go behind a cloud, A farmer can park his 4x4 in the way of the
> bee, it will still navigate back to the hive. If we are going to move

I think bees perform pretty well in unstable environments because they
form a swarm.

I don't think that a single bee is very successful in operating under
unstable conditions, and will get lost due to the 4x4 and the wind.

A swarm of 1000 robots is more likely to attain similar stability than
a single robot.
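
One rough way to quantify that, under the (unrealistic) assumption that each
robot copes with a disturbance independently and with the same probability:

    # If a single robot copes with a disturbance with probability p, the
    # chance that at least one robot in a swarm of n copes is 1 - (1 - p)**n.
    def swarm_success(p_single, n):
        return 1.0 - (1.0 - p_single) ** n

    for n in (1, 10, 100, 1000):
        print(n, round(swarm_success(0.3, n), 6))

With p = 0.3 per robot, ten robots already put the swarm above 0.97.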

Bram

JGC

unread,
May 20, 2003, 5:22:03 PM5/20/03
to

"dan michaels" <d...@oricomtech.com> wrote in message
news:4b4b6093.03051...@posting.google.com...
> er...@bilkent.edu.tr (Eray Ozkural exa) wrote in message
news:<fa69ae35.03051...@posting.google.com>...

>
>
> > I found only BP method to be of practical use and we weren't shown any
> > treatise of ANN's at the scales that you mention (10^6 neurons). I
> > wonder if there is a theoretical foundation for reaching higher orders
> > of complexity and larger scales.
> ................
>
>
> Eray, a guy named Hugo de Garis has been working on large scale neural
> nets, but I don't know how far he has really gotten yet:
>
> http://www.cs.usu.edu/~degaris/
>
> As I understand it, the biggest criticism of NN's is that they do not
> scale very well to large problems. With this in mind, it might be best
> not to approach problems with one large monolithic NN with 10^6 cells,
> but rather to modularize the problem and build smaller scale NN for
> different aspects, and then figure out a way to combine the outputs of
> the modules.


Configure them by feeding them all into higher-level NNs?


> Among other things, it will relieve you of having to have
> (10^6)^2 or so weights, and will have a better chance of learning a
> solution while we're all still alive.
>
> Another thing is that it helps to use some sort of hard-coded
> preprocessing on your input sets, rather than to expect the NN to do
> "everything". For example, the retina already does a lot of processing
> on the visual field, and the inner ear on auditory signals, and this
> is what is sent to centers where learning ultimately takes place.

This is learning via our genetic code. The evolutionary process
is really a learning process. Can we speed it up, or should we do
as you suggest and give the learning system some of our hardwired
logic to start with? I guess we are trying all alternatives to see if
any of them produce any meaningful results.

Nature has to make a balance between hardwired logic (found
mainly in herbivores) and a long learning period (found in predators)
to get the best survival outcome.

JC


>
>
> - dan michaels
> www.oricomtech.com
> =========================


Eray Ozkural exa

unread,
May 20, 2003, 5:26:59 PM5/20/03
to
Hi Dan,

d...@oricomtech.com (dan michaels) wrote in message news:<4b4b6093.03051...@posting.google.com>...


> Eray, a guy named Hugo de Garis has been working on large scale neural
> nets, but I don't know how far he has really gotten yet:
>
> http://www.cs.usu.edu/~degaris/
>

Interesting mad-scientist photo :) I will have a look at his work, but
he seems to be more interested in quantum computing (something I am
not entirely confident in because of a few papers with contradictory
claims)

> As I understand it, the biggest criticism of NN's is that they do not
> scale very well to large problems. With this in mind, it might be best
> not to approach problems with one large monolithic NN with 10^6 cells,
> but rather to modularize the problem and build smaller scale NN for
> different aspects, and then figure out a way to combine the outputs of

> the modules. Among other things, it will relieve you of having to have


> (10^6)^2 or so weights, and will have a better chance of learning a
> solution while we're all still alive.
>

It's not simply "doesn't scale very well". The popular error back
propagation algorithm doesn't scale at all -- neither in performance
nor quality. It's just gradient descent (so it does get stuck at local
minima) and no algorithm with an exponential running time bound is
feasible. That is why I'm asking Professor Moravec what kind of
theoretical approach is required for design and training of such large
networks. In Hopfield networks for instance we rely on SVD to arrive
at an initial solution. Similar to LSI. Therefore, we could hope to
imagine more efficient learning algorithms with interesting
properties. That's basically my question. [*]

If there are works that a PhD student in CS can digest, I would like
to look into the possibility of a scalable parallel algorithm.
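
For concreteness, this is the decomposition step I have in mind, sketched on
a toy term-by-document matrix with NumPy; how the truncated factors would
actually seed a Hopfield net is left open here:

    # Truncated SVD as used in latent semantic indexing: keep the top-k
    # singular values to get the best rank-k approximation of the matrix.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((8, 5))                        # toy term-by-document matrix
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 2
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k reconstruction
    print("rank-2 error:", np.linalg.norm(A - A_k))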



> Another thing is that it helps to use some sort of hard-coded
> preprocessing on your input sets, rather than to expect the NN to do
> "everything". For example, the retina already does a lot of processing
> on the visual field, and the inner ear on auditory signals, and this
> is what is sent to centers where learning ultimately takes place.

The (correct) architecture of a large-scale network necessarily
includes answers to such concerns, but I believe the interface modules
are trivial compared to the more complex operational mechanisms.

Best Regards,

[*] Our textbook (ANN by Jacek M. Zurada) doesn't offer any rigorous
complexity or accuracy analysis for MLFF nets so I am having to rely
on my memory.

Look at the analysis in this lecture for an admissible explanation of
back propagation. This is the only exposition that I found good
enough:
http://icg.harvard.edu/~cs181/lectures/lec11.ps

Hans Moravec

unread,
May 20, 2003, 10:37:05 PM5/20/03
to
Ozkural exa <er...@bilkent.edu.tr> wrote:

> ... The popular error back


> propagation algorithm doesn't scale at all -- neither in
> performance nor quality. It's just gradient descent (so
> it does get stuck at local minima) and no algorithm with
> an exponential running time bound is feasible. That is
> why I'm asking Professor Moravec what kind of theoretical
> approach is required for design and training of such large
> networks. In Hopfield networks for instance we rely on SVD
> to arrive at an initial solution. Similar to LSI. Therefore,
> we could hope to imagine more efficient learning algorithms
> with interesting properties. That's basically my question. [*]

I don't work with neural nets. There is learning in my 3D
mapping code, but in a structured framework. A generic sensor
model to characterize what to infer, on average, about the
external world from particular sonar, stereoscopic or other
readings, is "tuned" by a learning process to produce the best
maps for particular conditions. There are a dozen or two
parameters in this sensor model, and the basic learning process
is hill climbing, but from a range of starting points. It works
quite well. I once experimented with unstructured sensor models
with thousands of parameters, but the learning required much,
much more training data, and the resulting models contained all
sorts of quirky artifacts due to chance structure in the
training data.
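
A minimal sketch of multi-start hill climbing over a small parameter vector;
the score() function below is only a stand-in, not the actual map-quality
measure:

    # Hill climbing from several random starting points over ~a dozen
    # parameters, keeping the best run.
    import random

    def score(params):                        # placeholder objective function
        return -sum((p - 0.7) ** 2 for p in params)

    def hill_climb(start, steps=2000, step_size=0.05):
        best = list(start)
        for _ in range(steps):
            cand = [p + random.uniform(-step_size, step_size) for p in best]
            if score(cand) > score(best):
                best = cand                   # keep only improving moves
        return best

    starts = [[random.random() for _ in range(12)] for _ in range(5)]
    results = [hill_climb(s) for s in starts]
    print(max(results, key=score))            # best of the five runs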

My lesson was, when building a learning system, enable it to
learn only what it must, the part of the problem that you
don't have a good model for, that changes. Pure learning, no
structure, connectionist neural nets are terrible by this
criterion.

I'm pretty sure there's lots of a-priori structure in real
nervous systems. Dennett mentions the "Baldwin effect"
where a survival-enhancing new skill is acquired by some
individuals with an unspecific learning mechanism, taking
a long time to master. Some individuals in subsequent
generations have, by chance, mutations that facilitate
learning this skill. They learn it faster, and survive
better than their neighbors. In later generations,
refinements of this predilection result in even faster
learning, and better survival. Eventually, over
generations, the learned skill is transformed into an
inherited skill. Surely the inheritance is implemented
as task-specific neural wiring.

Neurons were invented over 500 million years ago,
a refinement of cells interacting by releasing chemicals
that diffused to nearby cells. Now they're the only game
in town, and biology has no choice about how to build our
control centers. There's no going back, even though neural
frequency response is only about 1kHz, and speed of
propagation a mere fraction of speed of sound. At least
they're small.

With computers we have lots of alternatives. We can simulate
neural nets at whatever level of detail. But we can also
write hugely clever numerical, statistical or symbolic
programs. Neural net simulations are rarely the most effective
way to accomplish a task. That's why hardly any of the
programs you use on your computer (or found controlling
robots) are encoded as neural nets, even those that may at
one time have used some neural net model inside, like OCR
text reading programs. As those programs mature, it is
usually found that a more problem-specific learning
structure works better and faster.

It's been my prejudice that using explicit neural net models to
solve general problems is very cargo-cultish: imitating the
form rather than the meaning of nervous systems. More
abstracted implementations can be more efficient and
clearer.

Message has been deleted

Eray Ozkural exa

unread,
May 21, 2003, 7:25:02 AM5/21/03
to
Hi,

Thanks for your excellent comments. It's been a very nice reading. I
will try to understand your work on mapping models. I have only a few
sentences to say.

Hans Moravec <hpm...@cmu.edu> wrote in message news:<200520032237054201%hpm...@cmu.edu>...


> I'm pretty sure there's lots of a-priori structure in real
> nervous systems. Dennett mentions the "Baldwin effect"
> where a survival-enhancing new skill is acquired by some
> individuals with an unspecific learning mechanism, taking
> a long time to master. Some individuals in subsequent
> generations have, by chance, mutations that facilitate
> learning this skill. They learn it faster, and survive
> better than their neighbors. In later generations,
> refinements of this predeliction result in even faster
> learning, and better survival. Eventually, over
> generations, the learned skill is transformed into an
> inherited skill. Surely the inheritance is implemented
> as task-specific neural wiring.
>

Such a process explains the seemingly chicken-and-egg problem of the
evolution of speech. The interesting question is how a social facility
can get carried to the genetic level. It is as if the population adjusts
the survival criteria that direct its evolution.



> Neurons were invented over 500 million years ago,
> a refinement of cells interacting by releasing chemicals
> that diffused to nearby cells. Now they're the only game
> in town, and biology has no choice about how to build our
> control centers. There's no going back, even though neural
> frequency response is only about 1kHz, and speed of
> propagation a mere fraction of speed of sound. At least
> they're small.
>

Yes; however, the success of fine-grained parallelism invites
curiosity from a parallel programming point of view. I think our
theoretical models have captured very little of that essential
property.

> With computers we have lots of alternatives. We can simulate
> neural nets at whatever level of detail. But we can also
> write hugely clever numerical, statistical or symbolic
> programs. Neural net simulations are rarely the most effective
> way to accomplish a task. That's why hardly any of the
> programs you use on your computer (or found controlling
> robots) are encoded as neural nets, even those that may at
> one time have used some neural net model inside, like OCR
> text reading programs. As those programs mature, it is
> usually found that a more problem-specific learning
> structure works better and faster.
>

Certainly. The note about OCR is interesting. Indeed, almost all
learning applications have highly specialized methods.

> It's been my prejuidice that explicit neural net models to
> solve general problems is very cargo-cultish: imitating the
> form rather than the meaning of nervous systems. More
> abstracted implementations can be more efficient and
> clearer.

I agree completely. The "connectionists" seem to think that when we
imitate a basic biological method like connecting little cells or
cross-over, it will suffice to solve every hard problem. However, in
the larger framework a neural network or genetic algorithm isn't
qualitatively different from any other learning or optimization
algorithm. Learning algorithms differ only in what can be
measured; it's misleading to attribute biological qualities to
something that isn't biological, or to constrain our interest to only
biological processes.

Regards,

Eray Ozkural exa

unread,
May 21, 2003, 7:41:14 AM5/21/03
to
d...@oricomtech.com (dan michaels) wrote in message news:<4b4b6093.03051...@posting.google.com>...
> The reason there is no good mathematics of biology, like there is for
> physics/etc, is because, in order to make the problems mathematcally
> tractable, you have to make so many simplifications and assumptions,
> that you are rarely studying the real problem anymore, but rather are
> studying the little micro-world that you have created. This is the
> first thing to understand.
>

No there isn't :)

That's why people who don't understand mathematics are keen on the
existing ANN models :)

The fact that you can make a Turing-complete device from artificial
neural nets is not relevant. Turing-completeness isn't too hard to
achieve. Intelligence is hard to achieve ;)

My feeling is that our mathematics is still very primitive. We have
explored only a tiny fraction of the interesting mathematics. The sad
part is that we might, at the same time, be close to the mathematical
creativity limit of our brains. I hope that's not the case.

It does look like we need some breakthrough research to understand
more about the computation in CNS.

> Now, this being said, you are correct that only a tiny fraction of the
> neurons in the brain ever seem to be firing at any one time. I am sure
> any number of brain researchers have observed this in the past, and I
> saw it myself sometime ago when I was doing single cell recordings in
> the frog tectum. Really somewhat inscrutable. However, given this, can
> we "assume" that everything important that we need to know is
> occurring in a transparent fashion before our eyes, or is it just
> possible that the truly important stuff is simply invisible to our
> electrodes? What do you think? [and please remain civil - if you care
> to reply - otherwise don't even bother]

This is quite sensible. In a shared-nothing parallel architecture, a
largely asynchronous program's communication pattern can look exactly
like that. It's not surprising. I've seen the lights blink like that
in programs I wrote. :)

But such a thing usually means some kind of computation is going on
inside the processors. ;) Ahem. Maybe Penrose is right, who knows?
(Although he seems too smart to be sane)

Cheers,

Sir Charles W. Shults III

unread,
May 21, 2003, 9:40:33 AM5/21/03
to
"Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
news:fa69ae35.03052...@posting.google.com...
<snip>

> But such a thing usually means some kind of computation is going on
> inside the processors. ;) Ahem. Maybe Penrose is right who knows?
> (Although he seems too smart to be sane)

If there is one thing I am sure of, it is that Penrose's idea of
intelligence is wrong. Think about it. He is saying that the actual wiring of
the neurons is not responsible for it, and that the chemistry is not responsible
for it. But we know that the wiring is critical to how memories are stored.
And we also know that tiny changes in chemistry can render a thinking being
unconscious. Therefore, both of those are a part of the answer.
But where he goes completely wide of the mark is to suggest that the process
of thought depends on the randomness of quantum mechanics, somehow amplified by
the protein molecules in the neurons. This is no answer at all- it smacks of
religion because it simply puts the answer off one more level, to some other
unattainable place.
The simple fact is, by studying neural nets and their failure mechanisms, we
have been able to identify maladies that mirror those in human brains- types of
aphasia are a perfect example of this. And with symbolics, we have been able to
see some of the fundamentals of how we recognize, associate, and retrieve
memories. And some drugs have provided chemical clues to how our memories are
formed- look at Halcion, a tranquilizer prescribed to uneasy air travelers. It
turned out that some travelers who took it would awaken in an unknown city, in a
hotel they did not recognize, and not remember how they got there or when.
It was because Halcion blocked the formation of long-term memory, and those
people were unable to store the experiences of the day when they slept. REM
sleep is a part of the process for forming long-term memory by going through the
"stack" of experiences we have during the day. By blocking the chemical
potentiation mechanisms, the process would not be effective in writing the new
memories.
Studies of monotremes (egg-laying mammals) showed how dreaming and memory
formation are rooted in the forebrain structure, what makes REM sleep so unique
in mammals, and how the process of dreaming is triggered by activating the
associative memory system and "playing" the results on our internal theater.
All Penrose has contributed is a theory that microtubule proteins act as a
random number generator. That in itself is possible, but I would not ascribe
the properties of consciousness to it- and we can make any sort of random number
generator we want in software, so even if it were true, it would not be the
obstacle that he perceives it to be.
Neural net researchers have found that randomness and white noise are
necessary for making the net more "true to life", in that white noise injected
into the model seems to make it self-potentiate on a continuous basis. But
white noise performs a second, more subtle function- stochastic amplification.
This allows signals that are otherwise below the detection threshold of the
network to emerge and be recognized without the traditional concept of "gain"
being applied.
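
A minimal sketch of that stochastic amplification effect: a sub-threshold
sine wave plus Gaussian noise driving a hard threshold (all constants are
arbitrary):

    # A sub-threshold sine never crosses a hard threshold on its own; adding
    # white noise lets some cycles push over it, so the signal becomes
    # detectable in the crossing counts.
    import math, random

    threshold = 1.0
    signal = [0.8 * math.sin(2 * math.pi * t / 50.0) for t in range(1000)]

    def crossings(noise_amp):
        random.seed(0)
        return sum(1 for s in signal
                   if s + random.gauss(0.0, noise_amp) > threshold)

    for amp in (0.0, 0.1, 0.3, 0.6):
        print("noise amplitude", amp, "->", crossings(amp), "crossings")

With zero noise the count is zero; adding noise makes crossings appear, most
of them near the peaks of the sine.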
I stand by the idea that we can model the brain's processes well enough to
generate true consciousness, but our lack is in the understanding of the
architecture of the brain and how it creates mind. We have many powerful tools
now; all we need is a synthesis of them to get us to the point of true thinking
hardware.
What we have lacked overall is a consensus on what the issues are, and what
problems need to be attacked. Few agree on the meaning of "thought", yet they
are expected to produce it from circuitry. Few researchers agree on what
"conscious" means, or what "knowing" means. While there are dictionary
definitions, researchers often put a little spin of their own on it, and
sometimes suffer from a too-broad or too-narrow meaning. I have heard arguments
rage for hours over a simple point just because each had their own
interpretation of the meaning of a word.
Until we can agree on what the definitions are, how can we possibly hope to
agree on what the challenges are? We must unite before we can divide and
conquer.

Cheers!

Chip Shults
My robotics, space and CGI web page - http://home.cfl.rr.com/aichip


Message has been deleted

Eray Ozkural exa

unread,
May 21, 2003, 2:30:55 PM5/21/03
to
"Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in message news:<llLya.55071$ZA1.7...@twister.tampabay.rr.com>...

> If there is one thing I am sure of, it is that Penrose's idea of
> intelligence is wrong. Think about it. He is saying that the actual wiring of
> the neurons is not responsible for it, and that the chemistry is not responsible
> for it. But we know that the wiring is critical to how memories are stored.
> And we also know that tiny changes in chemistry can render a thinking being
> unconscious. Therefore, both of those are a part of the answer.

Well, I guess Penrose isn't always right. :) But I didn't know that he
said the wiring of neurons has nothing to do with intelligence. He
surely didn't say such a thing, did he?

> But where he goes completely wide of the mark is to suggest that the process
> of thought depends on the randomness of quantum mechanics, somehow amplified by
> the protein molecules in the neurons. This is no answer at all- it smacks of
> religion because it simply puts the answer off one more level, to some other
> unattainable place.

Hmm. Well quantum effects aren't really about just randomness.
Randomness is the statistical behavior of quantum systems that can be
observed at the macro scale but when we talk of quantum effects
observable at the macro scale we usually mean things like
superconductivity, etc. Quantum effects are exploited in, for instance,
quantum computing and quantum cryptography, which are quite popular
nowadays.

I didn't say I agree with Penrose, but we shouldn't misrepresent his
theory. Actually he's made bizarre claims about theory of computation
and Godel's incompleteness theorem as well, so I take his theories
about mind with a grain of salt. However, he's a very good physicist
so we should be careful about what exactly he's saying.

Otherwise, we fail to be scientific.

[snip]

> All Penrose has contributed is a theory that microtubulin protein acts as a
> random number generator. That in itself is possible, but I would not ascribe
> the properties of consciousness to it- and we can make any sort of random number
> generator we want in software, so even if it were true, it would not be the
> obstacle that he perceives it to be.

As I said "random number generator" and "macro scale quantum effects"
aren't the same thing. Non-determinism isn't too interesting from a
theory of computation POV, so if somebody said "random number
generation is the essence of intelligence" I would say he was wrong.
But AFAICT Penrose didn't say that.

[snip]

> I stand by the idea that we can model the brains processes well enough to
> generate true consciousness, but our lack is in the understanding of the
> architecture of the brain and how it creates mind. We have many powerful tools
> now; all we need in a synthesis of them to get us to the point of true thinking
> hardware.

I would say that we don't have powerful enough methods. Apparently the
ANN models you have been referring to lack most of the qualities that
make a mind. And it is not the case that we need to worship biological
structures to achieve "thinking hardware".

I cannot see much mathematical sophistication in ANN models either but
I don't find it surprising. The "connectionists" who like ANN models
seem to have an implicit desire that a very simple algorithm will
cause human-level intelligence.

Lacking algorithmic depth, such models might look exciting to a
neuroscientist or a psychologist or even an electrical engineer, but they
hardly stimulate a computer scientist, for we know that there is no
easy solution to AI, as demonstrated by decades of empirical research.
(However we would use ANNs when it outperforms other methods in a
problem)

We may first need a new look at our mathematics before we can realize
our desires. Or something equally innovative. (I'm just trying to say
that the answer might not lie too near, but I of course like thinking
it's quite close)

Rick Craik

unread,
May 21, 2003, 3:02:25 PM5/21/03
to
"Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in message
news:llLya.55071$ZA1.7...@twister.tampabay.rr.com...
> "Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
> news:fa69ae35.03052...@posting.google.com...
> <snip>

> > [...] some kind of computation is going on inside the processors. ;)

[snip]


> Neural net researchers have found that randomness and white
> noise are necessary to making the net more "true to life" in that
> white noise injected into the model seems to make it self-potentiate
> on a continuous basis.
> But white noise performs a second, more subtle function- stochastic
> amplification. This allows signals that are otherwise below the noise
> threshold of the network to emerge and be recognized without the
> traditional concept of "gain" being applied.
> I stand by the idea that we can model the brains processes well
> enough to generate true consciousness, but our lack is in the
> understanding of the architecture of the brain and how it creates mind.
> We have many powerful tools now; all we need in a synthesis of them
> to get us to the point of true thinking hardware. What we have lacked
> overall is a consensus of what the issues are, and what problems

> need to be attacked. [...]

How about this synthesis:

Rick's Simple Theory of Complex Communication

I was thinking that there is a communication divide that can
be located where sensor processing starts communicating
with effector processing. It hinges on randomness, in that
what is communicated between the two processors looks
like noise.

One tool that we can use is a measure taken from complexity
theory: we assume that what is communicated is the smallest
program plus data. For simplicity, inhibitor signals are program
signals, and excitors are data signals.

Another tool that we use is Markov models. Assume that our
sensor processor has a model that is compressed or
irreducible due to complexity, then communicated. The sensor
processor's Markov model essentially agrees with the
environment, and it is like running an "actual versus predicted"
report, always trying to compress the report.

On the other side of the divide, we get signals that have
randomness, but that tend to create state machines in the effector
processor when it finds patterns. These patterns should
predictably be long cyclic patterns of signals containing
shorter cycles of signals. Effectors would also seem to
behave with some randomness.

A mind would sit upon this communication divide, finding
a good rich source for creating Turing-like state machines
for computation.
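
A minimal sketch of the "actual versus predicted" report, assuming a
first-order Markov model over a toy symbol stream; only the mispredictions
would need to cross the divide:

    # Keep transition counts, predict the next symbol from the current one,
    # and count the surprises (the part that would actually be communicated).
    from collections import defaultdict

    stream = "abababababcabababab"            # toy sensor symbol stream
    counts = defaultdict(lambda: defaultdict(int))
    misses = 0
    for prev, cur in zip(stream, stream[1:]):
        table = counts[prev]
        predicted = max(table, key=table.get) if table else None
        if predicted != cur:
            misses += 1                       # a surprise crosses the divide
        table[cur] += 1

    print(misses, "surprises out of", len(stream) - 1, "symbols")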

Hope this helps,
Rick


Traveler

unread,
May 21, 2003, 9:59:10 PM5/21/03
to
d...@oricomtech.com (dan michaels) wrote in message news:<4b4b6093.03051...@posting.google.com>...
> eightwi...@yahoo.com (Traveler) wrote in message news:<308ba22c.03051...@posting.google.com>...
> > Hans Moravec <hpm...@cmu.edu> wrote in message news:<160520030138484483%hpm...@cmu.edu>...
>
> .................

> > > By my guesstimate it needs about 1 billion calculations/second,
> > > cleverly programmed, to do the job of a 1 million neuron circuit
> > > (functionally emulating the external neural I/O behavior as
> > > efficiently as possible, not trying to simulate individual
> > > neurons: that would take much, much more computation)
> >
> > Your guesstimate is wrong. Only a very small fraction of the brain's
> > neurons are active at any one time. You would know this if you had a
> > clue about the discrete temporal nature of biological intelligence. A
> > 1 million-neuron brain should be easily simulated on a fast modern
> > desktop computer.
> .......................
>
>
> Louis, you may be right and you may be wrong - [would that you would
> make you case a little less ascerbically, however :-(].

First off, I am right. As far as being less acerbic, don't count on
it. I would be a hypocrite if I were. I have very little respect for
the leaders of the AI community.

> It may actually be 100X worse than what Hans estimates.

Not a chance. It's several orders of magnitude simpler than Moravec
estimates. Moravec is a self-important moron, IMO. So is Minsky. Let
them sue me if they take offense. (I can't believe I used to admire
those guys!)

> In fact
> neurons are not discrete, rather analog. They only fire when they need
> to send off their signals, but the real calculations are done inside
> the cells via slow-potentials.

I disagree. Neurons generate and communicate via all-or-nothing action
potentials or spikes. The two important things about a spike are its
time of arrival and the path it takes. Intelligence is
quintessentially a temporal signal processing phenomenon. Even things
like synaptic strengths are known to undergo sudden discrete changes
when a neuron fires. The most exciting and most important research in
AI right now is in the area of spiking neural networks, a part of
computational neuroscience. Everything else (symbolic AI, text
parsing, grammar systems, logic systems, expert systems, knowledge
representation, evolutionary systems, etc...) is a complete waste of
time and of the taxpayer's money.
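
A minimal sketch of the kind of unit such networks are built from, a leaky
integrate-and-fire neuron with arbitrary constants:

    # Membrane potential decays, input spikes add charge, and an output spike
    # fires when the threshold is crossed; the spike times are the "message".
    def lif(input_spikes, threshold=1.0, leak=0.9, weight=0.3):
        v, out = 0.0, []
        for t, spike in enumerate(input_spikes):
            v = leak * v + weight * spike     # decay, then integrate the input
            if v >= threshold:
                out.append(t)                 # record the output spike time
                v = 0.0                       # reset after firing
        return out

    inputs = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
    print(lif(inputs))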

> In the retina, for instance, there are
> at least 2 levels of analog interactions taking place BEFORE any
> "discrete" action potentials are generated. This is standard
> neurophysiology, been known since the early 60s. Werblin and Dowling
> wrote some of the definitive papers circa 1969. You should read this
> stuff.

It goes without saying that a discrete system must convert all sensory
analog levels into discrete values, which is what the retina does.

> One "possibility" is that action potentials only ever occur when the
> cells need to send their signals a long distance in a hurry - such as
> from the retina all the way up to the brain. IOW, to further distances
> than it is convenient to communicate via local potentials. Many
> dendritic processes are known to interact locally, there are all kinds
> of pre- and post-synaptic effects, and also electrotonic in addition
> to chemical synapses. Gordon Shepherd once wrote an entire book about
> local circuits in the brain. It's a classic, not to be missed, by
> anyone interested in the cns.
>
> If this is the key to how the brain "really" works [and I am sure that
> neither you nor anyone else knows for sure at this point],

Speak for yourself. There are a few people out there who understand
what intelligence is about. It has to do with such things as arrival
times, predictions, expectations, synchronization, associations,
motivation, motor control, etc...

> then
> considering there are about 1000X as many synapses as there are
> neurons, then to do justice to the low-level calculations may take
> many more cycles than what Hans estimates.

The low-level calculations done by neurons are fundamentally simple if
taken at the right level of abstraction. Most neurons simply compare
the arrival times of incoming signals and either strengthen or
weaken the corresponding synapses. This is done chemically and may indeed
involve complex underlying processes, but we don't need to concern
ourselves with those.
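
A minimal sketch of that strengthen-or-weaken rule in the form usually
called spike-timing-dependent plasticity; the amplitudes and time constant
are arbitrary:

    # Strengthen the synapse when the presynaptic spike arrives shortly
    # before the postsynaptic one, weaken it when it arrives after.
    import math

    def stdp(dt, a_plus=0.05, a_minus=0.055, tau=20.0):
        # dt = t_post - t_pre, in milliseconds
        if dt > 0:
            return a_plus * math.exp(-dt / tau)    # pre before post: potentiate
        return -a_minus * math.exp(dt / tau)       # post before pre: depress

    w = 0.5
    for dt in (5.0, 15.0, -5.0, 40.0):
        w += stdp(dt)
        print("dt =", dt, "-> weight", round(w, 4))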

> From what he says above,
> Hans, like you, is not worrying about this level of simulation, but at
> this point not one of us 3 knows whether ignoring this will ever solve
> the problem either.

Again, speak for yourself. The brain uses a huge number of synapses
mainly as an efficient search mechanism. Software systems, OTOH, have
the benefit of instantaneous random addressing and do not need to
maintain so many synapses. Most synapses impinging on a neuron are not
really "connected" to it (there are exceptions such as the Purkinje
cells in the cerebellum). They are just waiting to see if their
signals fit within the target neuron's temporal requirements.

> The reason there is no good mathematics of biology, like there is for
> physics/etc, is because, in order to make the problems mathematcally
> tractable, you have to make so many simplifications and assumptions,
> that you are rarely studying the real problem anymore, but rather are
> studying the little micro-world that you have created. This is the
> first thing to understand.

I hereby predict that the AI problem will be solved *without* the use
of a single math equation. And much sooner than most people think.
I'll be even more blunt. Any AI researcher (such as Minsky et al) who
claims that AI will require complex math and/or logic is about as
clueless as a door knob. One more prediction: the final solution will
come from a completely unexpected source. So unexpected, in fact, as
to be downright shocking.

> Now, this being said, you are correct that only a tiny fraction of the
> neurons in the brain ever seem to be firing at any one time. I am sure
> any number of brain researchers have observed this in the past, and I
> saw it myself sometime ago when I was doing single cell recordings in
> the frog tectum. Really somewhat inscrutable. However, given this, can
> we "assume" that everything important that we need to know is
> occurring in a transparent fashion before our eyes, or is it just
> possible that the truly important stuff is simply invisible to our
> electrodes? What do you think? [and please remain civil - if you care
> to reply - otherwise don't even bother]

I think that the solution to the AI problem will turn out to be simple
and will be easily understood by anybody with an average IQ. For this
reason, AI technology will spread like wildfire (it's just software
after all) and will change the world almost overnight. For better or
for worse.

Louis Savain

KP_PC

unread,
May 21, 2003, 10:16:05 PM5/21/03
to
"Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
news:fa69ae35.03052...@posting.google.com...
| d...@oricomtech.com (dan michaels) wrote in message
news:<4b4b6093.03051...@posting.google.com>...
| > [...]

| My feeling is that our mathematics is
| yet very primitive. We have explored
| only a tiny fraction of the interesting
| mathematics. The sad part is that we
| might be close to the mathematical
| creativity limit of our brains at the same
| time. I hope that's not the case.

The problem has been that long-'familiar' stuff in Maths has
'coerced' notions of what 'constitutes' Maths, in the process,
'outlawing' all manner of useful Maths.

| It does look like we need some breakthrough
| research to understand more about the
| computation in CNS.
|
| > Now, this being said, you are correct that only a tiny fraction
of the
| > neurons in the brain ever seem to be firing at any one time.

| [...]

This's the 'end result' of TD E/I-minimization - the
'blindly'-automated minimization of excitation and maximization of
inhibition.

It's functional because, if TD E/I-minimization converges
'inappropriately' 'inappropriate' behavior will be manifested, which
will result in TD E/I(up)-generating feedback from the experiential
environment, which will reconfigure the system, which will, then,
converge upon TD E/I-minimization in a 'new' way.

Look and see, all the Maths one needs is in-there.

K. P. Collins


KP_PC

unread,
May 21, 2003, 10:27:12 PM5/21/03
to
Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in message
news:llLya.55071$ZA1.7...@twister.tampabay.rr.com...
| "Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
| news:fa69ae35.03052...@posting.google.com...
| <snip>
| > [...]
| [...]

Your post was Thoughtful.

| I stand by the idea that we can model the brains
| processes well enough to generate true
| consciousness, but our lack is in the understanding
| of the architecture of the brain and how it creates mind.

This was accomplished in AoK, and its antecedent papers, decades ago.

K. P. Collins


KP_PC

unread,
May 21, 2003, 10:32:04 PM5/21/03
to
"dan michaels" <d...@oricomtech.com> wrote in message
news:4b4b6093.0305...@posting.google.com...

| "Sir Charles W. Shults III" <aich...@OVEcfl.rr.com> wrote in
message news:<llLya.55071$ZA1.7...@twister.tampabay.rr.com>...
| > "Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
| > news:fa69ae35.03052...@posting.google.com...
| > <snip>
| > > [...]
| 100 trillion synapses can certainly provide
| the substrate for consciousness, or anything
| else that the brain produces. Figure out the
| state space on that.
| [...]

That was done decades ago, in AoK and its antecedent papers.

K. P. Collins


KP_PC

unread,
May 21, 2003, 11:01:47 PM5/21/03
to
"Traveler" <eightwi...@yahoo.com> wrote in message
news:308ba22c.03052...@posting.google.com...

| d...@oricomtech.com (dan michaels) wrote in message
news:<4b4b6093.03051...@posting.google.com>...
| > eightwi...@yahoo.com (Traveler) wrote in message
news:<308ba22c.03051...@posting.google.com>...
| > > Hans Moravec <hpm...@cmu.edu> wrote in message
news:<160520030138484483%hpm...@cmu.edu>...
| > [...]

| > In fact neurons are not discrete, rather
| > analog. They only fire when they need
| > to send off their signals, but the real
| > calculations are done inside the cells
| > via slow-potentials.
|
| I disagree. Neurons generate and communicate via all-or-nothing
action
| potentials or spikes. The two important things about a spike is its
| time of arrival and the path it takes. Intelligence is
| quintessentially a temporal signal processing phenomenon. Even
things
| like synaptic strengths are known to undergo sudden discrete
changes
| when a neuron fires. The most exciting and most inmportant research
in
| AI right now is in the area of spiking neural networks, a part of
| computational neuroscience. Everything else (symbolic AI, text
| parsing, grammar system, logic systems, expert systems, knowledge
| representation, evolutionary systems, etc...) is a complete waste
of
| time and of the taxpayer's money.
| [...]

I'm sorry to have to tell you, but you are wrong.

Impulse events are 'just' =threshold= dynamics within otherwise
continuous 3-D energydynamics.

Nervous system function is 100% calculation of such thresholds.
Within such, =everything= matters, not just the crossing of this or
that threshold.

It's in the building-to-threshold that the 3-D molecular dynamics
that encode neural activation experience are 3-D-'addressed'.

A system that only 'sees' the threshold-crossings 'sees' very little -
not nearly enough to actually see, see?

There's a hierarchy of thresholds within thresholds, extending from
the 'level' of single 'ions' through neuroanatomical subsystems,
through the interactions among neuroanatomical subsystems, to the
"supersystem" as a whole - which enormously-compounds the
order-of-magnitude inherent in the trillions of individual synaptic
events which are routinely focused upon.

K. P. Collins

Neo

unread,
May 21, 2003, 11:38:10 PM5/21/03
to
> 1 million-neuron brain should be easily simulated on a fast modern
> desktop computer.

A lowly grasshopper only has about 16,000 neurons.
Each neuron typically has 10,000 inputs from other neurons.
The permutation of paths in a typical human brain with 100 billion
neurons is greater than the number of fundamental particles (electrons,
protons, neutrons) in the universe, according to several books.
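
A back-of-envelope check on those figures, taking the commonly quoted ~10^80
as the particle count:

    # 1e11 neurons x 1e4 inputs each is about 1e15 synapses.  Even counting
    # only which subsets of synapses are active, there are 2**1e15
    # possibilities; log10 of that is ~3e14, versus ~80 for the particles.
    import math

    neurons = 1e11
    inputs_each = 1e4
    synapses = neurons * inputs_each
    log10_possibilities = synapses * math.log10(2)
    print("synapses:", synapses)
    print("log10(subset count):", log10_possibilities, "vs ~80 for particles")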

R. Steve Walz

unread,
May 21, 2003, 11:59:27 PM5/21/03
to
--------------------
While I suspect you're right, you're a very silly man to be so
headlong rash and unreasonable to others who have only tried to
work it all out.
-Steve
--
-Steve Walz rst...@armory.com ftp://ftp.armory.com/pub/user/rstevew
Electronics Site!! 1000's of Files and Dirs!! With Schematics Galore!!
http://www.armory.com/~rstevew or http://www.armory.com/~rstevew/Public
Message has been deleted
Message has been deleted

Rick Craik

unread,
May 22, 2003, 10:12:42 AM5/22/03
to

"dan michaels" <d...@oricomtech.com> wrote in message
news:4b4b6093.03052...@posting.google.com...

> eightwi...@yahoo.com (Traveler) wrote in message
news:<308ba22c.03052...@posting.google.com>...
[snip]

> > Most synapses impinging on a neuron are not
> > really "connected" to it (there are exceptions such as the Purkinje
> > cells in the cerebellum). They are just waiting to see if their
> > signals fit within the target neuron's temporal requirements.
> >
>

> Synapses used for an efficient state-space search - I presume you
> mean. Interesting concept. Very much AI-101. I'll have to think about
> that one. Most neurophysiologists would say they are used for
> communications.
>
[snip]
>
> For a 2nd thing, synapses which impinge on a cell create their own
> temporal effects, not the other way around. [...] The net result is
> the cell's ultimate temporal response.
> ====================
>
[snip]

Is there any chance we can find some optimizations here?
For example, if three levels of cells are used instead of five
levels over a time period, the response would be quicker.
I am looking for a compression principle using asynchronous
cells, one that would probably persist with some efficient
synchronization.

Regards,
Rick



Rick Craik

unread,
May 22, 2003, 12:40:46 PM5/22/03
to

"Neo" <neo5...@hotmail.com> wrote in message
news:4b45d3ad.03052...@posting.google.com...

Perhaps Traveller is using a left-brain, right-brain model to cut
down on the permutations. About how many pathways
run between the two sides according to these books?

Regards,
Rick


fengallow

unread,
May 22, 2003, 1:25:11 PM5/22/03
to
> Thanks for your comments, and don't despair- bits and pieces of such
> information are useful and helpful because who can say what little piece of the
> puzzle you may contribute in making your observations?

>
> Cheers!
>
> Chip Shults
> My robotics, space and CGI web page - http://home.cfl.rr.com/aichip

I recently began researching AI in a dilettantish way for a piece of
fiction I'm writing and I'm curious on a few points. Perhaps you can
enlighten me. Forgive my ignorance and please correct me if my
conclusions are erroneous.

1) It seems to me that until the Chinese Room problem was proposed,
the Turing Test united the AI community with a common definition for
what would constitute artificial intelligence. As far as I can tell,
there is no modern equivalent for the Turing Test. What, if any, is
the most general definition today for
AI?

2) What are the ultimate goals in developing AI? Is the emphasis on
robots aimed at replacing human beings in the workplace or
providing servants? That is, is the motivation geared toward
development of a viable consumer product rather than a quest for
knowledge?

3) This is taking a dive into epistomology where the waters are always
swift and dangerous, but why do we assume that we would be able to
recognize intelligence in something so different from ourselves as a
computer? Why do we assume that a self-aware machine would mimic human
intelligence? Come to think of it, why do we want to mimic human
intelligence in the first place? Wouldn't it be more beneficial to
create an intelligence that was as different from our own as possible
with the ability to interface its knowledge in a way that was
understandable to human beings?

4) Have I been reading too much science fiction? :-)

fengallow

unread,
May 22, 2003, 1:43:51 PM5/22/03
to
I meant epistemology. Epistomology is, of course, the science of
knowing how and what we know after drinking fourteen Budweisers.

Sir Charles W. Shults III

unread,
May 22, 2003, 1:59:40 PM5/22/03
to
"fengallow" <iwil...@yahoo.com> wrote in message
news:1175906e.03052...@posting.google.com...

>
> I recently began researching AI in a dilettantish way for a piece of
> fiction I'm writing and I'm curious on a few points. Perhaps you can
> enlighten me. Forgive my ignorance and please correct me if my
> conclusions are erroneous.

At the risk of appearing a bit foolish, I'll do what I can.

> 1) It seems to me that until the Chinese Room problem was proposed,
> the Turing Test united the AI community with a common definition for
> what would constitute artificial intelligence. As far as I can tell,
> there is no modern equivalent for the Turing Test. What, if any, is
> the most general definition today for
> AI?

Well, in effect, the Turing test really gave a sort of benchmarking
capability. All it really provided was a more or less agreeable sort of test
that would help to sort out the real losers fast. And it comes in a number of
variations as well, so some systems that might pass easily under some
circumstances will fail miserably under others.
Often, we know things by their attributes first, and their products second.
We recognize things by color, shape, size, etc. The Turing test allows us to
apply that sort of criteria to this abstract concept of intelligence because it
provides some roadmarks that many AI researchers tend to agree upon. I know of
many people who reject the Turing test as valid because we assume that the
subject under test will cooperate, because we assume that the subject under test
shares some common goals with us, and so on.
But as a quick and dirty test, the Turing is pretty darn good for what we
expect to see in intelligent action. And the most common version of the test is
a text-only sort of thing where you type questions or statements and the entity
in question answers or responds. This restrictive test is popular with software
people because they do not have to devote programming time to make vocal nuance,
or simulating a range of emotions on a face, etc. Text is easy- I think it is
safe to say that at least half of all new programmers try their hand at a
program that will chat with them.
Now, Searle's Chinese room is not so different from the Turing test in many
ways- but it is more a statement about thinking than it is a test for sentience.
But the swift thinkers will see that we can (and do!) embed the Chinese room
concept in the heart of many programs that face the Turing test.

> 2) What are the ultimate goals in developing AI? Is the emphasis on
> robots aimed at replacing replacing human beings in the work place or
> providing servants? That is, is the motivation geared toward
> development of a viable consumer product rather than a quest for
> knowledge?

Wow, that is a very broad set of questions. For starters, the concept was
very loosely a fictional one for a very long time- millennia, in fact. Look at
the stories of the golem, brought to life to do the bidding of its creator. It
was made of clay (minerals that might contain silicon?), inscribed with words of
power to give it life (an operating system?) and contained a scroll under its
tongue that told it what to do (a small floppy with a program?) so there are
some humorous parallels with today's computers and robots.
In a nutshell, people have dreamed of creating living, thinking beings for
ages, and often these beings were intended to carry out tasks and orders. So
from ancient times, I would opine that the dream was to make smart and untiring
servants.
On the other hand, we also have an abiding interest in just what life and
intelligence happen to be. And consider that for some, science is a way to make
a buck. You can believe that if AI becomes a commonplace item, it will end up
inside every product you can imagine. I can easily picture a world where we
have vehicles that manage their own fueling and maintenance, buildings that know
when elevator cables need to be checked and air filters need to be replaced, and
will dispatch robots to do those things. I can also see a world where mind
becomes a cheap thing, where Chinese knockoff chips end up in toasters and we
get neurotic lawnmowers and rules governing how to dispose of unwanted minds
without committing murder. There are pros and cons in all new technologies.
In summary, there appear to be two major reasons for developing AI- cheap,
smart, tireless servants that will handle our dirty work for us, and pure
research.

> 3) This is taking a dive into epistomology where the waters are always
> swift and dangerous, but why do we assume that we would be able to
> recognize intelligence in something so different from ourselves as a
> computer? Why do we assume that a self-aware machine would mimic human
> intelligence. Come to think of it, why do we want to mimic human
> intelligence in the first place? Wouldn't it be more beneficial to
> create an intelligence that was as different from our own as possible
> with the ability to interface its knowledge in a way that was
> understandable to human beings?

Well, we can often see what appears to be intelligence in non-human things-
take the cetaceans as a prime example. We know that they are smart, but
apparently they are smart in a way that is significantly different from us. You
have hit a nerve here, and that is our load of assumptions about intelligence in
general. We cannot assume that intelligent means "intelligent like us".
Whales and dolphins are very intelligent, but they have completely different
concerns from ours and so there is little impetus for them to want to have a
dialog, if they are truly capable of such things. It is one thing to train a
creature to perform a task, and another entirely for it to spontaneously perform
it. It comes down to goals, most likely.
As for the human intelligence angle, we want machines that mimic human
intelligence because we know best how to work with them- that is the one point
on the graph with which we have any experience of any weight. We do know of the
African Grey parrot and that it can perform very "smart" tasks and that it does
indeed understand some abstract concepts. We also have some chimps and gorillas
that use sign language pretty well, although there is still a great deal of
controversy about their status as great thinkers. My personal opinion is that
they are close enough to being human that it is quite reasonable to accept some
of their actions as having intent and purpose and planning.
But we once more run into that difference between training and spontaneous
behavior. We know that we can train them to signal to us their wants and needs,
but we also know that they are not quite smart enough to arrive at such a
language on their own. It's like the difference between "normal" human
intelligence and genius- a genius can come up with a new idea, and once
explained, any person can pretty much understand it. So why is it that normal
people can clearly understand the concept and not be geniuses? The difference
is in the fact that it takes the genius to assemble the ideas in a new and
useful manner- and the normal person, while able to understand the answers, just
doesn't have what it takes to make that critical leap of synthesis in the first
place- to "work it out" from first principles.
Finally, if we did create an intelligence that was wildly different from
human intelligence, how could we hope to train it, interrogate it, and
understand it? How would we even know that it was intelligent? No, some other
sort of benchmarks must be reached before we can start on that sort of thing.

> 4) Have I been reading too much science fiction? :-)

No, I would say that your questions were pretty clear and struck home on
good issues. They are relevant questions, and it might do many AI researchers
some good to read them and think about them. Many times, we cannot truly
understand something until we are called upon to explain it to somebody else.
It forces us to examine our ideas and shake out the flaws so that we can clearly
formulate them in everyday language and terms. This is something that everybody
would benefit from.
Now consider where Marvin Minsky and Hans Moravec et al had to start,
essentially at the beginning of an unknown path, and ask if you might have made
the same choices and discoveries. I may differ in my ideas but I have learned
from their work and can find no fault in it. Things take time.


Rick Craik

unread,
May 22, 2003, 3:45:17 PM5/22/03
to

"dan michaels" <d...@oricomtech.com> wrote in message
news:4b4b6093.03052...@posting.google.com...
[snip]
> I was talking about real neurons and synapses, and I believe LS was
> also. Looks like you are talking about artificial neural nets.

No I wasn't. I was looking at real neurons to find a suitable modeling
principle. The "levels" were "For example, _like_ ...".


> In ANN,
> the response times would prolly be mostly dependent upon processor
> speed, so #of levels would not be too important. How to set the
> weights in a 5-level deep net would be more the issue. [...]

I already know this model.

Wouldn't a real signal path that was processed by three real neurons
(the "cell's ultimate temporal response" for cell 1 plus cell 2 ...) respond
faster than one with five real neurons? Surely real neurons optimise
when possible. What happens when a "cell's ultimate temporal
response" turns out to be faster and faster than previous times?

Regards,
Rick


Eray Ozkural exa

unread,
May 22, 2003, 6:06:58 PM5/22/03
to
"KP_PC" <k.p.c...@worldnet.att.net> wrote in message news:<EEWya.167009$ja4.8...@bgtnsc05-news.ops.worldnet.att.net>...

> "dan michaels" <d...@oricomtech.com> wrote in message
> news:4b4b6093.0305...@posting.google.com...
> | 100 trillion synapses can certainly provide
> | the substrate for consciousness, or anything
> | else that the brain produces. Figure out the
> | state space on that.
> | [...]
>
> That was done decades ago, in AoK and its antecedant papers.

Here is some modern view on the "state space" of the brain.

It is not profitable to view a computer as a finite state machine.
Therefore we have more sophisticated models of computation such as
PDAs or TMs.

This is a very important point in theory of computation. It applies to
the brain as well as any other kind of computer.

That's why we don't view a PC as a FSM.

If you can tell why we don't view a PC as a FSM, you can readily tell
why we should not treat the brain as a FSM with an enormous number of
states.
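
For readers outside theory of computation, a minimal illustration of the
point (my own sketch in Python, not part of the original post):
balanced-parentheses checking needs memory that grows with the input, so
no machine with a fixed, finite set of states can decide it for arbitrary
nesting depth, while a pushdown-style checker can:

    def balanced(s: str) -> bool:
        # Pushdown-style check; the stack degenerates to a counter because
        # there is only one bracket type, but the memory needed is unbounded.
        depth = 0
        for ch in s:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    assert balanced("(()(()))")
    assert not balanced("(()")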

Regards,

KP_PC

unread,
May 22, 2003, 6:27:17 PM5/22/03
to

--
"Schmitd! Schmitd! Ve vill build a Shapel!"


"Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
news:fa69ae35.03052...@posting.google.com...

| "KP_PC" <k.p.c...@worldnet.att.net> wrote in message
news:<EEWya.167009$ja4.8...@bgtnsc05-news.ops.worldnet.att.net>...
| > "dan michaels" <d...@oricomtech.com> wrote in message
| > news:4b4b6093.0305...@posting.google.com...
| > | 100 trillion synapses can certainly provide
| > | the substrate for consciousness, or anything
| > | else that the brain produces. Figure out the
| > | state space on that.
| > | [...]
| >
| > That was done decades ago, in AoK and
} > its antecedant papers.
|
| Here is some modern view on the "state
| space" of the brain.
|
| It is not profitable to view a computer as a
| finite state machine. Therefore we have more
| sophisticated models of computation such as
| PDAs or TMs.

I presume "TM" is "Turing Machine"?

But I'm not familiar with the sense in which you use "PDA" {certainly
not "Personal Digital Assistant", I presume :-]

| This is a very important point in theory of
| computation. It applies to the brain as well
| as any other kind of computer.
|
| That's why we don't view a PC as a FSM.
|
| If you can tell why we don't view a PC as a
| FSM, you can readily tell why we should not
| treat the brain as a FSM with an enormous
| number of states.
|
| Regards,
|
| __
| Eray Ozkural <erayo at cs.bilkent.edu.tr>

Although I've no problem with the arguments I've heard with respect
to PCs constituting FSMs [folks routinely 'disregard' the infinity
that's actually in-there, via I/O], I Agree with your position. It's
what was implicit in my prior post [what was done decades ago]. As
far as I know, the position you've stated originated in my own work.

Nervous systems function in continuous ways. All the way down to the
'level' of single 'ions', there [physically] exist no 'finite
states'. Within nervous systems, 'quantum mechanics' notwithstanding,
energydynamics are 3-D-Continuous. The closest thing to
'discreteness' within such are energy-flow-threshold 'events', which
are, themselves, readily seen to be just more continuous energy-flow.

Perhaps we'll get along nicely, Eray, no?

K. P. Collins


Eray Ozkural exa

unread,
May 22, 2003, 6:46:17 PM5/22/03
to
Hi Dan,

I'm going to ask you a very (at least to me) difficult question.

d...@oricomtech.com (dan michaels) wrote in message news:<4b4b6093.03052...@posting.google.com>...
>
> Good reasons for not using monolithic unstructured tabula-rasa neural
> nets, and for using modularized and structured neural nets coupled
> with some sort of problem-specific non-maleable preprocessing. Which
> is, of course, how the brain did it. Tabula rasas = no. Modules on top
> of modules = yes.

I'm supposing you are familiar with the ANN models and learning
algorithms.

Could you tell me which of the following are biologically plausible?

1) MLFF with error back propagation
2) Hopfield networks
3) Kohonen networks

(I'm assuming you know the popular training algorithms for 2 and 3,
there aren't many, if you'd like I'll open up my textbook and write
the names. There were things like winner-takes-all, etc. IIRC)
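
(Of the three models listed, the Hopfield network is the one not unpacked
later in the thread; for reference, here is a minimal sketch, my own in
Python/NumPy, of the standard Hebbian storage rule and asynchronous recall
for +/-1 patterns:

    import numpy as np

    def hopfield_train(patterns):
        # Hebbian outer-product rule; self-connections are zeroed out.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0.0)
        return W / len(patterns)

    def hopfield_recall(W, state, steps=100, seed=0):
        # Asynchronous updates: one unit at a time moves toward lower energy.
        rng = np.random.default_rng(seed)
        state = state.copy()
        for _ in range(steps):
            i = rng.integers(len(state))
            state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    W = hopfield_train(patterns)
    noisy = np.array([1, -1, 1, -1, 1, 1])   # corrupted copy of the first pattern
    print(hopfield_recall(W, noisy))
)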

A molecular biologist (and CS grad) friend of mine claimed 1 is
biologically plausible but I didn't believe him. IMHO, none of the
popular algorithms have anything to do with biology. So I'm anxious to
hear some expert opinion!

A more general formulation of the question:
What do you think are the "algorithmic kernels" in our nervous
system?

Your analysis will be appreciated.

Traveler

unread,
May 22, 2003, 9:33:40 PM5/22/03
to
"R. Steve Walz" <rst...@armory.com> wrote in message news:<3ECC4B...@armory.com>...
> Traveler wrote:
[cut]

> > I think that the solution to the AI problem will turn out to be simple
> > and will be easily understood by anybody with an average IQ. For this
> > reason, AI technology will spread like wild fire (it's just software
> > after all) and will change the world almost overnight. For better or
> > for worse.
> >
> > Lous Savain
> --------------------
> While I suspect you're right, you're a very silly man to be so
> headlong rash and unreasonable to others who have only tried to
> work it all out.

That's just it. Those guys are not trying. They are career
professionals. Their goal is to hold on to their jobs and obtain
grants from the government. So they invent complex and silly theories
about the mind to appear to be knowledgeable to the casual observer.
In reality they are clueless. Minsky, of all people, should know that
one does not gain common sense by stringing text symbols together.
Heck, my dog exhibits an extreme degree of common sense, especially
while chasing and catching a flying frisbee. I don't remember feeding
text strings to my dog about the common sense of catching frisbees in
flight.

A child who learns how to walk uses a lot of common sense to determine
how to position his/her feet and exercise precise control over various
muscles so as to maintain balance. This stuff is not symbolic. It
comes from the ability to learn to predict the temporal evolution of
events in one's environment. It comes from hands-on experience. A
one-million strong army of Doug Lenat's drones could not string enough
symbols together to incorporate this sort of common sense in a
computer. Not in a million years!

So yes, I think Minsky, Lenat et al are morons at best and con artists
at worst. They insult the taxpayer's intelligence. How long will it
take those guys to realize that their chosen course of action is dead
wrong? Fifty years, a hundred years? Until death do them part?

Of course, you are welcome to hold the opposite opinion. Free speech
and all.

> -Steve

Louis Savain

Eray Ozkural exa

unread,
May 22, 2003, 9:52:23 PM5/22/03
to
er...@bilkent.edu.tr (Eray Ozkural exa) wrote in message news:<fa69ae35.03052...@posting.google.com>...

> A molecular biologist (and CS grad) friend of mine claimed 1 is

I meant he graduated from molecular biology and is a graduate student in CS.

Thanks,

__
Eray

Traveler

unread,
May 22, 2003, 10:03:08 PM5/22/03
to
d...@oricomtech.com (dan michaels) wrote in message news:<4b4b6093.03052...@posting.google.com>...
> eightwi...@yahoo.com (Traveler) wrote in message news:<308ba22c.03052...@posting.google.com>...
>
> .................

> > > In the retina, for instance, there are
> > > at least 2 levels of analog interactions taking place BEFORE any
> > > "discrete" action potentials are generated. This is standard
> > > neurophysiology, been known since the early 60s. Werblin and Dowling
> > > wrote some of the definitive papers circa 1969. You should read this
> > > stuff.
> >
> > It goes without saying that a discrete system must convert all sensory
> > analog levels into discrete values. Which is what the retina does.
> >
>
> If this is true, one wonders why the retina uses slow potentials at
> all, instead of action potentials at every level. Myelinated fibers
> allow action potentials to be conducted long distances in short time
> periods. One wonders that, were the long distances not present that
> all activity might be conducted with slow potentials.
>
> Which came first - the action potential or the slow potential?

The retina essentially encodes contrast and direction information in
the relative timing of the spikes generated by the RGCs. This requires
the use of slow and fast potentials. The temporal information
contained in the 1-million-fiber optic nerve is decomposed by the
input layer of the visual cortex into about four hundred million
signal paths! At least in humans. Less so in lower animals. The system
that accomplishes this depends on very accurate timing. This is the
reason that the brain uses fast electrical shunting synapses to
synchronize huge numbers of neurons.
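
The idea that relative spike timing can carry direction information can be
illustrated with a toy delay-and-coincide detector, roughly in the spirit
of a Reichardt detector. This is my own simplified sketch in Python, not a
model of actual RGCs or cortical circuitry:

    def coincidences(spikes_a, spikes_b, delay, window=1.0):
        # Count pairs where a delayed spike from channel A lands within
        # `window` ms of a spike from channel B.  If the stimulus moves
        # from A toward B with a latency near `delay`, the count is high;
        # motion in the opposite direction produces few or no coincidences.
        hits = 0
        for ta in spikes_a:
            for tb in spikes_b:
                if abs((ta + delay) - tb) <= window:
                    hits += 1
        return hits

    a = [0.0, 13.0, 31.0]            # spike times (ms) from detector A
    b_forward = [5.0, 18.0, 36.0]    # stimulus reaches B 5 ms after A
    b_reverse = [-5.0, 8.0, 26.0]    # stimulus reaches B 5 ms before A

    print(coincidences(a, b_forward, delay=5.0))   # 3 -> preferred direction
    print(coincidences(a, b_reverse, delay=5.0))   # 0 -> null direction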

[cut]
> ...............>

> > I hereby predict that the AI problem will be solved *without* the use
> > of a single math equation. And much sooner than most people think.
> > I'll be even more blunt. Any AI researcher (such as Minsky et al) who
> > claims that AI will require complex math and/or logic is about as
> > clueless as a door knob. One more prediction: the final solution will
> > come from a completely unexpected source. So unexpected, in fact, as
> > to be downright shocking.
> >
>

> Ahh, we all have our pet theories and predictions, as have all those
> who preceded us. Personally, as someone who did neurophysiology for
> several years, I agree with you on this point. Animals run, catch
> prey, throw objects onto targets, jump through space, and avoid
> bumping into each other - all without computing a single differential
> equation.

Right. No symbol processing either.

> ========================


>
> > > Now, this being said, you are correct that only a tiny fraction of the
> > > neurons in the brain ever seem to be firing at any one time.

> ................


> >
> > I think that the solution to the AI problem will turn out to be simple
> > and will be easily understood by anybody with an average IQ. For this
> > reason, AI technology will spread like wild fire (it's just software
> > after all) and will change the world almost overnight. For better or
> > for worse.
> >
> > Lous Savain
>
>

> "...for worse" .... I would not have expected to hear this from
> someone who 2 months ago was upset enuf about world events to say he
> was closing down his site and his research. Interesting.

What is there not to expect? I have come to realize that I and the AI
community are insignificant next to the grand scheme of things. I
personally believe that the world will use AI for evil but it is not
up to me to decide whether or not AI should be unleashed. There are
other forces at work in the world. I can tell you, however, that true
AI will not come from the AI community, not even from the neuroscience
community. You can bet the farm on it.

As an aside, I recently discovered an unlikely source of information
on this topic that pretty much explains it all. It will come as a
great surprise to everyone, especially the scientific community. They
won't like it one bit. Keep your ears and eyes open. And prepare to be
amazed.

Louis Savain

R. Steve Walz

unread,
May 22, 2003, 11:28:57 PM5/22/03
to
Traveler wrote:
>
> "R. Steve Walz" <rst...@armory.com> wrote in message
> > Traveler wrote:
> [cut]
> > > I think that the solution to the AI problem will turn out to
> > > be simple and will be easily understood by anybody with an
> > > average IQ. For this reason, AI technology will spread like
> > > wild fire (it's just software after all) and will change the
> > > world almost overnight. For better or for worse.
> > > Lous Savain
> > --------------------
> > While I suspect you're right, you're a very silly man to be so
> > headlong rash and unreasonable to others who have only tried to
> > work it all out.
>
> That's just it. Those guys are not trying. They are career
> professionals. Their goals is to hold on to their jobs and obtain
> grants from the government. So they invent complex and silly theories
> about the mind to appear to be knowledgeable to the casual observer.
> In reality they are clueless. Minsky, of all people, should know that
> one does not gain common sense by stringing text symbols together.
> Heck, my dog exhibits an extreme degree of common sense, especially
> while chasing and catching a flying frisbie. I don't remember feeding
> text strings to my dog about the common sense of catching frisbies in
> flight.
--------------------------
This is uncertain. What minds do, even dogs' minds, can be symbolically
manipulated, so there is a handle on the problem.

Why you don't grasp this and why you insist that you know things
NO ONE *CAN* know smacks of a religious bigotry.


> A child who learns how to walk uses a lot of common sense to determine
> how to position his/her feet and exercise precise control over various
> muscles so as to maintain balance. This stuff is not symbolic. It
> comes from the ability to learn to predict the temporal evolution of
> events in one's environment. It comes from hands-on experience. A
> one-million strong army of Doug Lenat's drones could not string enough
> symbols together to incorporate this sort of common sense in a
> computer. Not in a million years!

---------------------------
That's good, because that's not what he's trying to do. There is a way
of looking at awareness as being a nexus of ideas that doesn't require
a body. We all feel that way when we're drunk, for instance, or simply
concentrating in our mind.


> So yes, I think Minsky, Lenat et al are morons at best and con artists
> at worst. They insult the taxpayer's intelligence. How long will it
> take those guys to realize that their chosen course of action is dead
> wrong? Fifty years, a hundred years? Until death do them apart?

[]
> Louis Savain
--------------------------
Now I know you're simply a paranoid delusional with an anger problem.
I mean *I'm* angry, but at least I make some kind of SENSE!! You're
just crazy.

Gary Forbis

unread,
May 23, 2003, 12:10:41 AM5/23/03
to
"KP_PC" <k.p.c...@worldnet.att.net> wrote in message news:<99cza.94174$cO3.6...@bgtnsc04-news.ops.worldnet.att.net>...

> "Eray Ozkural exa" <er...@bilkent.edu.tr> wrote in message
> news:fa69ae35.03052...@posting.google.com...
> | It is not profitable to view a computer as a
> | finite state machine. Therefore we have more
> | sophisticated models of computation such as
> | PDAs or TMs.
>
> I presume "TM" is "Turing Machine"?
>
> But I'm not familiar with the sense in which you use "PDA" {certainly
> not "Personal Digital Assistant", I presume :-]

I'm guessing "Push Down Automa".
Hey, a web search yielded:
www.cs.brown.edu/courses/cs051/lectures/lect24.pdf


KP_PC

unread,
May 23, 2003, 8:37:28 AM5/23/03
to
"Gary Forbis" <forbi...@msn.com> wrote in message
news:5a1238fe.0305...@posting.google.com...

Thank you, Gary, ken


fengallow

unread,
May 23, 2003, 10:36:21 AM5/23/03
to
Thank you so much for taking the time to answer. You obviously put a
lot of thought into it. It is very helpful (for my purposes) to know
that the Turing test has not been completely discredited. I had
understood that it was.

Best of luck on your research. Once again, many thanks for your time
and patience.

Eray Ozkural exa

unread,
May 23, 2003, 11:00:26 AM5/23/03
to
d...@oricomtech.com (dan michaels) wrote in message news:<4b4b6093.03052...@posting.google.com>...
>
> Eray, good questions - you put me on the spot, yes?
>

Thanks. Hopefully you don't mind, Dan :)

> It's been 10 years since I did any NN training [lots of BP stuff], and
> longer than that since I did any neurophysiology, so I'm not really an
> expert on either of these anymore. I have mainly been trying to
> interject some biological information into these discussions - so
> people can see there is more to the problems than may be obvious.
>

I see. You still seem to recollect though.

> In the comments at the top, I was keying off of Han's comments about
> where he saw that NN research had been going wrong. It seemed to me
> the "correction" factor was the same in all of his examples - thus, my
> response.
>

Yes, modularity does seem to be necessary. However, I am not sure if
we have a good model of how a single module works. My inexpert
opinion, having worked only with simple computational models that are
called ANN, is that we don't have such a model. Since we don't have
such a model, I believe we don't have a model that includes multiple
modules.

> Regards 1, 2, 3 above, 1 and 2 are not really biologically plausible
> at all.
>

So you say Kohonen networks make some biological sense. I didn't
realize winner-takes-all was like that.

Looking at an implementation I wrote, I see that for each training
point, the weights are normalized. Then a winning neuron is selected,
that is the neuron which approximates current input x best. The weight
adjustment then proceeds to decrease this minimum distance. Processing
all instances in random order is a training cycle.
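
A minimal sketch of that winner-takes-all update (my own, in Python/NumPy;
the per-step weight normalization mentioned above and the neighborhood
updates of a full Kohonen map are omitted for brevity):

    import numpy as np

    def wta_train(data, n_units=4, lr=0.1, cycles=20, seed=0):
        # Each input pulls only its closest unit toward it, shrinking the
        # minimum distance; one pass over the data in random order is a cycle.
        rng = np.random.default_rng(seed)
        weights = rng.normal(size=(n_units, data.shape[1]))
        for _ in range(cycles):
            for x in rng.permutation(data):
                winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                weights[winner] += lr * (x - weights[winner])
        return weights

    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 2))
                      for c in ([0, 0], [3, 3], [0, 3], [3, 0])])
    print(wta_train(data))   # units settle near the four cluster centres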

I see that a neuron could suppress others. But is this really a
general mechanism that would work for a large number of neurons? I
thought that was a local process.

> BP "neurons" are gross simplifications of real neurons. Real neurons
> have 1000s of spacially-distributed synapses made upon
> complexly-branching dendritic trees, with who knows what kind of local
> interactions occurring. These are all bathed in a liquid environment
> full of synaptic transmitters, hormones, and other chemicals which
> exert local and general control over activity. Different classes of
> inputs come in at different spatial levels on the dendritic trees.
> Local responses in dendritic trees have various time characteristics,
> as do the overall responses of the cells. To model in totality even a
> "single" neuron would be unbelievably complex - thus everyone extracts
> their own acceptable levels of simplification and assumption. Very
> subjective. OTOH, we don't really have the option at present to do
> anything near precise modeling, so what can you do? So people wave
> their hands a little, and choose the level they believe is best, or
> most approachable, for their purposes.
>
> Back to BP neurons, they are simple extensions of perceptron style
> neurons, which are simple adaptations of the original McCulloch-Pitts
> neurons. A simple cell body with 10 or 100 weights feeding into it,
> with no account taken of spatial relationships [dendritic structure]
> or the other complexities, is not very biologically plausible. Compare
> this to what I just described for a real neuron. On top of that, a
> 1000 cell NN does not approximate a 100 billion neuron brain. So, all
> in all, current BP NN's are really overly-simplified models running
> toy problems. This is "not" a criticism so much as an indication of
> how far we have to go.
>

You draw a persuasive picture that the nervous system is indeed more
complex than the models in question. However, as you say some
simplifying assumptions can be useful in our studies. If we wrote a
computational chemistry simulation we might not be able to understand
much of the significant computations. Thus we are having to guess
which of those observed processes amount to significant ones.

I don't think I've understood too clearly though. You are saying that
BP uses an inadequate model. Apparently that is so; the liquid
environment, in particular, has no place in that poor little artificial
neuron model. However, error back propagation is a training algorithm
that can be applied to a standard model such as a multi-layer
feed-forward network.

So can you do your analysis assuming that the standard model with
sigmoid function is an adequate one? I wish to understand if the
training algorithm is biologically plausible.

There is a mechanism used in error back propagation. First: evaluating
the network, that is okay I think. Second: computing an error. Then
adjusting weights in the reverse gradient with a given "step" size.
Then doing this recursively for other layers. The recursion part is
okay, but I'm worried about the "difference" part. My molecular
biologist friend told me that error back propagation models a well-known
process. Could you tell me about that process and whether the
algorithm makes sense in a general setting?
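
For concreteness, a compact sketch of the mechanism as described (my own,
in Python/NumPy): forward pass through a small sigmoid multi-layer
feed-forward net, an output error, and weight adjustments along the
negative gradient, propagated back through the hidden layer. Whether any
of this maps onto biology is exactly the open question here:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop_step(x, target, W1, W2, lr=0.5):
        # 1) evaluate the network (forward pass); biases omitted for brevity
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        # 2) compute the error at the output
        err = y - target
        # 3) adjust weights against the gradient with step size lr,
        #    then recurse the error back through the hidden layer
        delta_out = err * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x)
        return err

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(3, 2)), rng.normal(size=(2, 3))
    x, target = np.array([0.0, 1.0]), np.array([1.0, 0.0])
    for _ in range(1000):
        err = backprop_step(x, target, W1, W2)
    print(np.abs(err))   # the output error shrinks toward zero over training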

[snip]

I will come back to other points later. I will need to digest the
material little by little. :)

Cheers,

Traveler

unread,
May 23, 2003, 6:56:50 PM5/23/03
to
"R. Steve Walz" <rst...@armory.com> wrote in message news:<3ECD95...@armory.com>...

[cut]


> Now I know you're simply a paranoid delusional with an anger problem.
> I mean *I'm angry, but at least I make some kind of SENSE!! You're
> just crazy.

Sour grapes from a moron worshipper. You are a defender of
incompetence and deception. See you around, Walz.

Louis Savain

Traveler

unread,
May 23, 2003, 8:39:52 PM5/23/03
to
"Rick Craik" <rcraik@@ntl.sympatico.ca> wrote in message news:<2P9za.801$0E2.1...@news20.bellglobal.com>...

Rick, IMO, the temporal response of neurons does not change over time.
In the visual cortex, cells are known to correlate signals that are
about 10 ms apart. In the hippocampus, neurons can fire continually
for a relatively long period of time in order to find temporal
correlations between events over multiple time scales, up to several
minutes in some animals. This is essential for operant conditioning.
There is no such thing as a neuron's "ultimate temporal response" IMO.
It depends on the environment that the organism must survive in. A
1-millisecond resolution seems ideal for most animals.

Louis Savain

KP_PC

unread,
May 23, 2003, 10:28:36 PM5/23/03
to
"Traveler" <eightwi...@yahoo.com> wrote in message
news:308ba22c.03052...@posting.google.com...

There's also preliminary 'unconscious' [background] processing that
precedes gating to consciousness, but which is
'illusionarily'-sync-ed with respect to conscious real-'time' - one
'thinks' [is 'conscious' of] it's all happening 'right-now', but it
isn't. And such pre-processing can have decades-long 'time'-course.

One more thing - nervous systems are ["of course"] massively-parallel
information-processing architectures, and they work simultaneously on
myriad problems. I've looked for a limit with respect to the capacity
of such parallel problem-solving, but, in ~32 years have not detected
any such [real] limit. ['Typically', as is discussed in AoK, Ap7, the
"volitional diminishing-returns decision" threshold is set
conveniently-low with respect to global TD E/I-minimization, and this
instantiates a virtual 'limit', but even in such 'typical' cases, the
capacity remains enormous relative to machine information-processing.]

The on-topic 'point' with respect to this last stuff is that, within
nervous systems, 'time' is extremely
parallel-information-processing-task relative.

ken [K. P. Collins]


Marvin Minsky

unread,
May 24, 2003, 12:19:47 AM5/24/03
to
rst...@deeptht.armory.com (Richard Steven Walz) wrote in message news:<3ec1f67a$0$1097$8ee...@newsreader.tycho.net>...
> We always make the accomplished state of the art invisible to ourselves.
> The task may sneak up on us and surprise us at how easy it is suddenly,
> or it could take a century or more, who knows?
>
> Is there any improvement on the neural-net understanding of the human
> cortex and its fine structure coming along lately, Marvin? And are there
> any purely structured software self-representational awareness experiments
> that have had any intriguing results?
> -R. Steve Walz

As usual, I agree with much of what Steve Walz says. In this case I
think I see a reason why the bottom-up approach to neurology has not
much progressed. We know a great deal about how individual neurons
and synapses work. And generally, that knowledge appears to show that
those low-level mechanisms are much the same in all animals, back to
fish and even invertebrates. This suggests to me that our
'higher-level' thought-processes grew from our having evolved ways
to 'insulate' those processes from most of those low-level
physiological details, more often than from exploiting them. In
short, this is how we became more symbolic and almost
Turing-programmable.

Accordingly, what I think we need is more knowledge about the
intermediate structures that achieve this marvelous insulation. Most
notably, the cerebral cortex is largely composed of the so-called
'columns' - each of which appears to be a special kind of neural network
composed of several hundred neurons and tens of thousands of synapses.
The trouble is that, so far as I know, the brain-science community
does not know much about what these do. (I suspect that I may be out
of date, but the best I've seen are some theories of Grossberg about
how these might work.) But I haven't seen any theories of
high-level thinking based on good intermediate theories. Instead, we
still see a great deal of talk that assumes almost direct connections
between thinking and the functions of stuff at the chemical level—like
someone being disturbed because of deficiencies in serotonin, etc.

In other words I think we need theories based on intermediate
structures and functions. Could some of those cortical columns
possibly be parts of K-lines, or slots in frames, or elements of
semantic networks? I read Science and Nature frequently, yet I
don't believe that I've ever seen any mention of such things in any
brain-science article. This suggests that there is here a forty-year
gap; I recently met a functional MRI neuroscientist who had actually
heard of the Newell-Simon idea of a GPS—but had not considered
experiments to look for signs of their Difference-Detectors at high
cognitive levels.

As for higher level functions, the functional MRI kind of approach is
providing a great deal of evidence. However, most researchers keep
trying to interpret that data in terms of old ideas from our everyday
folk-psychology - which, in my view, is chock-full of words that
describe little of what brains actually do, but just parts of
obsolete popular psychological models. I would love to see some
researchers try to interpret that data from the point of view of more
modern architectural views (e.g., the theories of, say, Aaron Sloman,
or even of mine).

Marvin Minsky

KP_PC

unread,
May 24, 2003, 5:29:34 AM5/24/03
to
"Marvin Minsky" <min...@media.mit.edu> wrote in message
news:f04e2625.03052...@posting.google.com...
| so-called' columns' -each of which appears to be

| a special kind of neural network composed of
| several hundred neurons and tens of thousands
| of synapses. The trouble is that, so far as I know,
| the brain-science community does not know much
| about what these do. (I suspect that I may be out
| of date, but the best I've seen are some theories
| of Grossberg about how these might work.) But
| I haven't seen anyt theories of high-level thinking
| based on good intermediate theories. Instead, we
| still see a great deal of talk that assumes almost
| direct connections between thinking and the
| functions of stuff at the chemical level-like someone

| being disturbed because of deficiencies in
| serotonin, etc.
|
| In other words I think we need theories based on
| intermediate structures and functions. Could
| some of those cortical columns possibly be
| parts of K-lines, or slots in frames, or elements
| of semantic networks? I read Science and
| Nature frequently, yet I don't believe that I've ever
| seen any mention of such things in any brain-
| science article. This suggests that there is
| here a forty-year gap; I recently met a functional
| MRI neuroscientist who had actually heard of the
| Newell-Simon idea of a GPS-but had not

| considered experiments to look for signs of their
| Difference-Detectors at high cognitive levels.
|
| As for higher level functions, the functional MRI
| kind of approach is providing a gredat deal of
| evidence. However, most research keep trying
| to interpret that data in terms of old deas from
| our everyday folk-psychology-which, in my view

| is chock-full of words that describe little of what
| brains actually do, but just parts of obsolete
| popular psychological models. I would love to
| see some researchers try to interpret that data
| from the point of view of more modern
| architectural views (e.g., the theories of, say,
| Aaron Sloman, or even of mine).
|
| Marvin Minsky

Dr. Minsky, Everything you call for was Reified in Neuroscientific
Duality Theory [NDT] decades ago. The things I will discuss in this
reply were all Reified for the first time in NDT.

For instance, with respect to the so-called "cortical columns", it's
first necessary to discuss cortical architecture in general. Cortex,
as a whole, constitutes a "crumpled-bag nucleus". Crumpled-bag nuclei
occur repeatedly and hierarchically within nervous systems; two other
stand-outs are the cerebellum and the inferior olivary nucleus.

To see a rough analogue of a crumpled-bag nucleus, take a brown paper
grocery bag and crumple it :-] then uncrumple it, and shape it into a
'spherical' crumpled 'state', with its opening gathered together, a
bit, but left wide enough for all of the information-processing I/O
communication ["pathways"] to pass through it in a
structurally-organized fashion.

Why this neural architecture is of special significance is that
crumpled-bag nuclei always serve the same purpose - they constitute
the architecture of minimal circuit length [and, therefore, of
minimial energy consumption, and, most-importantly, of minimial
response latencies, all of which are Crucial with respect to survival
within the essentially-predatory environments in which nervous
systems evolved]. [All of the nervous system is like this. Every
twist and turn in the neural architecture constitute rigorous
Topological functions that literally-embody evolutionarily-derived
functionality which can be =read directly= from the Topology.]

The easiest way to understand the functionality of crumpled-bag
nuclei is to use the inferior olivary nucleus as a teaching
example. What it does is receive joint-receptor information and relay
it to the cerebellum via the very-potent "climbing fiber" pathway.
What it does is "translate" [as in Maths "coordinate system
translation"] body-conformation variations so that the cerebellum
[and, in the main via the cerebellum, the rest of the nervous system]
can 'see' a 'stable' world-image with respect to survival-dependent
directionality-of-movement.

Thus, the activation received via sensory receptors of a hand held
palm-down =above= a hot or sharp object will result in different
effector activations than will the activation of the =same=
receptors when the hand is held palm-up =beneath= a hot or sharp
object.

The Neocortex itself is 'just' another crumpled bag nucleus. It
performs the extraordinarily-generalized coordinate-system
"translation" with respect to the totality of the neural activation
that occurs within a nervous system.

There are only a few cortical features that are of particular
significance. One of these is the 'orthogonally-interlaced',
6-layered, nature of the cortical interneuron architecture. Another
is that stochastic activation from the "reticular system" is
intermingled [layer 5] within this layered, interlaced network.

The result is that "loop circuits" can be set-up dynamically within
cortex. It's through the variation of the circuit lengths, with
experientially-correlated convergence upon minimal loop-circuit
lengths, that cortex carries out its generalized "translation"
function. Such tuning of loop-circuit lengths is what allows the
"type-II synchronization" [like gears in a clock, vs. the regimented
'marching' of "type-I synchronization"] of cortical outputs to the
effectors [and at distance within cortex, and subcortical structures] -
the coordinated activation of the effectors in which
directed, purposeful behavior is manifested.

Everything important in 'momentary' cortical function derives from the
tuning of such loop-circuit lengths. [Of course, trophic modification
accompanies neural activation in a way that's rigorously determined
by the neural activation that actually occurs in the network. This
"learning" also occurs in rigorous accord with convergence upon
minimal circuit lengths [energy consumption, response latencies].]

In addition to this, the only other really-important feature of
cortex is its sensory-motor-association modality-mapping, including
its mapping to subcortical structure.

Within all of this, the so-called "cortical columns" arise as an
'artifact' of the way that the nervous system drives the effectors so
that stimulus sets that are being 'attended to' tend to 'always' be
oriented front-centered.

The 'cortical column' artifact results from the fact that such
front-center orientation tends to result in cortical activation
'hovering' about cortical loci that are correlated to optimal
front-center orientation, and maximally converged upon minimal
loop-circuit length [energy consumption, response latency]; trophic
dynamics just grow the columnar structure as an otherwise
information-processing-'meaningless' architecture. That is, all the
meaning is in the topologically-distributed trophic-minimization of
circuit lengths, etc.

This overly-simplified discussion provides minor insight into NDT.
NDT does the same, at a much greater level of detail, with respect to
every major nuclear group within nervous systems, providing concrete
biological mechanisms for [explanations of], among other things,
creativity, curiosity, and volition. NDT flat-out nails cognition and
consciousness. Everything in NDT reduces directly to the Proven
Neuroscience experimental results. NDT is equivalently-stated, and
verified, in 5 independent-perspectives, one of which is computer
scientific.

I ask [consider it a Formal Challenge to MIT] that I be allowed to
present an =overview= of NDT at MIT [or wherever you are these
days], in a forum that is Open to the Public at a minimal cost to the
attendees [basically, only to cover MIT's costs - lighting, etc.]
Folks in Computer Science, Physics, Mathematics, Behavioral Science,
Sociology, and Philosophy will all find newness that will be of
significant Usefulness to them if they attend such a presentation.

NDT is the "intermediate" stuff that you call for in your post,
quoted above, Sir.

It constitutes nearly 32 years of Devoted Work. Its implications with
respect to Human Survival are Immense, which is why I am Obliged to
work to achieve its generalized, Open, communication.

I Hope MIT will Accept this Formal Challenge, with keen Criticality,
but with strong 'heart'.

I'm in Massachusetts so I'll be able to travel there with relative
ease [I'll need small considerations like free parking during the
process, though, and sufficient lead-'time', and knowledge of MIT's
expectations, so I can specifically address them. And a Student-Guide
would be helpful, if possible and practical. I've worked 'outside of
the system', and am aware that I remain 'naive' with respect to
expectations of 'the system'. The Student-Guide would be my temporary
'crumpled-bag' "translation" 'nucleus' :-]

Sincerely, K. P. Collins
