Oli Mpele <oli....@gmail.com> wrote:
> This means that language is simply the application of these *much more
> highly generic algorithms* to a specific domain.
Yes, you are preaching to the choir on this issue with me. I totally
believe that there are fundamental principles in the brain that are equally
at work when we learn to walk as when we learn to talk.
> Consider for example the sentence: "let me know when you get there". This
> sentence implies a future situation, with certain properties, and the
> capability of the addressee to recognize that situation when it occurs
> in the future, and to *then* remember this conversation, and then
> execute the action of ... etc.
>
> All of what I just described are actions that are really "physical"
> thought, in the way you described. Picturing, perceiving, recognizing,
> remembering, ~contacting someone~, etc.
>
> The fact that they are "expressed" or rather, "referred to" by the
> lingual expression, does nothing to change that.
>
> That is the key point.
>
> The underlying capabilities need to be there. The lingual expression, and
> mapping (i.e. interpreting) the sentence to map to those capabilities,
> are another and separate (i.e. additional) issue.
Or, as I might describe it: if we learn that cookie jars have cookies in
them, it means we are triggered to look inside the jar to see if we can
find and eat a cookie.
The vision of the cookie jar is language to us. It means "cookies likely
to be found here". If a robot can learn to recognize a cookie jar and
use that recognition to find cookies, then it's demonstrated its ability to
learn a very simple language. Our more complex language is only an
extension of these basic abilities we need for even the simplest
interactions with the environment.
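That cookie-jar idea can be sketched as a toy bit of Python (my own
illustration, not any real robot system): the robot tallies how often each
visual category co-occurred with finding a cookie, and treats anything that
reliably predicts cookies as a learned sign. The categories and the
experience data are invented for the example.

```python
from collections import Counter

# Invented experience log: (what was seen, whether a cookie was found inside)
experiences = [
    ("cookie jar", True), ("cookie jar", True), ("shoe box", False),
    ("cookie jar", True), ("shoe box", False),
]

seen = Counter()
found = Counter()
for thing, got_cookie in experiences:
    seen[thing] += 1
    found[thing] += got_cookie  # True counts as 1, False as 0

def means_cookies(thing):
    # "The vision of the cookie jar is language": a learned sign that
    # cookies are likely to be found here.
    return seen[thing] > 0 and found[thing] / seen[thing] > 0.5

print(means_cookies("cookie jar"))  # True
print(means_cookies("shoe box"))    # False
```

A robot that acts on `means_cookies` has, in this very thin sense, learned a
one-word language.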
> I do very much suppose [this] includes the "capability" to learn to ride
> a bike, because this is simply the capability to use the muscles
> associated with that action.
>
> Well. At least, talking about robots.
>
> Of course the human implementation may use more lower-level, distributed
> and decentralized learning mechanisms, with reflexes being encoded in
> the peripheral nervous system. But that is merely a detail of the
> implementation! It is perfectly possible to learn to control a body,
> and execute certain actions, using only abstract thought - the way we
> might learn to remote-control a puppet using such a strange thing as a
> keyboard or mouse interface.
The basic distributed hardware of the brain translates sensory data into
actions. To do this, the brain must classify the sensory data into
categories like "cat", or "cookie jar", or "bike", or "falling down". The
same principle carries all the way through to action. Each action is
nothing more than a classification of sensory data.
If a rock is flying towards us through the air, the brain must classify
that as "rock about to hit us", which then gets classified as "run you
fool!".
The entire process of turning sensory data into the appropriate action is
a classification problem. It's the same classification problem all the way
through the system. If we look at it from the input side, we tend to call
it something like image recognition. If we look at it from the output
side, we like to call it things like action selection. It is highly likely
that the brain is using the same fundamental types of circuits to do all of
it, making the entire process one of mapping sensory data flows into output
action flows.
Whether that output action flow might be called riding a bike or talking,
it's still just complex behavior triggered by our sensory data.
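That "classification all the way through" picture can be sketched as a chain
of classifiers. The categories, rules, and function names below are all
invented for illustration; the point is only that the input side and the
output side are the same kind of operation.

```python
# Toy sketch: sensory data is classified into a situation label, and the
# situation label is classified into an action. Same operation both times.

def classify_situation(sensory_data):
    """Input side: what we tend to call 'image recognition'."""
    if sensory_data.get("shape") == "jar" and sensory_data.get("label") == "cookies":
        return "cookie jar"
    if sensory_data.get("moving") == "towards us" and sensory_data.get("speed", 0) > 20:
        return "rock about to hit us"
    return "nothing notable"

def classify_action(situation):
    """Output side: what we tend to call 'action selection'."""
    return {
        "cookie jar": "look inside the jar",
        "rock about to hit us": "run you fool!",
    }.get(situation, "keep doing what you were doing")

def sense_to_act(sensory_data):
    # The whole pipeline is one mapping from sensory flow to action flow.
    return classify_action(classify_situation(sensory_data))

print(sense_to_act({"moving": "towards us", "speed": 30}))  # run you fool!
```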
The word "thought", as we use it in informal English, however, is reserved
for a very specific type of brain mapping. It's when our brain is
representing a state of the environment that is false -- that doesn't
actually exist as part of the environment.
When I talk to myself in my head, the brain has partially coded a state of
the environment as "me saying some words". It's what the brain should do
in response to me actually talking -- moving my lips and making sounds.
But in this case of "talking to myself in my head", I'm not actually making
sounds. My ears are not picking up those sounds. My lips are not
producing those sounds. But yet, my brain has created an internal state
that signals these things are happening.
My brain in this case is creating a FALSE and UNTRUE state representation
of the external environment. It's an action (talking) and a sensory
perception (hearing) that the brain has half attempted to create, but
failed.
It's best understood in my view, as a delusion, or illusion the brain is
creating.
When our brain does this, we are mostly able to detect that the brain is
doing it. We know the voice is not in the environment, even though it is
in our head. We know our lips are not moving, and our ears are not really
hearing us speak, so we know the brain is just "hearing something that
doesn't exist" in the environment.
When the brain gets into this delusional/illusional mode, we call it
"thinking", and "thought".
It's highly useful because it allows us to experiment with actions without
having to actually perform them, and to use the innate low-level power of
the brain to predict how the environment will respond, before we do some
action for real.
But our thoughts (and memories) are just that. They are illusions the
brain is creating that have proven to be useful to us.
Of course, at times, the illusions become so powerful, that we can't tell
whether they are real or not, and then that's where they cross over from
being useful, to being harmful to us. That's where we stop calling them
thoughts and memories, and start to call them delusions and hallucinations,
or just dreams. But it's the same brain mechanism for all of this.
It's a low-level, very simple, but also parallel and distributed, real-time,
continuous translation of sensory data into effector actions.
> The capabilities of the abstract thought supersede that of the reflexive
> embodied "thought".
Abstract thought is just pattern recognition, or sensory data
classification. It's all the same thing.
The concept of "cat" is just an abstraction created from all the sensory
data about cats we have experienced in our lives.
> Though, I admit, I haven't given this aspect much attention at all: the
> really interesting part *for me* is in the designing of such things.
>
> Specifically, I have been trying to create a strong AI design that is
> capable of *engineering*.
For 30+ years now, I've been trying to create a design that is capable of
doing all the things humans do.
If in fact there are fundamental low level mechanisms that make it all
happen, as I believe, then it's wrong to focus on only one ability. When
we focus only on playing chess, we produce really good chess playing
algorithms, which are of no use for general AI. That is, of no use for all
the other things humans do.
If the part of the brain we use for playing chess is a very different type
of system than the part we use for driving a car, or for doing
engineering, then we can look at, and solve, each problem class separately
and make good progress.
But if they are all solved by one common set of features, then we must
find those common features, and not overly focus on only one task.
And I certainly believe that the brain does solve them all with only a
very small set of fundamental types of information processing, and that
until we figure out what that is, we won't have created strong AI (or, as
it's commonly called these days, AGI).
> > Intelligence isn't about language. It isn't about thought. It's about
> > physical behavior. That's why animals have brains. If your approach,
> > and system architecture isn't structured for solving the problems of
> > physical behavior, it will never be intelligent in my view.
> >
> > Cyc, and other language based knowledge graphs aren't structured in a
> > way that works for solving the problem of intelligent behavior. As
> > such, they never become intelligent. They are just large knowledge
> > storage systems that can be queried -- like Watson.
> >
> > If you want to create true intelligence, what the database must store,
> > is information about HOW to ACT.
> >
> > Storing the fact that DOG is a type of Animal, gives the system no
> > information about when it's correct to speak the word DOG or when it's
> > correct to speak the word ANIMAL, for example.
> >
> > If I see a dog, how should I act? If I see a lion charging me with its
> > mouth open, how should I act? If I see a baseball flying towards my
> > head, how should I act? If I see my friend Bob, how should I act?
> > These are the questions that an intelligent agent must be able to
> > answer instantly in response to these and any other situation an
> > intelligent agent must be faced with.
> >
> > Does your database allow a computer to know how to act in these, and a
> > million other possible situations it might be faced with in the future?
>
> Yes.. Or, at least it can figure it out. It might die first, of course.
> But, if it has enough time to study ahead of time, it will be OK.
>
> > If not, it's not an intelligent computer. It's just another type of
> > information storage system.
>
> I am aware of the requirements of strong AI, and I am definitely claiming
> the entire range. Since this is the first time (this month) that I
> discuss my research with others, I am excited to see the kind of
> questions that pop up.
I worked and thought on my own for many decades. It was mostly just a fun
puzzle to work on in my spare time. Then I started to discuss with others,
oh, about 10 years ago. I learned a lot. But mostly, what I found was
that the world was full of idiots. :)
> I hope to take all your concerns into account and write my answers to
> these questions up into a paper, which will hopefully present a more
> insightful introduction to the paradigm.
Whatever works for you. What you will find in AI, is that if you take 100
people that think they know the solution, or the approach to finding the
solution, you will have 100 different opinions.
It's fascinating how totally in disagreement the entire field is.
> In the end though, the problem space is vast, and I do not expect to
> cover the solutions, except in actual code, when the time comes.
I think the solution is simple, and when found, will represent very little
code. I think it's a reinforcement-based learning algorithm. I think it's
the type of algorithm that can be expressed and explained in one page, as
is typical of machine learning algorithms.
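For a sense of scale, a standard tabular Q-learning loop (a textbook
reinforcement learning algorithm, shown here only as an example of the
"one page" claim, not as the brain's actual mechanism) fits in well under a
page. The toy corridor world and all constants are my own invention.

```python
import random

# Tabular Q-learning on a toy 1-D corridor: states 0..4, reward at state 4.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)      # clamp to the corridor
        r = 1.0 if s2 == N_STATES - 1 else 0.0     # reward only at the goal
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right (+1) from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

The whole mechanism is the one update line; everything learned lives in the
Q table.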
Though our brains are capable of learning great complexity of behavior, I
believe the underlying mechanism that allows that complexity, is trivially
simple. I think one of the greatest mistakes of AI research is to fail to
recognize this, and for people to make the assumption that if our behavior
is complex, the underlying mechanism of our brain, must also be massively
complex.
When an AI project fails to reach its goals, my argument is always that
the project is too complex, and too specialized. I think we will solve the
problem of Strong AI not when we figure out how much more to add to our
systems, but when we figure out how to leave most of it out.
I think the solution is very graph-like as well. But I think the graph
needs to be one that quickly maps sensory input data streams into effector
output streams.
I think it's best understood as a signal processing problem: data flowing
from the inputs to the outputs, and being transformed as it flows. This
takes the form of a very different implementation from a language-like
knowledge graph, but yet, it still has a good bit in common with one if you
can just squint your eyes and tilt your head as you look at it. :)
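One way to picture that signal-processing framing is a chain of nodes, each
transforming the stream as it passes. The node names and transforms below
are invented toys, not a proposal for the real architecture.

```python
# Toy dataflow graph: a sensory stream flows through a list of nodes and
# comes out the other end as an effector stream.

def make_graph():
    # Each node is a function from one stream to the next.
    edge_detect = lambda xs: [b - a for a, b in zip(xs, xs[1:])]   # changes
    threshold   = lambda xs: [1 if v > 0 else 0 for v in xs]       # events
    to_motor    = lambda xs: ["push" if v else "rest" for v in xs] # actions
    return [edge_detect, threshold, to_motor]

def run(graph, sensory_stream):
    signal = sensory_stream
    for node in graph:
        signal = node(signal)  # data is transformed as it flows
    return signal

print(run(make_graph(), [0, 1, 3, 2, 5]))  # ['push', 'push', 'rest', 'push']
```

Unlike a queried knowledge graph, nothing here is looked up; the answer is
whatever falls out of the flow.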