
Why don't we have a strong AI by now? (To Curt Welch and others)


Burkart Venzke

Aug 6, 2011, 4:25:49 AM
Why don't we have a strong(er) AI by now?
What is missing?
(A) Concept(s)?
Or is it too complex, too voluminous?

What do you think?

Doc O'Leary

Aug 6, 2011, 12:23:41 PM
In article <j1itqe$bun$1...@news.albasani.net>,
Burkart Venzke <b...@gmx.de> wrote:

> Why don't we have a strong(er) AI by now?

Why *should* we have it by now? Just because 55 years ago some guys
thought it would be easy? It took thousands of years to get from Icarus
to the Wright brothers and, in the end, AF looked nothing like natural
flight. Why assume that intelligence, which took millions more years to
evolve to even our feeble level, is going to be an easier nut to crack?

> What is missing?
> (A) Concept(s)?
> Or is it too complex, too voluminous?

No, rather that it is still too nebulous. The better the A gets, the
worse the I gets. Perhaps there is a parallel to the Heisenberg
uncertainty principle when it comes to AI.

> What do you think?

I think technology has taken us down a bad path when it comes to AI. It
has led to too many brute-force solutions that involve complex *human*
intelligence to implement. What we lost, or perhaps never had, was an
approach that succeeded in deconstructing "intelligence" into some kind
of basic building blocks that could then be used by machines. I mean,
we study all kinds of related subjects like "learning", but it is by no
measure a solved problem such that you can just wrap it up into a
software library and sell it for all manner of uses.

I think we're still approaching AI by sticking feathers together with
wax and waving our arms around. Following Moore's Law, we've managed to
build bigger wings and flap our arms faster, but that doesn't mean we're
*really* any closer to flying. What I'm pretty sure will be discovered
to be the missing piece is not some complex concept, but (in retrospect)
some simple principle that nobody bothered to think about.

--
iPhone apps that matter: http://appstore.subsume.com/
My personal UDP list: 127.0.0.1, localhost, googlegroups.com, astraweb.com,
and probably your server, too.

Curt Welch

Aug 7, 2011, 10:53:14 AM
Burkart Venzke <b...@gmx.de> wrote:
> Why don't we have a strong(er) AI by now?

Because no one has solved the puzzle yet. :)

It's been said that the field of AI is still trying to define what AI is.
There is still no general agreement on what the brain is doing, or how it's
doing it.

> What is missing?
> (A) Concept(s)?
> Or is it too complex, too voluminous?
>
> What do you think?

You can ask 100 people and get 100 answers to that one. I have my answer.

I think it's a generic reinforcement trained learning machine. Though we
know all about solving low dimension reinforcement learning problems, no
one yet has good solutions to solving high dimension real time
reinforcement learning problems - which is what the brain does in my view.
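
To make the distinction concrete, here is a rough sketch of the kind of
"low dimension" solution I mean - plain tabular Q-learning in Python.
This is only an illustration: the little "env" object (with reset(),
step() and a list of actions) is assumed, not any standard library, and
the numbers are arbitrary. The point is that the whole value table can
be enumerated, which is exactly what stops working in the high dimension
case.

  import random
  from collections import defaultdict

  def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
      # Tabular Q-learning: only practical when the state space is tiny.
      Q = defaultdict(float)              # Q[(state, action)] -> value estimate
      for _ in range(episodes):
          state = env.reset()
          done = False
          while not done:
              # epsilon-greedy action selection
              if random.random() < epsilon:
                  action = random.choice(env.actions)
              else:
                  action = max(env.actions, key=lambda a: Q[(state, a)])
              next_state, reward, done = env.step(action)
              # one-step temporal-difference update toward reward plus discounted best next value
              best_next = max(Q[(next_state, a)] for a in env.actions)
              Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
              state = next_state
      return Q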

I think the concept of what needs to be done is perfectly defined and
fairly well understood - it's just an engineering problem to find good
workable solutions to equal what the brain is doing.

The reason I think we have made so little progress in all this time is
because most people working on the problem don't (or didn't) believe human
behavior was something that could be explained by learning.  Though
researching learning was very popular in the beginning, when people were
still listening to the behaviorists like Skinner, and operant and classical
conditioning was "the thing". But then people tried to implement it and
failed. And in that failure, they concluded learning was "too simple" and
could never work.

So they turned away from the idea of generic learning, and started working
on hard-coding intelligent modules. Probably something like 95% of all AI
work over the past 60 years is hard coded modules attempting to duplicate
some small aspect of what a human has learned - like playing chess or
driving a car.

I think that approach is doomed to failure because human behavior is too
complex to be hard-coded by a human engineer like that. Those attempts
never had any chance of doing anything other than the type of things we
have seen AI turn out - machines that do really neat things, but yet, are
nowhere near the general "intelligence" of a human.

We can understand this when we look at our neural network learning
algorithms. They can be trained to do things no one is smart enough to
hand-code. They work by calculating large, complex, probability-based
mapping functions from the inputs to the outputs. It's not a function a
human can "understand" and "hard code" manually. It's a function that must
be calculated by a learning algorithm because it's basically a formula with
millions of parameters that all interact with each other. It's far, far,
far too complex to hand code.
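
As a toy illustration (nothing more than a sketch I'm making up here - the
layer sizes, learning rate and iteration count are arbitrary), this is the
kind of learned mapping I mean, in Python/NumPy. Nobody "designs" the final
weight values; the update rule finds them:

  import numpy as np

  # A tiny 2-layer net learns XOR - a mapping nobody writes down by hand.
  # Real nets have millions of these parameters, not a few dozen.
  rng = np.random.default_rng(0)
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # parameters found by learning,
  W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # not designed by an engineer

  sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
  for _ in range(5000):
      h = sigmoid(X @ W1 + b1)                    # forward pass
      out = sigmoid(h @ W2 + b2)
      d_out = (out - y) * out * (1 - out)         # backpropagate squared error
      d_h = (d_out @ W2.T) * h * (1 - h)
      W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
      W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

  print(out.round(2))                             # approaches 0, 1, 1, 0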

All we can hand-code, is little tiny subsets of the total behavior.

So the main reason AI hasn't yet been solved, is because they were working
on the wrong problem - because they thought they had proven learning was
impossible.

The problem is that the type of machine the learning algorithm must "build"
as it learns, is unlike anything we would hand-create as engineers. It's a
machine that's too complex for us to understand in any real sense. So to
solve AI, we have to build a learning algorithm, that builds for us, a
machine, we can't understand. Building working machines is hard enough,
but building a learning algorithm that is supposed to build something we
can't even understand? That's even harder.

I think in the end, the solution of how these sorts of learning algorithms
work, will be very easy to understand. I think they will turn out to be
very simple algorithms that create through experience machines that are too
complex for any human to understand.

The nature of this type of algorithm is why AI has been so hard to solve.
They create machines that are too complex for us to understand, and we have
spent far too much energy trying to understand a complexity we never had
any hope of understanding, instead of studying and understanding how that
complexity can emerge from a generic learning machine.

But over the past few decades, the field of machine learning is becoming a
focus of AI, and I think the AI community is more on the right track today
than it ever has been.

I don't think it will be long now before some strong generic learning
algorithms start to emerge and get people really excited about the
potential of that path. And once that happens, It won't be long until we
see real progress with machines that are finally approaching and eventually
exceeding all human abilities.

I think we are just a few decades away from real advances in machine
learning happening which will finally produce the dreams that everyone in
AI had 60 years ago. A lot of work looking in the wrong places just had to
be done before the right path could be found.

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

Burkart Venzke

Aug 7, 2011, 5:18:39 PM
>> Why don't we have a strong(er) AI by now?
>
> Why *should* we have it by now?

Why not? ;)
Some persons are quite optimistic.

> Just because 55 years ago some guys thought it would be easy?

Not really.
But a strong(er) AI could be a hope for a better world (yes, a dream...).

> It took thousands of years to get from Icarus to the Wright brothers

But it took only a few decades to get from planes to reaching the moon by rocket.

> and, in the end, AF looked nothing like natural flight.

Right. Therefore it is not necessary that an AI thinks with a brain with
neurons etc.

> Why assume that intelligence, which took millions more years to
> evolve to even our feeble level, is going to be an easier nut to crack?

Like flying, artificial intelligence need not be the same as human
intelligence.

>> What is missing?
>> (A) Concept(s)?
>> Or is it too complex, too voluminous?
>
> No, rather that it is still too nebulous.

We define what AI is, so it is not necessarily nebulous.

> The better the A gets, the worse the I gets.

What do you mean? Is it only your theory?

> Perhaps there is a parallel to the Heisenberg
> uncertainty principle when it comes to AI.
>
>> What do you think?
>
> I think technology has taken us down a bad path when it comes to AI. It
> has led to too many brute-force solutions that involve complex *human*
> intelligence to implement. What we lost, or perhaps never had, was an
> approach that succeeded in deconstructing "intelligence" into some kind
> of basic building blocks that could then be used by machines.

I agree.

> I mean,
> we study all kinds of related subjects like "learning", but it is by no
> measure a solved problem such that you can just wrap it up into a
> software library and sell it for all manner of uses.

I think we should first develop a basic, general model of learning as
far as possible before we build up software libraries of it.

> I think we're still approaching AI by sticking feathers together with
> wax and waving our arms around.

Yes, too much.

> Following Moore's Law, we've managed to
> build bigger wings and flap our arms faster, but that doesn't mean we're
> *really* any closer to flying. What I'm pretty sure will be discovered
> to be the missing piece is not some complex concept, but (in retrospect)
> some simple principle that nobody bothered to think about.

Right, a good theory is needed.

Burkart Venzke

Aug 7, 2011, 5:46:23 PM
On 07.08.2011 16:53, Curt Welch wrote:
> Burkart Venzke<b...@gmx.de> wrote:
>> Why don't we have a strong(er) AI by now?
>
> Because no one has solved the puzzle yet. :)

A good answer.

> It's been said that the field of AI is still trying to define what AI is.
> There is still no general agreement on what the brain is doing, or how it's
> doing it.

It is not necessary to know how the brain works if we define AI in
another way.

>> What is missing?
>> (A) Concept(s)?
>> Or is it too complex, too voluminous?
>>
>> What do you think?
>
> You can ask 100 people and get 100 answers to that one.

Yes, a big problem like problems in philosophy.

> I have my answer.

> I think it's a generic reinforcement trained learning machine. Though we
> know all about solving low dimension reinforcement learning problems, no
> one yet has good solutions to solving high dimension real time
> reinforcement learning problems - which is what the brain does in my view.
>
> I think the concept of what needs to be done is perfectly defined and
> fairly well understood - it's just an engineering problem to find good
> workable solutions to equal what the brain is doing.

Then we basically only have to wait for the engineers?

> The reason I think we have made so little progress in all this time is
> because most people working on the problem don't (or didn't) believe human
> behavior was something that could be explained by learning.

You mean that they are working only on weak AI?

> Though
> researching learning was very popular in the beginning, when people were
> still listening to the behaviorists like Skinner, and operant and classical
> conditioning was "the thing". But then people tried to implement it and
> failed. And in that failure, they concluded learning was "too simple" and
> could never work.

Perhaps they are able to change their mind.

> So they turned away from the idea of generic learning, and started working
> on hard-coding intelligent modules. Probably something like 95% of all AI
> work over the past 60 years is hard coded modules attempting to duplicate
> some small aspect of what a human has learned - like playing chess or
> driving a car.
>
> I think that approach is doomed to failure because human behavior is too
> complex to be hard-coded by a human engineer like that. Those attempts
> never had any chance of doing anything other than the type of things we
> have seen AI turn out - machines that do really neat things, but yet, are
> nowhere near the general "intelligence" of a human.

I agree with you.

> We can understand this when we look at our neural network learning
> algorithms. They can be trained to do things no one is smart enough to
> hand-code. They work by calculating large, complex, probability-based
> mapping functions from the inputs to the outputs. It's not a function a
> human can "understand" and "hard code" manually. It's a function that must
> be calculated by a learning algorithm because it's basically a formula with
> millions of parameters that all interact with each other. It's far, far,
> far too complex to hand code.
>
> All we can hand-code, is little tiny subsets of the total behavior.
>
> So the main reason AI hasn't yet been solved, is because they were working
> on the wrong problem - because they thought they had proven learning was
> impossible.

Not the best goal to prove ;)

> The problem is that the type of machine the learning algorithm must "build"
> as it learns, is unlike anything we would hand-create as engineers. It's a
> machine that's too complex for us to understand in any real sense. So to
> solve AI, we have to build a learning algorithm, that builds for us, a
> machine, we can't understand. Building working machines is hard enough,
> but building a learning algorithm that is supposed to build something we
> can't even understand? That's even harder.

Hm, you know... I am not a fan of rebuilding the brain or its neural
structures, where the details really cannot be understood in every detail.
But you are right, it is not necessary to understand all details precisely.

> I think in the end, the solution of how these sorts of learning algorithms
> work, will be very easy to understand. I think they will turn out to be
> very simple algorithms that create through experience machines that are too
> complex for any human to understand.

Could they be symbolic (as opposed to neural) in your mind?

> The nature of this type of algorithm is why AI has been so hard to solve.
> They create machines that are too complex for us to understand, and we have
> spent far too much energy trying to understand a complexity we never had
> any hope of understanding, instead of studying and understanding how that
> complexity can emerge from a generic learning machine.
>
> But over the past few decades, the field of machine learning is becoming a
> focus of AI, and I think the AI community is more on the right track today
> than it ever has been.
>
> I don't think it will be long now before some strong generic learning
> algorithms start to emerge and get people really excited about the
> potential of that path. And once that happens, It won't be long until we
> see real progress with machines that are finally approaching and eventually
> exceeding all human abilities.
>
> I think we are just a few decades away from real advances in machine
> learning happening which will finally produce the dreams that everyone in
> AI had 60 years ago. A lot of work looking in the wrong places just had to
> be done before the right path could be found.

Let us hope we will still live to see it.

Burkart Venzke

Curt Welch

Aug 8, 2011, 10:36:49 AM
Burkart Venzke <b...@gmx.de> wrote:
> On 07.08.2011 16:53, Curt Welch wrote:
> > Burkart Venzke<b...@gmx.de> wrote:
> >> Why don't we have a strong(er) AI by now?
> >
> > Because no one has solved the puzzle yet. :)
>
> A good answer.
>
> > It's been said that the field of AI is still trying to define what AI
> > is. There is still no general agreement on what the brain is doing, or
> > how it's doing it.
>
> It is not necessary to know how the brain works if we define AI in
> another way.

Well, to me the "real AI" we are after is making machines that can replace
humans at any task that currently only a human can do. If a company has to
hire a human to do a job, because no one knows how to make a machine that
can perform the same job, then we have not yet solved the AI problem.

And in that definition, I choose to rule out the biological functions
humans are hired to do, like donate blood, and only include the tasks that
a machine with the right control system should, in theory, be able to do.

We don't need to know how the brain works to solve this problem, but we do
need to build a machine that is as good as the brain - and odds are, by the
time we solve this problem, we will at the same time, have figured out most
of how the brain works.

> >> What is missing?
> >> (A) Concept(s)?
> >> Or is it too complex, too voluminous?
> >>
> >> What do you think?
> >
> > You can ask 100 people and get 100 answers to that one.
>
> Yes, a big problem like problems in philosophy.
>
> > I have my answer.
>
> > I think it's a generic reinforcement trained learning machine. Though
> > we know all about solving low dimension reinforcement learning
> > problems, no one yet has good solutions to solving high dimension real
> > time reinforcement learning problems - which is what the brain does in
> > my view.
> >
> > I think the concept of what needs to be done is perfectly defined and
> > fairly well understood - it's just an engineering problem to find good
> > workable solutions to equal what the brain is doing.
>
> Then we basically only have to wait for the engineers?

I think so.

> > The reason I think we have made so little progress in all this time is
> > because most people working on the problem don't (or didn't) believe
> > human behavior was something that could be explained by learning.
>
> You mean that they are working only on weak AI?

Ah, I totally missed the fact that you used the word "strong AI" in your
subject and that you might actually have been asking about the mind body
problem.

I don't believe in the strong vs weak AI position. Humans are just
machines we are trying to duplicate the function of.

> > Though
> > researching learning was very popular in the beginning, when people
> > were still listening to the behaviorists like Skinner, and operant and
> > classical conditioning was "the thing". But then people tried to
> > implement it and failed. And in that failure, they concluded learning
> > was "too simple" and could never work.
>
> Perhaps they are able to change their mind.

Of course. And if I'm right, they will change their mind very quickly when
they are shown the obvious proof of a generic learning machine that acts
"alive". People have no problem changing their mind when shown obvious
strong evidence.

> > So they turned away from the idea of generic learning, and started
> > working on hard-coding intelligent modules. Probably something like
> > 95% of all AI work over the past 60 years is hard coded modules
> > attempting to duplicate some small aspect of what a human has learned -
> > like playing chess or driving a car.
> >
> > I think that approach is doomed to failure because human behavior is
> > too complex to be hard-coded by a human engineer like that. Those
> > attempts never had any chance of doing anything other than the type of
> > things we have seen AI turn out - machines that do really neat things,
> > but yet, are nowhere near the general "intelligence" of a human.
>
> I agree with you.
>
> > We can understand this when we look at our neural network learning
> > algorithms. They can be trained to do things no one is smart enough
> > to hand-code. They work by calculating large, complex, probability-based
> > mapping functions from the inputs to the outputs. It's not a
> > function a human can "understand" and "hard code" manually. It's a
> > function that must be calculated by a learning algorithm because it's
> > basically a formula with millions of parameters that all interact
> > with each other. It's far, far, far too complex to hand code.
> >
> > All we can hand-code, is little tiny subsets of the total behavior.
> >
> > So the main reason AI hasn't yet been solved, is because they were
> > working on the wrong problem - because they thought they had proven
> > learning was impossible.
>
> Not the best goal to prove ;)

Well, more accurate is probably for me to say "they felt it highly unlikely
that generic learning alone was the answer, so they followed the path that
looked more likely to them."

> > The problem is that the type of machine the learning algorithm must
> > "build" as it learns, is unlike anything we would hand-create as
> > engineers. It's a machine that's too complex for us to understand in
> > any real sense. So to solve AI, we have to build a learning algorithm,
> > that builds for us, a machine, we can't understand. Building working
> > machines is hard enough, but building a learning algorithm that is
> > supposed to build something we can't even understand? That's even
> > harder.
>
> Hm, you know... I am not a fan of rebuilding the brain or its neural
> structures, where the details really cannot be understood in every detail.
> But you are right, it is not necessary to understand all details
> precisely.
>
> > I think in the end, the solution of how these sorts of learning
> > algorithms work, will be very easy to understand. I think they will
> > turn out to be very simple algorithms that create through experience
> > machines that are too complex for any human to understand.
>
> Could they be symbolic (as opposed to neural) in your mind?

Well depends on what you mean by "symbolic". Digital computers are
symbolic from the ground up (1 and 0 symbols) so everything they do,
including neural nets, is symbolic at the core.

The "symbols" that make up our language (words) are not a foundation of the
brain, they are a high level emergent behavior of the lower level
processing that happens. But we can argue that neural pulses are
themselves low level symbols. You could even go lower and argue the
molecules are even lower level symbols in the machines.

I don't think the word "symbol" has much useful meaning here to define a
general class of machines.

> > The nature of this type of algorithm is why AI has been so hard to
> > solve. They create machines that are too complex for us to understand,
> > and we have spent far too much energy trying to understand a complexity
> > we never had any hope of understanding, instead of studying and
> > understanding how that complexity can emerge from a generic learning
> > machine.
> >
> > But over the past few decades, the field of machine learning is
> > becoming a focus of AI, and I think the AI community is more on the
> > right track today than it ever has been.
> >
> > I don't think it will be long now before some strong generic learning
> > algorithms start to emerge and get people really excited about the
> > potential of that path. And once that happens, It won't be long until
> > we see real progress with machines that are finally approaching and
> > eventually exceeding all human abilities.
> >
> > I think we are just a few decades away from real advances in machine
> > learning happening which will finally produce the dreams that everyone
> > in AI had 60 years ago. A lot of work looking in the wrong places just
> > had to be done before the right path could be found.
>
> Let us hope we will still live to see it.

I certainly hope that's true.

> Burkart Venzke

Doc O'Leary

Aug 8, 2011, 11:41:19 AM
In article <j1mvfg$hh7$1...@news.albasani.net>,
Burkart Venzke <b...@gmx.de> wrote:

> >> Why don't we have a strong(er) AI by now?
> >
> > Why *should* we have it by now?
>
> Why not? ;)
> Some persons are quite optimistic.

Science needs to be about evidence, not blind optimism.

> > It took thousands of years to get from Icarus to the Wright brothers
>
> But it took only a few decades to get from planes to reaching the moon by rocket.

The current path of AI is not planes to rockets. Like I said, it's
still at the wax and feathers stages.

> > and, in the end, AF looked nothing like natural flight.
>
> Right. Therefore it is not necessary that an AI thinks with a brain with
> neurons etc.

Wrong angle. My intent was to point out that the *methods* for
achieving flight differed from what humans naively assumed from the
start. Likewise, AI has come to employ a lot of problem solving
techniques that really have little to do with uncovering the nature of
intelligence.

> > Why assume that intelligence, which took millions more years to
> > evolve to even our feeble level, is going to be an easier nut to crack?
>
> Like flying, artificial intelligence need not be the same as human
> intelligence.

But it still requires an understanding of what intelligence is, such
that we can say any particular system, human or machine, has it.

> We define what AI is, so it is not necessarily nebulous.

Hardly. Intelligence is still in the realm of pornography: we know it
when we see it. We still have no formal definition for generic
intelligence that allows us to reasonably approach the problem of
building an artificial system to represent it.

> > The better the A gets, the worse the I gets.
>
> What do you mean? Is it only your theory?

No, it is my observation. From Deep Blue to Watson on Jeopardy (hate to
pick on IBM, but they deserve it most), the AI community should be
*embarrassed* about how little scientific value is extracted from those
endeavors. That is to say, we have thrown all sorts of modern hardware
and human intelligence into writing programs to *win* at chess, but to
what scientifically useful end? Did the machines teach us more about
chess as a result? Or game playing techniques in general? Did we learn
*anything* about intelligence in general? No, we just threw a lot of A
at the problem of winning one particular game, and as a result got that
much less I in return.

> > I mean,
> > we study all kinds of related subjects like "learning", but it is by no
> > measure a solved problem such that you can just wrap it up into a
> > software library and sell it for all manner of uses.
>
> I think we should first develop a basic, general model of learning as
> far as possible before we build up software libraries of it.

Of course. My point is that we haven't gotten even that far. We have
nebulous words like "learning" for what we think is part of intelligent
behavior, but we have no solid understanding about what mechanisms will
achieve it such that it results in strong AI.

Curt Welch

Aug 8, 2011, 4:22:04 PM

That's not really true. We have a few of them. What we don't have, is
consensus on whether they are correct. They are correct however. :)

http://www.hutter1.net/ai/uaibook.htm

> > > The better the A gets, the worse the I gets.
> >
> > Would do you mean? It is only your theory?
>
> No, it is my observation. From Deep Blue to Watson on Jeopardy (hate to
> pick on IBM, but they deserve it most), the AI community should be
> *embarrassed* about how little scientific value is extracted from those
> endeavors. That is to say, we have thrown all sorts of modern hardware
> and human intelligence into writing programs to *win* at chess, but to
> what scientifically useful end? Did the machines teach us more about
> chess as a result? Or game playing techniques in general? Did we learn
> *anything* about intelligence in general? No, we just threw a lot of A
> at the problem of winning one particular game, and as a result got that
> much less I in return.
>
> > > I mean,
> > > we study all kinds of related subjects like "learning", but it is by
> > > no measure a solved problem such that you can just wrap it up into a
> > > software library and sell it for all manner of uses.
> >
> > I think we should first develop a basic, general model of learning as
> > far as possible before we build up software libraries of it.
>
> Of course. My point is that we haven't gotten even that far. We have
> nebulous words like "learning" for what we think is part of intelligent
> behavior, but we have no solid understanding about what mechanisms will
> achieve it such that it results in strong AI.

We had it 50 years ago when the behaviorists uncovered the principles of
operant and classical conditioning. The correct definition of intelligence
has been around longer than the field of AI has. People just refuse to see
the truth because they are deceived about what they are. They aren't
willing to accept that they are nothing more than rather trivially simple
operant conditioned machines.

"Learning" is not in any sense a nebulous word when it's used formally such
as in Hutter's book, or any book that defines reinforcement learning, or in
the context of operant and classical conditioning. They are precisely
defined concepts.

We are not lacking the high level concepts of what intelligence is. We are
only lacking workable engineering solutions, and a consensus that these
concepts are the correct ones.

AI seems to be much like breaking a cipher by trying to guess passwords.
There is nothing to show you how close you are. You can have 19 of the 20
letters of the password correct, and it looks just as far off base, as when
you have every letter wrong. Close doesn't seem to count in AI. Either
you have the answer right, or you have almost nothing to show.

I think this effect also misleads people into assuming we still have a very
long way to go, because they see no real "intelligence" in all the work
done in AI. That's only true if the solution is found by following a long
general slope up to the answer. But I don't think AI is like that. I think
it's a pole hidden in the middle of the solution plane. I think some
people are very close to the solution already, but what they have achieved
doesn't "look" intelligent yet because one out of 20 letters is still
wrong.

I think "real AI" is going to catch most people off guard because of this.
I think it's going to explode into society far faster than almost everyone
expects.

Doc O'Leary

Aug 9, 2011, 12:28:15 PM
In article <20110808162204.651$e...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> > Hardly. Intelligence is still in the realm of pornography: we know it
> > when we see it. We still have no formal definition for generic
> > intelligence that allows us to reasonably approach the problem of
> > building an artificial system to represent it.
>
> That's not really true. We have a few of them. What we don't have, is
> consensus on whether they are correct. They are correct however. :)

You don't need consensus when the science works. And, like I said, the
starting point needs to focus on *intelligence*, not any particular
system of computation or mathematics. From a philosophical standpoint,
even the Turing Test is a failed measure of intelligence, because it
depends on a human of questionable intelligence to be the judge.

More to the point, I'll believe "they are correct" when I start seeing
results. I don't even need strong AI! Where are the AI systems that
just play *games* well, rather than being hard-coded to play particular
games like checkers or chess or go? Where are even the chess-centric AI
that can teach *me* how to play more intelligently? Surely that's not
asking too much from anyone who claims to have solved the intelligence
problem.

> > Of course. My point is that we haven't gotten even that far. We have
> > nebulous words like "learning" for what we think is part of intelligent
> > behavior, but we have no solid understanding about what mechanisms will
> > achieve it such that it results in strong AI.
>
> We had it 50 years ago when the behaviorists uncovered the principles of
> operant and classical conditioning. The correct definition of intelligence
> has been around longer than the field of AI has. People just refuse to see
> the truth because they are deceived about what their are. They aren't
> willing to accept that they are nothing more than rather trivially simple
> operant conditioned machine.

Acceptance is not necessary if you can produce results. Your claim of
it being a solved problem for 50 years is bunk, because we don't appear
to be any closer to AI despite *huge* advances in hardware. It is you
who is refusing to see that intelligence is beyond your grasp, and your
certainty that you've found "the truth" is ironically the thing that
will keep you from actually finding it.

> "Learning" is not in any sense a nebulous word when it's used formally such
> as in Hutter's book, or any book that defines reinforcement learning, or in
> the context of operant and classical conditioning. They are precisely
> defined concepts.

No, they are circular definitions when formulated that way. From the
perspective of intelligent behavior, learning is a multi-layered
concept. It even seems like learning may be closely tied to a system's
knowledge representation, such that you *can't* isolate them (other than
as an abstraction).

> We are not lacking the high level concepts of what intelligence is. We are
> only lacking workable engineering solutions, and a consensus that these
> concepts are the correct ones.

Then, do tell, what is intelligence? Does it in some way involve your
precious "consensus" opinions, much like the world being flat or at
center of the Universe? Or does it cover those areas where true
intelligence differs from said consensus?

> AI seems to be much like breaking a cipher by trying to guess passwords.
> There is nothing to show you how close you are. You can have 19 of the 20
> letters of the password correct, and it looks just as far off base, as when
> you have every letter wrong. Close doesn't seem to count in AI. Either
> you have the answer right, or you have almost nothing to show.

It is convenient to believe that when you have nothing to show, but that
doesn't make it true. I think we could all agree that a dog would not
pass the Turing Test, but that doesn't mean they don't have a degree of
intelligence, and even that different dogs will show different degrees.

I say that, done properly, AI is nothing like breaking a cipher.
Intelligence is rated on a scale, not flipped like a switch. If you're
not seeing that, you might be doing some nifty mathematical work or
coding some very powerful problem solving algorithms, but you're not
doing AI.

> I think "real AI" is going to catch most people off guard because of this.
> I think it's going to explode into society far faster than almost everyone
> expects.

You are wrong. The only way that is going to happen is if a completely
new breakthrough happens that discards all the nonsense that has gone
into the hard-wired solutions we have today. I'm not sure anyone is
actually working on intelligence anymore; too many, like you, think it
is a solved problem, when it is nothing of the sort.

Curt Welch

Aug 9, 2011, 2:26:40 PM
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110808162204.651$e...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > > Hardly. Intelligence is still in the realm of pornography: we know
> > > it when we see it. We still have no formal definition for generic
> > > intelligence that allows us to reasonably approach the problem of
> > > building an artificial system to represent it.
> >
> > That's not really true. We have a few of them. What we don't have, is
> > consensus on whether they are correct. They are correct however. :)
>
> You don't need consensus when the science works. And, like I said, the
> starting point needs to focus on *intelligence*, not any particular
> system of computation or mathematics. From a philosophical standpoint,
> even the Turing Test is a failed measure of intelligence, because it
> depends on a human of questionable intelligence to be the judge.

Yes. The Turing test was only an attempt by Turing to get people away from
the mind body problem and to see intelligence as a problem of behavior, not
a problem of having a soul. His test defines intelligence totally in terms
of behavior. It uses the "soul" of one person, to test the "intelligence"
of a machine. It was a thought experiment to make it clear that any notion
of "soul" was not important to defining intelligence, and that it could be
defined 100% in terms of behavior.

> More to the point, I'll believe "they are correct" when I start seeing
> results.

Which is exactly the point of the Turing test. It was based on the
concept that a human could know "intelligence" just by looking at behavior.

> I don't even need strong AI! Where are the AI systems that
> just play *games* well, rather than being hard-coded to play particular
> games like checkers or chess or go? Where are even the chess-centric AI
> that can teach *me* how to play more intelligently? Surely that's not
> asking too much from anyone who claims to have solved the intelligence
> problem.

I made no such claim. AI has not been solved, which is why you do not yet
see any of those things you mentioned. I said the problem that needs to be
solved has been well DEFINED - not solved. We know what the problem of
"intelligence" is. At least I do, and a few others.

> > > Of course. My point is that we haven't gotten even that far. We
> > > have nebulous words like "learning" for what we think is part of
> > > intelligent behavior, but we have no solid understanding about what
> > > mechanisms will achieve it such that it results in strong AI.
> >
> > We had it 50 years ago when the behaviorists uncovered the principles
> > of operant and classical conditioning. The correct definition of
> > intelligence has been around longer than the field of AI has. People
> > just refuse to see the truth because they are deceived about what they
> > are. They aren't willing to accept that they are nothing more than
> > rather trivially simple operant conditioned machines.
>
> Acceptance is not necessary if you can produce results. Your claim of
> it being a solved problem for 50 years is bunk, because we don't appear
> to be any closer to AI despite *huge* advances in hardware.

You failed to read my words correctly.

The "It" I said "we had" was NOT A SOLUTION TO AI. I was responding to
your remark when you said " We still have no formal definition".

We had a formal definition 60 years ago. Not the solution as how to build
it.

> It is you
> who is refusing to see that intelligence is beyond your grasp, and your
> certainty that you've found "the truth" is ironically the thing that
> will keep you from actually finding it.

It will be funny to see you have to eat your words (if we both live long
enough to get to that point).

> > "Learning" is not in any sense a nebulous word when it's used formally
> > such as in Hutter's book, or any book that defines reinforcement
> > learning, or in the context of operant and classical conditioning.
> > They are precisely defined concepts.
>
> No, they are circular definitions when formulated that way.

There is nothing in the least bit circular about them. Show me how the
definition of reinforcement learning is circular:

http://en.wikipedia.org/wiki/Reinforcement_learning

The basic reinforcement learning model consists of:
a set of environment states S;
a set of actions A;
rules of transitioning between states;
rules that determine the scalar immediate reward of a transition; and
rules that describe what the agent observes.
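
As a rough sketch (just illustrative Python - the names are mine, not from
the article), that model is nothing more than:

  from dataclasses import dataclass
  from typing import Callable, List

  @dataclass
  class RLProblem:
      states: List[str]                         # the set of environment states S
      actions: List[str]                        # the set of actions A
      transition: Callable[[str, str], str]     # rules of transitioning between states
      reward: Callable[[str, str, str], float]  # scalar immediate reward of a transition
      observe: Callable[[str], str]             # rules for what the agent observes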

Where exactly in such a model is the circular definition you speak of?

You are speaking nonsense.

> From the
> perspective of intelligent behavior, learning is a multi-layered
> concept. It even seems like learning may be closely tied to a system's
> knowledge representation, such that you *can't* isolate them (other than
> as an abstraction).

May be closely tied? It's not just "closely tied", it's one and the same
thing. You can't fully specify any learning system without at the same time
fully specifying its internal knowledge representation system. Knowledge and
hardware are one and the same thing; they cannot be separated (despite all
the nonsensical confusion that exists in our society over dualism).

When learning is specified at the high level as operant conditioning, there
is no ERROR of any type in the specification, nor is it a circular
definition. There is only a lack of detail. It's the same lack of detail
we use in ALL specifications.  If I specify how a steam engine works, at
the high level by talking about water being heated which creates pressure,
I've left out all the details about how to build a working steam engine.
But I have made no error in my specification.

Operant conditioning and reinforcement learning specifications are the
same. They are high level specifications of the learning system that needs
to be built, but they leave out important details.

It so happens that the missing details of these high level descriptions are
critical to building learning machines that match human brain performance.
Without workable solutions to fill in these missing details, we don't have
a machine that looks in much of any sense "human like". And that's the
missing "proof" someone like you needs. If you don't see it "acting like a
human" you don't believe it's right.

It's totally valid to be skeptical because those details are missing. But
it's invalid to do what you are doing here - denying the facts we do have
and making up nonsense to try and discount the truth to fit your personal
views.

> > We are not lacking the high level concepts of what intelligence is. We
> > are only lacking workable engineering solutions, and a consensus that
> > these concepts are the correct ones.
>
> Then, do tell, what is intelligence?

Haven't I already answered that in this thread? Did you bother to read
what I wrote before you started to make fun of me?

I broaden the concept to what best fits reality. I define it as "any
reinforcement learning machine". That broadens it to fit many machine that
almost no one but me, would call "intelligent" - so my definition of the
word is not a good fit to common usage.

Common usage defines it as basically "what humans can do". Only when we
see machines acting like humans, will most people then say "that machine
seems intelligent".

The definition of intelligence from common usage (how the word is actually
defined in our society), is of no use at all in building machines that act
like humans, nor is it of any use in describing what is unique about such
machines.

> Does it in some way involve your
> precious "consensus" opinions, much like the world being flat or at
> center of the Universe? Or does it cover those areas where true
> intelligence differs from said consensus?

Dualism is an illusion that many people are tricked by. They think there
is something in them (call it a soul, or subjective experience, or
consciousnesses - it makes no difference), that is separate from their
body.

The belief that there is ANY type of separation is an illusion and does
not in fact exist in any form in this universe. Only the illusion itself
exists. And as such, it cannot, and will never, be part of the AI hardware we
build.

The solution to AI will just be a reinforcement trained robot that has the
learning powers of humans - and with that power, it will learn to act in
the same complex ways humans learn to act - as we are acting as we write
the messages for example.

> > AI seems to be much like breaking a cipher by trying to guess
> > passwords. There is nothing to show you how close you are. You can
> > have 19 of the 20 letters of the password correct, and it looks just as
> > far off base, as when you have every letter wrong. Close doesn't seem
> > to count in AI. Either you have the answer right, or you have almost
> > nothing to show.
>
> It is convenient to believe that when you have nothing to show, but that
> doesn't make it true.

We have TONS to show. But some people, aren't able to understand the
facts, and are of the type who can't deal well with abstractions, and need
to "see it" before they will understand it.

I'm not one of those people. But most people are in fact like that. Most
people have fairly weak powers of abstraction and are "show me" types of
people.

Those people, will have to wait until AFTER it's solved by someone else,
before they will be able to understand the solution.

> I think we could all agree that a dog would not
> pass the Turing Test, but that doesn't mean they don't have a degree of
> intelligence, and even that different dogs will show different degrees.

Yes, because you are using the "acts like a human" definition that the
"show me" people like to use. And we all agree, that dogs act like humans
in some important ways, that none of our machines are able to act yet.
Same is true for rats.

> I say that, done properly, AI is nothing like breaking a cipher.
> Intelligence is rated on a scale, not flipped like a switch. If you're
> not seeing that, you might be doing some nifty mathematical work or
> coding some very powerful problem solving algorithms, but you're not
> doing AI.

Huh? You "show me" people need everything so carefully explained don't
you?

I define intelligence as "any reinforcement learning machine". Do you
honestly think I believe that all reinforcement learning machines show the
same LEVEL of performance? Of course not. They exist in a huge range of
performance abilities that are also tied to, and defined by, the type of
environment they are connected with. There is an infinite (effectively
infinite at least) number of reinforcement learning machines and
environments. So of course my definition of intelligence is not "a switch".

So, let me again, explain why there is a "switch" at work here.

The human brain is a reinforcement learning machine that works in a very
specific type of environment. One in which the sensory data does not
have the Markov property (which means the sensory data does not fully
specify the full state of the environment), and one in which the state
space of the environment is high dimension - so huge that it is far past
impossible to represent the state space of the environment inside the
machine.

This means it must learn using a model of the environment which is only a
small fractional sub-set of the state space of the full environment, and
that its sensory data is only sensing a small fraction of that
environment.
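
A minimal sketch of what I mean by non-Markov sensory data (the example is
made up purely to illustrate): the observation is a lossy function of the
state, so many different underlying states look identical to the agent.

  # Illustrative only: the sensor reports a coarse view of position, while
  # velocity, hunger, and everything else stay invisible to the agent.
  state = {"position": (412, 97), "velocity": (1, -3), "hunger": 0.7}

  def observe(state):
      x, y = state["position"]
      return (x // 100, y // 100)

  print(observe(state))   # (4, 0) - countless distinct states map to this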

This is a class of reinforcement learning problem we basically have NO
working solution for (our engineers, that is). But yet, the brain does
implement a working solution to it - and it's easy to show that it is
solving just that type of problem. That's what classical conditioning
shows us.

So the solution to AI is well understood by people that understand these
learning problems. Someone has to figure out how to build a reinforcement
learning algorithm, that works well, in just such a domain. Many people
are working on that problem as we speak. Papers are being published about
advances on that problem on a regular basis these days.

Once anyone figures out how to solve this well defined, and well understood
class of problems, then it will become obvious to the "show me" people,
that something special is happening in this approach. Either you have
something that learns well in that environment, or you don't. It's not an
"almost" type of thing.

It's a scaling problem that must be solved by engineering, not a concept
problem.

It's like understanding how a bubble sort algorithm works, but knowing that a
given class of problems can't be solved unless you can replace that O(n^2)
bubble sort algorithm with one that works in at least O(n log n) time.
But until someone "invents" a quicksort, the "show me" people just keep
saying "we aren't even close! No one has a clue what is needed!" The "show
me" people spout the "show me" fact that "sorting takes O(n^2)" because that
is the only type of sorting they have ever been shown.

But the working human brain is the "proof" that the algorithm we are
looking for can be built. So we aren't just "guessing" that an O(n log n)
might exist and trying to find it for the fun of it. We conceptually already
understand exactly what the brain is doing - that is, we understand what we
don't know how to build, but what we must build, to duplicate its power - a
reinforcement learning machine that operates in a high dimension,
non-Markov, real time environment, and which can learn at the speed the
human brain does in such an environment.

It's a sudden step because, like sorting, there is no "gradual
improvement" from O(n^2) to O(n log n). The first means a large data set
takes a day to sort, and the second means it can be done in 5 minutes.
There are not an infinite number of algorithms between the two. That is
where we are with reinforcement learning algorithms (only worse). The only
ones we have now would take millions of years, using all the computing
power on the earth, to learn what a human brain is able to learn in a year.
Until we learn how to build a learning machine with this power, AI will
never be solved. Like with sorting, there probably are not a large number
of different machine designs between "brain speed learning" and our current
bubble-sort-like reinforcement learning algorithms.
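
To put rough, purely illustrative numbers on that: at a billion comparisons
per second, sorting n = 10^8 items with an O(n^2) algorithm needs about
10^16 comparisons - on the order of 100 days - while n log2 n is only about
2.7 x 10^9 comparisons, a few seconds. There is no smooth ladder of
algorithms that walks you from one to the other.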

The problem we have to solve is well defined and understood (at least by
some). The machine design that solves it is still MIA.

> > I think "real AI" is going to catch most people off guard because of
> > this. I think it's going to explode into society far faster than almost
> > everyone expects.
>
> You are wrong. The only way that is going to happen is if a completely
> new breakthrough happens that discards all the nonsense that has gone
> into the hard-wired solutions we have today. I'm not sure anyone is
> actually working on intelligence anymore; too many, like you, think it
> is a solved problem, when it is nothing of the sort.

You can't even read and understand what I've written, let alone make good
predictions. I don't think it's solved. I never said it was solved. I
think we know exactly, however, what the problem is that needs to be solved.
I think we (some of us at least) know, and have known for a long time, what
human intelligence is (nothing more than a good operant conditioned learning
machine that controls how our arms and legs move).

There are plenty of people working on advancing machine learning these
days.

Just one of thousands of examples:

http://www.csml.ucl.ac.uk/courses/msc_ml/

'Making a machine that learns is the first step towards making a machine
that thinks.' Dr. Peter J. Bentley, Honorary Senior Research
Fellow/Writer, Department of Computer Science, University College
London

We know exactly what learning powers the brain has, and no one has solved
the puzzle of making a machine duplicate those learning powers in the
high dimension real time domain the brain operates in. Until we do, AI
will never be solved. A lot of people understand this class of learning
problem, and are working on finding that solution. It won't elude us
forever.

casey

Aug 10, 2011, 5:45:11 AM
On Aug 10, 4:26 am, c...@kcwc.com (Curt Welch) wrote:
> The human brain is a reinforcement learning machine that works
> in a very specific type of environment.

And that is an evolving social environment. An individual human
brain is limited but as a component in this "global brain" it
can be programmed with advanced thinking skills that are not
learned by a blank slate in the lifetime of an individual but
are inherited via language the same way the behaviors we share
with animals are inherited via the dna code.

I don't believe we need to conjure up any "high dimensional
solving skills" as without this global brain the individual brain
doesn't really show much advanced intelligence. We have an innate
ability to learn a language which is the mechanism that enables
the global brain to become smarter over many generations by using
the brains of each new generation of humans.

> But the working human brain is the "proof" that the algorithm we
> are looking for can be built. ... we must build, to duplicate
> its power - a reinforcement learning machine that operates in a
> high dimension, non-Markov, real time environment, and which can
> learn at the speed the human brain does in such an environment.

I think the "high dimensional problem" is solved the same way it
is solved for biological bodies, it evolves in working stages and
is passed down by some symbolic system, dna for the brains and
language for the global brain.

Once the global brain has learned something new like making fire
or the wheel it can pass that on to the next generation where
future brains can add to that knowledge in small working increments
for no brain can learn much without accessing this body of knowledge.
Conditioning is as simple in humans as it is in dogs or rats; our
real skill is the ability to absorb the ever evolving knowledge of
the global brain.

Just as we have the innate ability to learn a language - the power
that allows our society to evolve - we also have the innate modules
that allow us and other animals to "see". These modules do require
a visual input, just as our language learning modules need language
as input. Learning to see relies on the innate modules coded in the
dna code the same way learning to do logic requires the knowledge
coded in the language code.

jc

Curt Welch

Aug 10, 2011, 11:31:19 AM
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 10, 4:26 am, c...@kcwc.com (Curt Welch) wrote:
> > The human brain is a reinforcement learning machine that works
> > in a very specific type of environment.
>
> And that is an evolving social environment. An individual human
> brain is limited but as a component in this "global brain" it
> can be programmed with advanced thinking skills that are not
> learned by a blank slate in the lifetime of an individual but
> are inherited via language the same way the behaviors we share
> with animals are inherited via the dna code.
>
> I don't believe we need to conjure up any "high dimensional
> solving skills" as without this global brain the individual brain
> doesn't really show much advanced intelligence. We have an innate
> ability to learn a language which is the mechanism that enables
> the global brain to become smarter over many generations by using
> the brains of each new generation of humans.
>
> > But the working human brain is the "proof" that the algorithm we
> > are looking for can be built. ... we must build, to duplicate
> > its power - a reinforcement learning machine that operates in a
> > high dimension, non-Markov, real time environment, and which can
> > learn at the speed the human brain does in such an environment.
>
> I think the "high dimensional problem" is solved the same way it
> is solved for biological bodies, it evolves in working stages and
> is passed down by some symbolic system, dna for the brains and
> language for the global brain.

Yeah, well, except that is NOT the high dimensional problem I speak of, John.

I'm talking about the power the brain has to perform high dimensional
learning after birth. I'm not talking about the millions of years it took
to evolve the learning machine into its current form (which obviously also
had to happen). Evolution is a learning process but it's one that is
separate from the learning machine evolution built.

We know the brain is a learning machine, because we can test, with ease,
its power to learn. And when we do that, we see it can solve learning
problems that none of our machines can yet solve. Until we build a machine
that matches the generic learning powers of the human brain, we won't have
solved AI.

> Once the global brain has learned something new like making fire
> or the wheel it can pass that on to the next generation where
> future brains can add to that knowledge in small working increments
> for no brain can learn much without accessing this body of knowledge.
> Conditioning is as simple in humans as it is in dogs or rats; our
> real skill is the ability to absorb the ever evolving knowledge of
> the global brain.

Yes, the global brain is YET ANOTHER machine, and again, NOT THE ONE I WAS
TALKING ABOUT. You can't build a global brain with all its advanced
learning power until you first build the individual brain with its own
power.

A typical adult human has a level of complexity in their behavior (notice
that I am not calling it INTELLIGENCE because that is not consistent with
how I use the word), which is conditioned into us by our environment. That
environment is "modern society" with all its social norms and conventions
and past learned knowledge. If you isolate a baby from that environment,
it won't learn to be a "normal adult human". But that doesn't make the
baby any less intelligent in my definition because intelligence is the
ability to learn (as I use the word), not what we learned because of the
environment we exist in.

To solve AI, we have to build the learning agent. We already have the
environment (human society) so there is nothing there we need to create to
solve AI. Build the correct learning agent, let it interact with society,
and it will, like humans, develop complex behaviors that people will call
"intelligent".

> Just as we have the innate ability to learn a language - the power
> that allows our society to evolve - we also have the innate modules
> that allow us and other animals to "see". These modules do require
> a visual input, just as our language learning modules need language
> as input. Learning to see relies on the innate modules coded in the
> dna code the same way learning to do logic requires the knowledge
> coded in the language code.

All those "modules" of yours use the same generic learning ability to make
them work. Until we figure out how to duplicate that ability, we won't be
able to make ANY of those modules perform as well as the ones in our brain.

Doc O'Leary

Aug 10, 2011, 1:21:35 PM
to
In article <20110809142640.115$n...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> Yes. The Turing test was only an attempt by Turing to get people away from
> the mind body problem and to see intelligence as a problem of behavior, not
> a problem of having a soul.

And, like I said, the problem is that the thought experiment became a
misguided foundation for too many research directions. People
mistakenly got the notion that a program that wins at chess is thus
intelligent, when nothing could be further from the truth. It's not
just the "AI effect", either, but the whole notion of the end behavior
being the definition that is wrong. It is all well and fine to have a
*test* of intelligence be based on behavior, but it is a false start (to
the point of being a logical fallacy) to *define* intelligence that way.

> We know what the problem of
> "intelligence" is. At least I do, and a few others.

Everyone else eagerly awaits your cogent definition. Do tell.

> We had a formal definition 60 years ago. Not the solution as how to build
> it.

We may have had a philosophical idea of machine intelligence for that
long (longer, if you count more abstract notions centuries before the
invention of computers), but no real formal definition. Nothing that
has step-wise led us to our goal, as would be expected from proper
science.

> > It is you
> > who is refusing to see that intelligence is beyond your grasp, and your
> > certainty that you've found "the truth" is ironically the thing that
> > will keep you from actually finding it.
>
> It will be funny to see you have to eat your words (if we both live long
> enough to get to that point).

History has shown that the people who mainly end up eating their words
are those who proclaim to know the answer without any evidence to
support it. That is you, not me. That is also the direct *opposite* of
how science works and, some would argue, the antithesis of intelligent
behavior.

> Where exactly in such a model is the circular definition you speak of?

It is circular because it defines learning as what it defines! That in
no way means it *correctly* implements a model of learning that is
useful for building intelligent systems. Just because a system has a
mechanism for reinforcement/feedback/backpropagation/whatever does *not*
necessarily mean it is learning in any abstract sense.

> You are speaking nonsense.

One of us is.

> > From the
> > perspective of intelligent behavior, learning is a multi-layered
> > concept. It even seems like learning may be closely tied to a system's
> > knowledge representation, such that you *can't* isolate them (other than
> > as an abstraction).
>
> May be closely tied? It's not just "closely tied", it's one and the same
> thing. You can't fully specify any learning system without at the same time
> fully specifying its internal knowledge representation system. Knowledge and
> hardware are one and the same thing; they cannot be separated (despite all
> the nonsensical confusion that exists in our society over dualism).

Nifty claim. Show your evidence.

> When learning is specified at the high level as operant conditioning, there
> is no ERROR of any type in the specification, nor is it a circular
> definition. There is only a lack of detail. It's the same lack of detail
> we use in in ALL specifications. If I specify how a steam engine works, at
> the high level by talking about water being heated which creates pressure,
> I've left out all the details about how to build a working steam engine.
> But I have made no error in my specification.

Yes, yes. If I specify how a faster-than-light engine works, it is a
mere matter of details in how to *build* it that keeps it from working.
I have made no error in my specification!

> Operant conditioning and reinforcement learning specifications are the
> same. They are high level specifications of the learning system that needs
> to be built, but they leave out important details.

Because you so believe you are right, you couldn't be any more wrong.

> And that's the
> missing "proof" someone like you needs. If you don't see it "acting like a
> human" you don't believe it's right.

Then you haven't read what I have written. What I have been asking for
is not human-like behavior, but actual, genuine *science* that has been
the result of the research into AI. I don't need a chess playing
computer to model play exactly like a human, but to call it intelligent
about chess I *do* need it to do more than just win games. It is not
"right" for humans to do all the hard, domain-specific programming work
and then pretend that we have AI at even the most basic level.

> It's totally valid to be skeptical because those details are missing. But
> it's invalid to do what you are doing here - denying the facts we do have
> and making up nonsense to try and discount the truth to fit your personal
> views.

You're projecting. You are the one proclaiming to have all the facts,
definitions, and truths without error.

> > Then, do tell, what is intelligence?
>
> Haven't I already answered that in this thread? Did you bother to read
> what I wrote before you started to make fun of me?

No, you haven't already answered. You've done a lot of hand waving, but
you haven't actually addressed the topic of intelligence.

> I broaden the concept to what best fits reality. I define it as "any
> reinforcement learning machine".

That appears to be a vague machine specification, not a definition of
intelligence. More to the point, it seems to credit *all* learning as
intelligence, but I hope you are aware that intelligence can be found
without new information, without reinforcement, and often times in
direct contradiction to what we are told to learn.

> Common usage defines it as basically "what humans can do". Only when we
> see machines acting like humans, will most people then say "that machine
> seems intelligent".

Humans are no more a definition for intelligence than birds are a
definition for flight. I asked for *your* scientific definition (or one
you claim has been around for 60 years), not a common statement of what
the *goal* of AI research is. Again, you're engaging in circular
reasoning. Stop it.

> > Does it in some way involve your
> > precious "consensus" opinions, much like the world being flat or at
> > center of the Universe? Or does it cover those areas where true
> > intelligence differs from said consensus?
>
> Dualism is an illusion that many people are tricked by. They think there
> is something in them (call it a soul, or subjective experience, or
> consciousness - it makes no difference), that is separate from their
> body.
>
> The belief that there is ANY type of separation, is an illusion and does
> not in fact exist in any form in this universe. Only the illusion itself
> exists. And as such, can not, and will never be part of the AI hardware we
> build.
>
> The solution to AI will just be a reinforcement trained robot that has the
> learning powers of humans - and with that power, it will learn to act in
> the same complex ways humans learn to act - as we are acting as we write
> the messages for example.

More nice hand waving, but you never answered my questions. Just
because you proclaim to have the answer, and actively seek out a
consensus of confirmation bias, doesn't mean you actually are on the
right path. If you were, you'd be able to show more progress with your
technique than you have, or at least predict what progress will be
necessary to show results. Since you clearly don't have robots
demonstrating human-level intelligence with your oh-so-perfect solution,
what is the highest level of intelligence you *can* demonstrate? Again,
not just in end-result behavior, but in demonstrable *intelligence* in
the machine that can be shown to scale up. Bonus points for scientific
predictions of that scaling into the future.

> We have TONS to show. But some people, aren't able to understand the
> facts, and are of the type who can't deal well with abstractions, and need
> to "see it" before they will understand it.

You seem to lack a basic understanding of the scientific method. The
"show" is not about being unable to deal with abstraction, but the basic
test of a hypothesis. See string theory for another wonderful example
of some very clever abstractions/math that is understood by many, but at
the same time not accepted because it can't be shown to intersect
reality. If you truly have TONS of supporting evidence, let's see it.
Me, I'd just be happy if you actually could provide a useful definition
of intelligence that would indicate you were even on the right path.

> > I think we could all agree that a dog would not
> > pass the Turing Test, but that doesn't mean they don't have a degree of
> > intelligence, and even that different dogs will show different degrees.
>
> Yes, because you are using the "acts like a human" definition that the
> "show me" people like to use. And we all agree, that dogs act like humans
> in some important ways, that none of our machines are able to act yet.
> Same is true for rats.

Nice straw man. I never once said or even implied that I used "acts
like a human" as my benchmark for dog intelligence. In everything I've
written, I've demonstrably been more interested in a definition of
intelligence that is *independent* of the manifestation. By my measure,
that is pretty much the only way we can then reasonably talk about a
machine being intelligent.

> Huh? You "show me" people need everything so carefully explained don't
> you?

It's called science. Look into it.

> But the working human brain is the "proof" that the algorithm we are
> looking for can be built.

It is in no way proof that you're on the right path with "any
reinforcement learning machine". In particular, your Big O talk is a
red herring. While it might benefit us to have an intelligence that is
faster rather than slower, I see no reason the definition inherently involves
speed, just like the definition of sorting is independent of any
particular algorithm's speed in accomplishing the task.

> There are plenty of people working on advancing machine learning these
> days.

Again, you're falling back to circular definitions. I want to talk
about intelligence, and you only want to talk about learning. Yet you
never discuss what is *inherently* intelligent about learning,
especially in a non-Markovian world. We learn all sorts of things that
are not true; I'm interested in how we intelligently sort through it all
*independent* of that learning.

Doc O'Leary

Aug 10, 2011, 1:42:47 PM
to
In article
<3790ed8d-efb6-4291...@a2g2000prf.googlegroups.com>,
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 10, 4:26 am, c...@kcwc.com (Curt Welch) wrote:
> > The human brain is a reinforcement learning machine that works
> > in a very specific type of environment.
>
> And that is an evolving social environment. An individual human
> brain is limited

I think that is the fundamental misstep that has taken AI down the wrong
path. We make the assumption that *because* we see what we've
accomplished, we can work our way back to the underlying intelligence.
Yes, it is great that we have extensive knowledge to work with in the
modern world, but *society* is the main beneficiary in that, not the
individual. We're not all that much different brain-wise than we were
50,000 years ago.

At the heart of it all, you still need that "limited" spark of
innovation from intelligence that *can* see just far enough to take
things the next step. It is *that* infinitesimally small quantum that
is being missed in all this high-minded talk about "learning" and
"symbolic reasoning" and "natural language understanding" and all the
other top-down discussions of AI. Until we can properly define that
difference, it is mostly irrelevant to discuss what we've accomplished.

Curt Welch

Aug 10, 2011, 2:34:05 PM
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110809142640.115$n...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > Yes. The Turing test was only an attempt by Turing to get people away
> > from the mind body problem and to see intelligence as a problem of
> > behavior, not a problem of having a soul.
>
> And, like I said, the problem is that the thought experiment became a
> misguided foundation for too many research directions. People
> mistakenly got the notion that a program that wins at chess is thus
> intelligent, when nothing could be further from the truth.

Yeah, I think that's right. But you have to keep in mind that not everyone
working on such projects is confused. Some just find those projects to be
interesting directions of research and have no illusions that they are
creating "intelligence".

> It's not
> just the "AI effect", either, but the whole notion of the end behavior
> being the definition that is wrong. It is all well and fine to have a
> *test* of intelligence be based on behavior, but it is a false start (to
> the point of being a logical fallacy) to *define* intelligence that way.

What else is there to test other than physical behavior of a machine? To
say "basing it on behavior" is wrong, you have to put forth what
non-physical thing you are talking about that a test of intelligence should
be based on. Are you suggesting intelligence is not physical? Are you
arguing the dualism position?

> > We know what the problem of
> > "intelligence" is. At least I do, and a few others.
>
> Everyone else eagerly awaits your cogent definition. Do tell.
>
> > We had a formal definition 60 years ago. Not the solution as how to
> > build it.
>
> We may have had a philosophical idea of machine intelligence for that
> long (longer, if you count more abstract notions centuries before the
> invention of computers), but no real formal definition. Nothing that
> has step-wise lead us to our goal, as would be expected from proper
> science.

The definition of a reinforcement learning machine operating in a high
dimension environment has in fact been leading us step-wise to a
solution.

The problem is that many people don't believe those steps are leading us in
the right direction. Not until we are done, and then look back at the
path, will it be obvious to everyone which steps actually led us to the
solution.

> > > It is you
> > > who is refusing to see that intelligence is beyond your grasp, and
> > > your certainty that you've found "the truth" is ironically the thing
> > > that will keep you from actually finding it.
> >
> > It will be funny to see you have to eat your words (if we both live long
> > enough to get to that point).
>
> History has shown that the people who mainly end up eating their words
> are those who proclaim to know the answer without any evidence to
> support it. That is you, not me. That is also the direct *opposite* of
> how science works and, some would argue, the antithesis of intelligent
> behavior.

When we look back at what happened, all your words will be proven silly
nonsense. The fact that YOU don't see what is happening, is not proof that
there is no proof, or that no progress is being made. It is only proof
that you failed to see it, or understand it, until after someone showed you
the final answer.

> > Where exactly in such a model is the circular definition you speak of?
>
> It is circular because it defines learning as what it defines! That in
> no way means it *correctly* implements a model of learning that is
> useful for building intelligent systems.

Ah, you think the word "learning" means "what humans do". That's not what
the word means in the context I'm using it, or in the context the entire
field of machine learning uses the word.

Whether reinforcement learning is the correct _foundation_ of building a
machine that duplicates _all_ human learning, is still unproven despite
there being plenty of evidence to support the position. But there is no
circular definition of what reinforcement learning itself is.

> Just because a system has a
> mechanism for reinforcement/feedback/backpropagation/whatever does *not*
> necessarily mean it is learning in any abstract sense.

Ah, you use the word "learning" to mean something nebulous and undefined
that humans do. No wonder you think it's nebulous and undefined. That's
how you have chosen to (not) define the word.

That is correct.

If you specify the details of a steam engine down to every last nut and
bolt, you still have not fully specified what it is that needs to be built
because you have not specified the exact location of every atom in the
steam engine. We NEVER fully specify ANYTHING. It's impossible. We
specify what we HOPE is ENOUGH for the goal we seek (such as to allow
someone else to build the steam engine). Often our specifications aren't
enough - which is why planes sometimes fall out of the sky, and steam
engines sometimes blow up, and prototypes sometimes fail to work in the
first place.

I can fully specify a quick sort algorithm as a sort algorithm that
sorts in O(n log n) time. There is no error in that specification. But
since I did not specify how the algorithm works, there is a key and very
important piece missing from the specification that would allow someone to
easily build such a thing.
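
To make that concrete, here is a rough sketch in Python (the function names
are mine, picked only for illustration). The docstring of the first
function is the kind of specification I mean; the body of the second is the
missing detail somebody still has to invent:

  def sort_spec(xs):
      """Specification only: return the items of xs in ascending order,
      taking O(n log n) time.  Nothing here says HOW."""
      raise NotImplementedError

  def quicksort(xs):
      """One familiar way to meet that spec (in the typical case)."""
      if len(xs) <= 1:
          return list(xs)
      pivot = xs[len(xs) // 2]
      less    = [x for x in xs if x < pivot]
      equal   = [x for x in xs if x == pivot]
      greater = [x for x in xs if x > pivot]
      return quicksort(less) + equal + quicksort(greater)

  print(quicksort([5, 2, 9, 1, 2]))   # prints [1, 2, 2, 5, 9]

Knowing the specification is valid tells you nothing about how hard the
second function was to discover. That gap is exactly the gap I keep
pointing at with AI.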

Like your faster-than-light engine, we don't know if it's even possible to
build, just because you have specified it. But it is a correct and valid
specification none the less.

However, in the case of the reinforcement learning specification, we do
know it's possible, because we have brains that do it. So we have two
possible mysteries to work on 1) how does the brain do it and 2) how can we
build a non-biological machine to do it. We certainly don't even know if 2
is possible at this point - though there is strong evidence (aka our
computers and everything they can do so far) to suggest it is possible.

> > Operant conditioning and reinforcement learning specifications are the
> > same. They are high level specifications of the learning system that
> > needs to be built, but they leave out important details.
>
> Because you so believe you are right, you couldn't be any more wrong.

Either that, or I'm just right anyway.

> > And that's the
> > missing "proof" someone like you needs. If you don't see it "acting
> > like a human" you don't believe it's right.
>
> Then you haven't read what I have written. What I have been asking for
> is not human-like behavior, but actual, genuine *science* that has been
> the result of the research into AI. I don't need a chess playing
> computer to model play exactly like a human, but to call it intelligent
> about chess I *do* need it to do more than just win games. It is not
> "right" for humans to do all the hard, domain-specific programming work
> and then pretend that we have AI at even the most basic level.

That's valid. But also 100% consistent with my position.

Learning machines don't need humans to program them. They program
themselves. That's what learning is. Learning machines change their
design on their own, without the help of an intelligent "programmer". A good
learning machine (as a human is) will learn to play chess on its own,
without having a human program in the chess playing behaviors. This is the
whole point of why learning is so key to intelligence. With a strong
learning machine, the "intelligence" to design, and build, a chess playing
machine, is inherent in the learning machine. Strong learning machines
free the human engineers so they don't have to be the "intelligence"
behind the behavior.

> > It's totally valid to be skeptical because those details are missing.
> > But it's invalid to do what you are doing here - denying the facts we
> > do have and making up nonsense to try and discount the truth to fit
> > your personal views.
>
> You're projecting. You are the one proclaiming to have all the facts,
> definitions, and truths without error.

I like to project.

>
> > > Then, do tell, what is intelligence?
> >
> > Haven't I already answered that in this thread? Did you bother to read
> > what I wrote before you started to make fun of me?
>
> No, you haven't already answered. You've done a lot of hand waving, but
> you haven't actually addressed the topic of intelligence.

My hand waving is the answer. As I have said from the first post, lots of
people, including you, just don't understand that yet. You are the type of
person who will NEVER understand it, until the solution is found (by
someone not like you), and they show the finished work to you. Then, and
only then, will someone like you understand the long path that got us to
the solution.

> > I broaden the concept to what best fits reality. I define it as "any
> > reinforcement learning machine".
>
> That appears to be a vague machine specification, not a definition of
> intelligence. More to the point, it seems to credit *all* learning as
> intelligence, but I hope you are aware that intelligence can be found
> without new information, without reinforcement, and often times in
> direct contradiction to what we are told to learn.

Examples please. Just one will do. I'll show you where the learning
happened and where it was reinforced (or, at least, make up a just-so story
that fits your example).

> > Common usage defines it as basically "what humans can do". Only when
> > we see machines acting like humans, will most people then say "that
> > machine seems intelligent".
>
> Humans are no more a definition for intelligence than birds are a
> definition for flight. I asked for *your* scientific definition (or one
> you claim has been around for 60 years), not a common statement of what
> the *goal* of AI research is. Again, you're engaging in circular
> reasoning. Stop it.

OK, for the fourth time, "intelligence" is ANY reinforcement learning
process.

How many more times do you need me to repeat it for you?

> > > Does it in some way involve your
> > > precious "consensus" opinions, much like the world being flat or at
> > > center of the Universe? Or does it cover those areas where true
> > > intelligence differs from said consensus?
> >
> > Dualism is an illusion that many people are tricked by. They think
> > there is something in them (call it a soul, or subjective experience,
> > or consciousness - it makes no difference), that is separate from
> > their body.
> >
> > The belief that there is ANY type of separation, is an illusion and
> > does not in fact exist in any form in this universe. Only the illusion
> > itself exists. And as such, can not, and will never be part of the AI
> > hardware we build.
> >
> > The solution to AI will just be a reinforcement trained robot that has
> > the learning powers of humans - and with that power, it will learn to
> > act in the same complex ways humans learn to act - as we are acting as
> > we write the messages for example.
>
> More nice hand waving, but you never answered my questions. Just
> because you proclaim to have the answer, and actively seek out a
> consensus of confirmation bias, doesn't mean you actually are on the
> right path.

That's right.

> If you were, you'd be able to show more progress with your
> technique than you have, or at least predict what progress will be
> necessary to show results.

The prediction has been made and is well defined. We need a reinforcement
learning machine that operates in a high dimension environment and learns
at the same rate a human brain can learn in the same environment. If you
do not understand what I mean by that, I can give you examples and explain
it to you.

Here's a message Casey posted a while back about progress on the problem:

casey <jgkj...@yahoo.com.au> wrote:
> Thought this might interest Curt,

"Reinforcement Learning on Slow Features of High-Dimensional Input
Streams"

> http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000894
>
> JC

When strong generic solutions to this class of reinforcement learning
problem have been developed, AI will have basically been solved. That's my
prediction, and belief. Without a solution to this very well understood
problem, none of our machines will look very "intelligent" to anyone.
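
The general shape of that approach - squeeze the high dimension input down
to a handful of learned features, then run ordinary reinforcement learning
on top of them - can be sketched in a few lines of Python. To be clear,
this is only a toy illustration of the idea, not the method from that
paper: the fixed random projection stands in for a real learned feature
extractor, and every name and number below is a placeholder I made up.

  import numpy as np

  np.random.seed(0)
  n_inputs, n_features, n_actions = 1000, 8, 4

  # Stand-in for a learned feature extractor (slow features, etc.):
  # here it is just a fixed random projection of the high dimension input.
  projection = np.random.randn(n_features, n_inputs) / np.sqrt(n_inputs)

  def features(observation):
      return np.tanh(projection @ observation)

  # Linear reward predictions over the low dimension features.
  weights = np.zeros((n_actions, n_features))
  alpha, gamma, epsilon = 0.1, 0.9, 0.1

  def act(obs):
      if np.random.rand() < epsilon:           # occasional exploration
          return np.random.randint(n_actions)
      return int(np.argmax(weights @ features(obs)))

  def learn(obs, action, reward, next_obs):
      phi = features(obs)
      target = reward + gamma * np.max(weights @ features(next_obs))
      td_error = target - weights[action] @ phi    # how wrong the prediction was
      weights[action] += alpha * td_error * phi

  # Meaningless dummy interaction, just to show the flow of the loop.
  obs = np.random.randn(n_inputs)
  for _ in range(100):
      a = act(obs)
      next_obs = np.random.randn(n_inputs)
      reward = float(next_obs[0] > 0)
      learn(obs, a, reward, next_obs)
      obs = next_obs

The hard part - the part nobody has a strong generic answer for yet - is
replacing that random projection with something that learns, on its own,
which few features of a million-dimension sensory stream are worth
predicting rewards over.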

> Since you clearly don't have robots
> demonstrating human-level intelligence with your oh-so-perfect solution,
> what is the highest level of intelligence you *can* demonstrate?

TD-Gammon. A small step, but a step in the right direction.

> Again,
> not just in end-result behavior, but in demonstrable *intelligence* in
> the machine that can be shown to scale up. Bonus points for scientific
> predictions of that scaling into the future.

There aren't enough data points to make such a prediction. How many data
points existed between the invention of the bubble sort, and the first invention
of an n log n sort? How could anyone have made a prediction of how long it
would take to create an n log n sort? Such a prediction would be
impossible. Only a wild guess could be made. I believe this AI problem is
the same. I don't think there are many steps between what we have today
(with systems like TD-Gammon), and the system that will duplicate
human intelligence. I think it's mostly one big step that will be made,
and likely soon (within a small handful of decades). But it's a wild ass
guess.

> > We have TONS to show. But some people, aren't able to understand the
> > facts, and are of the type who can't deal well with abstractions, and
> > need to "see it" before they will understand it.
>
> You seem to lack a basic understanding of the scientific method. The
> "show" is not about being unable to deal with abstraction, but the basic
> test of a hypothesis. See string theory for another wonderful example
> of some very clever abstractions/math that is understood by many, but at
> the same time not accepted because it can't be shown to intersect
> reality. If you truly have TONS of supporting evidence, let's see it.

Read all the work done by the Behaviorists.

> Me, I'd just be happy if you actually could provide a useful definition
> of intelligence that would indicate you were even on the right path.

Again, the problem is that before someone like you will believe it's the
right path, there must be a PATH. I don't think there is one in this case.
Either you have written the N log N algorithm, or all you have is the
bubble sort, and since the bubble sort doesn't "look intelligent" there's
little proof that there is a path.

What you are looking for, can only exist, if there is a long slow path to
be climbed with lots of little incremental improvements happening along the
way. I think the very history of AI work shows such a path doesn't seem to
exist. Though lots of clever programs have been written, many of which some
people believed couldn't be done, the overall progress toward making a machine
that acts and thinks like a human seems elusive. None of our clever
machines seem to be "intelligent", and even as they get more advanced, they
don't seem very "intelligent".

Watson for example was really great at answering questions. But did it
show any sign of being able to learn and get smarter over time? Did it
show any sign of basic human creativity? Not that I saw.

Strong learning systems program themselves to do things they never had to
be engineered to do. You give it a problem, like needing to get food on
the other side of the river, and the thing, on its own (it might take many
years), learns to build bridges. It's what we can "learn" to do on our
own, without having something more intelligent show us, that is the core of
our real intelligence. And it's what is missing from all our current AI
machines - none of them can begin to figure out, on their own, how to build
a bridge, just because we give them the problem of getting food located on
the other side of the river.

We can use our own intelligence, to build into a machine, the power to
build a bridge. But we do not yet know how to build a machine that can,
on its own, figure something like that out. And that's a learning
problem.

Bridge building (despite what John likes to argue) was not engineered into
humans through evolution. It was something humans figured out on their own
because they are strong learning machines. We pass that learning down
through the generations, also because we are strong learning machines, and
we solve hard problems by working together, but we _learned_ the value of
working together, because we are strong learning machines.

Without strong learning, machines show no real signs of "intelligence".
And the specific definition of the strong learning we have not yet learned
to build, is high dimension reinforcement learning.

One day, you will understand that the thing you call "intelligence" is
learning. One day.

> Yet you
> never discuss what is *inherently* intelligent about learning,

I did a bit above now.

> especially in a non-Markovian world. We learn all sorts of things that
> are not true; I'm interested in how we intelligently sort through it all
> *independent* of that learning.

Then you need to read books on how reinforcement learning systems work.
At the core of every reinforcement learning system is a reward predictor.
What the system learns, is how to predict rewards. It acts based on what
it believes will produce the highest future discounted rewards. When it
gets the prediction wrong, it adjusts its prediction system, which
indirectly controls how it acts in the future.

It "sorts through" what it knows, by constantly checking its predictions
against what happens, and the more the prediction is off, the more it
changes "what it knows".

This is happening in us with everything we do and everything we are exposed
to, even though we have no direct awareness that our brain is being
adjusted like that. It just means that the next time we are in a similar
situation, we act a little differently.
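
If it helps to see that loop written down, here it is in a dozen lines of
Python - a table of reward predictions, a greedy choice of action, and an
adjustment driven purely by how wrong the last prediction was. It
deliberately leaves out the discounting of future rewards, and the names
and step size are just ones I picked for the illustration:

  predicted = {}     # (situation, action) -> predicted reward
  step = 0.2         # how fast predictions move toward what actually happened

  def choose(situation, actions):
      # Act on whatever currently predicts the highest reward.
      return max(actions, key=lambda a: predicted.get((situation, a), 0.0))

  def update(situation, action, actual_reward):
      guess = predicted.get((situation, action), 0.0)
      error = actual_reward - guess                  # the prediction error
      predicted[(situation, action)] = guess + step * error

Every time update() runs, the prediction error nudges the table, and the
next call to choose() in that situation can come out differently. That is
all "adjusting how it acts in the future" means at this level.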

Because we exist inside a non-Markovian environment we will constantly
learn things that are not true. If we hit a button 5 times, and every time
we get food, we will learn there is a 100% probability of getting food when
we hit the button. That's because the evidence we have been exposed to,
tells us that. But odds are, that's wrong. It was just that our limited access
to the true state of the world gave us limited knowledge. As we collect
more knowledge through experience, we adjust our knowledge. This is how all
reinforcement learning systems work - Markov and non-Markov alike.
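
You can put numbers on the button example. After 5 presses and 5 rewards
the straight empirical estimate is 5/5 = 1.0. If the button actually pays
off, say, 80% of the time (a figure I am only assuming for the
illustration), more presses drag the estimate back toward 0.8:

  import random

  random.seed(1)
  true_probability = 0.8        # hidden from the learner
  rewards = presses = 0

  for n in (5, 50, 500):
      while presses < n:
          presses += 1
          rewards += random.random() < true_probability
      print(presses, "presses -> estimated probability", rewards / presses)

The early estimate can easily sit at 1.0 and only drifts toward the true
value as experience piles up - which is the point: the learner wasn't
broken, it just hadn't seen enough of the world yet.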

casey

Aug 10, 2011, 2:59:50 PM
to
On Aug 11, 1:31 am, c...@kcwc.com (Curt Welch) wrote:
>

> We know the brain is a learning machine, because we can test,
> with ease, its power to learn.

But not learnt from a raw environment. It is limited at any point
in history, by the intelligence of the social system it learns in.

If children had to raise themselves without any learning passed
down by adults they wouldn't be all that smart. However they
would have a language instinct and over future generations a
complex language and human social system would evolve.

> And when we do that, we see it can solve learning problems that
> none of our machines can yet solve. Until we build a machine
> that matches the generic learning powers of the human brain, we
> won't have solved AI.

First we need a machine with the innate intelligence of a child
which could, over time and working with many other machines, discover
new things to pass on via a language of some kind.

> Yes, the global brain is YET ANOTHER machine, and again, NOT
> THE ONE I WAS TALKING ABOUT. You can't build a global brain
> with all it's advanced learning power until you first build
> the individual brain with it's own power.

The global (or tribal) brain is an integral part of our brains
just as our bodies are an integral part of what enables the
cells in our body or ants in a colony to "act smart".

Your brain can learn because your cells have a primitive ability
to learn simple things and when they work together they make for
higher learning abilities. Your individual brain cells are smarter
within the context of a brain just as we get smarter in the
context of a learning social system. You cannot separate the two
when you want to understand how it is you can do calculus and
your ancestor of 50,000 years ago could not.

> To solve AI, we have to build the learning agent. We already
> have the environment (human society) so there is nothing there
> we need to create to solve AI. But the correct learning agent,
> let it interact with society, and it will like humans, develop
> complex behaviors that people will call "intelligent".

Our ability to "interact" requires innate abilities that we
share with animals and our abilities to learn skills passed
down over many generations reside in the language instinct
that animals do not have. So first you need machines that
can see and work as animals can and then add to that the
ability to communicate with humans via a language.

> All those "modules" of yours use the same generic learning
> ability to make them work.

That is your assumption; I say they evolved. All cells pass on
their innovations using dna; we also pass on our innovations
using the mechanisms that allow language. Language is the
thing that separates us most from all other animals, not the
ability to learn, which exists all the way down to the cell.

> Until we figure out how to duplicate that ability, we won't
> be able to make ANY of those modules perform as well as the
> ones in our brain.

Well I am suggesting the mechanisms of language evolved in
an animal level brain (that already had the innate ability
to see, hear and know about the world it lived in at some
fundamental level) and that a new language mechanism made for
an evolving social system that itself selected brains that
could best fit into such a social system that used language.
It makes use of the same learning mechanisms found in other
animal brains but added to it the ability to pass on
information using language of some kind (touch, sound, sight
can all be used to code a language if the mechanisms are
there to make use of it).

How language evolved and what new connections are required
in a brain to enable language is being researched. But it is
not built on a blank slate even if it uses a blank slate
to construct and hold information and skills passed on over
many generations.

JC

casey

Aug 10, 2011, 3:03:53 PM
to
On Aug 11, 3:42 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
wrote:

> People mistakenly got the notion that a program that wins
> at chess is thus intelligent, when nothing could be further
> from the truth.

When something does something with an apparent purpose as
opposed to just nonsense we call that intelligent behaviour
and a chess program fits that description even if we know
it is limited to that domain. We have to make a separation
between behavior without a purpose, aimless wandering for
example and behavior with a purpose, ants building nests
and collecting food. The ant’s behaviours are clearly not
stupid even if limited compared with ours and thus to
some extent show intelligent behavior.

> It's not just the "AI effect", either, but the whole notion
> of the end behavior being the definition that is wrong.
> It is all well and fine to have a *test* of intelligence be
> based on behavior, but it is a false start (to the point of
> being a logical fallacy) to *define* intelligence that way.

But until you open up a brain, that IS how we define intelligence.

If someone consistently succeeds at solving problems wouldn't you
say they were more intelligent than someone who consistently
fails to solve those same tasks? Isn't that based on their
observable behaviors? Intelligence is a word we use for a class
of behaviors.


> casey <jgkjca...@yahoo.com.au> wrote:
>> On Aug 10, 4:26 am, c...@kcwc.com (Curt Welch) wrote:
>> > The human brain is a reinforcement learning machine that works
>> > in a very specific type of environment.
>>
>> And that is an evolving social environment. An individual human
>> brain is limited
>
> I think that is the fundamental misstep that has taken AI down
> the wrong path. We make the assumption that *because* we see
> what we've accomplished, we can work our way back to the underlying
> intelligence. Yes, it is great that we have extensive knowledge
> to work with in the modern world, but *society* is the main
> beneficiary in that, not the individual.

As components of a society we also benefit by it. Cells that worked
together and specialized had an advantage over cells that did not,
starting with simple things like the filamentous bacteria 3.5
billion years ago.

> We're not all that much different brain-wise than we were
> 50,000 years ago.

But we did develop the ability to cooperate and share knowledge
with future generations using language instead of just dna.
Knowledge in the form of language is the transmission agency
of new information playing the same role dna does for the body.

The cells in a child's brain are no different from the cells in
an adult brain, but the society of these cells becomes smarter
over time as a result of learning, as do humans in a human
society even if our brains haven't changed much in the last
50,000 years.

> At the heart of it all, you still need that "limited" spark of
> innovation from intelligence that *can* see just far enough to
> take things the next step.

Sure but that is not complex any more than the innovations of
random changes in dna that take things to the next step through
a selective process. When you write "from intelligence" you make
it sound like it is a particular thing when intelligence is
not *something*, it is a description of what something *does*.


> It is *that* infinitesimally small quantum that is being missed
> in all this high-minded talk about "learning" and "symbolic
> reasoning" and "natural language understanding" and all the
> other top-down discussions of AI. Until we can properly define
> that difference, it is mostly irrelevant to discuss what we've
> accomplished.

Language and all the refinements in the brain that evolved to
make use of language is the "small quantum" difference between
human and animal intelligence that allows our social system
to evolve carrying us along with it.

jc

casey

Aug 10, 2011, 3:32:29 PM
to
On Aug 11, 4:34 am, c...@kcwc.com (Curt Welch) wrote:
> If you specify the details of a steam engine down to every
> last nut and bolt, you still have not fully specified what
> it is that needs to be built because you have not specified
> the exact location of every atom in the steam engine.

There are appropriate levels of explanation. The location of
atoms is just silly and they might vary between any actual
implementation of a steam engine. A full specification means
in this context sufficient specification to actually build
a steam engine.


> I can fully specify a quick sort algorithm as a sort
> algorithm that sorts in O(n log n) time. There is no error
> in that specification.

A *full* specification is more than specifying what you
want the algorithm to do, although that is a good start;
it involves specifying enough to actually write it.

> However, in the case of the reinforcement learning
> specification, we do know it's possible, because we have
> brains that do it.

We have simple machines that can do it also. It is fully
specified for such machines. How feedback is used by the
brain is not fully specified.

> A good learning machine (as a human is) will learn to
> play chess on its own, without having a human program
> in the chess playing behaviors.

We bring skills we already have to learning to play chess
and that includes inventing the game in the first place.
We are preprogrammed to learn and our learning is limited
by that programming (how the parts are connected that do
the learning).

> Strong learning machines frees the human engineers so
> they don't have to be the "intelligence" behind the
> behavior.

Much of our learning is so difficult it had to be done
by many generations over millions of years and encoded
in our dna. We think learning to play chess is hard but
"seeing" is even harder. We can program a machine to
play chess because it is easy. We have much more trouble
programming a machine to "see".

> TD-Gammon. A small step, but a step in the right direction.

Or misleading selective evidence. The ANN is not doing
what you imagine it is doing. You imagine complexity that
is not there. Remember the ANN that was trained to detect
images with or without tanks and it turned out all it was
doing was detecting dark vs. light images? That a machine
can use brute force to do multivariate statistics to
find solutions doesn't mean it is how we do it. We do
not learn to play backgammon that way. We use machines
to do things that would take us too long to do it the
way we actually do things.

jc

Curt Welch

Aug 10, 2011, 7:38:31 PM
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 11, 1:31 am, c...@kcwc.com (Curt Welch) wrote:
> >
>
> > We know the brain is a learning machine, because we can test,
> > with ease, its power to learn.
>
> But not learnt from a raw environment. It is limited at any point
> in history, by the intelligence of the social system it learns in.

We can test its ability to learn in ANY environment. The fact that most
humans in the past 50,000 years have grown up in a society has nothing to
do with the raw learning power inherent in each of us. And that raw
learning power can be tested, isolated, and quantified in the laboratory.

When humans work together, they create an even greater intelligence, one that
learns and advances faster than any human alone could. But that
global brain would not exist, if not for the power each human brings to it.
To solve AI, we have to build the power of a single human. If you want to
duplicate the power of the global brain, build a million of the AIs once
you figure out how to build one.

Talking about the global brain and the society is not at all important, or
relevant to solving AI - which is the problem of building the first AI.

> If children had to raise themselves without any learning passed
> down by adults they wouldn't be all that smart. However they
> would have a language instinct and over future generations a
> complex language and human social system would evolve.

Yes, but not relevant to the question of how to build the first AI child.

> > And when we do that, we see it can solve learning problems that
> > none of our machines can yet solve. Until we build a machine
> > that matches the generic learning powers of the human brain, we
> > won't have solved AI.
>
> First we need a machine with the innate intelligence of a child
> which could over time working with many other machines discover
> new things to pass on via a language of some kind.

Yes, we need that first child. You and I however have very different views
as to how that might be created. You want to design and build language
modules, and mimic learning modules, and seeing modules, and hearing
modules. I think all those modules are actually created from the same type
of generic learning module. If you want to build a hearing module, feed the
generic module sound data. If you want to build a seeing module, feed it
visual data. The learning problem is the same no matter what type of data
you send it.

> > Yes, the global brain is YET ANOTHER machine, and again, NOT
> > THE ONE I WAS TALKING ABOUT. You can't build a global brain
> > with all it's advanced learning power until you first build
> > the individual brain with it's own power.
>
> The global (or tribal) brain is an integral part of our brains
> just as our bodies are an integral part of what enables the
> cells in our body or ants in a colony to "act smart".

You talk funny. Bricks are a part of a building, the building is not a
part of the brick.

> Your brain can learn because your cells have a primitive ability
> to learn simple things and when they work together they make for
> higher learning abilities.

Yes, I do think it works something like that.

> Your individual brain cells are smarter
> within the context of a brain just as we get smarter in the
> context of learning social system.

You talk funny. You are confusing levels. A brick doesn't get "stronger"
because it's part of a wall. The wall is stronger than the individual
bricks because the bricks work together to make a wall that is stronger
than any single brick. The bricks don't get stronger just because they are
in a wall. They are exactly the same brick, with the same mechanical
properties, whether they are in the wall, or alone.

Cells don't change when they are part of a brain either. They are the same
cell whether they are alone or in the brain. However "smart" the cell is alone,
is EXACTLY THE SAME "smartness" it has when it's in the brain. You are
trying to argue the increased smartness of the system, is somehow assigned
to and changes something fundamental about the individual, when it does
not.

Humans can learn all sorts of things in an advanced society they never
would have learned in a different place (isolated in a box all their life).
But the innate learning ability of the two humans is identical.

> You cannot separate the two
> when you want to understand how it is you can do calculus and
> your ancestor of 50,000 years ago could not.

That's right. But _I_ have ZERO desire to understand that (because it's
obvious and has nothing to do with creating intelligence). Again, remember,
I define intelligence NOT as the ability to do calculus. But rather, the
ability to learn calculus. Odds are, our brain is not much different over
the past 50,000 years, which means we could take one of those individuals
from the past and teach them calculus just as easily as we can teach someone
today. If we can, then their "intelligence" is the same (by how I define
intelligence - they can both learn the same things at the same speed when
exposed to the same environment).

> > To solve AI, we have to build the learning agent. We already
> > have the environment (human society) so there is nothing there
> > we need to create to solve AI. But the correct learning agent,
> > let it interact with society, and it will like humans, develop
> > complex behaviors that people will call "intelligent".
>
> Our ability to "interact" requires innate abilities that we
> share with animals

Yeah, like legs and eyes and a CNS with a brain to connect it all together.

> and our abilities to learn skills passed
> down over many generations resided in the language instinct
> that animals do not have.

On that subject, I just caught a program on TV about killer whales. It
seems they have cultures which are passed from generation to generation and
there are 3 or 4 unique and very different killer whale cultures in
the world. They evolve around their hunting and social behaviors, and
around what type of food they eat (fish vs mammals for example). Each
culture makes different and unique vocalizations as well - as if they each
had their own language. Interesting stuff.

> So first you need machines that
> can see and work as animals can and then add to that the
> ability to communicate with humans via a language.

You think we need a specific type of module to give us language ability. I
don't. I think that module is yet again, just more of the same basic
generic learning hardware.

But either way, it's true that to _equal_ human ability, the machine must
have the same language learning ability humans do. Our language behaviors
is a big part of our total learned behavior and if the machine can't learn
language in the same way - it clearly won't be "equal" to humans in any
sense.

That's all obvious stuff, John. It's so obvious I don't know why you keep
repeating it. The non-obvious part is: what type of hardware do we need
to give a machine ALL human abilities?

> > All those "modules" of yours use the same generic learning
> > ability to make them work.
>
> That is your assumption I say they evolved.

OF COURSE THEY EVOLVED. You are not saying anything that everyone doesn't
know and agree with when you say that. OF COURSE THEY EVOLVED. THE ENTIRE
BRAIN EVOLVED - as it did in every animal. Whatever power any animal has,
is a power that was evolved in it (or learned after birth).

The question is what type of hardware was it that evolved? That's the
question that must be answered and the one I keep talking about.

> All cells pass on
> their innovations using dna we also pass on our innovations
> using the mechanisms that allow language. Language is the
> thing that separates us most from all other animals not the
> ability to learn which exists all the way down to the cell.

Yes. Obvious. Everyone agrees. Now what type of hardware is it? How
does the hardware work? Saying "it evolved in steps" TELLS US NOTHING
about what type of hardware it is.

> > Until we figure out how to duplicate that ability, we won't
> > be able to make ANY of those modules perform as well as the
> > ones in our brain.
>
> Well I am suggesting the mechanisms of language evolved in
> an animal level brain (that already had the innate ability
> to see, hear and know about the world it lived in at some
> fundamental level) and that a new language mechanism made for
> an evolving social system that itself selected brains that
> could best fit into such a social system that used language.

Yes, I think that's all true.

> It makes use of the same learning mechanisms found in other
> animal brains but added to it the ability to pass on
> information using language of some kind (touch, sound, sight
> can all be used to code a language if the mechanisms are
> there to make use of it).

Yes, but it's trivially easy to explain language learning and development
in the exact same way we explain how we learn to eat and how we learn
everything. So we don't need a special type of module to explain language
learning as a different type of learning from learning to walk. Whether
it's language, or walking, it's just our body moving in response to the
environment it is in.

Every animal that can learn has some language ability. It's not a
something-or-nothing ability. Dogs learn language when they learn to roll
over in response to a hand signal. It seems to only be a matter of scale,
which means it's likely only a matter of how much generic learning hardware
is allocated to the language behavior vs being allocated to other behaviors
- like walking or chasing rabbits, or navigating.

> How language evolved and what new connections are required
> in a brain to enable language is being researched. But it
> not built on a blank slate even if it uses a blank slate
> to construct and hold information and skills passed on over
> many generations.

There's no indication I've ever seen to support the argument that language
can not be explained by the use of generic learning hardware in the brain.

Chomsky was well known for putting forth the argument that we must have
language syntax hardware to explain our language ability. And he's well
known for having convinced a lot of people that his argument was valid.
Skinner was well known for considering Chomsky such a fool he didn't even want
to dignify his nonsense by responding to him.

Chomsky's argument has no legs to stand on. It's just what he happens to
want to believe - and it's become something of an urban legend in society
that far too many people mistake for fact.

There is no proof of who is right. The brain could have some very unique
hardware to support language. Or, it could just be more of the same
generic learning hardware that fills the rest of the neocortex. The answer
is not known yet. My point is that any argument that says it must be
specialized hardware is invalid, because it is easy to explain as the work
of generic learning hardware.

And if it can be explained as the work of more of the same generic learning
hardware, why would evolution choose to build a different module, when it
already had the building blocks to work with?

Curt Welch

Aug 10, 2011, 10:10:01 PM
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 11, 4:34 am, c...@kcwc.com (Curt Welch) wrote:
> > If you specify the details of a steam engine down to every
> > last nut and bolt, you still have not fully specified what
> > it is that needs to be built because you have not specified
> > the exact location of every atom in the steam engine.
>
> There are appropriate levels of explanation. The location of
> atoms is just silly and they might vary between any actual
> implementation of a steam engine. A full specification means
> in this context sufficient specification to actually build
> a steam engine.

Yes, but it's never FULL in that sense John. No matter how well the
"instructions" are specified, not everyone will be able to correctly build
the steam engine.

A typical "full" specification of how to build a steam engine will
typically be enough for MOST people to build one that works. But it's
never 100%. If the instructions include a diagram that says "build a steel
tank which has 1" thick walls", that specification won't work for someone
that doesn't know what steel is. And it might not work for someone that
has a different understanding of what type of steel might be needed. They
might build it out of a type of steel that proves unable to hold the
pressure.

A typical "full" specification of a house leaves out many highly important
details - like the location of studs in the wall, or the path electrical
wires take through the walls. Those are choices left up to the people who
frame and wire the house. The specification isn't "full" in any sense.
It requires the people who build it to fill in lots of the missing details
(at levels of description far above the location of atoms).

All the specifications we write are nothing more than language descriptions
that are sometimes correctly understood by the receiver, and sometimes not.
When it's not correctly understood by the receiver, it fails - as does all
language.

The specification I have for intelligence is not enough for MOST people to
build it. So it's clearly lacking in that respect of usefulness. However,
an alien from another part of the universe that fully understands the human
brain might know exactly what to build from my specification - and
might agree that the specification is 100% correct.

> > I can fully specify an a quick sort algorithm as a sort
> > algorithm that sorts in O(n log n) time. There is no error
> > in that specification.
>
> A *full* specification is more than specifying what you
> want the algorithm to do, although it is a good start,
> it involves enough specifying to actually write it.

No John, it's ONLY a specification of what you want it to do. Only the
final source code is "full" in the sense it's a complete specification of
everything the code does. Anything less than the full source code, is a
partial specification.

We write such partial specifications all the time because we only
communicate the features that are important to us when we write
specifications and leave all the other details unspecified. The same is
true for any machine design.

We can specify the code has a print function, without bothering to specify
what type of printers it supports, or what the printed output might even
look like, simply because we don't care about those details, as we also
don't care about the locations of the atoms on the printed page.

We can specify we want the list of people on the report to be sorted
alphabetically, without specifying what type of sort algorithm was used to
sort them, because it's not important to the reason we are writing the
specification.  The missing pieces might be unimportant to being able
to make it work (like the location of atoms), or they might be missing
pieces that are highly important, like how the sort algorithm actually
works.  We just leave those pieces up to the person building the machine to
fill in because it's the part of the specification that's not important.
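
To make the distinction concrete, here's a rough Python sketch (my own
illustration, not from any real project; the function names are made up)
of the difference between a behavioral specification of sorting and one of
the many implementations that happens to satisfy it:

# A behavioral "specification": it says WHAT the result must look like,
# and nothing about HOW it is produced.
def satisfies_sort_spec(original, result):
    return result == sorted(original)   # same items, now in order

# One of many possible implementations that meets that spec.  The spec
# says nothing about pivots, recursion, or memory use - those details
# are left to whoever builds it.
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

data = [5, 2, 9, 1]
assert satisfies_sort_spec(data, quicksort(data))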

The same is true for my specification of AI. It's got missing pieces that
must be filled in by the person that builds it, but which are not important
to whether it's AI. No matter how the pieces get filled in, if the
resulting machine matches my specification, it will be intelligent. At
least I believe that to be true, even if I can't prove it yet.

It's a full specification in that if you build a machine to the
specification, it will be intelligent.  That's my position.  The fact that
no one yet knows how to build such a machine is the work to be finished in
solving AI, not the work required to write a specification of
intelligence.

> > However, in the case of the reinforcement learning
> > specification, we do know it's possible, because we have
> > brains that do it.
>
> We have simple machines that can do it also. It is fully
> specified for such machines. How feedback is used by the
> brain is not fully specified.

We don't have ANY machines that can do it with high dimension data John.
NONE AT ALL. Nobody has yet solved the problem of intelligence I have
specified here.

> > A good learning machine (as a human is) will learn to
> > play chess on its own, without having a human program
> > in the chess playing behaviors.
>
> We bring skills we already have to learning to play chess
> and that includes inventing the game in the first place.
> We are preprogrammed to learn and our learning is limited
> by that programming (how the parts are connected that do
> the learning).

Yes, those statements are 100% consistent with my position.  The question
at hand, however, is what are those "skills we bring"?  Is it anything more
than 1) generic learning hardware, and 2) stuff that hardware has learned
in life before it tried to learn chess?

You claim it is more than that. I claim it isn't. It's yet to be seen.
You can't prove my position wrong, because my position is the speculation
that such generic learning systems can be built. You can't prove it's
impossible to build.

I can't prove it is possible, until it's done - or until the brain is
understood well enough and shown to be such a machine.

Even if the brain turns out to have lots of non-generic hardware
assisting its operation (like 3D spatial modeling circuits, as has been
speculated in the past), that is still not proof that the same things can't
be done with generic learning hardware without the extra "modules".

My position is impossible to prove wrong.  It's easy (and valid) to say "I
doubt it", but it's impossible to prove wrong.  It's a position that will
remain unknown, until it's proven true.  We have lots of evidence to
suggest it's true, but until we have a working machine that fits the
specification, we can't prove such a machine can be built.

Or, at least, I see no way such a proof could be structured.

> > Strong learning machines frees the human engineers so
> > they don't have to be the "intelligence" behind the
> > behavior.
>
> Much of our learning is so difficult it had to be done
> by many generations over millions of years and encoded
> in our dna.

Stupid Stupid Stupid John.

Do you have any clue what you are saying there?

You are saying "I'm too stupid to understand how it might be possible, so
therefore, it is impossible".

Stupid small mind thinking John.

The fact that something is hard for you to comprehend is not a valid
argument against it. It's not an argument any rational person should ever
make.

> We think learning to play chess is hard but
> "seeing" is even harder. We can program a machine to
> play chess because it is easy. We have much more trouble
> programming a machine to "see".

Hard is very different from impossible, which is the argument you started
with above.

> > TD-Gammon. A small step, but a step in the right direction.
>
> Or misleading selective evidence. The ANN is not doing
> what you imagine it is doing.

It is. You just don't understand what's important about TD-Gammon.

Simple RL programs must explore the entire state space of the environment
to fully learn the state transition probabilities.  As the state space size
increases, this becomes unworkable.  The scaling problem means that fully
exploring the state space will take billions of years - learning becomes
exponentially slower as the state space grows.  Long before we reach "high
dimension", the state space becomes far too large to solve using the
"explore every state" approach.

To deal with large state spaces, the learning system must use abstractions.
It must use abstractions that allow it to apply the lesson learned for one
state to many other states at the same time.

All the standard RL algorithms don't use abstractions of any type.  They
all fail to scale, because they only work if they can explore all the
states.
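
To see why, here's a bare-bones tabular Q-learning sketch in Python (the
standard textbook form; the alpha/gamma/epsilon values and the toy states
are arbitrary placeholders of mine).  It keeps one table entry per
(state, action) pair, so nothing it learns about one state ever transfers
to a state it has never visited:

import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.95, 0.1     # illustrative values only
Q = defaultdict(float)                     # one entry per (state, action)

def choose_action(state, actions):
    # epsilon-greedy: mostly exploit the table, sometimes explore
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, actions):
    # classic one-step Q-learning backup for a single table cell
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])

actions = ["left", "right"]
a = choose_action("s0", actions)
update("s0", a, reward=1.0, next_state="s1", actions=actions)

Every cell has to be visited (many times) before its value means anything,
which is exactly the scaling wall I'm describing.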

Many are applied to high dimension problems, but only by first hard-coding
a mapping from the high dimension problem space to a low dimension
internal model, and then using the "explore every state" approach on the
low dimension model.  They DO NOT USE ABSTRACTIONS which map learning
across multiple states of the model.

To solve the high dimension problem in a generic way, two things must be
done.  One, the system must use abstractions, and two, the system must
LEARN which abstractions to use.

TD-Gammon does the first of these two things.  It's one of the only RL
applications which does this, to my knowledge.  The neural network it uses
abstracts learning across a set of related states (board positions).
TD-Gammon can do a good job of evaluating a board position it has NEVER
VISITED in the past.  It does that by abstracting past lessons learned in
other board positions to the current one.

It's a working demonstration of one way this sort of abstraction can be
done, for one limited domain.
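
A toy sketch of that idea (this is not Tesauro's code - the 198-unit
feature encoding is the size reported for TD-Gammon, but the tiny network,
learning rate and random inputs here are placeholder assumptions of mine)
is a TD(0) update applied to a small neural network value function, so
adjusting the weights for one position automatically shifts the estimate
for every position that shares its features:

import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_HIDDEN = 198, 40        # 198: TD-Gammon's board encoding size
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))
W2 = rng.normal(scale=0.1, size=N_HIDDEN)

def value(x):
    # Estimated win probability for board feature vector x.
    h = np.tanh(W1 @ x)
    v = 1.0 / (1.0 + np.exp(-(W2 @ h)))
    return v, h

def td_update(x, x_next, reward, alpha=0.01, gamma=1.0):
    # One semi-gradient TD(0) step; generalizes across similar positions.
    global W1, W2
    v, h = value(x)
    v_next, _ = value(x_next)
    delta = reward + gamma * v_next - v          # TD error
    dv_dz = v * (1.0 - v)                        # sigmoid derivative
    grad_W2 = dv_dz * h
    grad_W1 = np.outer(dv_dz * W2 * (1.0 - h ** 2), x)
    W2 += alpha * delta * grad_W2
    W1 += alpha * delta * grad_W1

x0, x1 = rng.random(N_FEATURES), rng.random(N_FEATURES)
td_update(x0, x1, reward=0.0)

The point is only that the value lives in shared weights rather than in a
table with one slot per board position.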

What it doesn't do (or doesn't do much), is learn which complex
abstractions to use based on experience. The abstractions it uses are
mostly hard-coded into the system - and they work well for Backgammon, but
are unlikely to work well in other domains.

The fact that it successfully uses abstractions (in any domain), and uses
them to learn in a domain that is too large to use the "explore every
state" approach, is what makes it special.

> You imagine complexity that
> is not there. Remember the ANN that was trained to detect
> images with or without tanks and it turned out all it was
> doing was detecting dark vs. light images? That a machine
> can use brute force to do multivariate statistics to
> find solutions doesn't mean it is how we do it. We do
> not learn to play backgammon that way. We use machines
> to do things that would take us too long to do it the
> way we actually do things.

It's true that this is not proof that the brain works that way.

But I have a specification for AI that has a lot of supporting evidence to
show it's a reasonably good guess.  And TD-Gammon is one step closer to
that specification, in its limited domain, than all the other ways RL has
been used.

There is only one step missing. That is the step of learning, from
experience, how to abstract learning across states, in a generic way that
works in all domains. If someone figures out a good way to do that, they
will have completed my specification for AI, and then we will see if my
specification is correct, or even useful.

And how this is likely to be done is also fairly obvious to me.  It must
be done based on statistical correlations across sensory signals.  Sensory
signals must be re-mapped so as to create an internal set of signals which
reduce, as much as possible, the correlations between signals.  And in
creating such a mapping, it will, at the same time, be creating a system
that abstracts the learning.
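
One very rough sketch of what I mean (just PCA whitening over fake
"sensor" data in Python - a stand-in for the idea, not a claim that the
brain literally computes this) shows how raw correlated channels can be
re-mapped into a set of signals whose pairwise correlations are removed:

import numpy as np

rng = np.random.default_rng(1)

# Fake "sensory" data: 3 hidden causes mixed into 10 correlated channels.
causes = rng.normal(size=(3, 5000))
mixing = rng.normal(size=(10, 3))
sensors = mixing @ causes + 0.05 * rng.normal(size=(10, 5000))

# PCA whitening: re-map the channels so their covariance is the identity,
# i.e. every pairwise correlation between the remapped signals is zero.
centered = sensors - sensors.mean(axis=1, keepdims=True)
cov = centered @ centered.T / centered.shape[1]
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = np.diag(1.0 / np.sqrt(eigvals + 1e-8)) @ eigvecs.T
remapped = whitener @ centered

print(np.round(remapped @ remapped.T / remapped.shape[1], 2))  # ~identity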

I don't see this as lots of hopeful hand waving magic.  Not only do I have
a specification for the machine, I have a reasonably good idea of how to
build a system to fit the specification.  And I see lots of parallels
between what is understood about the brain, and what I would expect to see
in this sort of machine.  It's not like a specification for a faster than
light drive that actually looks impossible.  It's a specification that we
know is possible, because the brain does it, and a specification I, at
least, have a good conceptual idea of how it can be done.  If I don't build
it, I'm fairly sure someone else will build it before long, and when we
have such a system that learns useful abstractions on its own, we will see
a jump in learning that suddenly gets the machine into the same class
(speed) of learning that the brain has.  And when that happens, our
machines will suddenly look a hell of a lot more intelligent than anything
AI has built in the past - because they will be fast and efficient generic
learning machines for the first time in history.

But all this is waiting on an invention or two, before it will be proven
true. And until then, it's just my educated speculation.

casey

unread,
Aug 11, 2011, 6:46:07 AM8/11/11
to
On Aug 11, 9:38 am, c...@kcwc.com (Curt Welch) wrote:

> casey <jgkjca...@yahoo.com.au> wrote:
> On Aug 11, 1:31 am, c...@kcwc.com (Curt Welch) wrote:

>> > We know the brain is a learning machine, because we can test,
>> > with ease, its power to learn.
>>
>>
>> But not learnt from a raw environment. It is limited at any point
>> in history, by the intelligence of the social system it learns in.
>
> We can test its ability to learn in ANY environment. The fact
> that most humans in the past 50,000 years have grown up in a
> society has nothing to do with the raw learning power inherent in
> each of us. And that raw learning power can be tested, isolated,
> and quantified in the laboratory.

> When humans work together, they create an even greater intelligence,
> that learns and advances faster, than any human alone could advance.
> But that global brain would not exist, if not for the power each
> human brings to it.

Nor would the human brain exist if not for the power each neuron
brings to it.

> To solve AI, we have to build the power of a single human.

Sure but it will not generate the technology and academic learning
seen in a modern society for it lacks the ability to achieve that
outcome by itself. It cannot learn in real time from raw data if
that data is not coming from a social system.


> I think all those modules are actually created from the same type
> of generic learning module. If you want build a hearing module,
> feed the generic module sound data.

What you are really talking about here is an evolutionary network.

I think your issue is a psychological need for an easy solution
that you might hit on by yourself if you can only figure out how
to construct a net that works the way you imagine it will work
and thus bathe yourself in glory.


>> The global (or tribal) brain is an integral part of our brains
>> just as our bodies are an integral part of what enables the
>> cells in our body or ants in a colony to "act smart".
>
>

> Bricks are a part of a building, the building is not a part of
> the brick.

> ...


> You talk funny. You are confusing levels. A brick doesn't
> get "stronger" because it's part of a wall.

I don't talk funny; you just don't follow the point being made.


> You are trying to argue the increased smartness of the system,
> is somehow assigned to and changes something fundamental about
> the individual, when it does not.

Apparently you can't follow the reasoning so I will let it go.


JC

Josip Almasi

unread,
Aug 11, 2011, 10:24:46 AM8/11/11
to
Burkart Venzke wrote:
> Why don't we have a stong(er) AI by now?
> What is missing?
> (A) Concept(s)?

Yes, concepts.
First, a living organism cannot be observed outside of its environment;
organism *and* environment make a system that can be observed (Ashby &
such).
Then, social animals, from parrots to dogs to humans, learn from each
other somehow; some are able to learn from other species as well.
And we have no clue how this works.
Even if we did, we would still have to solve the environment issue - check
this computer and its inputs and outputs: the vast majority of those is
the network, an environment totally different from any biological one.
It's quite far from anything we have done before: much less
observation, much more experimentation.

Regards...

Doc O'Leary

unread,
Aug 11, 2011, 12:24:59 PM8/11/11
to
In article
<7d9e85e5-643c-4b3d...@e20g2000prf.googlegroups.com>,
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 11, 3:42 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
> wrote:
> > People mistakenly got the notion that a program that wins
> > at chess is thus intelligent, when nothing could be further
> > from the truth.
>
> When something does something with an apparent purpose as
> opposed to just nonsense we call that intelligent behaviour
> and a chess program fits that description even if we know
> it is limited to that domain.

*You* might label that intelligence, but *I* don't. I think most people
would agree that pretty much *all* machines serve a purpose, but
absolutely none of them would be considered intelligent.

Likewise, just because a computer is programmed by a human to play chess
instead of doing word processing, that doesn't make it intelligent. It
*might* be intelligent (even in the limited domain of chess), but that
is something that would have to be determined beyond the mere measure of
a win-lose tally.

> The ant's behaviours are clearly not stupid, even if limited
> compared with ours, and thus to some extent show intelligent
> behavior.

I don't know that to be true. Again, I'm still waiting to hear what the
definition of intelligence is such that we *can* say a behavior reflects
intelligence. An ant does a lot of things based on genetic programming
and simple reflex behavior. A bee might be a better example, but in any
case you *do* have to show that there is some mental model in place that
can reasonably be said to "think" in a manner that can reasonably be
said to be "intelligent".

>
> > It's not just the "AI effect", either, but the whole notion
> > of the end behavior being the definition that is wrong.
> > It is all well and fine to have a *test* of intelligence be
> > based on behavior, but it is a false start (to the point of
> > being a logical fallacy) to *define* intelligence that way.
>
> But until you open up a brain that IS how we define intelligence.

No. Again, maybe *you* define it that way, but in the same way that
computation was decomposed into the Turing Machine, I think intelligence
can be decomposed into something more specific than "a brain". Indeed,
the whole field of AI is *supposed* to be based on the notion that "a
brain" is not necessary.

> If someone consistently succeeds at solving problems wouldn't you
> say they were more intelligent than someone who consistently
> fails to solve those same tasks?

No. For a real world example, just look at education. It's quite
possible the most intelligent kid in the class gets poor grades because
they are bored and have mentally checked out. It's also quite possible
in our "teach to the test" system that kids will perform well without
really understanding the subject matter.

> Isn't that based on their
> observable behaviors. Intelligence is a word we use for a class
> of behaviors.

Again, maybe *you* use that circular definition, but I don't.
Intelligence is about internal processing (aka, thinking), and to what
extent that is reflected in behavior depends a great deal on the
situation.

> > I think that is the fundamental misstep that has taken AI down
> > the wrong path. We make the assumption that *because* we see
> > what we've accomplished, we can work our way back to the underlying
> > intelligence. Yes, it is great that we have extensive knowledge
> > to work with in the modern world, but *society* is the main
> > benefactor in that, not the individual.
>
> As components of a society we also benefit by it.

You missed my point. You want to make intelligence about standing on
the shoulders of giants, but I think we need to normalize that effect
out. Intelligence is about what each person *can* see, not about the
vantage point they happen to have. We might be able to *do* more thanks
to our advanced western educations, but that doesn't necessarily mean
any of us is inherently smarter than some kid in a primitive tribe in
the jungle.

> > We're not all that much different brain-wise than we were
> > 50,000 years ago.
>
> But we did develop the ability to cooperate and share knowledge
> with future generations using language instead of just dna.
> Knowledge in the form of language is the transmission agency
> of new information playing the same role dna does for the body.

Information is not intelligence.

> > At the heart of it all, you still need that "limited" spark of
> > innovation from intelligence that *can* see just far enough to
> > take things the next step.
>
> Sure but that is not complex any more than the innovations of
> random changes in dna that take things to the next step through
> a selective process. When you write "from intelligence" you make
> it sound like it is a particular thing when intelligence is
> not *something* is it a description of what something *does*.

That is the fundamental misstep I spoke about. I think AI will always
elude us so long as we follow the path of "what it does". Much like DNA
can be constantly changing behind the scenes without really changing a
species' immediate "what it does" until a moment of selection/punctuated
equilibrium is reached, I think intelligence works the same way.

Two humans can be doing the same thing over and over, and you might
naively think they're at the same level of intelligence, but one of them
might be *thinking* something different the whole time and, as a result,
eventually starts doing things in a different/better way. If you *only*
measure intelligence at the point the "what it does" changes, you are
really missing the BIG picture when it comes to understanding
intelligence.

> Language and all the refinements in the brain that evolved to
> make use of language is the "small quantum" difference between
> human and animal intelligence that allows our social system
> to evolve carrying us along with it.

Again, sharing information is great for advancing society, but it does
nothing to explain the intelligence that *necessarily* existed in the
first place to create the information from a world of semi-random data.
Language seems like it should more properly be seen as a *conclusion* of
intelligence, not a premise.

Doc O'Leary

unread,
Aug 11, 2011, 3:30:35 PM8/11/11
to
In article <20110810143404.902$t...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> What else is there to test other than physical behavior of a machine? To
> say "basing it on behavior" is wrong, you have to put forth what
> non-physical thing you are talking about that a test of intelligence should
> be based on. Are you suggesting intelligence is not physical? Are you
> arguing the dualism position?

No, no; don't get me wrong. Like I said, the whole foundation of AI is
(or should be) the idea that a machine can be made intelligent. All I'm
saying is that we haven't gotten to the "DNA" of intelligence. Behavior
is more like physiology or morphology.  AI took a wrong turn that way, and I
don't expect we will get any closer to "strong" AI until we throw out
all the shortcuts we implemented and get back to actually dealing with
the root idea of what intelligence really is.

> The problem is that many people don't believe those steps are leading us in
> the right direction. Not until we are done, and then look back at the
> path, will it be obvious to everyone which steps actually led us to the
> solution.

Nope. Science does not work that way. If you really are taking steps,
you could show them. Like I said, where is your chess playing system
that became *intelligent* about the game of chess? Where is the
followup step that is *intelligent* about board games in general? And,
ideally, you could *demonstrate* how intelligence is present by (at the
very least) showing a control group (e.g., a learning system with no
prior training) that performs less intelligently than a test group
(e.g., a learning system that was trained at chess, but asked to play go
(or vice versa)). Science does not need you to be "done"; hell, science
says you're *never* done. If you're not able to show real progress like
that, you have to acknowledge you might just be on the wrong path.

> When we look back at what happened, all your words will be proven silly
> nonsense. The fact that YOU don't see what is happening, is not proof that
> there is no proof, or that no progress is being made. It is only proof
> that you failed to see it, or understand it, until after someone showed you
> the final answer.

That is something *every* quack claims about their miracle cure or
perpetual motion machine or whatever woo they're pushing. Intelligence
and science both offer no "final answer", and *demand* that you always
consider the possibility that you are wrong. Take a step back and
reassess your position.

> Ah, you think the word "learning" means "what humans do". That's not what
> the word means in the context I'm using it, or in the context the entire
> field of machine learning uses the word.

No, I think the word "learning" means "learning", as in incorporating
data into a system.  I do not circularly define a procedure, *call* it
learning, and then proclaim that context to be the only one valid for a
discussion of "learning" or "intelligence".

> Ah, you use the word "learning" to mean something nebulous and undefined
> that humans do. No wonder you think it's nebulous and undefined. That's
> how you have chosen to (not) define the word.

Again, no. I simply haven't defined it circularly, or pretended that it
inherently has anything to do with intelligence. As a child, I learned
a lot of things about Santa Claus. Absolutely none of that speaks to
how I intelligently think about Santa Claus.

> > Yes, yes. If I specify how a faster-than-light engine works, it is a
> > mere matter of details in how to *build* it that keeps it from working.
> > I have made no error in my specification!
>
> That is correct.

Wow. Just . . . really?

> I can fully specify a quick sort algorithm as a sort algorithm that
> sorts in O(n log n) time. There is no error in that specification. But
> since I did not specify how the algorithm works, there is a key and very
> important piece missing from the specification that would allow someone to
> easily build such a thing.

No.  The important part about an algorithm is what it *does*, not how
efficiently it does it. Certainly there are implications of using
different algorithms, but unless you can *show* that they make the
problem intractable, it is moot. Likewise, without any reasonable
definition for intelligence itself, it is pointless to say that one
system can achieve it. In particular, it is important to note that the
definition of "sorted" is implicit in the algorithm, yet there is no
real intelligence about what it *means* to have something sorted.

> Like your faster-than-light engine, we don't know if it's even possible to
> build, just because you have specified it. But it is a correct and valid
> specification none the less.

Not in any scientific or intelligent sense.

> However, in the case of the reinforcement learning specification, we do
> know it's possible, because we have brains that do it. So we have two
> possible mysteries to work on 1) how does the brain do it and 2) how can we
> build a non-biological machine to do it.

You missed the bigger mystery: 0) exactly *what* is being done; the
difference that makes a difference. I get that you might be hoping to
work backwards to find it, but I've seen no evidence of anyone working
backwards far enough in the last 50 years to say "this is the root of
intelligence". If you can't do that, I have reasonable doubts that
you're on the right path.

> > Because you so believe you are right, you couldn't be any more wrong.
>
> Either that, or I'm just right anyway.

If you have no science to back that up, you are fooling yourself.

> Learning machines don't need humans to program them. They program
> themselves. That's what learning is. Learning machines change their
> design on the own, without the help of an intelligent "programmer". A good
> learning machine (as a human is) will learn to play chess on its own,
> without having a human program in the chess playing behaviors. This is the
> whole point of why learning is so key to intelligence.  With a strong
> learning machine, the "intelligence" to design and build a chess playing
> machine, is inherent in the learning machine. Strong learning machines
> frees the human engineers so they don't have to be the "intelligence"
> behind the behavior.

Yes, we all know the grand idea. Still missing is the factor of
*intelligence* you assume but (still) fail to define. You say "learning
is so key to intelligence", but I say that intelligence is key to
learning. If you don't believe me, write a letter to Santa Claus and
see what he says.

> My hand waving is the answer. As I have said from the first post, lots of
> people, including you, just don't understand that yet. You are the type of
> person that will NEVER understand it, until the solution is found (by
> someone not like you), and they show the finished work to you. Then, and
> only then, will someone like you, understand the long path that got us to
> the solution.

I understand science. I understand evidence. I don't understand hand
waving. I don't understand "oooooh, you just wait and see how clever I
am!"

> > That appears to be a vague machine specification, not a definition of
> > intelligence. More to the point, it seems to credit *all* learning as
> > intelligence, but I hope you are aware that intelligence can be found
> > without new information, without reinforcement, and often times in
> > direct contradiction to what we are told to learn.
>
> Examples please. Just one will do. I'll show you where the learning
> happened and where it was reinforced. (or at least, make up a just-so story
> that fits your example).

Again, it's not about learning, it's about intelligence. You can learn
about Santa Claus or orbital teapots or evolution or climate change or
any number of other things that may or may not be true. None of it
speaks to your intelligence when *thinking* about those subjects.

> OK, for the fourth time, "intelligence" is ANY reinforcement learning
> process.
>
> How many more times do you need me to repeat it for you?

Repetition does not equal truth. Your impotent circular reasoning
remains unconvincing.

> The prediction has been made and is well defined.  We need a reinforcement
> learning machine that operates in a high dimension environment and learns
> at the same rate a human brain can learn in the same environment.  If you
> do not understand what I mean by that, I can give you examples and explain
> it to you.

I understand perfectly well, but that is not a testable prediction. It
is an empty desire, like the desire to have FTL space travel.  More to 
the point, you haven't established that intelligence fundamentally 
requires the same processing rate and environment of a human.  Just like 
"dog years", shouldn't you be able to show that you can achieve results 
along a virtual timeline that is expanded to reflect hardware 
limitations?  And thus predict when technology will catch up to the 
demands of your system? These are the simple science questions that you
should already have addressed, yet all I see you doing here is a lot of
hand waving.

> > Since you clearly don't have robots
> > demonstrating human-level intelligence with your oh-so-perfect solution,
> > what is the highest level of intelligence you *can* demonstrate?
>
> TD-Gammon. A small step, but a step in the right direction.

Two decades ago was the last step? Hardly sounds like the right
direction to me.

> > Again,
> > not just in end-result behavior, but in demonstrable *intelligence* in
> > the machine that can be shown to scale up. Bonus points for scientific
> > predictions of that scaling into the future.
>
> There aren't enough data points to make such a prediction. How many data
> points existed between the invention of a bubble sort, and the first
> invention of an n log n sort?  How could anyone have made a prediction of
> how long it would take to create an N log N sort?  Such a prediction would
> be impossible.  Only a wild guess could be made.  I believe this AI problem
> is the same.  I don't think there are many steps between what we have today
> (with systems like TD-Gammon), and with the system that will duplicate
> human intelligence. I think it's mostly one big step that will be made,
> and likely soon (within a small handful of decades). But it's a wild ass
> guess.

So stop guessing and start doing science. Again, I think you're on the
wrong path when you insist on dealing with things like O when you have
yet to show you have the proper definition for what a sort (or
intelligence) is. AI will continue to fail at making progress so long
as the focus is on the A and not the I.

> Read all the work done by the Behaviorists.

Let me sleep on that decision.

> > Me, I'd just be happy if you actually could provide a useful definition
> > of intelligence that would indicate you were even on the right path.
>
> Again, the problem is that before someone like you will believe it's the
> right path, there must be a PATH. I don't think there is one in this case.
> Either you have written the N log N algorithm, or all you have is the
> bubble sort, and since the bubble sort doesn't "look intelligent" there's
> little proof that there is a path.

Sorting algorithms *aren't* intelligent, and neither is your precious
learning algorithm. Thinking about sorting is different from doing a
sort is different from validating a sort. Until you go *backwards* far
enough along your path that you can start talking about intelligence,
you don't have the right path to go forwards.

> What you are looking for, can only exist, if there is a long slow path to
> be climbed with lots of little incremental improvements happening along the
> way. I think the very history of AI work shows such a path doesn't seem to
> exist. Though lots of clever programs have been written, many of which some
> people believed couldn't be done, the overall progress to making a machine
> that acts and thinks like human, seems elusive. None of our clever
> machines seem to be "intelligent", and even as they get more advanced, they
> don't seem very "intelligent".

Which was my argued misstep from the start. Surely if *we* can serve as
examples of intelligence, we must also serve as examples of a long slow
path to get there. Our mistake is in discarding examination of that
path in the face of exponential technological growth. AI did, as you
describe, become more about clever tricks of A rather than a sober
examination of I. Funny thing, though, is that you're on the wrong side
of that divide, but you fail to see it.

> Watson for example was really great at answering questions. But did it
> show any sign of being able to learn and get smarter over time? Did it
> show any sign of basic human creativity? Not that I saw.

I already mentioned how I thought it was an embarrassing project from a
fundamental AI perspective. A lot of people and a lot of hardware were
thrown at winning, but you could tell from some of the wrong answers it
gave that there wasn't even a rudimentary level of understanding, let
alone any significant measure of intelligence.

> Strong learning systems program themselves to do things they never had to
> be engineered to do. You give it a problem, like needing to get food on
> the other side of the river, and the thing, on its own (might take many
> years), learns to build bridges.  It's what we can "learn" to do on our
> own, without having something more intelligent show us, that is the core of
> our real intelligence. And it's what is missing from all our current AI
> machines - none of them can begin to figure out, on their own, how to build
> a bridge, just because we give it the problem of getting food located on
> the other side of the river.

And, again, you continue to idealize your chosen silver bullet without
any evidence of progress after decades. Without any definition of what
intelligence is *beyond* learning.

> Without strong learning, machines show no real signs of "intelligence".

Your cart, sir! You have placed it before your horse!

> > Again, you're falling back to circular definitions. I want to talk
> > about intelligence, and you only want to talk about learning.
>
> One day, you will understand that the thing you call "intelligence" is
> learning. One day.

You learn data. Intelligence is what makes useful information out of
it.  I don't dispute the *possibility* that there is a dependence in the
process, perhaps to the extent that there can be no difference if we
fundamentally wish to create machine intelligence, but you *still*
haven't provided any evidence to support that.

> > Yet you
> > never discuss what is *inherently* intelligent about learning,
>
> I did a bit above now.

No, you didn't. You keep asserting it, but without anything to back it
up.

> This is happening in us with everything we do and everything we are exposed
> to, even though we have no direct awareness that our brain is being
> adjusted like that.

But we *don't* know that the abstract "adjustment" is the *source* of
intelligence or the *result* of intelligence! Given a lack of progress,
I maintain you've not worked backwards far enough. You remain hung up,
for some unknown reason, on the belief that you can stop at learning.
The whole field of AI is hung up on similar sub-problems. My whole
argument is that we might be wise to discard that approach and get back
to looking closer at the fundamental issue of what intelligence really
*is*.

> Because we exist inside a non-Markovian environment we will constantly
> learn things that are not true. If we hit a button 5 times, and every time
> we get food, we will learn there is a 100% probability of getting food when
> we hit the button.

Except when we don't "learn" that. That is to say, what would a *truly*
intelligent agent assess the probability to be? You say it should be
100%, but it could easily be argued that it might be a
lesser-and-decreasing probability based on the non-specific notion of
"wearing out" or "unsustainable" or any number of other factors that
could be brought into consideration. Note that the agent need not learn
anything "new" to change their probability score, and need not have
their non-100% evaluation reinforced in any way, and may even reject a
guarantee that it is 100% reliable. Even your own examples fail to
support your one-size-fits-all answer of "learning".
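
For instance, even the crudest hedged estimator (Laplace's rule of
succession, sketched below in Python; the 5-for-5 numbers are just your
button example) never concludes 100% from a handful of trials:

# Laplace's rule of succession: estimate P(success) as (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

print(rule_of_succession(5, 5))   # ~0.857, not 1.0, after 5-for-5 presses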

casey

unread,
Aug 11, 2011, 4:17:51 PM8/11/11
to
On Aug 12, 2:24 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
wrote:
> An ant does a lot of things based on genetic programming
> and simple reflex behavior. A bee might be a better example,
> but in any case you *do* have to show that there is some
> mental model in place that can reasonably be said to "think"
> in a manner that can reasonably be said to be "intelligent".

You seem to be talking about a mechanism.

> I think intelligence can be decomposed into something more
> specific than "a brain".

I don't define it as a brain, it is a behavior. But again I
see you as wanting to define it by some mechanism.


> Indeed, the whole field of AI is *supposed* to be based on
> the notion that "a brain" is not necessary.

Yes, the idea that a machine can *behave* intelligently.

But you seem to want to say behavior X is only intelligent
if caused by mechanism Z?

> If you *only* measure intelligence at the point the "what it
> does" changes, you are really missing the BIG picture when
> it comes to understanding intelligence.

Intelligent behavior obviously requires mechanisms and you
seem to be talking about understanding the mechanisms.


jc

Burkart Venzke

unread,
Aug 11, 2011, 6:25:42 PM8/11/11
to
>>>> Why don't we have a stong(er) AI by now?
>>>
>>> Why *should* we have it by now?
>>
>> Why not? ;)
>> Some persons are quite optimistic.
>
> Science needs to be about evidence, not blind optimism.

Without optimism, no science.
AI with "intelligence" in it is not only a natural science based on
evidence, but also a part of the humanities, of philosophy, when thinking
about what it is.

>>> It took thousands of years to get from Icarus to the Wright brothers
>>
>> But it took only some years from planes to the moon by a rocket.
>
> The current path of AI is not planes to rockets. Like I said, it's
> still at the wax and feathers stages.

The strongest AI can be seen this way. Therefore we should define
features of a little bit weaker AI (or something like a kernel) so that
we can extend it in the course of time.

>>> and, in the end, AF looked nothing like natural flight.
>>
>> Right. Therefore it is not necessary that AI can think with a brain with
>> neurons etc.
>
> Wrong angle. My intent was to point out that the *methods* for
> achieving flight differed from what humans naively assumed from the
> start. Likewise, AI has come to employ a lot of problem solving
> techniques that really have little to do with uncovering the nature of
> intelligence.

Therefore we should define aspects that seem to belong to intelligence,
with intelligence as a goal, as flight was a goal.

>>> Why assume that intelligence, which took millions more years to
>>> evolve to even our feeble level, is going to be an easier nut to crack?
>>
>> Like flying, artificial intelligence need not be the same as human
>> intelligence.
>
> But it still requires an understanding of what intelligence is, such
> that we can say any particular system, human or machine, has it.

Or we define aspects of it. If flight had been understood as "rising
by waving one's arms", no flying would have been invented.

>> We define what AI is, so it is not necessarily nebulous.
>
> Hardly. Intelligence is still in the realm of pornography: we know it
> when we see it.

Do we? Or do you only think or define that you know it!?

> We still have no formal definition for generic
> intelligence that allows us to reasonably approach the problem of
> building an artificial system to represent it.

Therefore we have to collect ideas, define models, create systems with
"some intelligence"...

You wrote about "understanding of what intelligence is". A problem is
that there is no clear definition or borderline to "not intelligent"; one
may define one, me or you another one. We can see it in weak AI: chess
playing once was seen as intelligent, but in the meantime it is not (by
Fritz and others).

>>> The better the A gets, the worse the I gets.
>>
>> What do you mean? Is it only your theory?
>
> No, it is my observation. From Deep Blue to Watson on Jeopardy (hate to
> pick on IBM, but they deserve it most), the AI community should be
> *embarrassed* about how little scientific value is extracted from those
> endeavors. That is to say, we have thrown all sorts of modern hardware
> and human intelligence into writing programs to *win* at chess, but to
> what scientifically useful end? Did the machines teach us more about
> chess as a result? Or game playing techniques in general? Did we learn
> *anything* about intelligence in general? No, we just threw a lot of A
> at the problem of winning one particular game, and as a result got that
> much less I in return.

Yes, that is a problem that weak AI is used too much instead of thinking
about the deeper features of intelligence.

>>> I mean,
>>> we study all kinds of related subjects like "learning", but it is by no
>>> measure a solved problem such that you can just wrap it up into a
>>> software library and sell it for all manner of uses.
>>
>> I think we should first develop a basic, general model of learning as
>> far as possible before we build up software libraries of it.
>
> Of course. My point is that we haven't gotten even that far. We have
> nebulous words like "learning"

Do you know that there are different types of well defined machine
learning models? Again, we should use them to go forward to a broader
model of learning.

> for what we think is part of intelligent behavior,

Can you imagine something intelligent which can not learn? I cannot!

> but we have no solid understanding about what mechanisms will
> achieve it such that it results in strong AI.

The way to strong AI may be the goal...

Burkart

Curt Welch

unread,
Aug 12, 2011, 10:04:12 AM8/12/11
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 11, 9:38 am, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> > On Aug 11, 1:31 am, c...@kcwc.com (Curt Welch) wrote:
>
> >> > We know the brain is a learning machine, because we can test,
> >> > with ease, its power to learn.
> >>
> >>
> >> But not learnt from a raw environment. It is limited at any point
> >> in history, by the intelligence of the social system it learns in.
> >
> > We can test its ability to learn in ANY environment. The fact
> > that most humans in the past 50,000 years have grown up in a
> > society has nothing to do with the raw learning power inherent in
> > each of us. And that raw learning power can be tested, isolated,
> > and quantified in the laboratory.
>
> > When humans work together, they create an even greater intelligence,
> > that learns and advances faster, than any human alone could advance.
> > But that global brain would not exist, if not for the power each
> > human brings to it.
>
> Nor would the human brain exist if not for the power each neuron
> brings to it.
>
> > To solve AI, we have to build the power of a single human.
>
> Sure but it will not generate the technology and academic learning
> seen in a modern society for it lacks the ability to achieve that
> outcome by itself.

Yes, but now, once again, you have switched the subject to something I was
never talking about.  You are talking about what a group of AIs can do
working together, instead of what power a single AI has.  You are talking
about that darn global brain, when I was talking about AI.

Are you saying that if we left a million of these AIs on Mars, they would
never develop the type of higher academic knowledge humans have developed?
They obviously would, if given enough time.

> It cannot learn in real time from raw data if
> that data is not coming from a social system.

That's just wrong. It's so wrong I don't even grasp how you can write
something like that. You are truly an odd person. So, if I step in dog
shit, and learn on my own not to do it again, that learning is impossible
if the dog shit didn't come from a social system? Really? That's your
view of human learning? You have to read it in a book first before you can
learn not to step in dog shit?

> > I think all those modules are actually created from the same type
> > of generic learning module. If you want build a hearing module,
> > feed the generic module sound data.
>
> What you are really talking about here is an evolutionary network.

Yes, evolution is reinforcement learning. I've talked about that many
times. But it's far more correct and meaningful to call it "reinforcement
learning" than to call it "an evolutionary network".

All forms of learning are, conceptually, systems that evolve over time. You
get that right? You aren't telling me anything new by telling me "what I'm
really talking about".

> I think your issue is a psychological need for an easy solution
> that you might hit on by yourself if you can only figure out how
> to construct a net that works the way you imagine it will work
> and thus bath yourself in glory.

That's a factor for sure. But that doesn't make me right or wrong. That
just explains why I keep responding to these messages and writing 1000's of
lines of responses. It's why I care whether I'm right or wrong, and it's
why I make a lot of effort to understand what the correct answer is, vs
most people who just think for two seconds and throw out an opinion, and
then move on.

> >> The global (or tribal) brain is an integral part of our brains
> >> just as our bodies are an integral part of what enables the
> >> cells in our body or ants in a colony to "act smart".
> >
> >
> > Bricks are a part of a building, the building is not a part of
> > the brick.
> > ...
> > You talk funny. You are confusing levels. A brick doesn't
> > get "stronger" because it's part of a wall.
>
> I don't talk funny you just don't follow the point being made.

That's right, I don't follow the point. I saw no valid point to follow.

Doc O'Leary

unread,
Aug 12, 2011, 11:47:05 AM8/12/11
to
In article <j21kt9$qtp$1...@news.albasani.net>,
Burkart Venzke <b...@gmx.de> wrote:

> Therefore we should define aspects that seem to belong to intelligence,
> with intelligence as a goal, as flight was a goal.

Yes, but while that *has* been done, I argue that the aspects commonly
used (e.g., learning, natural language understanding, etc.) are at too
high a level to actually address the underlying mechanism of
intelligence. Just because one aspect of an intelligent system is X,
that doesn't mean that a system with X will necessarily exhibit
intelligence. While one might hope the study of X would shed some light
on the underlying nature of intelligence, that does not seem to be the
way most AI research is going.

> You wrote about "understanding of what intelligence is". A problem is
> that there is no clear definition or borderline to "not intelligent"; one
> may define one, me or you another one. We can see it in weak AI: chess
> playing once was seen as intelligent, but in the meantime it is not (by
> Fritz and others).

This misstep is the heart of my argument. Playing chess *can* be an
intelligent activity, if you actually *do* set out to explore the
boundary between what is and is not intelligence. Instead, people made
the goal of their research to be *winning* chess games, and poured a lot
of hardware, software, and human ingenuity into attacking that problem.
Investigation into actual intelligence fell to the wayside.

> Do you know that there are different types of well defined machine
> learning models? Again, we should use them to go forward to a broader
> model of learning.

No, we should use them to go backwards to a finer understanding of
intelligence. Again, we are on the wrong path; going forward is not
progress.

> Can you imagine something intelligent which can not learn? I cannot!

I can imagine something that can learn and is not intelligent. I don't
even have to imagine it; we're surrounded by those kinds of machines
every day.

casey

unread,
Aug 12, 2011, 12:07:03 PM8/12/11
to
On Aug 13, 12:04 am, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
>> It cannot learn in real time from raw data if that data is
>> not coming from a social system.
>
> That's just wrong. It's so wrong I don't even grasp how you
> can write something like that. You are truly an odd person.

Yeah I am sure our cat thinks our behavior is odd at times
because it can't follow the reasoning behind it. You often
misread something and then insult the writer for talking
nonsense when it is just your inability to follow the
reasoning.


> So, if I step in dog shit, and learn on my own not to do it
> again, that learning is impossible if the dog shit didn't
> come from a social system? Really? That's your view of
> human learning? You have to read it in a book first before
> you can learn not to step in dog shit?

If you read back over the posts that is clearly not what I
think or mean.

It was clear to me, if not to you, from what I wrote before that I
was talking about learning calculus or an advanced (socially
evolved) language, not that a feral human couldn't learn not
to step in dog shit.


>>> I think all those modules are actually created from the
>>> same type of generic learning module. If you want build
>>> a hearing module, feed the generic module sound data.
>>
>>
>> What you are really talking about here is an evolutionary
>> network.
>
>
> Yes, evolution is reinforcement learning. I've talked about
> that many times. But it's far more correct and meaningful
> to call it "reinforcement learning" than to call it "an
> evolutionary network".

You need to understand what is possible in an evolving population
of individuals over millions of years and what is possible in an
evolving population of neurons in the lifetime of an individual.


>> I think your issue is a psychological need for an easy solution
>> that you might hit on by yourself if you can only figure out how
>> to construct a net that works the way you imagine it will work
>> and thus bathe yourself in glory.
>
>
> That's a factor for sure. But that doesn't make me right or wrong.

But it does bias your thinking and make your search selective and
make you more likely to get it wrong if the answer lies outside
your preconceived needs as to how you would like it to be.

> ...


> That's right, I don't follow the point. I saw no valid point
> to follow.

And from past experience I saw no point in going through the same
convoluted exchanges that miss the point all the time.

JC

Doc O'Leary

unread,
Aug 12, 2011, 12:11:26 PM8/12/11
to
In article
<5725af3c-90c4-44b9...@e20g2000prf.googlegroups.com>,
casey <jgkj...@yahoo.com.au> wrote:

> I don't define it as a brain, it is a behavior.

You are plainly wrong. As I said, the *result* of intelligence may be
different behavior, but that change in behavior can come *long* after
the agent had begun thinking about the situation.

> But you seem to want to say behavior X is only intelligent
> if caused by mechanism Z?

Yes. Both a child and a computer can play tic tac toe. Are they of
equal intelligence in the context of that behavior? If the computer
actually wins more, is it smarter? Since tic tac toe is a solved game,
the computer may even play perfectly. But does the perfect player have
any real understanding of its perfect play?

> > If you *only* measure intelligence at the point the "what it
> > does" changes, you are really missing the BIG picture when
> > it comes to understanding intelligence.
>
> Intelligent behavior obviously requires mechanisms and you
> seem to be talking about understanding the mechanisms.

The mechanisms matter because intelligence is about more than behavior.

casey

unread,
Aug 12, 2011, 12:23:01 PM8/12/11
to
On Aug 13, 1:47 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
wrote:

> Playing chess *can* be an intelligent activity, if you actually
> *do* set out to explore the boundary between what is and is not
> intelligence. Instead, people made the goal of their research
> to be *winning* chess games, and poured a lot of hardware,
> software, and human ingenuity into attacking that problem.
> Investigation into actual intelligence fell to the wayside.

Although it is true the intelligent behavior of chess playing
programs is instilled programmer intelligence, just as the
intelligent behaviors of insects are mostly instilled by the
process we call evolution, it at least gives some idea of what
is required for a learning system to learn.

For example, in getting machines to "see" we get some idea as
to what is needed and what is not, and thus have a better idea
of what a learning system actually needs to learn rather than
what we imagine it has to learn.

Although the human brain may not be simple, it may turn out
not to be as complex as some have imagined.

In the case of the ANN in TD-Gammon, for example, I suspect
a set of simple rules is being applied to assigning a value
to a game state, even if it is hidden in the weights and
we haven't been able to translate them into a set of
programmable statements. If something is too complicated,
I don't think it will ever happen within the lifetime of
any individual brain or ANN.

jc

Burkart Venzke

unread,
Aug 12, 2011, 1:18:27 PM8/12/11
to
On 12.08.2011 17:47, Doc O'Leary wrote:
> In article<j21kt9$qtp$1...@news.albasani.net>,
> Burkart Venzke<b...@gmx.de> wrote:
>
>> Therefore we should define aspects that seem to belong to intelligence,
>> with intelligence as a goal, as flight was a goal.
>
> Yes, but while that *has* been done, I argue that the aspects commonly
> used (e.g., learning, natural language understanding, etc.) are at too
> high a level to actually address the underlying mechanism of
> intelligence.

If you think of natural/human intelligence and learning, I agree.
But we can also construct machine learning models which are not at too
high a level. Remember that we speak of *A*I as a machine intelligence.

> Just because one aspect of an intelligent system is X,
> that doesn't mean that a system with X will necessarily exhibit
> intelligence.

Right.

> While one might hope the study of X would shed some light
> on the underlying nature of intelligence, that does not seem to be the
> way most AI research is going.

I think that most AI research is not fundamental enough - too many
applications with only a few aspects of AI.

>> You wrote about "understanding of what intelligence is". A problem is
>> that there is no clear definition or borderline to "not intelligent"; one
>> may define one, me or you another one. We can see it in weak AI: chess
>> playing once was seen as intelligent, but in the meantime it is not (by
>> Fritz and others).
>
> This misstep is the heart of my argument. Playing chess *can* be an
> intelligent activity, if you actually *do* set out to explore the
> boundary between what is and is not intelligence. Instead, people made
> the goal of their research to be *winning* chess games, and poured a lot
> of hardware, software, and human ingenuity into attacking that problem.
> Investigation into actual intelligence fell to the wayside.

You are right, systems that learn to play chess (as good AI) are very
rare. (Learning is my central aspect for AI.)

>> Do you know that there are different types of well defined machine
>> learning models? Again, we should use them to go forward to a broader
>> model of learning.
>
> No, we should use them to go backwards to a finer understanding of
> intelligence.

What kind of intelligence do you mean? Human? Natural (animals)? Or also
artificial, for example for a Turing test?

Creating new types of learning systems is a step toward understanding
(artificial) learning and therefore a very important aspect of intelligence.

> Again, we are on the wrong path; going forward is not
> progress.

Going forward cannot be wrong; only the kind of way on which you are
going can be wrong.

>> Can you imagine something intelligent which can not learn? I cannot!
>
> I can imagine something that can learn and is not intelligent.

That is not the question; please review the question above!

Burkart

Burkart Venzke

unread,
Aug 12, 2011, 2:59:28 PM8/12/11
to
On 08.08.2011 16:36, Curt Welch wrote:
> Burkart Venzke<b...@gmx.de> wrote:
>>> It's been said that the field of AI is still trying to define what AI
>>> is. There is still no general agreement on what the brain is doing, or
>>> how it's doing it.
>>
>> It is not necessary to know how the brain work if we define AI in
>> another way.
>
> Well, to me the "real AI" we are after is making machines that can replace
> humans at any task that currently only a human can do.

Really any task? Also as a total substitution for us humans? For example, I
think of love and other emotions, of having children...

For me, strong AI should have an intelligence comparable to ours but
without human emotions (which otherwise could lead to big problems such
as our replacement).

> If a company has to
> hire a human to do a job, because no one knows how to make a machine that
> can perform the same job, then we have not yet solved the AI problem.

I hope it will not turn out that only the company owners then have jobs
and 90-95% of humans do not... at least as long as those company owners do
not spend a lot of money/taxes on the 90-95%.

> And in that definition, I choose to rule out the biological functions
> humans are hired to do, like donate blood, and only include the tasks that
> a machine with the right control system should, in theory, be able to do.
>
> We don't need to know how the brain works to solve this problem, but we do
> need to build a machine that is as good as the brain - and odds are, by the
> time we solve this problem, we will at the same time, have figured out most
> of how the brain works.

I am not so sure about that. We are able to fly without knowing how a bird
or an insect does it.

>>> The reason I think we have made so little progress in all this time is
>>> because most people working on the problem don't (or didn't) believe
>>> human behavior was something that could be explained by a learning.
>>
>> You mean that they are working only on weak AI?
>
> Ah, I totally missed the fact that you used the word "strong AI" in your
> subject and that you might actually have been asking about the mind body
> problem.

I don't think that we have a mind-body problem.

> I don't believe in the strong vs weak AI position. Humans are just
> machines we are trying to duplicate the function of.

Strong and weak AI are not completely different for me. Weak AI is
something we already have, which uses normal computer programs. Strong AI
is the goal of an intelligence that is as good as our human intelligence,
though not necessarily achieved in the same way.

>>> The problem is that the type of machine the learning algorithm must
>>> "build" as it learns, is unlike anything we would hand-create as
>>> engineers. It's a machine that's too complex for us to understand in
>>> any real sense. So to solve AI, we have to build a learning algorithm,
>>> that builds for us, a machine, we can't understand. Building working
>>> machines is hard enough, but building a learning algorithm that is
>>> supposed to build something we can't even understand? That's even
>>> harder.
>>
>> Hm, you know... I am not a fan of rebuilding the brain, respectively its
>> neural structures, whose details really cannot all be understood.
>> But you are right, it is not necessary to understand all the details
>> precisely.
>>
>>> I think in the end, the solution of how these sorts of learning
>>> algorithms work, will be very easy to understand. I think they will
>>> turn out to be very simple algorithms that create through experience
>>> machines that are too complex for any human to understand.
>>
>> Could they be symbolic (as opposed to neural) in your mind?
>
> Well depends on what you mean by "symbolic". Digital computers are
> symbolic from the ground up (1 and 0 symbols) so everything they do,
> including neural nets, are symbolic at the core.

Those are not the symbols I am thinking of.

> The "symbols" that make up our language (words) are not a foundation of the
> brain, they are a high level emergent behavior of the lower level
> processing that happens.

OK, "symbols" has different meanings or intentions. Every word is a
lingual symbol with which we associate more or less (other) items (other
symbols, emotions etc.).

I think about a "stronger" (than "weak") AI which can act with and learn
symbols like words. How far such a way to AI may work, I don't know.

Burkart

Curt Welch

unread,
Aug 12, 2011, 6:52:15 PM8/12/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110810143404.902$t...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > What else is there to test other than physical behavior of a machine?
> > To say "basing it on behavior" is wrong you have to but forth what
> > non-physical thing you are talking about that a test of intelligence
> > should be based on. Are you suggesting intelligence is not physical?
> > Are you arguing the dualism position?
>
> No, no; don't get me wrong. Like I said, the whole foundation of AI is
> (or should be) the idea that a machine can be made intelligent. All I'm
> saying is that we haven't gotten to the "DNA" of intelligence. Behavior
> is more like physiology or morphology. AI took a wrong turn that way, I
> don't expect we will get any closer to "strong" AI until we throw out
> all the shortcuts we implemented and get back to actually dealing with
> the root idea of what intelligence really is.

Well, I agree with that. But I did that 35 years ago and I've figured out
what the root of intelligence is. It's just a reinforcement learning
machine.

You also seem to be using the word "behavior" in a way different from me.
And you are using it in a way that irritates me when I see people do that.
People are trained by the way we talk to think about humans as dualistic
beings. Even after people choose to reject ideas like a soul, they
continue to think and talk about humans as if they are dualistic, because
that's just the way they were conditioned to talk. And a classic part of
that is the notion that "thinking" is not a physical act. And if thinking
is not a physical act, then the physical acts, like moving our arms, are
called "behavior" - as distinct from "thinking".

But thinking is physical. It's a body motion just as much as moving our
arm is a body motion. It might be the movement of ions across membranes,
and the production and transport of other molecules in the head, but it is
all, at the low level, physical motion. All there is to study in humans is
the physical motion of the entire body (which includes the motions
happening in the brain).

The entire body is a physical machine that moves. And some abstract
specification of how the body moves is what "intelligence" is. The
definition of intelligence must be some sort of abstract description of
how the parts of a machine move.

And how the body moves is "its behavior". To suggest that AI took the
wrong turn by studying behavior is to fail to grasp that the only thing
there is to study is behavior.

The question to ask is which aspect of behavior is the important part to
study. Is "winning the chess game" the important part of the behavior, or
is "how the move choices were selected" the more important part? I think
that's what you are getting at when you say "studying behavior" was a
wrong turn. I think you are saying we need to focus more on how it's done
inside vs the high level results, like winning a chess game, or making a
robot walk. Which I agree with. But I just wanted to lay out all the
above to make sure we are on the same page in this thinking.

I think the "how it is done" is a parallel machine that maps high dimension
sensory inputs into high dimension effector outputs, and which is
conditioned though reinforcement. That is, it slowly changes that mapping
based on conditioning by a global reward signal. None of the chess
programs or robot-walking programs are structured anything like that - and
I think they will never duplicate human-like intelligence - or even help us
get there, because they were structured totally wrong.
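
To make that concrete, here is a minimal sketch of the idea (Python; the
network shape, the random-perturbation update, and the toy reward function
are all my own illustrative assumptions, not a finished design): a mapping
from a high dimension input vector to an output vector that is adjusted
using nothing but a single global scalar reward.

import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 64, 8          # high dimension sensory input, effector output
W = np.zeros((n_out, n_in))  # the whole "machine" is just this mapping

# Toy environment: a hidden ideal mapping the reward function happens to
# pay off for.  The learner never sees it, only the scalar reward.
target = rng.normal(size=(n_out, n_in))

def reward(M, x):
    return -float(np.linalg.norm(M @ x - target @ x))

for step in range(5000):
    x = rng.normal(size=n_in)                    # one sensory frame
    trial = W + 0.05 * rng.normal(size=W.shape)  # perturb the whole mapping
    # Keep the change only if the single global reward went up.  No
    # per-weight teacher, no error gradient handed in from outside.
    if reward(trial, x) > reward(W, x):
        W = trial

x = rng.normal(size=n_in)
print("final error:", -reward(W, x))

A real solution would have to do far better than blind perturbation, of
course - the point is only that the training signal is one number, not a
per-output teacher.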

> > The problem is that many people don't believe those steps are leading
> > us in the right direction. Not until we are done, and then look back
> > at the path, will it be obvious to everyone which steps actually led us
> > to the solution.
>
> Nope. Science does not work that way.

You seem to be mixing up the processes of science and engineering. AI is
mostly an engineering problem, not science. Science is the process of
studying what is. Engineering is the process of creating something that
has never existed.

The science part of AI is studying the brain - and there are plenty of
scientists working on that. They told us (the engineers) what type of
machine it was 60 years ago. They told us it was an operant and classical
conditioned learning machine.

Many of the engineers and scientists however decided that answer was
wrong (or at least greedy reductionism (aka oversimplification)). And as
such, they searched for and created other high level abstractions to try
to get a handle on what that mesh of neurons was doing.

I say Skinner was right, and the "science" was complete back then in terms
of understanding what type of machine we were dealing with. The rest of
the science work to be done is uncovering the full implementation details
of the brain.

The engineering work, which is not science, is creating a machine that
doesn't yet exist, which is of the same type machine the brain is.

> If you really are taking steps,
> you could show them. Like I said, where is your chess playing system
> that became *intelligent* about the game of chess?

TD-Gammon. (for Backgammon)
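
For anyone who hasn't looked at it: TD-Gammon learned a value function for
board positions from self-play, using temporal-difference updates on a
neural network. Here is the bare core of that update in toy form (Python;
a tabular two-stage "game" invented for illustration, not Tesauro's actual
network or backgammon):

import random

random.seed(1)
V = {"early": 0.0, "late": 0.0, "win": 0.0, "loss": 0.0}
alpha, gamma = 0.1, 1.0

def step(state):
    # Toy "game": early -> late -> win (70%) or loss (30%).
    if state == "early":
        return "late", 0.0
    return ("win", 1.0) if random.random() < 0.7 else ("loss", 0.0)

for episode in range(10000):
    s = "early"
    while s not in ("win", "loss"):
        s_next, r = step(s)
        # TD(0): nudge V(s) toward r + gamma * V(s'), so the value of a
        # position is learned from what follows it, with no labeled data.
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(V)  # V["late"] and V["early"] both settle near 0.7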

> Where is the
> followup step that is *intelligent* about board games in general?

That step hasn't been taken yet. It needs to be taken. But my belief is
that we are one small engineering step away from a machine that will
demonstrate all the "next steps" you are waiting to see.

> And,
> ideally, you could *demonstrate* how intelligence is present by (at the
> very least) showing a control group (e.g., a learning system with no
> prior training) that performs less intelligently than a test group
> (e.g., a learning system that was trained at chess, but asked to play go
> (or vice versa)). Science does not need you to be "done"; hell, science
> says you're *never* done. If you're not able to show real progress like
> that, you have to acknowledge you might just be on the wrong path.

I have no issue acknowledging I might be on the wrong path. I just assign
such a low probability to that "might" that I feel it's valid to act as if
it were zero. I don't believe absolute truth exists. But that's a
different debate.

> > When we look back at what happened, all your words will be proven silly
> > nonsense. The fact that YOU don't see what is happening, is not proof
> > that there is no proof, or that no progress is being made. It is only
> > proof that you failed to see it, or understand it, until after someone
> > showed you the final answer.
>
> That is something *every* quack claims about their miracle cure or
> perpetual motion machine or whatever woo they're pushing.

Yes, if you apply the quack-o-meter to the way I talk, I come out with a
very high score. But the same happens anytime that someone understands
some truth before it becomes socially accepted. The people that talked
about the earth being round when everyone knew it was flat scored very high
on the quack-o-meter as well.

Most people that sound like quacks actually are. But not all of them are.
So if all you can base your decision on is whether I sound like a quack,
then you should assume I am one. But if you learn more about learning, you
have the option of understanding why I say it is the definition of
intelligence you are looking for.

> Intelligence
> and science both offer no "final answer", and *demand* that you always
> consider the possibility that you are wrong. Take a step back and
> reassess your position.

Well, we have a final answer here because we have a well defined goal line.
If the question is "can you fly faster than the speed of sound" then there
is a final answer.

AI has the same sort of (mostly) well defined goal. The goal of building a
machine that equals or outperforms all human mental abilities. Once such a
machine is built, we are done. End of story. AI solved.

If you want to continue to explore and better understand all the mysteries
of the human brain, then that is a far more open-ended process that can
continue long after this simpler AI goal is reached.

> > Ah, you think the word "learning" means "what humans do". That's not
> > what the word means in the context I'm using it, or in the context the
> > entire field of machine learning uses the word.
>
> No, I think the word "learning" means "learning", as in incorporating
> data to a system. I do not circularly define a procedure, *call* it
> learning, and then proclaim that context to be the only one valid for a
> discussion of "learning" or "intelligence".

Lighten up, dude. Words have different meanings to different people. There
is no single "correct" meaning for a word. I'm sorry that I, and the
entire field of machine learning, use the word differently from you. If
you want to correctly understand the meaning of my words, you have to
apply my meaning of the word "learning", not your own.

> > Ah, you use the word "learning" to mean something nebulous and
> > undefined that humans do. No wonder you think it's nebulous and
> > undefined. That's how you have chosen to (not) define the word.
>
> Again, no. I simply haven't defined it circularly, or pretended that it
> inherently has anything to do with intelligence. As a child, I learned
> a lot of things about Santa Claus. Absolutely none of that speaks to
> how I intelligently think about Santa Claus.

I have made no circular definition. Stop being stupid. I have defined
intelligence clearly and precisely. You simply don't agree with my
definition (and don't like to use a few of the words the same way I do).
Don't play stupid word games. I've explained what my words mean, and there
are no circular definitions in MY meaning. They only become circular when
you apply YOUR definition to my words.

> > > Yes, yes. If I specify how a faster-than-light engine works, it is a
> > > mere matter of details in how to *build* it that keeps it from
> > > working. I have made no error in my specification!
> >
> > That is correct.
>
> Wow. Just . . . really?

Of course. What if there were a way it could be built? And what if I
actually built it? Would your specification be wrong? Of course your
specification is not wrong. It's just lacking in details - as ALL
SPECIFICATIONS ARE.

By our best current knowledge, the machine you have specified can't be
built. But that doesn't make the specification invalid. You get that
right? A specification is just a description of something, and your
specification is a valid description. Whether the thing can, or does
exist, is a separate issue from whether you have a specification.

> > I can fully specify a quick sort algorithm as a sort algorithm that
> > sorts in O(n log n) expected time. There is no error in that
> > specification.
> > But since I did not specify how the algorithm works, there is a key and
> > very important piece missing from the specification that would allow
> > someone to easily build such a thing.
>
> No. The important part about an algorithm is what is *does*, not how
> efficiently it does it.

What's "important" is tonally up to the eye of the beholder. It has
NOTHING TO DO WITH THE ALGORITHM, or the machine. We often ignore speed
when writing code, but it's highly important in other applications, like
all the real time applications we create (like flying the space shuttle).

> > However, in the case of the reinforcement learning specification, we do
> > know it's possible, because we have brains that do it. So we have two
> > possible mysteries to work on 1) how does the brain do it and 2) how
> > can we build a non-biological machine to do it.
>
> You missed the bigger mystery: 0) exactly *what* is being done; the
> difference that makes a difference. I get that you might be hoping to
> work backwards to find it, but I've seen no evidence of anyone working
> backwards far enough in the last 50 years to say "this is the root of
> intelligence". If you can't do that, I have reasonable doubts that
> you're on the right path.

Right. But again, I believe this is because it's not a big, complex hill
to climb. It's a very, very small one. Two steps small. It's a needle in
a haystack problem. It's not a problem of trying to mine a million needles
from the hay, but just one. Because of this, you won't see the "small
steps".

I believe this problem is much like building a flying machine. Where are
the small steps there? There aren't many. Either your machine flies, or
it doesn't. There's not much in the way of "small steps" to be had there.
Anyone that chooses to believe flying is impossible will be able to make
the same argument you are making, right up until the day he's proven wrong
- and someone builds that first flying machine. Anyone in 1900 could look
back at 1000 years of man dreaming about building flying machines, and
point out the hundreds of attempts that all failed. And they could say
what you are saying - no one is showing any progress, so we must not be
even close.

I think AI is the same way. The technology needed to solve it, has been
advancing in small steps, and when all the pieces come together (hardware,
software), it will suddenly take off. It will suddenly be easy to make
machines look, and act "intelligent" when the day before, none of the
machines seemed to act intelligently.

> If you have no science to back that up, you are fooling yourself.

The science of behaviorism backs it up.

> > Learning machines don't need humans to program them. They program
> > themselves. That's what learning is. Learning machines change their
> > design on their own, without the help of an intelligent "programmer". A
> > good learning machine (as a human is) will learn to play chess on its
> > own, without having a human program in the chess playing behaviors.
> > This is the whole point of why learning is so key to intelligence.
> > With a strong learning machine, the "intelligence" to design and
> > build a chess playing machine is inherent in the learning machine.
> > Strong learning machines free the human engineers so they don't have
> > to be the "intelligence" behind the behavior.
>
> Yes, we all know the grand idea. Still missing is the factor of
> *intelligence* you assume but (still) fail to define.

I've not failed in ANY SENSE to define it. I repeat, my only failure, is
to convince you (and others) that my definition is the right one.

> You say "learning
> is so key to intelligence", but I say that intelligence is key to
> learning. If you don't believe me, write a letter to Santa Claus and
> see what he says.

Sure, but by writing that, all you have said is "Curt, I have no clue what
intelligence is, but despite my total ignorance, I choose to reject your
idea because it doesn't feel right to me."

I don't mind people having that view. People are ignorant; it's normal.
We all are. But you should be honest with yourself about the fact that
you have no good tools to evaluate whether my idea holds water or not.
You reject it using your gut instinct instead of educated reason.

> > My hand waving is the answer. As I have said from the first post, lots
> > of people, including you, just don't understand that yet. You are the
> > type of person what will NEVER understand it, until the solution is
> > found (by someone not like you), and they show the finished work to
> > you. Then, and only then, will someone like you, understand the long
> > path that got us to the solution.
>
> I understand science. I understand evidence. I don't understand hand
> waving. I don't understand "oooooh, you just wait and see how clever I
> am!"

Sure you understand it. You just want to see evidence that, I say, can't
and won't exist until after AI is actually solved. You want to see the
plane fly before you will believe it's possible for a plane to fly,
because you don't understand the flight dynamics of airfoils and the
power-to-weight issues they imply. The Wright brothers understood these
things, and as such, knew they were really close, even before they saw
the plane fly.

I understand things about the problem domain of reinforcement learning
that you don't understand (as shown below). And those things I do
understand are what allow me to see how close we are, despite the fact
that not a single machine looks "intelligent" to a layman yet.

> > > That appears to be a vague machine specification, not a definition of
> > > intelligence. More to the point, it seems to credit *all* learning
> > > as intelligence, but I hope you are aware that intelligence can be
> > > found without new information, without reinforcement, and often times
> > > in direct contradiction to what we are told to learn.
> >
> > Examples please. Just one will do. I'll show you where the learning
> > happened and where it was reinforced. (or at least, make up a just-so
> > story that fits your example).
>
> Again, it's not about learning, it's about intelligence. You can learn
> about Santa Claus or orbital teapots or evolution or climate change or
> any number of other things that may or may not be true. None of it
> speaks to your intelligence when *thinking* about those subjects.

Your ignorance of the subject is showing. Your understanding of learning
seems to be limited to the highly naive schoolboy view of "filling up with
facts".

> > OK, for the forth time, "intelligence" is ANY reinforcement learning
> > process.
> >
> > How many more times do you need me to repeat it for you?
>
> Repetition does not equal truth. Your impotent circular reasoning
> remains unconvincing.

I never expected to change your mind in this thread. I've run across many
people like you in the many years I've debated this subject. You don't
have the educational foundation to understand this argument. And most
likely, you will never get it. That's just my prediction.

> > The prediction has been made and is well defined. We need a
> > reinstatement learning machine that operates in a high dimension

[reinforcement] (I get careless with the spelling corrector at times)

> > environment and learns at the same rate a human brain can learn at the
> > same environment. If you do not understand what I mean by that, I can
> > give you examples and explain it to you.
>
> I understand perfectly well, but that is not a testable prediction. It
> is an empty desire, like the desire to have FTL space travel.

Partially true. But not nearly so empty. The desire to have FTL space
travel is a desire to find a way around all the science that says it's
impossible.

The desire to build a generic reinforcement learning machine that, on its
own, is the foundation of intelligence at the human level, is a desire
backed by all the work of science that tells us that is exactly what a
human is.

> More to
> the point, you haven't established that intelligence fundamentally
> requires the same processing rate and environment of a human.

My definition of intelligence defines it NOT to need that. My definition
says that the TD-Gammon program is intelligent. My definition says that
biological DNA-based evolution is another example of intelligence in this
universe.

My definition has nothing to do with humans.

Don't confuse my definition of intelligence (which is simply "any
reinforcement learning process"), with my talk about solving AI. To solve
AI, we must duplicate the mental powers of humans in a machine. That's
beyond making a machine intelligent. It's the problem of making it AS
intelligent as a human.

To solve AI, we need to build a class of reinforcement learning machine
that deals with a type of environment that none of our current
reinforcement machines can deal with. Building that is the work of
solving AI, not the work of "creating intelligence". We have already
created intelligence time and time again.

I believe my definition of "intelligence" is valid, because I believe once
we solve AI, we will find there was one huge, key piece of technology
missing all these years (not thousands of little advances). And that one
key missing piece will be a strong, generic, reinforcement learning
algorithm. If this proves true (as I'm sure it will, though most others
are not), then when we try to define intelligence, we will find ourselves
on a slippery slope where you can't define one type of reinforcement
learning algorithm as "intelligent" without defining them all as
intelligent.

> Just like
> "dog years", shouldn't you be able to show that you can achieve results
> along a virtual timeline that is expanded to reflect hardware
> limitations?

I don't think we are dealing with hardware limitations. I think the
hardware to "solve AI" existed 50 years ago. The stuff we have today is so
fast, and so cheap, that once the algorithm is understood, we will almost
instantly blow past human intelligence with our machines.

But what I mean by that is not that we could easily have built a machine
equal to human intelligence 50 years ago, but that we could have built a
demonstration of intelligence that might, for example, have looked like
rat intelligence. And it would have made the machine look so "life-like"
and "conscious" that people would have understood that the technology was
the key to solving AI. Then, 50 years ago, the race would have been just
to develop faster, cheaper hardware, instead of a hunt in the dark for a
definition, which is what the past 50 years have been filled with.

> And thus prediction when technology will catch up to the
> demands of your system? These are the simple science questions that you
> should already have addressed, yet all I see you doing here is a lot of
> hand waving.

I've written thousands of posts here over the past many years. I've
covered all these questions many, many times in my past posts. Have you
read them? Do you really want me to write a million lines of text here to
explain it all to you?

The limit is not hardware speed or size or cost. We don't have the right
algorithm. But the algorithm which I think is missing is only a small
handful of lines of code, not millions of lines of code.

I think once the algorithm is created, it will take very little time to
build machines equal in intelligence to humans. I don't think the human
brain really has all that much processing power. Our phones are darn close
to having enough power now, I suspect.

> > > Since you clearly don't have robots
> > > demonstrating human-level intelligence with your oh-so-perfect
> > > solution, what is the highest level of intelligence you *can*
> > > demonstrate?
> >
> > TD-Gammon. A small step, but a step in the right direction.
>
> Two decades ago was the last step? Hardly sounds like the right
> direction to me.

It's a hard step to take. It's taken 60 years really, not just two
decades.

Your arguments would be exactly the same if it were 1903 and you were
trying to suggest we were nowhere near powered flight because no one
had shown any progress. There's only one real step to take - get it off
the ground and keep it off. As I've said many times, I think "true" AI is
the same type of problem. Until you solve it, nothing looks like real
"intelligence".

TD-Gammon doesn't look intelligent to most people. It's only people like
me that see it as a wonderful step towards intelligence.

> > > Again,
> > > not just in end-result behavior, but in demonstrable *intelligence*
> > > in the machine that can be shown to scale up. Bonus points for
> > > scientific predictions of that scaling into the future.
> >
> > There aren't enough data points to make such a prediction. How many
> > data points existed between the invention of the bubble sort, and the
> > first invention of an n log n sort? How could anyone have made a
> > prediction of how long it would take to create an n log n sort? Such a
> > prediction would be impossible. Only a wild guess could be made. I
> > believe this AI problem is the same. I don't think there are many steps
> > between what we have today (with systems like TD-Gammon), and the system
> > that will duplicate human intelligence. I think it's mostly one big step
> > that will be made, and likely soon (within a small handful of decades).
> > But it's a wild ass guess.
>
> So stop guessing and start doing science.

I don't play with meat. I'm an engineer. I do R&D, not science. Others
are doing the science.

> Again, I think you're on the
> wrong path when you insist on dealing with things like O when you have
> yet to show you have the proper definition for what a sort (or
> intelligence) is. AI will continue to fail at making progress so long
> as the focus is on the A and not the I.

True. If my definition is wrong, then I'm just headed down a dead end.
But it's not wrong. I see the clues even if you don't. I didn't pick this
path just because I found this sort of technology interesting. I picked it
because I first started out (in the 70's) to understand what intelligence
was. This is where I've ended up after decades of asking myself that very
question (what is intelligence?).

> > Read all the work done by the Behaviorists.
>
> Let me sleep on that decision.
>
> > > Me, I'd just be happy if you actually could provide a useful
> > > definition of intelligence that would indicate you were even on the
> > > right path.
> >
> > Again, the problem is that before someone like you will believe it's
> > the right path, there must be a PATH. I don't think there is one in
> > this case. Either you have written the N log N algorithm, or all you
> > have is the bubble sort, and since the bubble sort doesn't "look
> > intelligent" there's little proof that there is a path.
>
> Sorting algorithms *aren't* intelligent, and neither is your precious
> learning algorithm. Thinking about sorting is different from doing a
> sort is different from validating a sort. Until you go *backwards* far
> enough along your path that you can start talking about intelligence,
> you don't have the right path to go forwards.

That's right. But I have done all the work, and all I'm telling you here,
in this thread, is that this is where I ended up after all that work. I'm
not trying to explain 40 years of thinking on my part in one thread here.

> > What you are looking for, can only exist, if there is a long slow path
> > to be climbed with lots of little incremental improvements happening
> > along the way. I think the very history of AI work shows such a path
> > doesn't seem to exist. Though lots of clever programs have been
> > written, many of which some people believed couldn't be done, the overall
> > progress to making a machine that acts and thinks like human, seems
> > elusive. None of our clever machines seem to be "intelligent", and
> > even as they get more advanced, they don't seem very "intelligent".
>
> Which was my argued misstep from the start. Surely if *we* can serve as
> examples of intelligence, we must also serve as examples of a long slow
> path to get there.

Some technologies just don't have a long slow path. It's the nature of the
beast. I think AI is one of those.

Learning machines are very different from traditional engineering, and
that difference has a lot of people really confused. With traditional
engineering, we have to build every little feature into the machine we
want. You want 4 wheels, you have to build 4 wheels. You want two doors,
you have to design and build two doors. You want a radio in the dash, you
have to design and build the radio. Etc., etc.

This is not only true of mechanical engineering, it's true of how we do
most of our software engineering. Every feature that ends up in a program
like Microsoft Word, has lines of code that created the feature. The more
complex the program, the more lines of code you have to write.

Humans seem very very complex - more complex than any program. And as
such, engineers naturally assume this means "lots of code" to create it.

But learning systems are like magic. You don't design and build all the
features; you just throw down some raw material and stand back, and the
damn thing builds all the features itself. Most engineers have no
understanding of how to make this magic happen.

This lack of understanding of learning processes, is also why so many
people have problems understanding and accepting evolution. They want to
believe that a watch, implies a watchmaker. If there is complexity, there
must have been some greater complexity to conceive of it, and build it.

Learning machines create things that are more complex than they are.
They are machines that evolve complexity.

It's a class of complexity, most people have no real understanding of.

And it's why, when we use our old-school eyes, to look at human behavior
(even our own behavior), all we tend to see is lots of complexity, and we
want to find the "watchmaker" to explain it all.

There is no watchmaker per se. There is only a learning algorithm from
which all the complexity grows.

People need to look past the complexity of human behavior, and see the seed
from which it all grows - a strong reinforcement learning algorithm. Most
people have not learned to do that. Skinner learned to do that. I've
learned to do that.

> Our mistake is in discarding examination of that
> path in the face of exponential technological growth. AI did, as you
> describe, become more about clever tricks of A rather than a sober
> examination of I. Funny thing, though, is that you're on the wrong side
> of that divide, but you fail to see it.

No, I see it. I just happen to believe I have the right answer. You think
I just have yet another pointless clever trick.

> > Watson for example was really great at answering questions. But did it
> > show any sign of being able to learn and get smarter over time? Did it
> > show any sign of basic human creativity? Not that I saw.
>
> I already mentioned how I thought it was an embarrassing project from a
> fundamental AI perspective. A lot of people and a lot of hardware were
> thrown at winning, but you could tell from some of the wrong answers it
> gave that there wasn't even a rudimentary level of understanding, let
> alone any significant measure of intelligence.

Yeah, at least one of the wrong answers was shockingly wrong. I don't
remember what it was, but it was even difficult to understand how Watson
had found that answer. It was as far off base, as if someone had asked
what color the spots on a dalmatian were, and Watson answered George
Washington (with a 95% degree of certainty)!

However, I think there was more important technology (of the right type)
in there than people realize. A big part of creating strong RL is this
problem of associating billions of facts to pull out the most relevant
one. Meaning, given the current situation, which of the billion past
experiences I've had in my life is most relevant to the current situation?
Solving the RL problem has a lot of important association parallels to
what the Watson database has to do, as well as what search engines like
Google have to do.

> > Strong learning systems program themselves to do things they never had
> > to be engineered to do. You give it a problem, like needing to get
> > food on the other side of the river, and the thing, on its own (it might
> > take many years), learns to build bridges. It's what we can "learn" to
> > do on our own, without having something more intelligent show us, that
> > is the core of our real intelligence. And it's what is missing from
> > all our current AI machines - none of them can begin to figure out, on
> > their own, how to build a bridge, just because we give it the problem
> > of getting food located on the other side of the river.
>
> And, again, you continue to idealize your chosen silver bullet without
> any evidence of progress after decades. Without any definition of what
> intelligence is *beyond* learning.

There is nothing *beyond* it. That's all it is. That's why I don't go
beyond that - there is nothing beyond there to go to.

Let me sidetrack and explain what I see needs to be done. To create a
robot that acts like a human (including with thinking power), it needs to
have high bandwidth parallel sensory inputs, and lower, but still high,
bandwidth parallel outputs controlling all its effectors.

In the middle of this black box, there needs to be a mapping function
(basically some sort of neural network) that directly maps all that high
dimension input data to the lower bandwidth output data. It's best seen
as a continuous data flow problem, rather than some sequential computer
program.

The network that does the mapping of this live flowing data needs to be
trained by a reward signal. The network evolves its configuration over
time, to slowly change how this robot reacts to its sensory data.

There are different ways to conceptualize what is happening here, but one
way, is to think of it as an associative memory look-up system. The
sensory data flowing in is the "context" and the behavior flowing out is
the "answer".

The key here however is that it's not so much a lot of little specialized
modules (as we might conceive of in our traditional engineering solutions),
but rather one large, generic, context sensitive, programmable look-up
array. It's more like a holographic memory system than a "one datum at one
location" memory system.

The system is basically storing a complex sort of "average" of all past
learning experiences on top of each other. And the resulting behavior that
emerges, is the sum total of all past learning experiences (biased by their
similarities to the current context).

So what we have is a generic "behavior generating" system that is
programmable by a reward signal. It evolves so as to create the behaviors
that are predicted to maximize rewards.

The complexity of the behavior is only limited by the effective "size" of
this "context sensitive memory look-up system". And humans have a fairly
large one, which is able to produce a very large set of learned behaviors
that we spit out as we go through each day of our lives.

I don't believe there is anything more to human "intelligence" than that.
It's just a programmable behavior generator that is trained by
reinforcement. When we are born, our behavior set is almost useless; as we
gain experience through interacting with our complex environment, our
behaviors evolve into a set of actions that are more useful at getting
rewards for us. Old actions that aren't very useful are slowly evolved
into more useful actions.

All our "thinking" and acting, is learned in this same way. It's just
behaviors which are evolved based on our experiences of how useful they
were to get rewards.
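
Here is a deliberately crude caricature of that "average of past
experiences, biased by similarity to the current context" (Python; the
Gaussian similarity kernel, the reward weighting, and the toy reward rule
are my own choices for illustration, not a claim about how the brain
implements it):

import numpy as np

rng = np.random.default_rng(0)
memory = []   # (context, action, reward) triples from past experience

def act(context, sigma=0.5):
    # Blend past actions, weighted by how similar their context was to
    # the current one and by how well they were rewarded at the time.
    if not memory:
        return rng.normal(size=2)        # no experience yet: flail
    ctxs = np.array([m[0] for m in memory])
    acts = np.array([m[1] for m in memory])
    rews = np.array([m[2] for m in memory])
    sim = np.exp(-np.sum((ctxs - context) ** 2, axis=1) / (2 * sigma ** 2))
    w = sim * np.maximum(rews, 1e-3)     # similarity x past reward
    return (w[:, None] * acts).sum(axis=0) / w.sum()

# Store a few fake experiences, then query a new, similar context.
for _ in range(200):
    c = rng.normal(size=3)
    a = rng.normal(size=2)
    r = float(np.exp(-np.linalg.norm(a - c[:2])))   # toy reward rule
    memory.append((c, a, r))

print(act(np.array([0.5, -0.2, 0.1])))

The hard engineering problem, of course, is making something of this shape
work at high dimension and in real time - which is the missing step I keep
talking about.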

> > Without strong learning, machines show no real signs of "intelligence".
>
> Your cart, sir! You have placed it before your horse!
>
> > > Again, you're falling back to circular definitions. I want to talk
> > > about intelligence, and you only want to talk about learning.
> >
> > One day, you will understand that the thing you call "intelligence" is
> > learning. One day.
>
> You learn data. Intelligence is what makes useful information out of
> it. I don't dispute the *possibility* that there a dependence in the
> process, perhaps to the extent that there can be no difference if we
> fundamentally wish to create machine intelligence, but you *still*
> haven't provided any evidence to support that.
>
> > > Yet you
> > > never discuss what is *inherently* intelligent about learning,
> >
> > I did a bit above now.
>
> No, you didn't. You keep asserting it, but without anything to back it
> up.
>
> > This is happening in us with everything we do and everything we are
> > exposed to, even though we have no direct awareness that our brain is
> > being adjusted like that.
>
> But we *don't* know that the abstract "adjustment" is the *source* of
> intelligence or the *result* of intelligence!

Right. To answer that, we have to do science. Skinner did that science
80(?) years ago now. Then he spent the rest of his life trying to do the
same thing I'm doing here: trying to educate the non-believers as to what
he found.

> Given a lack of progress,
> I maintain you've not worked backwards far enough.

Well, that's exactly why the world gave up on Behaviorism in the 50's after
a strong start out of the gate. No further progress was being made. No
one could use that understanding to build a machine that acted intelligent
for example. It was the lack of forward progress that killed the faith in
that path.

But I don't think it was the wrong path. I'm sure it was the right path.
It was just a hard nut to crack. Not a big nut, or a complex nut, just a
hard one.

The reason it was a hard nut is that, looking at external behavior, there
was no real clue how to implement the internal solution. It works much
like an encryption algorithm. You can send it data and get results back
all day, but you get no hint how to write the code by looking at all that
data. You can work on it for 100 years and still get no clue how to write
the encryption algorithm. You get no clue if you are even close, because
if you are just a little bit off, all your behavior is wrong. But once you
get it right, suddenly, every behavior is right.

Encryption algorithms are designed to be hard like that on purpose.

This RL learning system I claim we need to build is hard to reverse
engineer just by accident, not by design. It's just the nature of how a
system like that works. But that very nature is why AI has not been
solved in all this time.

But we are getting real close. All the work funded by Jeff Hawkins, for
example, is biting at the heels of these associative memory look-up
systems. And his group is making good progress.

> You remain hung up,
> for some unknown reason, on the belief that you can stop at learning.
> The whole field of AI is hung up on similar sub-problems. My whole
> argument is that we might be wise to discard that approach and get back
> to looking closer at the fundamental issue of what intelligence really
> *is*.

Been there, done that, and the answer is an associative look-up behavior
generating machine with high dimension inputs and outputs, which is
trained by a global reward signal.

> > Because we exist inside a non-Markovian environment we will constantly
> > learn things that are not true. If we hit a button 5 times, and every
> > time we get food, we will learn there is a 100% probability of getting
> > food when we hit the button.
>
> Except when we don't "learn" that. That is to say, what would a *truly*
> intelligent agent assess the probability to be?

You are just showing your ignorance there.

> You say it should be
> 100%, but it could easily be argued that it might be a
> lesser-and-decreasing probability based on the non-specific notion of
> "wearing out" or "unsustainable" or any number of other factors that
> could be brought into consideration. Note that the agent need not learn
> anything "new" to change their probability score, and need not have
> their non-100% evaluation reinforced in any way, and may even reject a
> guarantee that it is 100% reliable. Even your own examples fail to
> support your one-size-fits-all answer of "learning".

It fits it perfectly, and you, like many, show your ignorance of the
complexity at work in such systems when you try to make the argument you
did above.

If someone were to "bring into consideration a concept of wearing out",
where do you think that concept came from? What was the logic that caused
that person to bring that concept into consideration?

I'll tell you.

Let's say we had a case in the past where we hit a button 5 times, and it
worked 5 times. But then later, we hit it again, and it didn't work. What
did we learn by that? We learned that some things stop working after a
while. We develop the concept of "wearing out".

Now, years later, we hit a button 5 times, and it works every time (as in
my example). How does the system evaluate that? It doesn't evaluate it as
100% likely the button will always work. Why? Because IT WAS NOT THE ONLY
EXPERIENCE THE SYSTEM HAD TO WORK WITH.
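
You can put numbers on that point with a simple Laplace/Beta-style
estimate (Python; the particular prior counts are invented just to stand
in for "a lifetime of buttons that sometimes wore out"):

def p_works(successes, failures, prior_successes=0, prior_failures=0):
    # Estimated probability the button works next time: a simple
    # Laplace/Beta-style blend of this button's record with whatever
    # prior button experience the learner is carrying around.
    a = successes + prior_successes + 1
    b = failures + prior_failures + 1
    return a / (a + b)

# A learner whose *only* experience is 5 presses, 5 payoffs:
print(p_works(5, 0))            # ~0.86, and climbing toward 1.0

# A learner carrying prior experience that buttons sometimes wear out,
# say 80 past payoffs and 20 past failures across other buttons:
print(p_works(5, 0, 80, 20))    # ~0.80, and much slower to move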

By suggesting what a "truly intelligent" agent would do, you are ASSUMING
it has a wide background of PREVIOUS EXPERIENCE (beyond what I gave in the
example). YOU CHANGED THE CONDITIONS OF THE EXAMPLE.

I said: hit a button 5 times. You changed it to: having hit buttons
hundreds of times in the past, now hit this new button 5 times. In your
example, the "truly intelligent" agent is using past experience to make
predictions about this new situation.

Do you think a truly intelligent newborn baby would "bring up the concept
of wearing out" when it learned that sucking on the tit always gave milk
(the first 5 times)? At that point, the baby doesn't have past experience
to apply, so its behavior is very limited. But a day later, it's gained a
lot of experience it can apply. A year later, it's gained all that much
more experience. Everything that one-year-old does is based on a year of
past experience in similar situations.

That is what makes us intelligent. The ability to usefully apply lessons
from the past, in _similar_ situations, to new situations. The ability to
abstract an important concept out of past experience to create a concept of
"wearing out" and apply that abstracted concept to a new situation.

This is nothing more than what a strong reinforcement learning machine must
do. It must dynamically create abstract concepts and use those abstract
concepts to judge the value of future actions, in _similar_ situations.

Human "intelligent" behavior is complex, because it's a merging of a life
time of past lessons into every action and every choice we make. Including
our choices to do something like talk to ourselves in our head privately vs
choosing to wave our lips and make sounds.

It's a memory look-up problem trying to answer the question "what is the
best action to perform at this instant, based on a lifetime of lessons
learned from everything I've done in the past?". How do you build a
machine to do that in a high dimension IO domain? Figure that out, and you
will have solved the I part of AI. Our "high intelligence" means we are
very good at picking actions which are highly useful, based on all the
past lessons we have learned by past interactions with our environment.

A system that is not as good at applying past lessons to current
situations would not be as "intelligent" (it's not as good of an RL
machine - at least for the environment it is being tested in).

Curt Welch

unread,
Aug 12, 2011, 7:33:49 PM8/12/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <j21kt9$qtp$1...@news.albasani.net>,
> Burkart Venzke <b...@gmx.de> wrote:
>
> > Therefore we should define aspects that seem to belong to
> > intelligence, intelligence as a goal, as flight was a goal.
>
> Yes, but while that *has* been done, I argue that the aspects commonly
> used (e.g., learning, natural language understanding, etc.) are at too
> high a level to actually address the underlying mechanism of
> intelligence. Just because one aspect of an intelligent system is X,
> that doesn't mean that a system with X will necessarily exhibit
> intelligence. While one might hope the study of X would shed some light
> on the underlying nature of intelligence, that does not seem to be the
> way most AI research is going.

I think you have made your point. But in all this, you have put forth no
suggestion as to what "I" might actually be - or even what sort of thing
it is, or how we might describe it. Do you have any notion or rough idea
of what it is you think you are looking for?

For example, I think the brain is just a single processing machine that
makes the arms move in reaction to the sensory data. So understanding it
as a signal processing machine is a high level place to start to understand
what we are trying to describe. Do you agree with that, or do you suspect
there might be something even more mysterious than neurons making arms move
at work creating our intelligence?

> > You wrote about an "understanding of what intelligence is". A problem
> > is that there is no clear definition of or borderline to "not
> > intelligent"; one may define it one way, me or you another. We can see
> > it in weak AI: chess playing was once seen as intelligent, but in the
> > meantime it is not (because of Fritz and others).
>
> This misstep is the heart of my argument. Playing chess *can* be an
> intelligent activity, if you actually *do* set out to explore the
> boundary between what is and is not intelligence. Instead, people made
> the goal of their research to be *winning* chess games, and poured a lot
> of hardware, software, and human ingenuity into attacking that problem.
> Investigation into actual intelligence fell to the wayside.

Yeah, I think there were high hopes that "intelligence" would be solved in
the early days of AI, but then when that failed to happen, people just
chose to pick projects that were related and focus on them, with no real
expectation that they were solving the intelligence problem. The field of
AI just fragmented into a field of building machines that could perform
some limited domain task that only a human could perform, without any real
care or concern about how well it fit into the bigger picture. After all,
they needed projects they could complete, so they could continue to get
funding. Most of the field of AI is still in that mode.

My work is a self-funded hobby, so I never had to worry about taking
shortcuts to get results. I've only ever worked on the bigger picture of
trying to understand the correct founding principles of what the machine
we call the brain is doing.

> > Do you know that there are different types of well defined machine
> > learning models? Again, we should use them to go forward to a broader
> > model of learning.
>
> No, we should use them to go backwards to a finer understanding of
> intelligence. Again, we are on the wrong path; going forward is not
> progress.
>
> > Can you imagine something intelligent which can not learn? I cannot!
>
> I can imagine something that can learn and is not intelligent.

Right, that's a valid point.

> I don't
> even have to imagine it; we're surrounded by those kinds of machines
> every day.

--

Curt Welch

unread,
Aug 12, 2011, 7:44:49 PM8/12/11
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 13, 12:04 am, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> >> It cannot learn in real time from raw data if that data is
> >> not coming from a social system.
> >
> > That's just wrong. It's so wrong I don't even grasp how you
> > can write something like that. You are truly an odd person.
>
> Yeah I am sure our cat thinks our behavior is odd at times
> because it can't follow the reasoning behind it. You often
> misread something and then insult the writer for talking
> nonsense when it is just your inability to follow the
> reasoning.
>
> > So, if I step in dog shit, and learn on my own not to do it
> > again, that learning is impossible if the dog shit didn't
> > come from a social system? Really? That's your view of
> > human learning? You have to read it in a book first before
> > you can learn not to step in dog shit?
>
If you read back over the posts, that is clearly not what I
think or mean.

It was clear to me, if not to you, from what I wrote before that I
was talking about learning calculus or an advanced (socially
evolved) language, not that a feral human couldn't learn not
to step in dog shit.

Yes, well, that's the problem of losing context when writing and
responding to Usenet posts. You might have had that context in mind when
you wrote it, but when it was from multiple posts back, you should always
assume the reader will not remember the context and make it clear in your
new comment - or expect to be misunderstood.

> >>> I think all those modules are actually created from the
> >>> same type of generic learning module. If you want to build
> >>> a hearing module, feed the generic module sound data.
> >>
> >>
> >> What you are really talking about here is an evolutionary
> >> network.
> >
> >
> > Yes, evolution is reinforcement learning. I've talked about
> > that many times. But it's far more correct and meaningful
> > to call it "reinforcement learning" than to call it "an
> > evolutionary network".
>
> You need to understand what is possible in an evolving population
> of individuals over millions of years and what is possible in an
> evolving population of neurons in the lifetime of an individual.

Yes. It's called "building a reinforcement learning machine". It's
something I try to gain greater understanding of all the time.

> >> I think your issue is a psychological need for an easy solution
> >> that you might hit on by yourself if you can only figure out how
> >> to construct a net that works the way you imagine it will work
> >> and thus bathe yourself in glory.
> >
> >
> > That's a factor for sure. But that doesn't make me right or wrong.
>
> But it does bias your thinking and make your search selective and
> make you more likely to get it wrong if the answer lies outside
> your preconceived needs as to how you would like it to be.

That's just as true for you as for me as for everyone, John. We are all
reinforcement learning machines that use our past experience to guide our
actions. Our choices are not just _biased_ by past experience, they are
100% controlled by past experience. Whoever is lucky enough to have the
right past experience is the one that will, in the end, luck into the
right path forward for a puzzle like this. You talk as if my view is
biased and you are free from bias, so you have the right to discount my
beliefs. That's bull shit. Your view is 100% biased just like mine is.
One is probably closer to the truth than the other, but both could be way
off base. The type of bias pressures at work are not a good indicator of
the accuracy of an opinion. They just explain why two people like you and
me might have such different views.

> > ...
> > That's right, I don't follow the point. I saw no valid point
> > to follow.
>
> And from past experience I saw no point in going through the same
> convoluted, missing the point all the time, exchanges.

:) But yet you keep responding!

> JC

Curt Welch

unread,
Aug 12, 2011, 7:52:26 PM8/12/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article
> <5725af3c-90c4-44b9...@e20g2000prf.googlegroups.com>,
> casey <jgkj...@yahoo.com.au> wrote:
>
> > I don't define it as a brain, it is a behavior.
>
> You are plainly wrong. As I said, the *result* of intelligence may be
> different behavior, but that change in behavior can come *long* after
> the agent had begun thinking about the situation.
>
> > But you seem to want to say behavior X is only intelligent
> > if caused by mechanism Z?
>
> Yes. Both a child and a computer can play tic tac toe. Are they of
> equal intelligence in the context of that behavior? If the computer
> actually wins more, is it smarter? Since tic tac toe is a solved game,
> the computer may even play perfectly.

This has nothing to do with the point you are making, but I just wanted to
point something out here.

When we play tic tac toe, we aren't just playing the game, we are playing
the opponent as well. And because of that, there is no "perfect" solution.
Tic tac toe is just hard enough that most humans will make some careless
mistakes when playing it. And an AI that does a better job at predicting,
and taking advantage of, the mistakes its opponent is likely to make will
be a better player (win more games in the long run). Trying to predict the
odds of an opponent making mistakes is as hard an AI problem as any,
because it requires you to predict what a human is likely to do across
many games.
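
A toy expected-value calculation shows why (Python; the 30% blunder rate
and the two candidate moves are made-up numbers, not measurements of real
players):

def expected_score(p_win, p_draw, p_loss):
    # Value a line of play as win=1, draw=0.5, loss=0.
    return 1.0 * p_win + 0.5 * p_draw + 0.0 * p_loss

p_blunder = 0.3   # assumed chance this opponent misses the right reply

# Two moves that are identical against perfect play (both draw), but one
# of them leaves the opponent a chance to lose.
plain = expected_score(0.0, 1.0, 0.0)                   # always a draw
trap = expected_score(p_blunder, 1.0 - p_blunder, 0.0)  # draw unless blunder

print(plain, trap)  # 0.5 vs 0.65: the "best" move depends on the opponent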

casey

unread,
Aug 12, 2011, 8:05:10 PM8/12/11
to
On Aug 13, 8:52 am, c...@kcwc.com (Curt Welch) wrote:
> Yes, if you apply the quack-o-meter to the way I talk, I come
> out with a very high score. But the same happens anytime that
> someone understands some truth before it becomes socially
> accepted. The people that talked about the earth being round
> when everyone knew it was flat scored very high on the
> quack-o-meter as well.

You are confusing the scientist with the uneducated layman.
Laymen may not be able to separate the quack from the deep
thinker because they don't have the knowledge required.
That the Earth was a sphere was demonstrated experimentally
all the way back to ancient times by those who had a
scientific bent. You have not done that with your notions
about a network with a high dimensional solution to problems.

> But learning systems are like magic. You don't design and build
> all the features, you just throw down some raw material and
> stand back, and the damn thing builds all the features itself.
> Most engineers have no understanding of how to make this magic
> happen. This lack of understanding of learning processes, is
> also why so many people have problems understanding and accepting
> evolution.

And many who believe they understand evolution don't. When someone
asks how something can evolve out of nothing, their question makes
sense. Evolving systems are not like magic; they are understood.
Not every system has what is needed for it to evolve. And those
that can evolve do so in *possible* steps. They do not solve high
dimensional problems, which is impossible. Any solution to a high
dimensional problem turns out to be a solution that makes use of
some means of reducing that high dimensionality.

Learning is incremental: small steps that can happen in the number of
trials possible in a given period of time.

There is no magical solution to the high dimensional problem.
Constraints must be found or it will not happen.

Complex things that cannot be learned in the lifetime of an individual
can be learned over many generations, provided there is some way
of passing that knowledge on to future generations. That is what
makes modern man more capable than his ancestors. Most of our
learning takes place in many brains over many generations. A single
brain, like a single neuron, is severely limited in what it can learn.


jc

Curt Welch

unread,
Aug 12, 2011, 8:53:46 PM8/12/11
to
Burkart Venzke <b...@gmx.de> wrote:
> Am 08.08.2011 16:36, schrieb Curt Welch:
> > Burkart Venzke<b...@gmx.de> wrote:
> >>> It's been said that the field of AI is still trying to define what AI
> >>> is. There is still no general agreement on what the brain is doing,
> >>> or how it's doing it.
> >>
> >> It is not necessary to know how the brain work if we define AI in
> >> another way.
> >
> > Well, to me the "real AI" we are after is making machines that can
> > replace humans at any task that currently only a human can do.
>
> Really any task?

For sure.

> Also as a total substitution for us humans? For example, I
> think of love and other emotions, of having children...
>
> For me, strong AI should have an intelligence comparable to ours but
> without human emotions (which otherwise could lead to big problems such
> as our replacement).

No, I think that's a sci-fi fallacy. I think it's not just important to
add in emotions, I think it's impossible to create intelligence without
them.

Reinforcement learning machines are emotion machines. So if you write an
RL algorithm, you have already built emotions into a machine.

If it doesn't look like emotions to you, that's only because your learning
machine isn't good enough yet, not because you have left something
fundamental out.

What do you think love is other than a drive to be attracted to something?

Reinforcement learning machines must assign value to everything in order to
work. Every sensation, every action, has value assigned to it (aka every
state, or every state-action pair). The assigned (calculated) values are
what drive every action choice the machine makes. What makes us seek out
the company of one person, and avoid another? What makes us eat one food,
and avoid another? It's just the values our learning system has assigned
to all these things.

Love is nothing more than a high value our system has assigned to a thing.

Fear is just the prediction of pending loss of rewards (pending high
probability of lower future rewards).

All our emotions can be explained in terms of the values our reinforcement
learning system is assigning to the things we can sense and the actions
which are selected based on those values.

It's impossible to build a reinforcement learning machine that is not
emotion based.
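
To make the "values drive every choice" idea concrete, here is a minimal
tabular sketch of the usual reinforcement learning bookkeeping (the names,
learning rate, and toy update rule are illustrative assumptions, not any
particular system's design):

import random
from collections import defaultdict

# Value estimates for (state, action) pairs - the "assigned values" that
# drive every choice the machine makes.
Q = defaultdict(float)
ALPHA = 0.1     # learning rate (assumed)
EPSILON = 0.1   # exploration rate (assumed)

def choose_action(state, actions):
    # Mostly pick the highest-valued action; occasionally explore.
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def learn(state, action, reward):
    # Nudge the stored value toward the reward actually received.
    Q[(state, action)] += ALPHA * (reward - Q[(state, action)])

In a machine like that, "liking" something is nothing more mysterious than
a large stored value for it, which is the point being made above.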

> > If a company has to
> > hire a human to do a job, because no one knows how to make a machine
> > that can perform the same job, then we have not yet solved the AI
> > problem.
>
> I hope it won't be that only the company owners then have jobs and 90-95%
> of the humans have none... unless those company owners spend a lot of
> money/tax on the 90-95%.

Well, that's a different social problem, but one I think we will be facing
big time this century. Assume by 2050, we have $1000 AI processors that
are more intelligent than any human. They can be built into any
machine, to make the machine intelligent (or control them remotely), and we
can motivate and train these AI machines to do any job a human can do -
only in most cases, better. And the AI brains don't need vacations. They
don't need to be paid. And when you train one for a job, you can clone it
into as many other machines as you want with a copy operation. You can
even create special configurations where long term memory (aka learning) is
disabled, and only short term memory works, so it can perform the same
boring job hour after hour, day after day, and have no concept of having to
do this for the 2000 years it's been doing it.

When these sorts of machines become available, the value of human
intelligence will drop to below the value of these machines (which is only
the $1000 capital cost plus the pennies a day for the power to run them).
Humans basically won't be able to find work.

Not only that, we will get to the point, as consumers, that we won't want
other humans to do these jobs, because they will suck at the jobs compared
to the machines. Get a human to fix my car? No thanks, the jerk tries to
rip me off, fails half the time to diagnose the problem correctly, takes
hours to fix something the machines can fix in minutes, and breaks
something else in the process and makes me pay to fix it, because he claims
it was already broken. And he makes me schedule the appointment days in
advance. If I take my car to the AI dealer, they can fix even major
problems, like replacing a transmission, in minutes, because the AI has 50
special built arms for doing that work, and can work like 20 mechanics at
once on the car. The cars end up being built in ways to shave money that
mean a human can't even do the work on them anymore.

Human doctor doing surgery? No fucking way.

Human cab driver? No way - the AIs have a near zero accident rate and are
able to share "thoughts" with the other cars nearby so they all know each
other's intentions instantly.

Human prostitutes? Not once you try an AI. Their skill is so far greater
(and no chance of human diseases). Plus, the one you like so much
relocates into the body of the local AI prostitute no matter what shop you
go into, so you get the one you like the most no matter where you are - no
need to "wait".

Humans won't be able to work - the entire notion of "working for a living"
will go right out the window once these advanced AIs become dirt cheap.
Whoever owns the most machines, will be the one with all the wealth. If
you fail to get on board early, by buying and owning the first machines,
you will be screwed. The guys that get in first will use the machines to
take over all markets - and will build bigger and smarter machines to make
all the investment decisions for them. The world will quickly become
dominated by a few huge, privately held AI corporations that don't have a
single human working in them.

I suspect there will be a real danger that future generations will learn to
like the AIs better than they like other humans. They might not even want
to have other humans around them, when they could instead, have their "AI
friends".

What happens when one of the richest AI barons decides he doesn't really
like humans at all, and he's gained so much wealth and power, he just takes
over the whole world with an AI army, and kills everyone except a handful
of human slaves he keeps around in his "zoo"? Why share the resources of
the planet with billions of other humans, when he can have it all for
himself and his 10 closest friends?

To prevent a path like that from happening, society will have to make some
changes.

> > And in that definition, I choose to rule out the biological functions
> > humans are hired to do, like donate blood, and only include the tasks
> > that a machine with the right control system should, in theory, be able
> > to do.
> >
> > We don't need to know how the brain works to solve this problem, but we
> > do need to build a machine that is as good as the brain - and odds are,
> > by the time we solve this problem, we will at the same time, have
> > figured out most of how the brain works.
>
> I am not so sure about it. We are able to fly without knowing how a bird
> or insect can do it.

Yeah, but most of what makes a bird fly, we did figure out - mostly just
airfoil fluid dynamics combined with power to weight ratios. What we
didn't figure out, is their control system, but that's just the AI problem
again.

My real point there is I think the answer to the AI problem is actually
very simple - and it's so simple that, once we do figure it out well enough
to build machines, it will become fairly obvious what the brain is doing
and how it's doing it.

> >>> The reason I think we have made so little progress in all this time
> >>> is because most people working on the problem don't (or didn't)
> >>> believe human behavior was something that could be explained by a
> >>> learning.
> >>
> >> You mean that they are working only on weak AI?
> >
> > Ah, I totally missed the fact that you used the word "strong AI" in
> > your subject and that you might actually have been asking about the
> > mind body problem.
>
> I don't think that we have a mind body problem.

Well, I don't think there is one, but we have the problem that a good
number of people are still confused by it - including some scientists
studying the brain, and engineers, trying to build AI. It's causing some
work effort to be misdirected.

> > I don't believe in the strong vs weak AI position. Humans are just
> > machines we are trying to duplicate the function of.
>
> Strong and weak AI are not completely different for me. Weak AI is
> something we already have, which uses normal computer programs.
> Strong AI is the goal for intelligence which is quite as good as our
> human intelligence but not necessarily in the same way.

That's the AI definition of "strong AI" - which is fine by me. The original
definition of strong AI came from the philosophers and is a reference to
the mind-body problem. In that context, "Weak AI" is a machine that acts
exactly like a human, but isn't conscious - it has no subjective
experience (a philosophical zombie). "Strong AI" is a machine that both
acts like a human, and is conscious.

> >>> The problem is that the type of machine the learning algorithm must
> >>> "build" as it learns, is unlike anything we would hand-create as
> >>> engineers. It's a machine that's too complex for us to understand in
> >>> any real sense. So to solve AI, we have to build a learning
> >>> algorithm, that builds for us, a machine, we can't understand.
> >>> Building working machines is hard enough, but building a learning
> >>> algorithm that is supposed to build something we can't even
> >>> understand? That's even harder.
> >>
> >> Hm, who knows... I am not a fan of rebuilding the brain, or rather
> >> its neural structures, whose details really cannot be understood in
> >> every detail.
> >> But you are right, it is not necessary to understand all details
> >> precisely.
> >>
> >>> I think in the end, the solution of how these sorts of learning
> >>> algorithms work, will be very easy to understand. I think they will
> >>> turn out to be very simple algorithms that create through experience
> >>> machines that are too complex for any human to understand.
> >>
> >> Could the be symbolic (in opposite to neural) in your mind?
> >
Well, it depends on what you mean by "symbolic". Digital computers are
symbolic from the ground up (1 and 0 symbols), so everything they do,
including neural nets, is symbolic at the core.
>
> That are not the symbols I think of.

No, most people don't think that way. I'm weird.

> > The "symbols" that make up our language (words) are not a foundation of
> > the brain, they are a high level emergent behavior of the lower level
> > processing that happens.
>
> OK, "symbols" has different meanings or intentions. Every word is a
> lingual symbol with which we associate more or less (other) items (other
> symbols, emotions etc.).
>
> I think about a "stronger" (than "weak") AI which can act with and learn
> symbols like words. How far such a way to AI may work, I don't know.

Right, to me a symbol is a class of patterns that can be detected by a
sensory system, and that is distinct from the other symbols in a set.

And what that ends up meaning is that it's possible to build a machine
that can respond in different ways to different symbols - such as flash a
red light when it senses a DOG and a green light when it senses a CAT. The
dog and cat are symbols as defined by the machine (not by the symbol
itself), which can discriminate its behavior based on the sensory
patterns.

Symbolic processing is central to AI, because in order to do something like
recognize a cookie vs a pile of shit, we need that basic pattern
classification system at work, which discriminates the sensory data into
different sets - the cookie set vs the pile of crap set. And in doing
that act of sensory discrimination, it has defined "symbols".

The same "symbol processing" hardware is needed for knowing how to eat
cookies and avoid piles of shit, as it is needed to generate and respond to
language. An AI needs to learn to parse the entire sensory environment,
into "chunks" that it learns to react to, and that "chunking" is what
defines the concept of symbols. At least that's how I see it.
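
A toy sketch of that "red light for DOG, green light for CAT" machine might
look like this (the feature names and responses are invented purely for
illustration; the interesting part of real AI is learning the classifier
rather than hand-coding it as done here):

# A "symbol" here is just a class of sensory patterns the machine can
# discriminate, plus a distinct response for each class.
def classify(pattern_features):
    # Stand-in for a learned pattern classifier (hand-coded for the sketch).
    if "bark" in pattern_features or "tail-wag" in pattern_features:
        return "DOG"
    if "meow" in pattern_features or "whiskers" in pattern_features:
        return "CAT"
    return "UNKNOWN"

RESPONSES = {"DOG": "flash red light", "CAT": "flash green light"}

def react(pattern_features):
    symbol = classify(pattern_features)
    return RESPONSES.get(symbol, "do nothing")

print(react(["bark", "tail-wag"]))   # flash red light
print(react(["whiskers"]))           # flash green light

The hard part, of course, is learning classify() from raw sensory data
rather than writing it by hand.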

casey

unread,
Aug 12, 2011, 9:07:47 PM8/12/11
to
Curt wrote:
>>> That's a factor for sure. But that doesn't make me right or wrong.
>>
>>
>> But it does bias your thinking and make your search selective and
>> make you more likely to get it wrong if the answer lies outside
>> your preconceived needs as to how you would like it to be.
>
>
> That's just as true for you, as for me, as for everyone John.

Well, if you are aware of your bias, take steps to prevent it from
restricting your thoughts. A bit like the random element added
to RL to get it to explore new moves. Consider the possibility
that all those complex weights in an ANN might only appear to be
complex but in fact only work when they are able to find some
simple constraints within human understanding, even if we are
unable to unravel them at this point in time.

How do we come up with a working set of weights over n trials?

How does a protein come up with a working set of amino acids
over a limited set of trials, considering that you can look
at these problems as trying to solve a combination lock?


>> And from past experience I saw no point in going through the same
>> convoluted, missing the point all the time, exchanges.
>
>
> :) But yet you keep responding!

Like poking at a sore tooth.

The problems of learning and evolving systems fascinate me, but I
find your understanding of the issues to be rather simplistic.

jc

Curt Welch

unread,
Aug 12, 2011, 9:37:01 PM8/12/11
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 13, 8:52 am, c...@kcwc.com (Curt Welch) wrote:
> > Yes, if you apply the quack-o-meter to the way I talk, I come
> > out with a very high score. But the same happens anytime that
> > someone understands some truth before it becomes socially
> > accepted. The people that talked about the earth being round
> > when everyone knew it was flat scored very high on the
> > quack-o-meter as well.
>
> You are confusing the scientist with the uneducated layman.
> Laymen may not be able to separate the quack from the deep
> thinker because they don't have the knowledge required.
> That the Earth was a sphere was demonstrated experimentally
> all the way back to ancient times by those who had a
> scientific bent. You have not done that with your notions
> about a network with a high dimensional solution to problems.
>
> > But learning systems are like magic. You don't design and build
> > all the features, you just throw down some raw material and
> > stand back, the damn thing builds all the features itself.
> > Most engineers have no understanding of how to make this magic
> > happen. This lack of understanding of learning processes, is
> > also why so many people have problems understanding and accepting
> > evolution.
>
> And many who believe they understand evolution don't.

> They do not solve high
> dimensional problems which is impossible. Any solution to a high
> dimensional problem turns out to be a solution that makes use of
> some means of reducing that high dimensionality.

You are ignorant John.

Solving high dimension problems are trivially easy to prove possible.

A binary search in a continuous space is an example of a trivial solution
that runs quickly in a high dimension problem.

Trying to search a continuous space using a linear search is an example
that is impossible.

Here's the code for you:

Problem: Find the square root of 2 accurate to 1000 places.

Random search solution that fails to scale because of the high dimension
nature of the search space:

loop
x = random real between 1 and 2 with 1000 digits after the decimal point
y = x * x
is y equal to 2 (to 1000 places)? if yes, print "square root of two is " x, then stop.
end

Binary search:

low = 1, high = 2
loop
mid = (low + high) / 2
y = mid * mid
is high - low small enough (less than 10^-1000)? if yes, print "square root of two is " mid, then stop.
if y > 2, then high = mid
else low = mid
end

The second algorithm solves the class of problem you called impossible.
The first algorithm fails to scale, and can't solve it.

Both algorithms are search algorithms searching a high dimension problem
space (one with so many states that a search of all of them is
impossible).

All high dimension reinforcement learning algorithms face the same problem.
They have a state space many orders of magnitude too large to search every
space. So to solve it, there are a handful of different approaches.

One is to use evaluation functions that can direct the search, as the
binary search did. That only works if such an evaluation function is known
to exist. The other, the one which I think the brain uses, is to lump
large areas of the search space together, and search the entire space with
a single test. It's like if you are looking for a needle in the haystack,
and you can test if a given part of the haystack contains the needle, by
using a tool like a metal detector. But in RL, that tool is rewards. The
system can check if a certain behavior produces a slightly higher reward,
and when it finds that it does, it can narrow down in that part of the
search space, just like with the binary search, to see if it can improve
the rewards. The rewards direct the search through the high dimension
space.

For example, we want to train a robotic arm to throw a ball into a trash
can. The total number of possible behaviors of the arm can be huge - aka
high dimension. One very narrow class of arm motions will produce the best
results - of getting the most balls into the can. But finding that set of
motions is a needle in a haystack (like, for example, if the can is 50 ft
away from the arm). The search space is so large that randomly picking
different arm motion sequences could take a thousand years to luck onto a
good result. But if it keeps testing different attempts, it will find the
ball has bounced into the can, by almost random luck, at times. And if it
statistically tracks which arm motions have more "luck" than others, it
will be able to deduce a direction of search through the space that will
maximize the "luck". And instead of taking thousands of years to find a
good throwing behavior, it manages to converge on a good answer in a day.

Such an approach does NOT, as you said, reduce the problem to a "low
dimension". The system was always searching a high dimension problem space.
What it did was make good use of the rewards as "hotter"/"colder"
clues to direct the path through the space.
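
A toy sketch of that "track which motions have more luck" idea, with the
throw collapsed to a single made-up parameter and a made-up reward function
(real arms and real rewards would of course be far messier):

import random

TARGET_DISTANCE = 50.0   # hypothetical distance to the can, in feet

def reward(release_angle):
    # Stand-in physics: reward is higher the closer the (made-up) throw
    # lands to the can.
    distance_thrown = release_angle * 1.2
    return -abs(distance_thrown - TARGET_DISTANCE)

# Reward-guided search: jitter the current behavior and keep any change
# that earns a higher reward, instead of sampling arm motions at random.
angle = random.uniform(0.0, 90.0)
best = reward(angle)
for _ in range(10000):
    candidate = angle + random.gauss(0.0, 1.0)
    if reward(candidate) > best:
        angle, best = candidate, reward(candidate)
print(angle, best)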

The trick to solving this class of problem is NOT reducing it to a low
dimension problem. As I have told you many times, it must be solved as a
high dimension problem. But it is solved by making effective use of the
rewards as clues to guide the search. Making maximal use of the
information in the reward signal is how these machines solve these types
of problems.

If the reward signal does not contain enough clues to direct the search,
then the problem just is impossible (well, impractical, because the only
solution is a random search of the entire space). But those "impossible"
problems are not the ones the brain is solving. They are not what we, as
intelligent machines, are solving. The problems we solve in the high
dimension problem we call "life" are the ones where the rewards do give us
clues as to which direction to search. The machines that make the best use
of those clues from the reward signal are the ones which are the most
intelligent.

Curt Welch

unread,
Aug 12, 2011, 9:43:10 PM8/12/11
to
casey <jgkj...@yahoo.com.au> wrote:
> Curt wrote:
> >>> That's a factor for sure. But that doesn't make me right or wrong.
> >>
> >>
> >> But it does bias your thinking and make your search selective and
> >> make you more likely to get it wrong if the answer lies outside
> >> your preconceived needs as to how you would like it to be.
> >
> >
> > That's just as true for you, as for me, as for everyone John.
>
> Well if you are aware of your bias take steps to prevent it from
> restricting your thoughts.

Well, if you are aware of your bias, then why do you keep ignoring the
obvious facts I keep sharing with you? Why don't you take steps to get on
the right track instead of constantly arguing nonsense directions like you
do?

My point is that your comment is stupid. Telling me not to go the wrong
direction is as absurd as me telling you not to go the wrong direction.
Neither of us is following what we believe to be the wrong direction.

> A bit like the random element added
> to RL to get it to explore new moves. Consider the possibility
> that all those complex weights in an ANN might only appear to be
> complex but in fact only work when they are able to find some
> simple constraints within human understanding even if we are
> unable to unravel them at this point in time.

No matter how many times you tell me to be stupid, I just won't do it.
Sorry.

casey

unread,
Aug 13, 2011, 12:00:05 AM8/13/11
to
On Aug 13, 11:37 am, c...@kcwc.com (Curt Welch) wrote:
> You are ignorant John.

No, you are, Curt. I understood the problem years ago, and it is explained
for the layman in the book written by Ross Ashby.

> Solving high dimension problems are trivially easy to
> prove possible.
>
>
> A binary search in a continuous space is an example of a trivial
> solution that runs quickly in a high dimension problem.

That only works if you have found a constraint!! Just as an ANN
can only work if it finds constraints.

You have not solved a high dimensional problem with a binary search;
you have solved a low dimensional problem created by making use
of constraints.

example:

86422864826428462868462 millions of these types gets a reward
17537972775391275271973 millions of these types gets punished

This system can be reduced to a system with two possible states.
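One constraint consistent with the two example strings (the first uses only
even digits, the second does not) would let a trivial filter do exactly
that two-state reduction - a sketch, with the caveat that the intended
distinguishing feature isn't actually spelled out here:

def reward_class(digit_string):
    # Assumed constraint: strings made entirely of even digits get rewarded,
    # everything else gets punished. One simple test collapses an enormous
    # space of strings down to just two states.
    return "reward" if all(int(d) % 2 == 0 for d in digit_string) else "punish"

print(reward_class("86422864826428462868462"))   # reward
print(reward_class("17537972775391275271973"))   # punish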


> All high dimension reinforcement learning algorithms face the
> same problem. They have a state space many orders of magnitude
> too large to search every space. So to solve it, there are a
> handful of different approaches.

It all depends on finding constraints which is only possible if
there is a lower dimensional representation possible.


> One, is to user evaluation functions that can direct the
> search, as the binary search did. That only works if such
> an evaluation function is known to exist.

As I wrote you need to find constraints so it is NO LONGER
a high dimensional problem like a combination lock.


> The other, the one which I think the brain uses, is to lump
> large areas of the search space together, and search the
> entire space with a single test.

And what constraint does it use to "lump large areas of search
space together"? A constraint must be found to reduce the
apparent complexity. The combinational lock works because it
is not solvable, because it cannot be reduced. It is a truly
high dimensional problem.


> It's like if you are looking for a needle in the haystack,
> and you can test if a given part of the haystack contains
> the needle, by using a tool like a metal detector. But in
> RL, that tool is rewards. The system can check if a certain
> behavior, produces a slightly higher reward, and when it
> find that it does, it can narrow down in that part of the
> search space, just like with the binary search, to see if
> it can improve the rewards. The rewards direct the search
> though the high dimension space.

What makes you think I don't know all that? I use search
algorithms in my programs all the time. They all depend on
reducing the high dimensional space to a low dimensional
space. What do you think a mechanism is that can "reward"
the system for a move? It is one that has found a constraint
to reduce the problem to a low dimensional one. It is just
hill climbing.

The problem is to find a combination that works (e.g. a combination
of weights or connections in a network). If there is only one
possible combination out of billions of possible combinations
and there are no intermediate working steps (rewards), then it is
truly high dimensional. If hill climbing can work, then those
slopes mean that it is not a high dimensional problem, as it can
be reduced to a simple hill climbing problem.


> Such an approach does NOT, as you said, reduce the problem
> to a "low dimension" The system was always searching a
> high dimension problem space.

The space is not high dimensional in the combinational lock
sense; it is low dimensional. It only appears high dimensional
if you can't find any simplifying rules.


> As I have told you many times, it must be solved as a high
> dimension problem. But it is solved, by making effective
> use of the rewards, as clues to guide the search.

It is not solved AS a high dimension problem which is unsolvable.

You have to reduce it to a low dimensional problem by taking
advantage of any constraints in the system. [see Ross Ashby]

You cannot generate a reward signal unless there is some
constraint on which to base that reward signal. There is no
reward signal for finding the solution to a combinational
lock, the ultimate high dimensional problem.


> The problems we solve in the high dimension problem we call
> "life" are the ones where the rewards do give us clues as to
> which direction to search. The machine that makes the best
> use of those clues from the reward signal, are the ones which
> are the most intelligent.

If constraints exist then random trials will find them providing
they are simple enough to be found for a given number of trials.
This is a fundamental mathematical fact that the combinational
lock relies on to make it unlikely a solution can be found in
a reasonable number of trials.

AI is not finding a high dimensional RL solution. AI is finding a
low dimensional representation of high dimensional space using RL.

AI that learns to see has to do what a programmer does when he/she
tries to program a machine to "see". It has to find constraints in
the problem to make finding a combination of actions that work
possible within the number of trials available to it.

High dimensional systems like life are only possible if working
(rewardable) intermediate steps are low dimensional, that is,
likely to happen for a given number of trials.


JC

casey

unread,
Aug 13, 2011, 7:38:55 AM8/13/11
to
On Aug 13, 11:43 am, c...@kcwc.com (Curt Welch) wrote:

> Well, if you are aware of your bias, then why do you keep
> ignoring the obvious facts I keep sharing with you?

Well clearly not "obvious facts" to me.

I think I understand the issues as well as you do.

I may have a bias, but I don't allow it to arrogantly dismiss
something as stupid the way you do just because it doesn't fit
the "facts" as I see them. And I don't have your psychological
need to make a great breakthrough, which is only possible if there
is a simple solution for a "high dimensional RL network" that can
generate anything worth rewarding and so lead to learning
anything at all.


>> Consider the possibility that all those complex weights in an
>> ANN might only appear to be complex but in fact only work when
>> they are able to find some simple constraints within human
>> understanding even if we are unable to unravel them at this
>> point in time.
>
>
> No matter how many times you tell me to be stupid, I just won't
> do it. Sorry.

Well if you think that developing methods that can help make the
knowledge embedded in an ANN comprehensible is stupid you had
better tell the ANN experts that are developing such methods.

Humans have no difficulty in translating their neural weights
used to play chess into a set of rules or heuristics so there
is clearly a mechanism for doing it.

http://www.scholarpedia.org/article/Td-gammon

"An examination of the input-to-hidden weights in this network
revealed interesting spatially organized patterns of positive
and negative weights, roughly corresponding to what a knowledge
engineer might call useful features for game play."

JC

Curt Welch

unread,
Aug 13, 2011, 11:25:19 AM8/13/11
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 13, 11:37 am, c...@kcwc.com (Curt Welch) wrote:
> > You are ignorant John.
>
> No you are Curt. I understood the problem years ago and it is
> explained
> for the layman in the book written by Ross Ashby.
>
> > Solving high dimension problems are trivially easy to
> > prove possible.
> >
> >
> > A binary search in a continuous space is an example of a trivial
> > solution that runs quickly in a high dimension problem.
>
> That only works if you have found a constraint!! Just as an ANN
> can only work if it finds constraints.

Duh. If there are no constraints, there is NOTHING TO LEARN!

> You have not solved a high dimensional problem with a binary search
> you have solved a low dimensional problem created by making using
> of constraints.

You have no clue what the term "HIGH DIMENSION" means, do you?

It means the search space is too large to completely search. The
constraints that you are talking about do not reduce the size of the search
space, and as such, are not changing it to a low dimension problem.

> example:
>
> 86422864826428462868462 millions of these types gets a reward
> 17537972775391275271973 millions of these types gets punished

I have no idea what you are trying to say there. Are those numeric values,
or just temporal digit strings?

> This system can be reduced to a system with two possible states.

Yes, some high dimension problems aren't really high dimension, and can be
simplified to a low dimension problem - like turning a video image of a
chess board into the low dimension game board position. The video image
is high dimension, but the chess board position is low dimension. But most
of life is not so simple John. Chasing and catching a rabbit that is
trying to run away from you in a forest is not a low dimension problem. It
can't be reduced to a low dimension problem. It has to be solved as a high
dimension problem.

> > All high dimension reinforcement learning algorithms face the
> > same problem. They have a state space many orders of magnitude
> > too large to search every space. So to solve it, there are a
> > handful of different approaches.
>
> It all depends on finding constraints which is only possible if
> there is a lower dimensional representation possible.

No, it's not John.

If you have a video camera feeding the system data, the input is very high
dimensional - many Mbit a second. That could, for example, be greatly
reduced to some simplified internal state representation of only a 1000
bits, changing at the rate of say 100 bits per second. But a 1000 bit
state space is still HIGH DIMENSIONAL because it's far too large to search
in a lifetime. It's too large to expect the system will end up exploring
all the states and all the state transitions in a reasonable amount of time
so it can learn the transition probability distributions in less than a few
million years.

The fact that the size of the state space can be reduced to something
simpler, does not make it a low dimension problem, unless the state space
is reduced to such a trivially small size, that it can be exhaustively
searched by trial and error.

Tic Tac Toe is such a trivially small state space. It's got fewer than 3^9
board positions, which is a state space of less than 20,000. That's a
binary state space of about 14 bits. If you can't reduce the state of a
problem down to something like 16 bits (actual size depends on how fast you
can search it), then you are dealing with a high dimension problem. If you
reduce a problem down to only 100 bits, then it's still 80 bits too large,
which means it's still 2^80 times too large, or trillions and trillions of
times too large to be possible to use traditional RL algorithms to solve
it.
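
Just to put rough numbers on that scaling argument (nothing here beyond the
figures already quoted above):

# Rough sizes from the argument above.
print(3 ** 9)    # 19683: upper bound on tic tac toe board positions (~14 bits)
print(2 ** 80)   # ~1.2e24: the "trillions and trillions" factor by which a
                 # 100-bit state space outgrows an exhaustively searchable one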

Chasing a rabbit and catching it in real time in a forest, can not be
reduced to a 16 bit state space problem John. It's so many orders of
magnitude away from a 16 bit state space problem it's absurd to even
consider the notion that "constraints" will allow you to simplify it to
something easy to solve.

How does a learning algorithm, figure out how to use the arms and legs and
eyes of a robot body, to make it catch food? This is something all the
learning systems of mammals have in common, and it can NOT, even with the
help of constraints, be transformed into a LOW DIMENSION RL problem.

> > One, is to user evaluation functions that can direct the
> > search, as the binary search did. That only works if such
> > an evaluation function is known to exist.
>
> As I wrote you need to find constraints so it is NO LONGER
> a high dimensional problem like a combination lock.

As I wrote, you can't. You seem to have no understanding of what "high
dimension" means. You are not using the term correctly. All it means is
"state space size too large to search exhaustively". Standard RL
algorithms REQUIRE that the state space be small enough, to search
exhaustively (visiting every state many times) through trial and error
interaction with the environment.

That approach does not scale once the state space gets too large to search
exhaustively, which is what happens when you move from the trivial game of
tic tac toe to non-trivial but still highly simple games like backgammon,
chess, or go. All of these games represent high dimension learning
problems - their state space is too large to search exhaustively.

When we move up from the trivially simple world of board games, and start
trying to do learning for a multi-leg robot so it can learn to chase a
rabbit through a forest, we are so far away from low dimension that it's
absurd that you keep suggesting that "constraints" are the magic sauce that
ends up turning the problem back into something as simple as tic tac toe.

Just standing on two legs without falling over is a learning problem way
above tic tac toe.

> > The other, the one which I think the brain uses, is to lump
> > large areas of the search space together, and search the
> > entire space with a single test.
>
> And what constraint does it use to "lump large areas of search
> space together"?

None at all John. That is what the sensors do, for example. When you take
a digital picture, it maps the information in the light down to an
array of pixels, with 24 bits per pixel. The amount of information in the
light in the field of view of a single pixel is vastly larger than 24
bits. The sensor just creates an average of the information from the
environment. So the sensor is "lumping together" a lot of very high
dimension data into something far simpler, and it's not using any
constraints (correlations) in the data to do it. The lumping starts out as
an arbitrary segmentation of the sensory space into "lumps".

Though that example is fixed hardware "lumping" defined by the physical
structure of the light sensor, it's a close parallel in my mind to what
has to happen dynamically in the processing of the data. Just like we do
that sort of "lumping" when we reduce a 1000x1000 pixel image down to a
100x100 pixel version - we end up lumping together each 10x10 grid of raw
pixel data into each final pixel when we do that.
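
That fixed 10x10 lumping is essentially a one-liner; a sketch, using numpy
and a random array as a stand-in for real camera data:

import numpy as np

# Reduce a 1000x1000 image to 100x100 by averaging each 10x10 block -
# the fixed, constraint-free "lumping" described above.
image = np.random.rand(1000, 1000)          # stand-in for raw pixel data
lumped = image.reshape(100, 10, 100, 10).mean(axis=(1, 3))
print(lumped.shape)                         # (100, 100)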

> A constraint must be found to reduce the
> apparent complexity.

Digital cameras reduce the complexity of the light, and they use NO
CONSTRAINTS to do it. It's just a hard-wired reduction of data.

Solving RL problems in high dimension spaces requires the same sort of
technique. The high dimension space must be divided into "grids". The
process starts with the sensors, since they are forced to do it because
they simply can't translate the full information content of the environment
into an internal signal representation. So right off the bat, the raw
sensory data coming from the sensors is already a "lumping together" of
state information from the environment.

But then in the learning process, it must create further "lumps"
(features), and do statistical correlations between the reward signal, and
each of these features in order to do reinforcement learning.

It could just use "fixed" lumps and try to do reinforcement learning on
those fixed lumps, but that would produce very poor results for all the
interesting problems. To solve the types of problems the human brain can
solve, the size and shape of the "lumps" must be adjusted by reinforcement
learning as well.

That allows the system to isolate (by learning) the features of the
environment that are important for regulating behavior, and learn to ignore
the features of the environment that are not important for regulating
behavior.

Finding the "important" features in the high dimension sensory space is a
high dimension learning problem. It's never a low dimension problem.

> The combinational lock works because it
> is not solvable, because it cannot be reduced. It is a truly
> high dimensional problem.

You are misusing the term "high dimension". The combination lock is high
dimension, yes, but it's also got a different level of complexity that we
do not use the term "high dimension" to describe. It's an environment
which can not be solved by lumping the space into chunks because there are
no rewards for getting "close" in combination locks. You can't for
example, just set the first digit of the lock, and then test to see if the
combination is in that "chunk".

The learning problems the brain can solve (and the ones that none of our
learning programs yet solve so well) all require that there are usable
clues from the environment for testing "closeness" to a solution. The
system must be able to tell when it's getting close to a good solution.

This is not reducing the problem to LOW DIMENSIONS. It's simply a hill
climbing technique for searching a HIGH DIMENSION space.

An important technique however is that the space must be divided into many
dimensions, and the "hill climbing" must happen in parallel, across all the
dimensions at the same time. This is key because it allows some dimensions
to make progress towards higher rewards while many other search dimensions
are stuck on some local optima. And as some change, it changes the
landscape of other dimensions, which allows dimensions that were stuck to
become unstuck. Learning keeps moving forward as long as at least one of
the thousands (or millions) of parameters is making progress towards
higher rewards.
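
A crude sketch of that parallel hill climbing, with a made-up reward
function standing in for the environment (each step perturbs one randomly
chosen parameter and keeps the change only if the reward goes up):

import random

def reward(params):
    # Made-up reward landscape; each parameter has its own "hill" to climb.
    return -sum((p - i) ** 2 for i, p in enumerate(params))

params = [0.0] * 100                    # many parameters being tuned at once
current = reward(params)
for _ in range(20000):
    i = random.randrange(len(params))   # pick one dimension to nudge
    trial = list(params)
    trial[i] += random.gauss(0.0, 0.5)
    r = reward(trial)
    if r > current:                     # keep only reward-increasing changes
        params, current = trial, r

Progress keeps being made as long as some parameter can still be nudged
uphill, even while others sit on local plateaus.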

> > It's like if you are looking for a needle in the haystack,
> > and you can test if a given part of the haystack contains
> > the needle, by using a tool like a metal detector. But in
> > RL, that tool is rewards. The system can check if a certain
> > behavior, produces a slightly higher reward, and when it
> > find that it does, it can narrow down in that part of the
> > search space, just like with the binary search, to see if
> > it can improve the rewards. The rewards direct the search
> > though the high dimension space.
>
> What makes you think I don't know all that? I use search
> algorithms in my programs all the time. They all depend on
> reducing the high dimensional space to a low dimensional
> space.

There's something you clearly don't know here. It might just be the
correct meaning of "low dimension".

> What do you think a mechanism is that can "reward"
> the system for a move? It is one that has found a constraint
> to reduce the problem to a low dimensional one.

No, you seem to be using "constraint" inconsistently.

The "learning constraints" you talked about before means finding
statistical correlations in data.

The "using a constraint" you just talked about above for rewards seems to
be "creating a constraint". Creating a constraint and discovering a
constraint are two very different sides of the coin.

> It is just hill climbing.

Yes, it's "Just" reward hill climbing in a high dimension problem space. A
problem that NO ONE IN THE WORLD HAS SOLVED in the past 60 years but which
the brain does solve. So to call it "Just" is a bit naive in my book.

> The problem is to find a combination that works (e.g. combination
> of weights or connections in a network). If there is only one
> possible combination out of billions of possible combinations
> and there is no intermediate working steps (rewards) then it is
> truly high dimensional. If hill climbing can work then those
> slopes mean that it is not a high dimensional problem as it can
> be reduced to a simple hill climbing problem.

You don't understand what "high dimension" means John. Hill climbing in a
high dimensional space can be easy, but it does not reduce the space to low
dimension. It's still high dimension.

The problem that the brain solves, is that the high dimension "hill" that
must be climbed, is not well defined. So not only does the brain have to
do this high dimension hill climbing, it must define (and refine through
learning) the dimensions it is working with as it learns. It's a problem
no one has yet solved, but which the world is getting closer to every day.

> > Such an approach does NOT, as you said, reduce the problem
> > to a "low dimension" The system was always searching a
> > high dimension problem space.
>
> The space is not high dimensional in the combinational lock
> sense it is low dimensional. It only appears high dimensional
> if you can't find any simplifying rules.

You are missing the obvious there John. The search for the simplifying
rules is ITSELF a high dimension problem.

You seem to be confused by the fact that your brain does that for you
without you knowing how it's working. So as a programmer, you let your
brain do the hard work of finding the simple rules, and then you program
the simple rules into your computer, to turn the "hard problem" into
something "easy".

Our brain can find these for us because it's a learning machine that can
solve the high dimension search problem of finding simple rules for getting
higher rewards in a very high dimension environment.

Building a machine that can find these rules on its own, instead of using
our brains to find them and then hand-coding the rules into our computer
programs, is the difference between a "clever program" and a "truly
intelligent" machine.

We use our brains to figure out what heuristics work well for playing
chess, such as the heuristic of creating a board evaluation function based
on material and board control, and then we code those heuristics into the
program to make it play a good game of chess.

But it's not intelligent, because it wasn't able to find those heuristics
on its own. Finding useful heuristics like that is a high dimension search
problem - the one the brain is able to solve, and the one which none of our
AI programs have yet solved.

> > As I have told you many times, it must be solved as a high
> > dimension problem. But it is solved, by making effective
> > use of the rewards, as clues to guide the search.
>
> It is not solved AS a high dimension problem which is unsolvable.

You don't understand what high dimension means. It's not the "combination
lock" problem.

> You have to reduce it to a low dimensional problem by taking
> advantage of any constraints in the system. [see Ross Ashby]
>
> You cannot generate a reward signal unless there is some
> constraint on which to base that reward signal. There is no
> reward signal for finding the solution to a combinational
> lock, the ultimate high dimensional problem.

Huh? Do you even have a clue what you are writing?

It's trivial to define a reward signal that makes solving a combination lock
simple.

Here it is:

reward = the number of correct digits.

Done. That reward signal makes solving combination locks trivial. Even a
combination lock with a million digits is trivial to solve given that
reward signal.
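
A sketch of why that reward signal makes the lock fall to plain hill
climbing (the lock here is just a random digit string invented for the
example):

import random

N = 50
secret = [random.randrange(10) for _ in range(N)]   # the lock's combination

def reward(guess):
    # The reward signal described above: how many digits are correct.
    return sum(g == s for g, s in zip(guess, secret))

# Hill climbing on that reward: change one digit, keep the change if the
# reward went up. No exhaustive search of the 10^50 combinations needed.
guess = [0] * N
while reward(guess) < N:
    i = random.randrange(N)
    trial = list(guess)
    trial[i] = random.randrange(10)
    if reward(trial) > reward(guess):
        guess = trial
print(guess == secret)   # True

Given that reward, even a very long lock falls in a number of trials that
grows roughly linearly (times a log factor) with its length, rather than
exponentially.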

Why do you write things like "there is no reward signal for finding the
solution to a combinational lock"????

Do you even understand that in RL the reward signal is GIVEN as part of the
problem, and IS NOT part of the solution the algorithm must find???? You
talk as if finding the reward signal is what the RL program is trying to do.

> > The problems we solve in the high dimension problem we call
> > "life" are the ones where the rewards do give us clues as to
> > which direction to search. The machine that makes the best
> > use of those clues from the reward signal, are the ones which
> > are the most intelligent.
>
> If constraints exist then random trials will find them providing
> they are simple enough to be found for a given number of trials.
> This is a fundamental mathematical fact that the combinational
> lock relies on to make it unlikely a solution can be found in
> a reasonable number of trials.
>
> AI is not finding a high dimensional RL solution. AI is finding a
> low dimensional representation of high dimensional space using RL.

The environment we live in has no low dimensional solution for "staying
alive long enough to make babies". There is no "low dimension" solution to
the problem. The brain creates as complex, and as high a dimension
solution, as it can possibly create, to try and deal with the problem. But
the solution it creates is nowhere near optimal. It's just the best it
can do given the amount of brain it has to work with. Humans do not
produce optimal behavior. They are downright stupid. But we do the best
we can, with the limited amount of brain we have. If we had brains with 10
times the capacity, we would still be stupid compared to the complexity of
the problem we are given to solve, but we would be 10 times better at
solving it, than any other human.

The brain searches for, and builds, very high dimensional solutions (human
behavior) to a very high dimensional problem (staying alive), in a high
dimensional space (the space of all behaviors that could be created by any
machine with 2 legs and 2 arms attempting to stay alive).

> AI that learns to see has to do what a programmer does when he/she
> tries to program a machine to "see". It has to find contraints in
> the problem so finding a combination of actions that work possible
> within the number of trials available to it.

Yes. That's a high dimension problem being solved in a high dimension
problem space.

> High dimensional systems like life are only possible if working
> (rewardable) intermediate steps are low dimensional, that is,
> likely to happen for a given number of trials.

Your use of the term "low dimension" there has nothing to do with what that
term actually means.

You are right however in that there must be constraints, and there must be
clues to follow through the high dimension space (hills to climb) or else
the problem can't be solved. But it's NOT solved by transforming it to a
low dimension problem and then doing hill climbing in low dimensions. It
must be solved by doing hill climbing in the high dimension space itself.

Curt Welch

unread,
Aug 13, 2011, 4:21:47 PM8/13/11
to
casey <jgkj...@yahoo.com.au> wrote:
> On Aug 13, 11:43 am, c...@kcwc.com (Curt Welch) wrote:
>
> > Well, if you are aware of your bias, then why do you keep
> > ignoring the obvious facts I keep sharing with you?
>
> Well clearly not "obvious facts" to me.
>
> I think I understand the issues as well as you do.
>
> I may have bias but I don't allow that to arrogantly dismiss
> something as stupid the way you do just because it doesn't fit
> the "facts" as I see them

Of course you would. If I told you 1 + 1 = 3, you would arrogantly dismiss
it. Or maybe you would just be kind to the poor fool you were talking to,
while not for a second believing he could be correct.

Many of the things you seem to want to believe, are just as wrong in my
mind, and I dismiss them just as quickly as if you had tried to tell me I
should consider the possibility that 1+1=3.

> and I don't have your psychological
> need to make a great breakthrough only possible if there is a
> simple solution for a "high dimensional RL network" that can
> generate anything worth rewarding that can lead to learning
> anything at all.

And your lack of that psychological "need" is exactly what keeps you in the
dark about this stuff. It's just not that important to you. You are open
to all sorts of nonsense as long as it seems at all plausible, because it's
not all that important to you to get it right.

> >> Consider the possibility that all those complex weights in an
> >> ANN might only appear to be complex but in fact only work when
> >> they are able to find some simple constraints within human
> >> understanding even if we are unable to unravel them at this
> >> point in time.
> >
> >
> > No matter how many times you tell me to be stupid, I just won't
> > do it. Sorry.
>
> Well if you think that developing methods that can help make the
> knowledge embedded in an ANN comprehensible is stupid you had
> better tell the ANN experts that are developing such methods.
>
> Humans have no difficulty in translating their neural weights
> used to play chess into a set of rules or heuristics so there
> is clearly a mechanism for doing it.

Apples and oranges John. Apples and oranges. You are talking about two
unrelated things and you don't even realize it. The neural weights don't
get "translated" into rules or heuristics. They are just used. Sometimes,
the weights make us say things like "maybe I should protect the queen",
or "apply more pressure to the center of the board". But those heuristics
are not in any sense the same ones the neural net is actually using. If
translating the neural nets in our brain into heuristics were so easy,
anyone could write a chess program to play chess at the exact same level
they played it at. Not only that, you could write the program to play
EXACTLY the same way you did. So after translating the heuristics in your
head into code, your program would perfectly mirror every move you made in
any chess game.

That of course never happens. Doesn't even come close to happening.
People have NO CLUE what the neural weights in their head will make them
do, until after they have done it.

If translating heuristics were so damn easy, we would have solved all of AI
50 years ago John.

People can't translate them, which is exactly why we have made so little
progress in 50 years. Every time someone thinks they have the right
"heuristics" to explain aspects of human behavior, it has failed to be very
intelligent.

Neural networks like TD-Gammon which only have a few hundred weights, are
way beyond our comprehension. A neural network on the scale of a human
brain with trillions of weights would be so far beyond our comprehension
it's silly to even consider there was some possibility of "understanding
the simplicity behind the weights".

We can recognize some high level patterns in the networks, but we can't
understand what the networks will do, or explain why one specific weight is
the value it is. We can build tools to do some calculations on these sorts
of things for us, but we can't in any sense, as a human, "understand" the
full set of weights and what they represent.

> http://www.scholarpedia.org/article/Td-gammon
>
> "An examination of the input-to-hidden weights in this network
> revealed interesting spatially organized patterns of positive
> and negative weights, roughly corresponding to what a knowledge
> engineer might call useful features for game play."

Notice the words "Roughly corresponding". That means, they "see patterns"
but HAVE NO CLUE WHY THOSE PATTERNS are the way they are, or what they
mean.

The next sentence was:

"Thus the neural networks appeared to be capable of automatic "feature
discovery," one of the long-standing goals of game learning research since
the time of Samuel."

The point they were making is that the network _LOOKED_ as if it were doing
feature discovery, NOT that, as you try to claim, anyone had a clue how to
describe, or make use of, those features (outside of the neural network
code of TD-Gammon).

casey

unread,
Aug 13, 2011, 6:32:39 PM8/13/11
to

> On Aug 13, 11:37 am, c...@kcwc.com (Curt Welch) wrote:
> > You are ignorant John.

> No you are Curt. I understood the problem years ago and it is
> explained
> for the layman in the book written by Ross Ashby.


>>> Solving high dimension problems are trivially easy to
>>> prove possible.
>>>
>>>
>>> A binary search in a continuous space is an example of a trivial
>>> solution that runs quickly in a high dimension problem.
>>
>>
>> That only works if you have found a constraint!! Just as an ANN
>> can only work if it finds constraints.
>>
> Duh. If there are no constraints, there is NOTHING TO LEARN!

Something I quoted from Ross Ashby's book years ago; it seems at
least you are learning.

"... the organism can adapt just so far as the real world is
constrained, and no further." 7/17

"... learning is possible only to the extent that the sequence
shows constraint" "...learning is worth while only when the
environment shows constraint." 7/21


>> You have not solved a high dimensional problem with a binary
>> search you have solved a low dimensional problem created by
>> making using of constraints.
>
>
> You have no clue what the term "HIGH DIMENSION" means do you?
>
> It means the search space is too large to completely search.

Yes that is what I understand it to mean.

> The constraints that you are talking about do not reduce the
> size of the search space, and as such, are not changing it
> to a low dimension problem.

Yes you are. Out there is high dimensional. The first reduction
is done at the input. The *actual data* being processed is not
the high dimensional data "out there".

>> example:
>>
>> 86422864826428462868462 millions of these types gets a reward
>> 17537972775391275271973 millions of these types gets punished
>
>
> I have no idea what you are trying to say that. Are those
> numeric values, or just temporal digit strings?

Can't you spot the difference in the strings which would enable
a simple filter to reduce them to two types?


>> This system can be reduced to a system with two possible states.
>
>
> Yes, some high dimension problems aren't really high dimension,
> and can be simplified to a low dimension problem - like turning
> a video image of a chess board, into the low dimension game
> board position. The video image is high dimension, but the
> chess board position is low dimension. But most of life is not
> so simple John.

Not so simple but the principle is exactly the same. We think
with a "simplified" representation of the world extracted via
a process of goal appropriate reduction of the input data.


> Chasing and catching a rabbit that is trying to run away from
> you in a forest is not a low dimension problem.

Actually it can be a very low dimensional problem using a simple
motion detection filter. The rabbit becomes a moving blob.
I have played with such programs to follow objects.
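
A sketch of the kind of motion-detection filter meant here - plain frame
differencing, with random arrays standing in for real camera frames:

import numpy as np

# Everything static cancels out between frames; what's left is roughly
# where the moving object (the "blob") is.
prev_frame = np.random.rand(240, 320)    # stand-ins for two camera frames
curr_frame = np.random.rand(240, 320)

motion = np.abs(curr_frame - prev_frame) > 0.5   # binary motion mask
ys, xs = np.nonzero(motion)
if len(xs) > 0:
    blob_centre = (xs.mean(), ys.mean())         # one (x, y) point to chase
    print(blob_centre)

The whole visual field collapses to a single coordinate to steer toward,
which is the dimensionality reduction being claimed.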

> It can't be reduced to a low dimension problem. It has to be
> solved as a high dimension problem.

I disagree, based on work being done in vision and our current
understanding of the human visual system. Sure, it may not be
as simple as motion detection, powerful though that is as a
constraint, but it comes nowhere near the high dimensional
reality "out there". What your brain actually has to process
is dramatically reduced, and much of the detail you think you
see "out there" is a constructed illusion.


>> > All high dimension reinforcement learning algorithms face the
>> > same problem. They have a state space many orders of magnitude
>> > too large to search every space. So to solve it, there are a
>> > handful of different approaches.
>>
>> It all depends on finding constraints which is only possible if
>> there is a lower dimensional representation possible.
>
> No, it's not John.

Yes it is Curt.

My experience in vision problems and knowledge of how it is being
done by others gives insight into the problem a learning system
will have to solve.


> How does a learning algorithm, figure out how to use the arms
> and legs and eyes of a robot body, to make it catch food?
> This is something all the learning systems of mammals have in
> common, and it can NOT, even with the help of constraints, be
> transformed into a LOW DIMENSION RL problem.

If it cannot be reduced to a simpler representation to match
the computing power of the control system then it cannot be
solved. The control possible by a controller is limited to
the extent its variety can match the variety in the controlled.


>>> One, is to user evaluation functions that can direct the
>>> search, as the binary search did. That only works if such
>>> an evaluation function is known to exist.
>>
>> As I wrote you need to find constraints so it is NO LONGER
>> a high dimensional problem like a combination lock.
>
>

> You seem to have no understanding of what "high dimension" means.
> You are not using the term correctly. All it means is "state
> space size too large to search exhaustively".

I know what it means and your definition is accurate.

> Standard RL algorithms REQUIRE that the state space be small
> enough, to search exhaustively (visiting every state many times)
> through trial and error interaction with the environment.

Sure but as you know the "standard" RL illustrates the principle,
not the solution, to control of a high dimensional input.

> That approach does not scale once the state space gets too
> large to search exhaustively, which is what happens when you
> move from the trivial game of tic tac toe, to non-trivial but
> still highly simple games like backgammon, or chess, or go.
> All of these games represent high dimension learning problems
> - their state space is too large to search exhaustively.

The number of possible combinations for tic tac toe is also
too high for a small system - unless it makes use of constraints
that *can* be found in the game by a non-standard RL that
doesn't just use an exhaustive search algorithm.

> When we move up from the trivially simple world of board
> games, and start trying to do learning for a multi-leg robot
> so it can learn to chase a rabbit through a forest, we are
> so far away from low dimension, it's absurd that you keep
> suggesting that "constraints" are the magic sauce that end
> up turning the problem back to something as simple as tic
> tac toe.

Yep "constraints" are the "magic sauce" but it doesn't make
them easy to find. The ANN in td-gammon found some but fails
to do so in a chess game state for reasons you might like
to think about.

> Just standing on two legs without falling over is a
> learning problem way above tic tac toe.

A different type of problem. It requires a complex but not
high dimensional computation of the feedback systems to
adjust the output motors.


>> > The other, the one which I think the brain uses, is to lump
>> > large areas of the search space together, and search the
>> > entire space with a single test.
>>
>> And what constraint does it use to "lump large areas of search
>> space together"?
>
>
> None at all John. That is what the sensors do for example.
> When you take a digital picture, it maps the information
> in the light, down to an array of pixels, with 24 bits per pixel.

Right, so it makes use of the fact you can reduce the image down
to a lower dimension without losing information relevant to
the needs of the visual system. It shows constraint.

> The sensor just creates an average of the information from the
> environment. So the sensor is "lumping together" a lot of
> very high dimension data into something far simpler, and it's
> not using any constraints (correlations) in the data to do it.

I think you should read up more about constraints and redundancy.

> Finding the "important" features in the high dimension sensory
> space is a high dimension learning problem. It's never a low
> dimension problem.

I don't agree, otherwise vision would never have got started.

The first "feature" in vision was probably just the amount
of light present as detected by a single light-sensitive cell.
This would give its owner a reproductive advantage over
those organisms without "eyes" of this type. Motion detection
is probably the next advance in vision and so on.

> The learning problems the brain can solve (and the ones
> that none of our learning programs yet solve so well), all
> require that there are usable clues from the environment
> for testing "closeness" to a solution. The system can
> expect to tell that it's getting close to a good solution.

Yes they have to find constraints. If constraints don't exist the
organism will not adapt.

> This is not reducing the problem to LOW DIMENSIONS. It's
> simply a hill climbing technique for searching a HIGH
> DIMENSION space.

A 100% high dimension space would not have hills. Each position's
height would be unrelated to its neighboring position's height.
A space full of slopes is showing low dimensionality; that is
why a simple hill climbing algorithm that takes advantage of
that constraint can work in such a space.
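
To make the point concrete, a toy Python sketch with two made-up landscapes
(the functions are pure invention, just to show when the constraint helps):

import random

def hill_climb(fitness, start, step=0.1, iterations=1000):
    # Keep any random nudge that improves fitness.
    x = start
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):   # only useful because nearby points
            x = candidate                     # have related fitness (a constraint)
    return x

# A smooth (constrained) landscape: neighbouring heights are related.
smooth = lambda x: -(x - 3.0) ** 2
print(hill_climb(smooth, start=0.0))          # ends up near the peak at 3.0

# An unconstrained landscape: each point's height is an unrelated random value.
lookup = {}
rough = lambda x: lookup.setdefault(round(x, 3), random.random())
print(hill_climb(rough, start=0.0))           # just wanders; the slope tells it nothing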

jc


Doc O'Leary

unread,
Aug 13, 2011, 6:41:37 PM8/13/11
to
In article <20110812185214.929$b...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> > In article <20110810143404.902$t...@newsreader.com>,
> > cu...@kcwc.com (Curt Welch) wrote:
> >
> > > What else is there to test other than physical behavior of a machine?
> > > To say "basing it on behavior" is wrong you have to put forth what
> > > non-physical thing you are talking about that a test of intelligence
> > > should be based on. Are you suggesting intelligence is not physical?
> > > Are you arguing the dualism position?
> >
> > No, no; don't get me wrong. Like I said, the whole foundation of AI is
> > (or should be) the idea that a machine can be made intelligent. All I'm
> > saying is that we haven't gotten to the "DNA" of intelligence. Behavior
> > is more like physiology or morphology. AI took a wrong turn that way, I
> > don't expect we will get any closer to "strong" AI until we throw out
> > all the shortcuts we implemented and get back to actually dealing with
> > the root idea of what intelligence really is.
>
> Well, I agree with that. But I did that 35 years ago and I've figured out
> what the root of intelligence is. It's just a reinforcement learning
> machine.

Again with the circular definitions. You have yet to tell us where the
*intelligence* is in your precious learning. What is the operation it
accomplishes that *necessarily* results in extracting order from chaos?

> You also seem to be using the word "behavior" in a way different from me.
> And you are using it in a way that irritates me when I see people do that.

Sucks to be you. Maybe you should stop seeing demons in every shadow.
Language is used to communicate, and most people use "behavior" to
communicate outward physical action. If they want to talk about inward
mental action, they use words like "think". The difference has nothing
to do with dualism, which only you keep returning to.

> And how the body moves, is "its behavior". To suggest that AI took the
> wrong turn, by studying behavior, is to fail to grasp that the only thing
> to study, is behavior.

I never said it went wrong by studying behavior, but that it went wrong
by studying the *products* of intelligence (learning, language, etc.)
rather than the *source* of intelligence. Much like a Turing machine
gives us the starting point for computability, AI needs a starting point
for intelligibility. If you can't offer that, if most current research
isn't moving in that direction, everyone is on the wrong path.

> The question to ask, is which aspect of behavior is the important part to
> study. So is "winning the chess game" the important part of the behavior, or
> the "how the move choices were selected" more of the important part?

It isn't even the "how" that matters! Intelligence is about the *why*
of behavior.

> > Nope. Science does not work that way.
>
> You seem to be mixing up the processes of science and engineering. AI is
> mostly an engineering problem. Not science.

Again, you demonstrate the fundamental misstep that has resulted in very
little real progress for over 50 years.

> I say Skinner was right, and the "science" was complete back then in terms
> of understanding what type of machine we were dealing with.

Anybody who thinks science is ever complete is not doing science.
Anyone who takes a position of unfalsifiability is not doing science. If
you don't start from a scientific understanding, you are unlikely to get
to an engineered solution.

> > If you really are taking steps,
> > you could show them. Like I said, where is your chess playing system
> > that became *intelligent* about the game of chess?
>
> TD-Gammon. (for Backgammon)

So you claim, but where is the intelligence? I'll grant you that its
play differed from commonly accepted strategies, and even played
"better" in that regard. But you still have yet to define what about
that difference was *intelligent*!

> > Where is the
> > followup step that is *intelligent* about board games in general?
>
> That step hasn't been taken yet. It needs to be taken. But my belief is
> that we are one small engineering step away from a machine that will
> demonstrate all the "next steps" you are waiting to see.

Belief has no part in this. Show evidence. Make predictions. Or admit
to being on the wrong path, and being no closer to really understanding
intelligence than we were at the beginning of AI.

> I have no issue acknowledging I might be on the wrong path. I just assign
> such a low probability to that "might" that I feel it's valid to act as if
> it were zero.

It should be near 100%, or else you're just engaging in wishful
thinking. Everything becomes a swamp of selection bias when you're so
convinced that you're unlikely to be wrong.

> > That is something *every* quack claims about their miracle cure or
> > perpetual motion machine or whatever woo they're pushing.
>
> Yes, if you apply the quack-o-meter to the way I talk, I come out with a
> very high score. But the same happens anytime that someone understands
> some truth before it becomes socially accepted. The people that talked
> about the earth being round when everyone knew it was flat scored very high
> on the quack-o-meter as well.

Not if they actually showed their reasoning. You, on the other hand,
have been derisive when it comes to any sort of reasonable scientific
approach to your sacred cow of learning. You continue to proclaim you
know the truth, yet continue to refuse to show how you *know* it to be
true. It is meta-ironic that the topic in question is intelligence.

> AI has the same sort of (mostly) well defined goal. The goal of building a
> machine that equals or outperforms all human mental abilities. Once such a
> machine is built, we are done. End of story. AI solved.

Indeed, that is the proposition of the Turing Test. But a test is not a
design document! To use it as such is circular reasoning. Again, you
will *not* make progress so long as you do that.

> If you want to continue to explore and better understand all the mysteries
> of the human brain, then that is a far more open-ended process that can
> continue long after this simpler AI goal is reached.

I don't care about the human brain. I care about intelligence. That is
present in all kinds of animal brains (to varying degrees), and
certainly *seems* like it should be able to be created in machines (to
varying degrees). All I'm looking for is an approach that defines
intelligence such that it *allows* those varying degrees, instead of
this haphazard stabbing in the dark at replicating what a human thinks.

> > No, I think the word "learning" means "learning", as in incorporating
> > data to a system. I do not circularly define a procedure, *call* it
> > learning, and then proclaim that context to be the only one valid for a
> > discussion of "learning" or "intelligence".
>
> Lighten up dude. Words have different meanings to different people.

Says the guy who gets irritated over the common usage of "behavior". It
remains an issue that you use circular definitions. Stop doing that and
things will become downright light and breezy.

> > Again, no. I simply haven't defined it circularly, or pretended that it
> > inherently has anything to do with intelligence. As a child, I learned
> > a lot of things about Santa Claus. Absolutely none of that speaks to
> > how I intelligently think about Santa Claus.
>
> I have made no circular definition. Stop being stupid.

I should have done it sooner, but now is a good time to add ad hominem
attacks to the list of logical fallacies you have engaged in during this
discussion.

If your definition weren't circular, you'd be able to say what it was
about your precious learning that *necessarily* resulted in
intelligence. If you can't demonstrate that, why should anyone think you're
on the right path to even weak AI?

> > > > Yes, yes. If I specify how a faster-than-light engine works, it is a
> > > > mere matter of details in how to *build* it that keeps it from
> > > > working. I have made no error in my specification!
> > >
> > > That is correct.
> >
> > Wow. Just . . . really?
>
> Of course. What if there were a way it could be built? And what if I
> actually built it? Would your specification be wrong? Of course your
> specification is not wrong. It's just lacking in details - as ALL
> SPECIFICATIONS ARE.

Wow. Again we see how far outside the realm of science and engineering
you are. Stop living in a world of "what if" and come join us in
reality. In reality *as we know it*, a FTL engine is not possible. No
engineering specification for it would be correct *unless* it came with
a whole new scientific understanding of how the Universe works.
Likewise, your tired notion of AI breaks no new ground on the nature of
intelligence, so it remains as wrong today as it has been for decades.

> > No. The important part about an algorithm is what is *does*, not how
> > efficiently it does it.
>
> What's "important" is tonally up to the eye of the beholder.

Uh, no. A sort algorithm has to sort. Otherwise it isn't a sort
algorithm. I could do a "sort" algorithm in O(1) if actual sorting
didn't matter. Likewise, an intelligent algorithm would have to first
and foremost be intelligent. Faster/efficient algorithms for it would
be better, of course, but we still want intelligence, right?

> Right. But again, I believe this is because it's not a big complex hill to
> climb. It's a very very small one. Two steps small. It's a needle in a
> haystack problem. It's not a problem of trying to mine a million needles
> from the hay, but just one. Because of this, you won't see the "small
> steps".

And, again, it is convenient for you to think that when you have no
results to show for your efforts. Unfortunately, history is against
your unscientific approach. Neither have you given any reason for
the future to favor you. Your hand waving, what-ifs, and
just-you-wait-and-sees are not convincing arguments.

> I believe this problem is much like building a flying machine. Where are
> the small steps there? There aren't many. Either your machine flies, or it
> doesn't. There's not much in the way of "small steps" to be had there.

Oh, goodness! You didn't bother to look into the history of human
flight, did you? Neither does your claim show much understanding of
evolved flight.

> I think AI is the same way. The technology needed to solve it, has been
> advancing in small steps, and when all the pieces come together (hardware,
> software), it will suddenly take off.

Nothing is going to suddenly happen until we can say what intelligence
fundamentally is.

> > If you have no science to back that up, you are fooling yourself.
>
> The science of behaviorism backs it up.

More hand waving. If you want to take that track, tell us what
behaviorism says about intelligence.

> > You say "learning
> > is so key to intelligence", but I say that intelligence is key to
> > learning. If you don't believe me, write a letter to Santa Claus and
> > see what he says.
>
> Sure, but by writing that, all you have said is "Curt I have no clue what
> intelligence is but despite my total ignorance, I choose to reject your
> idea because it doesn't feel right to me.".

Hardly. Feelings have nothing to do with it, just like belief doesn't.
I am not making the claim that I have an answer, but *you* are. Yet you
are unable to back up that claim with any hard (or even soft!) science.
By writing what I wrote, I was making the point that we (or at least I)
can still discuss things we learned and then subsequently learned were
not true. Somewhere in there is an intelligence that is independent of
the learning. That is what I'm after, while you seem content to spin
your wheels for decades. In fairness, so has most of the other AI
research, but that offers little comfort.

> I don't mind people having that view. People are ignorant, it's normal.
> We all are. But you should be honest with yourself in the fact you have no
> good tools to evaluate whether my idea holds water or not. You reject it
> using your gut instinct, instead of educated reason.

On the contrary. I'm the one who has offered up reason and science, and
you are the one who championed blind faith and belief. Here is my tool:
you have yet to offer any quantum of thinking (that's brain behavior to
you :-) that differentiates intelligent learning from unintelligent
learning.

> > I understand science. I understand evidence. I don't understand hand
> > waving. I don't understand "oooooh, you just wait and see how clever I
> > am!"
>
> Sure you understand it. You just want to see evidence that I say, can't
> and won't exist, until after AI is actually solved. You want to see the
> plain fly, before you will believe it's possible for a plane to fly,
> because you don't understand the flight dynamics of airfoils and the issues
> of power to weight it implies.

Funny. You have not offered up the equivalent of flight dynamics or
power ratios. That is *precisely* what I'm asking for. Instead, you
keep up with the hand waving.

> I understand things about the problem domain of reinforcement learning that
> you don't understand (as shown below). And these things about learning I do
> understand are what allow me to understand how close we are, despite the
> fact that not a single machine looks "intelligent" to a layman yet.

So you continue to claim has been the case for decades. If you really
do understand so much, you would be able to make hard predictions rather
than your continued "needle in a haystack" evasions. So make up your
mind: do you understand how close we are, or are you just randomly
grasping at nothingness?

> Your ignorance of the subject is showing. Your understanding of learning
> seems to be limited to the highly naive school-boy view of "filling up with
> facts".

If your understanding is greater, you could correct me. That's all I've
been asking you to do. Show me where intelligence is necessarily the
result of what you call learning. From all you state, the impression
you give is that you're just on a random walk over the entire problem
space.

> The desire to build a generic reinforcement learning machine that, on its
> own, is the foundation of intelligence at the human level, is a desire
> backed by all the work of science that tells us that is exactly what a
> human is.
>
> > More to
> > the point, you haven't established that intelligence fundamentally
> > requires the same processing rate and environment of a human.
>
> My definition of intelligence defines it NOT to need that. My definition
> says that the TD-Gammon program is intelligent. My definition says that
> biological DNA based evolution is another example of intelligence in this
> universe.

Right; your definition is circular. The system learns the way you've
circularly defined learning, and so it serves as your circular
definition of intelligence. Now break that circle and tell me *why*
TD-Gammon is intelligent. What, buried in all the learning, makes its
behavior notably intellligent? Does it really understand Backgammon
differently, or did it merely *play* the game differently? Why did its
progress stall 20 years ago, and what does that say about the approach
for direction future "generic" systems should take. Why haven't future
systems become more generic?

> I believe my definition of "intelligence" is valid, because I believe once
> we solve AI, we will find there was one huge, key piece of technology
> missing all these years (not 1000 of little advances). And that one key
> missing piece, will be a strong, generic, reinforcement learning algorithm.

But that piece is not missing. Your argument is that it's been around
for decades as a solved problem. It is *my* argument that there is
indeed a missing piece, which is a fundamental definition of what
intelligence really is. You keep falling back to your circular
definition, which is not at all useful.

> If this proves true (as I'm sure it will, but most others are not), then
> when we try to define intelligence, we will find ourselves on a slippery
> slope where you can't define one type of reinforcement learning algorithm
> as "intelligent" without defining them all intelligent.

Which should be more evidence that you're on the wrong path. We have a
definition for computability. There is no slippery slope there; some
learning systems (like simple perceptrons) cannot even meet that goal.

Likewise, it is my contention that there is some minimum definition for
intelligence beyond computability. Even though it may not be formalized,
it is possible to ask ourselves if a learning system might meet such a
definition. Absolutely nothing in what you've presented indicates that
what you have is somehow *inherently* intelligent.

> I don't think we are dealing with hardware limitations. I think the
> hardware to "solve AI" existed 50 years ago. The stuff we have today is so
> fast, and so cheap, that once the algorithm is understood, we will almost
> instantly blow past human intelligence with our machines.

Another easy claim to make when you don't understand intelligence. It
may *indeed* be the case that hardware is not the bottleneck, but that
doesn't amount to a prediction when you have nothing to base the
assertion on.

> I've written 1000's of posts here over the past many years. I've covered
> all these questions many many times in my past posts. Have you read them?

No. Honestly, you're normally in my kill file (since 11 Dec 2008) for
being so unreasonable in your ramblings. I see little has changed.

> Do you really want me to write a million lines of text here to explain it
> all to you?

All I want is for you to start making sense. You could start by talking
about intelligence coherently (in a philosophical sense, at the very
least). It shouldn't take a book to informally give a perspective on
thinking that distinguishes it from information, from data, or from
learning.

> The limit is not hardware speed or size or cost. We don't have the right
> algorithm. But the algorithm which I think is missing, is only a small
> handful of lines of code, not millions of lines of code.

Again, you may indeed be correct, but as long as your position is a stab
in the dark, that is not a valid prediction. The missing "handful" you
refer to is *precisely* what I would call the misstep away from
intelligence. Something small has indeed been overlooked. The only
difference is that I don't pretend to know that it is only a handful,
because I don't pretend to understand what intelligence formally is.

> I think once the algorithm is created, it will take very little time to
> build machines equal in intelligence than humans. I don't think the human
> brain really has all that much processing power. Our phones are darn close
> to having enough power now I suspect.

Your suspicions and thinking are demonstrably unscientific. Neither is
it particularly timely to make claims like "machines will be capable,
within twenty years, of doing any work a man can do" (Simon, 1965).
Please get a fresh perspective.

> > > TD-Gammon. A small step, but a step in the right direction.
> >
> > Two decades ago was the last step? Hardly sounds like the right
> > direction to me.
>
> It's a hard step to take. It's taken 60 years really, not just two
> decades.

My mistake. Your approach is *3* times less successful than I gave you
credit for.

> > Again, I think you're on the
> > wrong path when you insist on dealing with things like O when you have
> > yet to show you have the proper definition for what a sort (or
> > intelligence) is. AI will continue to fail at making progress so long
> > as the focus is on the A and not the I.
>
> True. If my definition is wrong, then I'm just headed down a dead end.
> But it's not wrong. I see the clues even if you don't. I didn't pick this
> path just because I found this sort of technology interesting. I picked it
> because I first started out (in the 70's) to understand what intelligence
> was. This is where I've ended up after decades of asking myself that very
> question (what is intelligence?).

And yet you still haven't got an answer that you can give me. Yes,
you've obviously settled on that one learning approach, but you still
haven't said *why* that demonstrably differed from other approaches such
that it seemed like the right path. That's all I'm looking for: the
infinitesimally small quantum of difference that divides intelligent
systems from unintelligent ones.



> That's right. But I have done all the work and all I'm telling you here,
> in this thread, this is where I ended up after all that work. I'm not
> trying to explain 40 years of thinking on my part in one thread here.

But you should be able to. If there was any rational, scientific method
at all behind your choice, it should be easy to drill down to the
specific discoveries you have made that *do* define intelligence.
Instead, you still admit to being in the dark, yet proclaim with
confidence how far you've gotten. That's just wrong.

> > Our mistake is in discarding examination of that
> > path in the face of exponential technological growth. AI did, as you
> > describe, become more about clever tricks of A rather than a sober
> > examination of I. Funny thing, though, is that you're on the wrong side
> > of that divide, but you fail to see it.
>
> No, I see it. I just happen to believe I have the right answer. You think
> I just have yet another pointless clever trick.

Not at all. Again, you may indeed be right, but until you can accept a
definition of intelligence that is *distinct* from learning, you don't
really *know* that you are right. You will only have stumbled into the
solution, but there is no reason to believe you have the foundation to
recognize it as the correct solution before you stumble off in another
direction. You're the stopped watch that is right twice a day. That is
fundamentally the wrong approach.

> Let me side track and explain what I see needs to be done. To create a
> robot, that acts like a human (including with thinking power), it needs to
> have high bandwidth parallel sensory inputs, and lower, but still high,
> bandwidth parallel outputs controlling all its effectors.

Does it? By that measure, a *human* couldn't act intelligently unless
it had that kind of hardware. And yet I would wager that even in a
crappy 8-bit Atari world, a human would be able to demonstrate
intelligence far beyond your expected product of those inputs and
outputs. I would wager that a human could drive a car by remote
control, and do so better, by using *far* less data than is at the
disposal of robotic cars today.

Nor is it obvious to me that outputs are necessarily lower
bandwidth than inputs. One of the hallmarks of intelligence is to
*create* information. I could easily envision a system that had a set
of sensory inputs that were far smaller than its set of outputs.

> The system is basically storing a complex sort of "average" of all past
> learning experiences on top of each other. And the resulting behavior that
> emerges, is the sum total of all past learning experiences (biased by their
> similarities to the current context).

I don't have an "average" idea of Santa Claus that guides how I think
about him. There was never a point in time when his existence was even
a 50/50 proposition. If one person tells me a ball is blue and another
person tells me it is red, I don't "learn" that it is purple. What I do
learn may in fact be quite independent of the particular data that
either person gives me.

> The complexity of the behavior is only limited by the effective "size" of
> this "context sensitive memory look-up system". And humans have a fairly
> large one, which is able to produce a very large set of learned behavior,
> that we spit out, as we go through each day in our lives.

A big database of context memory is not intelligent. If you claim that
it is, you still need to *show* where the intelligence enters into the
processing. Without that, all you have is undifferentiated feedback.
If you want to figure out the color of the blue/red ball, you may have
to make leaps of logic that go *beyond* just what the conflicting data
tells you. Unless you can define that, all the hand waving about
learning isn't going to get you an intelligent machine.

> I don't believe there is anything more to human "intelligence" than that.

And that is why you're not seeing significant results.

> Lets say we had a case in the past, where we hit a button 5 times, and it
> worked 5 times. But then later, we hit again, and it didn't work. What
> did we learn by that? We learned that some things stop working after a
> while. We develop the concept of "wearing out".

No. All you learned was that it didn't work the 6th time. It is the
application of *intelligence* that has us coming to conclusions as to
the potential reasons why that happened. Until you can define *that*
aspect, there is no difference between coming up with concept of
"wearing out" and the concept of "it is God's will".

> By suggesting what a "truly intelligent" agent would do, you are ASSUMING,
> it has a wide background of PREVIOUS EXPERIENCE (beyond what I gave in the
> example). YOU CHANGED THE CONDITION OF THE EXAMPLE.

It is *you* who is making the assumption that previous experience alone
is what would guide the thinking of an intelligent agent. I have no
previous experience with FTL travel, nor do I have any expectation that
it is even possible, but without any material reinforcement at all I can
*explore the idea* of many such impossible things.

Likewise, it makes sense to me that, even in a world where an
intelligent agent finds everything to be 100% reliable, it might
*explore the idea* that what it has learned is not true. Therefore, I
would assert that an intelligent agent would *never* conclude that
something is 100% reliable, which directly refutes your description of
how your learning system would operate. More evidence that you are
wrong, but I'm sure you'll find a convenient way to hand wave it away.

Doc O'Leary

unread,
Aug 13, 2011, 8:01:01 PM8/13/11
to
In article <20110812193349.140$2...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> I think you have made your point. But in all this, you have put forth no
> suggestion as to what "I" might actually be - or even what sort of thing it
> is so as how we might describe it. Do you have any notion or rough idea
> what it is you think you are looking for?

I can't pretend to be as smart about intelligibility as Turing was when
it came to computability. All I know is that there seems to be some
small-but-extraordinarily-important difference in how an
intelligent system changes data into information. Maybe that seems
obvious, but maybe it is so obvious that we haven't really thought about
how that difference should be impacting how we explore building AI
systems.

> For example, I think the brain is just a single processing machine that
> makes the arms move in reaction to the sensory data. So understanding it
> as a signal processing machine is a high level place to start to understand
> what we are trying to describe. Do you agree with that, or do you suspect
> there might be something even more mysterious than neurons making arms move
> at work creating our intelligence?

As always, I see no reason to introduce the specter of dualism. Yes,
we're talking about some kind of signal processing, but the underlying
*intelligence* clearly isn't directly tied to sensory information, or
even just indirectly layered. There is something . . . particular going
on in our machine that, while not a ghost, continues to *evoke* that
idea.

There is what is often described as a "eureka" effect, where some sort
of cascade of understanding happens. It is the exploration of *that*
kind of hidden difference that is neglected, and so AI continues to
elude us. I think we would be better served by tackling those aspects
directly instead of expecting in vain for decades old missteps to prove
fruitful.

> The field of AI
> just fragmented into a field of building machines that could perform some
> limited domain task that only a human could perform, without any real care
> or concern about how well it fit into the bigger picture.

Which *would* have been fine, if only they had stuck with *intelligent*
performance in the limited domain. My complaint about chess playing
programs, for example, is not that they don't give us a path to strong
AI, but that they don't even demonstrate weak levels of intelligence
about the game of chess itself. They win games, but they don't do it in
a way that reflects any understanding of what they're doing. They can
perform a forking move, but how that relates to a fork in a different
game like tic tac toe isn't even significant.

> > I can imagine something that can learn and is not intelligent.
>
> Right, that's a valid point.

And so you understand why I'm not sold on learning as being equivalent.
Just as there are systems that are equivalent to a Turing Machine when
it comes to computability, there *may* be learning systems that are
equivalent to intelligibility. We won't know until we get at the
nitty-gritty of what intelligence itself really is, though.

casey

unread,
Aug 13, 2011, 8:03:21 PM8/13/11
to
On Aug 14, 6:21 am, c...@kcwc.com (Curt Welch) wrote:

> casey <jgkjca...@yahoo.com.au> wrote:
>> Humans have no difficulty in translating their neural weights
>> used to play chess into a set of rules or heuristics so there
>> is clearly a mechanism for doing it.
>
>
> Apples and oranges John. Apples and oranges. You are talking
> about two unrelated things and you don't even realize it.

> The neural weights don't get "translated" into rules or heuristics.
> They are just used.

That they are "just used" doesn't mean they can't also be translated
into rules or heuristics.

> Sometimes, the weights make us say things like "maybe I should
> protect the queen", or "apply more pressure to the center of the
> board". But those heuristics are not in any sense the same ones
> the neural net is actually using.

I would disagree. The weights and the resulting neural firing are
the neural code and some of it can be translated into the code
we use to communicate with each other. There is at least a "many
to one" translation from a neural code to a language code. For any
verbal behavior there is a neural code substrate.


> If translating the neural nets in our brain into heuristics was so
> easy, anyone would write a chess program to play chess at the
> exact same level they played it at. Not only that, you would
> write the program to play EXACTLY the same way you did. So after
> translating the heuristics in your head, into code, your program
> would perfectly mirror every move you made in any chess game.

I am not suggesting we can translate everything into language as
we know some things happen that we cannot translate (they are said
to be unconscious processes). And the translation is not always
correct as the translator found in the left hemisphere will make
things up. But one way of following a neural process while someone
is thinking is asking them to talk aloud about their thoughts.


> That of course never happens. Doesn't even come close to happening.
> People have NO CLUE what the neural weights in their head will make
> them do, until after they have done it.

I never suggested they know anything about their neural weights.
What is happening is the neural weights form high level patterns
that can be translated into sentences. A picture is an array of
pixel values but the picture itself is a high level representation
and it is that representation we are talking about when we say
it is a picture of Bill Jones.


> If translating heuristics were so damn easy, we would have solved
> all of AI 50 years ago John.

I never said it was easy but they are working on it.


> People can't translate them, which is exactly why we have made so
> little progress in 50 years.

Yeah, and 500 years ago you would have said people can't fly; that is
why they haven't made any progress in 50 years.

> Every time someone thinks they have the right "heuristics" to
> explain aspects of human behavior, it has failed to be very
> intelligent.

Rubbish. Chess programs are very good at playing chess. Programmers
have been very good at embodying their methods in code. When you
play chess you use all the things you have learnt about chess and
programs can also use all the things programmers know about chess.
There is no reason I can see that the same couldn't work for an ANN.


> Neural networks like TD-Gammon which only have a few hundred
> weights, are way beyond our comprehension. A neural network
> on the scale of a human brain with trillions of weights would
> be so far beyond our comprehension it's silly to even consider
> there was some possibility of "understanding the simplicity
> behind the weights".

That is an opinion based on intuition, like "the earth is flat". Prove it.


> We can recognize some high level patterns, in the networks,
> but we can't understand what the networks will do, or explain
> why one specific weight is the value it is. We can build tools
> to do some calculations on these sorts of things for us, but
> we can't in any sense, as a human, "understand" the full set
> of weights and what they represent.

Are you playing with the word "understand" here? I may not
"understand" French but I can understand an English translation
and that is what I am suggesting.


>> http://www.scholarpedia.org/article/Td-gammon
>>
>> "An examination of the input-to-hidden weights in this network
>> revealed interesting spatially organized patterns of positive
>> and negative weights, roughly corresponding to what a knowledge
>> engineer might call useful features for game play."
>
>
> Notice the words "Roughly corresponding". That means, they "see
> patterns" but HAVE NO CLUE WHY THOSE PATTERNS are the way they
> are, or what they mean.
>
>The next sentence was:
>
>
>"Thus the neural networks appeared to be capable of automatic
> "feature discovery," one of the long-standing goals of game
> learning research since the time of Samuel."
>
>
> The point they were making is that the network _LOOKED_ as
> if it were doing feature discovery, NOT, as you try to claim,
> anyone had a clue how to describe, or make use, of those
> features (outside of the neural network code of TD-Gammon).

Other researchers are developing techniques to translate the
patterns into rules. The point is: at this time you have offered
no proof it is impossible, just said such a belief is stupid.

Computer source code can be translated to a neural network of
weights. A simple example is a suitably weighted neuron that can
act as an AND gate.

P = Q AND R

       +-----+
Q ---> |     |
       | AND |----> P
R ---> |     |
       +-----+

And the process can be reversed.
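
A minimal Python sketch of such a suitably weighted neuron (these particular
weights and the threshold are just one choice that happens to work):

def and_neuron(q, r):
    # A single threshold unit wired to compute P = Q AND R.
    weights = (1.0, 1.0)      # one weight per input
    threshold = 1.5           # fires only when both weighted inputs are present
    activation = weights[0] * q + weights[1] * r
    return 1 if activation >= threshold else 0

for q in (0, 1):
    for r in (0, 1):
        print(q, r, and_neuron(q, r))   # prints the AND truth table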

JC

Doc O'Leary

unread,
Aug 13, 2011, 8:15:11 PM8/13/11
to
In article
<b7092554-d97c-47e2...@y39g2000prd.googlegroups.com>,
casey <jgkj...@yahoo.com.au> wrote:

> Although the human brain may not be simple it may turn out
> not to be as complex as some may have imagined.

I agree, but it is moot to argue the point when we don't have a good
definition of intelligence. It's like trying to argue about how complex
a machine would have to be in order to compute anything computable, all
without ever defining what it means for a function to be computable.

Doc O'Leary

unread,
Aug 13, 2011, 8:39:45 PM8/13/11
to
In article <j23n94$ov5$1...@news.albasani.net>,
Burkart Venzke <b...@gmx.de> wrote:

> What kind of intelligence do you mean? Human? Natural (animals)? Or also
> artificial, for example for a Turing test?

I don't know that it is meaningful to say there *are* different kinds of
intelligence. Though human intelligence does seem to differ from other
animals, it only seems to do so by degree. I expect whatever is
happening in neurons that expresses itself as intelligence is happening
for all creatures, and could be made to happen in machines.

> > Again, we are on the wrong path; going forward is not
> > progress.
>
> Going forward cannot be wrong, only the kind of way on which you are
> going can be wrong.

If "forward" on your current path circles around to meet itself, it is
wrong. If "forward" on your current path leads you away from your
desired destination, it is wrong.

> >> Can you imagine something intelligent which can not learn? I cannot!
> >
> > I can imagine something that can learn and is not intelligent.
>
> That is not the question, please review the question above!

Please think about my answer more, too. To directly address your
question, yes, I can imagine intelligence without learning, just as I
can imagine that a system can contain true statements that are not
provably true within the system. Whether or not intelligence requires
learning, results in learning, or is equivalent to some forms of
learning *still* depends on a good definition of intelligence.

Curt Welch

unread,
Aug 13, 2011, 10:53:13 PM8/13/11
to
casey <jgkj...@yahoo.com.au> wrote:
> > On Aug 13, 11:37 am, c...@kcwc.com (Curt Welch) wrote:
> > > You are ignorant John.
>
> > No you are Curt. I understood the problem years ago and it is
> > explained
> > for the layman in the book written by Ross Ashby.
>
> >>> Solving high dimension problems are trivially easy to
> >>> prove possible.
> >>>
> >>>
> >>> A binary search in a continuous space is an example of a trivial
> >>> solution that runs quickly in a high dimension problem.
> >>
> >>
> >> That only works if you have found a constraint!! Just as an ANN
> >> can only work if it finds constraints.
> >>
> > Duh. If there are no constraints, there is NOTHING TO LEARN!
>
> Something I quoted from Ross Ashby's book years ago it seems at
> least you are learning.
>
> "... the organism can adapt just so far as the real world is
> constrained, and no further." 7/17
>
> "... learning is possible only to the extent that the sequence
> shows constraint" "...learning is worth while only when the
> environment shows constraint." 7/21

At least I'm learning? Why are you quoting stuff that I well understood 30
years ago that has absolutely nothing to do with what we were discussing?
Honestly, what are you thinking? What are you trying to do here?

> >> You have not solved a high dimensional problem with a binary
> >> search you have solved a low dimensional problem created by
> >> making using of constraints.
> >
> >
> > You have no clue what the term "HIGH DIMENSION" means do you?
> >
> > It means the search space is too large to completely search.
>
> Yes that is what I understand it to mean.
>
> > The constraints that you are talking about do not reduce the
> > size of the search space, and as such, are not changing it
> > to a low dimension problem.
>
> Yes you are. Out there is high dimensional. The first reduction
> is done at the input. The *actual data* being processed is not
> the high dimensional data "out there".
>
> >> example:
> >>
> >> 86422864826428462868462 millions of these types gets a reward
> >> 17537972775391275271973 millions of these types gets punished
> >
> >
> > I have no idea what you are trying to say with that. Are those
> > numeric values, or just temporal digit strings?
>
> Can't you spot the difference in the strings which would enable
> a simple filter to reduce them to two types?

Well, when you ask that, yes I see there seems to be lots of even digits in
the first string and lots of odd digits in the second, along with a few 2's
that don't seem to belong there.

I still don't get your point in posting the strings, but I think the
context has been lost now.

> >> This system can be reduced to a system with two possible states.
> >
> >
> > Yes, some high dimension problems aren't really high dimension,
> > and can be simplified to a low dimension problem - like turning
> > a video image of a chess board, into the low dimension game
> > board position. The video image is high dimension, but the
> > chess board position is low dimension. But most of life is not
> > so simple John.
>
> Not so simple but the principle is exactly the same. We think
> with a "simplified" representation of the world extracted via
> a process of goal appropriate reduction of the input data.

Well, the brain must work with a "simplified" representation since it's
impossible to have a full representation of the universe in our brain.

But I suspect you might be talking more about the simple list of words that
we might say as we talk to ourselves (such as the symbol "ball" being a
simplification of our actual sensory perception of a real ball). However,
there's no indication that the meaning our brain assigns to a simple symbol
like the word "ball" is anything simple at all. Meaning seems to be highly
complex - probably encoded with the firing and stimulation of millions of
neurons to represent something "simple" like the meaning of the word
"ball". The firing of a million neurons is far simpler than the real word
ball itself, but it's not what I would every call a "simple" internal
representation.

> > Chasing and catching a rabbit that is trying to run away from
> > you in a forest is not a low dimension problem.
>
> Actually it can be a very low dimensional problem using a simple
> motion detection filter. The rabbit becomes a moving blob.
> I have played with such programs to follow objects.

Yes, tracking a specific type of object in a 2D video image is not all that
hard with the right pre-processing.

But do you grasp I was not talking about a problem anywhere near so trivial
right? I was talking about writing the code to make a ROBOT actually chase
and catch something like a real rabbit, in a real forest, full of real
obstacles like rocks and trees and rivers and dirt holes the rabbit can
hide in.

> > It can't be reduced to a low dimension problem. It has to be
> > solved as a high dimension problem.
>
> I disagree based on work being done in vision and our current
> understanding of the human visual system. Sure it may not be
> as simple as motion detection, powerful though that is as a
> constraint, but it comes no where near the high dimensional
> reality "out there".

Of course, the internal representation is nowhere near as complex as the
universe. Duh - again.

> What your brain actually has to process
> is dramatically reduced and much of the detail you think you
> see "out there" is a constructed illusion.

Yes, Duh again. You and I know this.

But the data it is processing is still a data flow of many Mbits of
information per second. It's not only high dimension, it's extra super
high dimension.

When they track the information flow through the brain, there are no choke
points where all that huge data flow goes down to 5 bits to make it "low
dimension". It comes in as high dimension, it's transformed as it passes
through the networks of the brain as high dimension, and it comes out as
high dimension driving the muscles of the body. The output is a lot lower
bandwidth than the input, but it's still super extra high dimension
signals. If you call each nerve a "dimension" (which would not be uncommon
for how the term is used in statistics), then they say 5 variables is "high
dimension". The optic nerve has, what, 100 million signals? Or is that
before processing down to 1 million? I forget the order of magnitude
there.

Do you see how 100 million is greater than 5 John? Do you understand that
5 is enough to make it "high dimension" for some people? And that 100
million is so far past high dimension it's absolutely absurd that you have
made me again waste this time talking about this with you?

> >> > All high dimension reinforcement learning algorithms face the
> >> > same problem. They have a state space many orders of magnitude
> >> > too large to search every space. So to solve it, there are a
> >> > handful of different approaches.
> >>
> >> It all depends on finding constraints which is only possible if
> >> there is a lower dimensional representation possible.
> >
> > No, it's not John.
>
> Yes it is Curt.

Actually, I think I read your comment above as "low dimension" instead of
"lower dimension" (as you wrote it) when I said "No".

The fact that it is transformed to a _lower_ dimension problem by the use
of constraints is (potentially) valid. It never, however, gets down far
enough to call it anywhere near "low dimension".

> My experience in vision problems and knowledge of how it is being
> done by others gives insight into the problem a learning system
> will have to solve.

Yes, it does.

> > How does a learning algorithm, figure out how to use the arms
> > and legs and eyes of a robot body, to make it catch food?
> > This is something all the learning systems of mammals have in
> > common, and it can NOT, even with the help of constraints, be
> > transformed into a LOW DIMENSION RL problem.
>
> If it cannot be reduced to a simpler representation to match
> the computing power of the control system then it cannot be
> solved. The control possible by a controller is limited to
> the extent its variety can match the variety in the controlled.

Yes. But not very relevant to this low vs high dimension discussion since
our cpus operate on super super high dimension data streams with no
problems at all. Yes, to solve a data intensive real time problem, we will
need a cpu large enough to do the work. Duh. Again.

> >>> One, is to use evaluation functions that can direct the
> >>> search, as the binary search did. That only works if such
> >>> an evaluation function is known to exist.
> >>
> >> As I wrote you need to find constraints so it is NO LONGER
> >> a high dimensional problem like a combination lock.
> >
> >
> > You seem to have no understanding of what "high dimension" means.
> > You are not using the term correctly. All it means is "state
> > space size too large to search exhaustively".
>
> I know what it means and your definition is accurate.
>
> > Standard RL algorithms REQUIRE that the state space be small
> > enough, to search exhaustively (visiting every state many times)
> > through trial and error interaction with the environment.
>
> Sure but as you know the "standard" RL illustrates the principle,
> not the solution, to control of a high dimensional input.

Well, only an abstract principle which fails to explain how the brain can
do it. The simple RL systems must visit every state many times to converge
on a solution. The brain converges on solutions while visiting hardly any
of the states. So the simple RL says what the brain does is impossible.
And if not for the fact we had brains to point to, everyone might actually
believe it was impossible.

> > That approach does not scale once the state space gets too
> > large to search exhaustively, which is what happens when you
> > move from the trivial game of tic tac toe, to non-trivial but
> > still highly simple games like backgammon, or chess, or go.
> > All of these games represent high dimension learning problems
> > - their state space is too large to search exhaustively.
>
> The number of possible combinations for tic tac toe is also
> too high for a small system -unless it makes use of constraints
> that *can* be found in the game by a non-standard RL that
> doesn't just use an exhaustive search algorithm.

Huh? The entire game state space is less than 20K states (without any help
of reflections rotations, or eliminating invalid board positions). What
computer these days can't track 20K variables? You have to look pretty
hard these days to find a computer that is too small to do that.

> > When we move up from the trivially simple world of board
> > games, and start trying to do learning for a multi-leg robot
> > so it can learn to chase a rabbit through a forest, we are
> > so far away from low dimension, it's absurd that you keep
> > suggesting that "constraints" are the magic sauce that end
> > up turning the problem back to something as simple as tic
> > tac toe.
>
> Yep "constraints" are the "magic sauce" but it doesn't make
> them easy to find. The ANN in td-gammon found some but fails
> to do so in a chess game state for reasons you might like
> to think about.
>
> > Just standing on two legs without falling over is a
> > learning problem way above tic tac toe.
>
> A different type of problem. It requires a complex but not
> high dimensional computation of the feedback systems to
> adjust the output motors.

You really need to learn to use the term "high dimension". You failed yet
again to get it right.

You comment above is correct, but it has nothing to do with what I said.

I said:

> > Finding the "important" features in the high dimension sensory
> > space is a high dimension learning problem. It's never a low
> > dimension problem.

aka, extracting features that have a high correlation to a reward signal
in a high dimension space is a high dimension problem.
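
To put a concrete face on that, here's a made-up Python sketch (the random
data, the 50 inputs, and the "secret" input 7 are pure invention). Scoring
each single input against the reward is cheap; the killer is that the useful
features in a real sensory stream are conjunctions of many inputs, and the
number of possible conjunctions explodes:

import random

def correlation(xs, ys):
    # Plain Pearson correlation between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# 1000 sensory frames of 50 binary inputs each, plus a reward that secretly
# depends only on input 7.  The learner is not told that.
frames = [[random.randint(0, 1) for _ in range(50)] for _ in range(1000)]
rewards = [frame[7] for frame in frames]

scores = [(i, correlation([f[i] for f in frames], rewards)) for i in range(50)]
print(max(scores, key=lambda s: abs(s[1])))   # input 7 scores ~1.0, the rest ~0.0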

> > The learning problems the brain can solve (and the ones
> > that none of our learning programs let solve so well), all
> > require that there are usable clues from the environment
> > for testing "closeness" to a solution. The system can
> > expect to tell that it's getting close to a good solution.
>
> Yes they have to find constraints. If constraints don't exist the
> organism will not adapt.
>
> > This is not reducing the problem to LOW DIMENSIONS. It's
> > simply a hill climbing technique for searching a HIGH
> > DIMENSION space.
>
> A 100% high dimension space would not have hills. Each position's
> height would be unrelated to its neighboring position's height.

Oh my god John, please do some reading on what the term "high dimension"
means.

It does not mean non-correlated sensory signals, or signals with no
constraints.

You have now used the term "high dimension" in what must be at least 5 ways
that have nothing at all to do with what the term actually means.

casey

unread,
Aug 14, 2011, 3:40:55 AM8/14/11
to
On Aug 14, 12:53 pm, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
> Well, the brain must work with a "simplified" representation
> since it's impossible to have a full representation of the
> universe in our brain.
>
> But I suspect you might be talking more about the simple list
> of words that we might say as we talk to ourselves (such as
> the symbol "ball" being a simplification of our actual sensory
> perception of a real ball). However, there's no indication that
> the meaning our brain assigns to a simple symbol like the word
> "ball" is anything simple at all.
>
>
> Meaning seems to be highly complex - probably encoded with the
> firing and stimulation of millions of neurons to represent
> something "simple" like the meaning of the word "ball".

There are thousands of logic gates turning on and off when
a computer program recognizes a ball but that doesn't mean
the "meaning is highly complex".

> The firing of a million neurons is far simpler than the real
> world ball itself, but it's not what I would ever call a
> "simple" internal representation.

How the brain represents the world is a big topic and I can't
summarize the current theories here. But that there may be
millions of neurons involved when the brain is representing
something like a "ball" isn't what I mean by highly complex.


> I was talking about writing the code to make a ROBOT actually
> chase and catch something like a real rabbit, in a real forest,
> full of real obstacles like rocks and trees and rivers and dirt
> holes the rabbit can hide in.

I understand your intuitive belief in the need for a complex
solution to such apparently complex situations but based on my
experience trying to solve vision problems that evolution had
to solve, or a system would have to learn to solve, I hold a
different view.

I would add that a simplified representation may still require
intensive parallel computations which the brain is good at.


>> What your brain actually has to process is dramatically reduced
>> and much of the detail you think you see "out there" is a
>> constructed illusion.
>
>
> Yes, Duh again. You and I know this.

I don't think you do appreciate just how reduced it is.


> But the data it is processing is still a data flow of many
> Mbits of information per second. It's not only high dimension,
> it's extra super high dimension.

All the vision data I use in my visual recognition programs are
"high dimension" in that sense but I don't see the issues you
seem to have with regards to how a system makes use of that data.

I think actual examples are required to see where you find an issue.

Just as with the tic tac toe example below (the point of which
you didn't follow), with vision problems I started out
with a simple input of a 5x7 binary matrix that could hold
the patterns for the standard ASCII set. Given that such an
input has 2^35 = 34,359,738,368 possible combinations, what
methods could a program use to learn to recognize (or map
to an output) any of those possible input patterns?
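To give a flavor of the sort of simple method I mean (this is just an
illustrative toy I'm typing here, not my actual recognition code): store one
template per character and classify a new pattern by nearest Hamming
distance. Nothing ever enumerates the 2^35 possibilities; the constraint
that noisy inputs stay close to their template is what makes the mapping
learnable.

  # Nearest-template recognition of binary patterns (toy sketch).
  def hamming(a, b):
      return sum(x != y for x, y in zip(a, b))

  templates = {}                       # name -> flattened 5x7 pattern

  def learn(name, pattern):
      templates[name] = pattern        # "learning" = storing one example

  def recognize(pattern):
      return min(templates, key=lambda n: hamming(templates[n], pattern))

  # tiny demo with 6-bit stand-ins for the real 35-bit patterns:
  learn('A', [1, 0, 1, 0, 1, 0])
  learn('B', [0, 1, 0, 1, 0, 1])
  print(recognize([1, 0, 1, 0, 1, 1]))   # -> 'A' (closest template)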

In my experience people at first think that a particular vision
problem requires a complex solution because the input is so
complex. Just recently someone was trying to solve a pattern
extraction problem the hard way and when I showed him there
was a simple solution he exclaimed "Wow, thanks, that algorithm
is amazing!" It wasn't amazing of course but it is one of the
many times I see a newbie overestimate how complex a solution
has to be.

I think evolution would tend to find the simpler solutions first
because a simpler solution is faster and uses less resources
giving it a reproductive advantage over those organisms that
use a more complex solution.

[...]

>> The number of possible combinations for tic tac toe is also
>> too high for a small system -unless it makes use of constraints
>> that *can* be found in the game by a non-standard RL that
>> doesn't just use an exhaustive search algorithm.
>
>
> Huh? The entire game state space is less than 20K states
> (without any help of reflections rotations, or eliminating
> invalid board positions). What computer these days can't
> track 20K variables? You have to look pretty hard these
> days to find a computer that is too small to do that.

I think you missed the reason for the statements.

I was making a relative comparison where we can do it both
ways to see how even a system too small to memorize all the
game states could still learn to play the game and where
those techniques, such as using temporal differences and
ANNs, can be easily illustrated to show techniques for
dealing with problems bigger than a system can solve by
an exhaustive search.

The fact that a modern computer can solve the tic tac toe
learning by brute force by playing every combination wasn't
the point being made. Have you forgotten the email exchanges
we had when I wrote such a program to explore the use of
temporal differences and then went on to explore the use of
an ANN? It is better to start with a simple example to see
whether you really understand these things well enough to code
them. I was going to try and duplicate td-gammon but didn't
have the time.
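For anyone following along, this is roughly the kind of toy I mean - a
tabular value function for tic tac toe learned by self-play against a random
opponent, with a temporal-difference style backup. It's a sketch written
from memory for illustration, not the actual program from those email
exchanges:

  import random

  LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

  def winner(board):
      for a, b, c in LINES:
          if board[a] != ' ' and board[a] == board[b] == board[c]:
              return board[a]
      return None

  V = {}                               # board string -> estimated value for 'X'
  ALPHA, EPSILON = 0.1, 0.1

  def value(board):
      return V.setdefault(board, 0.5)

  def play_game():
      board, player, history = ' ' * 9, 'X', [' ' * 9]
      while True:
          moves = [i for i, c in enumerate(board) if c == ' ']
          if not moves:
              return history, 0.5                # draw
          if player == 'X' and random.random() > EPSILON:
              # greedy: pick the afterstate X currently values most
              move = max(moves, key=lambda i: value(board[:i] + 'X' + board[i+1:]))
          else:
              move = random.choice(moves)        # exploration / random opponent
          board = board[:move] + player + board[move+1:]
          history.append(board)
          w = winner(board)
          if w:
              return history, 1.0 if w == 'X' else 0.0
          player = 'O' if player == 'X' else 'X'

  def train(games=20000):
      for _ in range(games):
          history, outcome = play_game()
          target = outcome
          for state in reversed(history):        # back each state up toward its successor
              V[state] = value(state) + ALPHA * (target - value(state))
              target = V[state]

  train()
  print(len(V), "positions given values so far")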


> You really need to learn to use the term "high dimension".
> You failed yet again to get it right.

Well perhaps we need to flesh out what the difference is
when I talk about "high dimensions"? I often read others
write about it the way I do. They all seem to me to talk
about reducing the data to a manageable level for any
given system.


jc

Curt Welch

unread,
Aug 14, 2011, 10:46:39 AM8/14/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:

I can't imagine anything I would call intelligence without learning (even
trying to ignore my definition of the term).

We can build a robot that drives around the floor, and when it hits a wall,
it backs up, turns, and then drives on. But such a machine, without
learning, can sometimes get trapped in a tight spot and keep bumping itself
against the same wall over and over until its batteries wear out. When I
see these little non-learning machines do things like that, my first
thought is not one of how intelligent the machine is.

If you don't have learning, you don't have long term memory of the environment.
When a machine can use memory to respond differently in the future based on
the past environment, then it's a learning machine. It's a learning
machine because it has changed its behavior in response to its
environment. It doesn't have to be a reinforcement learning machine just
because it has memory, but it is a learning machine.

So a machine without any learning, is a machine with no long term memory at
all and no ability to change its behavior based on past experience. It
will act exactly the same way, when presented with the same environmental
conditions, every time.

I can't imagine any form of machine that is limited to that type of behavior,
that I would consider intelligent.

Short (non permanent) memory could be argued as learning, but if it's not a
permanent change we could also argue it's not technically learning.
Without short term memory, the machine is really limited to stupid
behaviors. But with short term memory, it can only look intelligent, for a
short term. It stops looking intelligent, once you realize it has no
memory of what was happening before its memory limit. Though it could be
very human-like for the short term, the instant it showed it had no memory
of what was going on 10 minutes ago, it stops looking intelligent to me.

Curt Welch

unread,
Aug 14, 2011, 11:59:15 AM8/14/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110812193349.140$2...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > I think you have made your point. But in all this, you have put forth
> > no suggestion as to what "I" might actually be - or even what sort of
> > thing it is so as how we might describe it. Do you have any notion or
> > rough idea what it is you think you are looking for?
>
> I can't pretend to be as smart about intelligibility as Turing was when
> it came to computability. All I know is that there seems to be some
> small-but-extraordinarily-important difference between how an
> intelligent system changes data into information. Maybe that seems
> obvious, but maybe it is so obvious that we haven't really thought about
> how that difference should be impacting how we explore building AI
> systems.

Well, you are skirting on dualistic ideas again there.

But, ignoring the dualism problem, sensory data has lots of information.
But we like to talk about concepts like signal and noise. Noise is sensory
information just as much as signal is. It's just values in the sensory
stream. But we as humans like to think of some things as useful (signal)
and others as wasted garbage (noise). We naturally classify all sorts of
stuff in those sorts of terms - useful or not useful. We assign value to
everything. We can't look at anything without sensing its inherent value, or
lack of value. We color our world with an overlaying sense of value and it
comes out in everything we do, and think. It's so intrinsic to us, we
can't conceive of a world where everything is equally important - where
nothing has any unique value or purpose.

The universe doesn't have value like that. Only we assign value to the
universe. Physics has no value function.

All that falls out, because we are reinforcement learning machines. And
though reinforcement learning machines seek to make action choices that
maximize a reward, how they work internally, is to compute a value
function. They estimate the value of everything they can sense and do, and
that estimation system is constantly updating itself from everything the
machine experiences (not just from the rewards it receives).

Part of what ends up happening, is that sensory data is classified into the
data that is the most valuable (the data that is most important in making
action decisions), and the data that is least important (noise).

"information" is just the word we use to describe useful data - the data
that the system has extracted from all the raw sensory data as the most
valuable for making action decisions.

Data is transformed into information by reinforcement learning machines
because they must do that in order to make the highest quality action
decisions.

Now, the exact details of how data is _best_ filtered like that with a
generic data filtering system, is one of the unanswered questions of AI.
But all RL programs are already doing it in some form.
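Here's a minimal toy sketch of that filtering (the numbers and setup are
invented purely for illustration - I'm not claiming this is how the brain
implements it): a value estimate trained from experience ends up weighting
the sensory input that actually predicts reward and ignoring the one that
doesn't. The "signal" input gets kept as information, the "noise" input gets
thrown away.

  import random

  w = [0.0, 0.0]                      # one weight per sensory input
  ALPHA = 0.01

  for _ in range(20000):
      signal = random.uniform(-1, 1)  # this input determines the reward
      noise = random.uniform(-1, 1)   # this input is irrelevant
      x = [signal, noise]
      reward = 2.0 * signal
      estimate = sum(wi * xi for wi, xi in zip(w, x))
      error = reward - estimate       # prediction error drives the update
      w = [wi + ALPHA * error * xi for wi, xi in zip(w, x)]

  print(w)   # roughly [2.0, 0.0]: signal kept, noise discarded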

> > For example, I think the brain is just a single processing machine that
> > makes the arms move in reaction to the sensory data. So understanding
> > it as a signal processing machine is a high level place to start to
> > understand what we are trying to describe. Do you agree with that, or
> > do you suspect there might be something even more mysterious than
> > neurons making arms move at work creating our intelligence?
>
> As always, I see no reason to introduce the specter of dualism.

I think there is more reason to bring it up that most suspect.

> Yes,
> we're talking about some kind of signal processing, but the underlying
> *intelligence* clearly it isn't directly tied to sensory information, or
> even just indirectly layered. There is something . . . particular going
> on in our machine that, while not a ghost, continues to *evoke* that
> idea.

Let me sidetrack to dualism before we move on your eureka idea below.

Humans can sense things happening in their head - such as when we talk to
ourselves in our head, or have memories, or "think" in other forms besides
talking, like visual. We have this head full of private mental events.

But what are these events? Where are they? Do they have a physical form? We
can't tell from our direct experience, because there is no data available
to us to tie our private mental events, to the external events, we sense
with our eyes and ears and fingers.

When someone is thinking, we can't hear what he is thinking with our ears.
We can't even hear any buzzing coming from him. We can't feel him vibrate
like we can feel the engine of a car vibrate. We can't see anything
happening with our eyes - no flashing lights or glow coming out of his
ears.

Not only can we not sense these private thoughts of others, we can't sense
anything about ourselves either - our own ears, fingers, and eyes aren't
picking up anything happening at all, when we have private thoughts.

This makes those private thoughts seem to exist in a realm separate from
the stuff we can sense with our eyes, ears and fingers. They have no
physical form to us, because our "physical" sensors can't sense anything
connected to them.

These simple facts are what lead to the perceived separation of mind and
body. It's just a lack of sensory information that leads us to naturally
creating a dualistic view of the reality we exist in. This inherent
dualism grows in people to the point that they sense they have two
parts to them - the "mental" part, and the "physical" part. It becomes a
prime foundation of how we think about everything. Even what happens
beyond the human body.

To tackle the "mystery" of AI, people need to grasp just how deep this
dualistic thinking tendency has been embedded into us by our culture. They
don't just need to reject the idea of a soul, they have to learn things
like software is really just more hardware, and not something separate from
the hardware. Our dualistic thinking patterns have littered the landscape
of our cultural thinking patterns - and all that dualistic litter makes AI
and intelligence look more complex than it really is to lots of people.

I just wanted to throw that out because I think it causes a lot of
confusion in people, and might be causing some of that "mystery" in your
thought process when asking yourself just what is "intelligence".

> There is what is often described as a "eureka" effect, where some sort
> of cascade of understanding happens. It is the exploration of *that*
> kind of hidden difference that is neglected, and so AI continues to
> elude us. I think we would be better served by tackling those aspects
> directly instead of expecting in vain for decades old missteps to prove
> fruitful.

ANY idea like this, I can tell you why RL answers it. I've been answering
all these sorts of questions, both for myself, and for others, for a very
long time now.

The "eureka" effect, just comes from the fact that we have almost no idea
why we do the things we do. We rationalize all the time, to try and
justify and explain our behavior, but for the most part, we have no idea
and can not predict what we will do with any fine grain of accuracy.

I have no clue what I'm going to write in one of these posts, until I'm
done writing. I make it up as I go. I can't predict what my brain is
going to make my fingers do next.

A strong reinforcement learning machine will make action choices by, in
effect, doing a weighted average of all past experience, biased by how close
each past experience was to the current context, and picking an action
choice based on the sum total of all that information. Everything you
have ever done, or experienced, is likely having some effect on
everything you are doing right now.
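A toy sketch of what I mean (my own invented example - a real system would
need a far better similarity measure than this): every stored (context,
action, outcome) experience votes for its action, weighted by how close its
context is to the current one, and the action with the most weighted support
wins.

  import math

  experience = []    # list of (context_vector, action, outcome) tuples

  def remember(context, action, outcome):
      experience.append((context, action, outcome))

  def choose(context):
      votes = {}
      for past_context, action, outcome in experience:
          # nearby, well-rewarded past experiences count the most
          weight = math.exp(-math.dist(context, past_context)) * outcome
          votes[action] = votes.get(action, 0.0) + weight
      return max(votes, key=votes.get) if votes else None

  remember((0.0, 0.0), "turn left", 1.0)
  remember((5.0, 5.0), "turn right", 1.0)
  print(choose((0.2, 0.1)))   # "turn left": the closest past context dominates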

How could we possibly "predict" what the output of such a massive
statistical process based on billions of stored parameters, is going to do
next? We have no hope. We can only predict the tip of the iceberg effects
by doing a little high level philosophizing and hand waving.

So the first part of the "eureka" effect is simply the fact that our brain
will do something, we didn't see coming - and it does it, by leveraging an
entire life time of experience. That is also what leads people to believe
they are psychic. They seem to know things, which they don't think they
should know. But all they are doing is making very good educated guesses
based on that huge database of stored knowledge. And not only can they use
the power of their own brain to fool others, some people even seem to fool
themselves into believing something magic is at work.

But back to "eureka". As I said in other articles. Reinforcement learning
machines assign value to everything. It's using all those value estimates
to make a prediction about expected future rewards - it's trying to predict
rewards, (or a lack of rewards) before they happen. And it uses that
prediction, to train itself. Our learning happens not from the real
rewards, but from the difference between the prediction, and the real
rewards.

When we look down and see a $100 bill on the ground, we recognize
its value, and we recognize that value was unexpected - our prediction
system did not predict our environment would suddenly become that much more
valuable to us. We have a sudden jump in our internal measure of value,
which we are very aware of, as a "eureka" moment (yeah, I just got $100 for
free!).

But the same thing happens, when we have thoughts. Thoughts are no
different than actions (despite how our dualistic world view makes us think
they are so different). They are just physical movements of our brain that
we have some internal sensory awareness of. But we recognize the value of
our thoughts, just as we recognize the value of seeing a $100 bill on the
ground. It allows our prediction system to indicate that the future just
got better for us.

If we are trying to solve a puzzle, like the puzzle of AI, we can spend
time thinking about it. But what thoughts do we have? Again, this is just
behavior being produced by that large statistical process that is using
information from a life time of past experience to guide it. It produces
the sequence of thoughts that it calculates to be the most useful, like it
might tell us to look down at our feet. It was not predicting such a huge
gain in value by looking down at the feet, but it did think that action was
the best action possible at that point in time, for getting maximal future
rewards. It just ended up getting a lot more "reward" than it was
predicting when it saw that $100 bill.

The same thing happens when we have thoughts. Our statistical engine
produces some thought, and then instantly the value analyzer, determines
after the fact, that the thought will lead to higher than expected future
rewards. We have a "eureka" moment.

Another example. We are captive in prison, and we think there is no way
out, and we are going to be tortured and killed. Our "future reward"
predictor is running at an all time low. But suddenly, in response to the
sensory stimulation, an idea forms in the brain of how we can escape.
Just on the chance we might escape, our reward predictor takes a sudden
jump to the positive. We have a "eureka" moment which fills us with all
sorts of hope for the future we didn't have the second before.

Eureka moments are just those times when our expected future reward
predictor suddenly takes a quick turn to the positive. And these reward
predictors are a standard part of all current reinforcement learning
machines.
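In code, the heart of it is just the reward prediction error that the
standard TD-style reinforcement learning algorithms already compute (the
numbers below are made up purely for illustration):

  GAMMA = 0.9

  def td_error(reward, value_now, value_next):
      # delta = r + gamma * V(s') - V(s): how much better things just
      # turned out than the value estimator predicted.
      return reward + GAMMA * value_next - value_now

  # Walking along, expecting nothing special:
  print(td_error(0.0, 1.0, 1.0))      # small negative - ho hum

  # Spotting the $100 bill the predictor never saw coming:
  print(td_error(100.0, 1.0, 1.0))    # large positive spike - "eureka"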

> > The field of AI
> > just fragmented into a field of building machines that could perform
> > some limited domain task that only a human could perform, without any
> > real care or concern about how well it fit into the bigger picture.
>
> Which *would* have been fine, if only they had stuck with *intelligent*
> performance in the limited domain. My complaint about chess playing
> programs, for example, is not that they don't give us a path to strong
> AI, but that they don't even demonstrated weak levels of intelligence
> about the game of chess itself. They win games, but they don't do it in
> a way that reflects any understanding of what they're doing. They can
> perform a forking move, but how that relates to a fork in a different
> game like tic tac toe isn't even significant.

TD-Gammon works in a way that is far more similar to how humans play games.
It uses "gut instinct" to pick moves using a large statistical process that
merges together all past game playing experience.

What TD-Gammon doesn't do like humans, is to learn how to use language to
talk to itself about what moves it should make. Or to visualize future
move scenarios. It's only a very strong "gut feeling" or "emotion" machine
for the limited domain of picking Backgammon moves. But I think it's
exactly the right foundation for explaining what the foundation of
intelligence is.
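To be clear about what that "gut instinct" amounts to mechanically, here's a
rough sketch (not Tesauro's actual code - the network here is left
untrained, and the result_of helper is a made up stand-in for the game
rules) of value-network move selection: score the position each legal move
would produce, play the best-scoring one, with no lookahead and no
self-talk.

  import math, random

  HIDDEN = 20

  def make_net(n_inputs):
      return {
          "w1": [[random.gauss(0, 0.1) for _ in range(n_inputs)]
                 for _ in range(HIDDEN)],
          "w2": [random.gauss(0, 0.1) for _ in range(HIDDEN)],
      }

  def evaluate(net, position):
      # position: list of numbers encoding the board features
      hidden = [math.tanh(sum(w * x for w, x in zip(row, position)))
                for row in net["w1"]]
      return sum(w * h for w, h in zip(net["w2"], hidden))

  def pick_move(net, legal_moves, result_of):
      # "gut instinct": no search, no reasoning - just score each afterstate
      return max(legal_moves, key=lambda m: evaluate(net, result_of(m)))

  # toy usage with fake moves and 4-number fake positions:
  net = make_net(4)
  positions = {"a": [1, 0, 0, 0], "b": [0, 1, 0, 0], "c": [0, 0, 1, 0]}
  print(pick_move(net, ["a", "b", "c"], positions.get))

In TD-Gammon the interesting part, of course, is that the weights were
trained by temporal-difference self-play rather than left random.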

> > > I can imagine something that can learn and is not intelligent.
> >
> > Right, that's a valid point.
>
> And so you understand why I'm not sold on learning as being equivalent.

Sure. But I don't think you have thought through all the ways that
reinforcement learning explains every little last aspect of human ability
and human behavior, like I have. Nor do I suspect you have spent as much
time studying the problem of reinforcement learning itself, as I have.

So you don't see the massive number of parallels between human behavior,
and the expected behavior of a strong reinforcement learning machine.

Like the example above about "eureka" moments. I've spent decades (as a
hobby) thinking about this stuff, and a serious amount of time debating
these concepts here in c.a.p. which has actually helped to open my eyes to
many issues I had never thought of.

> Just as there are systems that are equivalent to a Turing Machine when
> it comes to computability, there *may* be learning systems that are
> equivalent to intelligibility. We won't know until we get at the
> nitty-gritty of what intelligence itself really is, though.

Yeah, true. But as it happens, I have gotten to the nitty-gritty of
what intelligence is, so I'm already there. You just don't have enough
knowledge yet to understand why this is the nitty-gritty of intelligence.

Doc O'Leary

unread,
Aug 14, 2011, 12:58:30 PM8/14/11
to
In article <20110814104638.925$N...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> I can't imagine anything I would call intelligence without learning (even
> trying to ignore my definition of the term).

I agree it is foreign to our own way of thinking, but that doesn't mean
we can quickly dismiss it. Consider that learning allows us to
incorporate new information into our internal representation for
processing. Now try to imagine a system with *so* many sensors that
such an internal representation is unnecessary to its function; updated
information is always readily available to it as easily as we recall a
memory. It seems to me that such a system might still be able to
intelligently process that direct input. In many ways, it is *more* the
behaviorist-style intelligence you champion than anything involving
learning.

> So a machine without any learning, is a machine with no long term memory at
> all and no ability to change it's behavior based on past experience. It
> will act exactly the same way, when presented with the same environmental
> conditions, every time.
>
> I can't imagine any form of machine that is limited to type of behavior,
> that I would consider intelligent.

Think about it, though. Wouldn't intelligent behavior be *exactly*
that? In the absence of learning, it *should* be doing the smartest
known thing under identical conditions. This even explains the
emergence of intelligence evolutionarily!

> Though it could be
> very human-like for the short term, the instant it showed it had no memory
> of what was going on 10 minutes ago, it stops looking intelligent to me.

What, then, of humans who suffer similar memory problems? Are they
necessarily unintelligent? Or, really, what of the countless normal
people we encounter only briefly in our daily lives? I think there are
many circumstances where intelligent behavior can be demonstrated
without any learning.

casey

unread,
Aug 14, 2011, 2:39:29 PM8/14/11
to
On Aug 15, 2:58 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
wrote:
> [...]

> What, then, of humans who suffer similar memory problems?
> Are they necessarily unintelligent? Or, really, what of
> the countless normal people we encounter only briefly in
> our daily lives? I think there are many circumstances
> where intelligent behavior can be demonstrated without any
> learning.

Intelligent behaviors are the result of learning, be it the
result of changes in the brain as a result of experiences,
or as a result of changes in brains due to the selective
process of evolution.

So even if a person loses their memory or an animal is 100%
innate in its sensible actions the behavior itself has been
the result of a learning (or evolutionary) process.

The behavior of a chess playing program is the result of a
learning process in the brains of the programmer and the
chess players who provide information to those programmers.

So without learning no new behaviors we might call intelligent
can ever arise. And learning is the result of a feedback
process involving an evaluation to select the changes.

So in that sense all intelligent behavior involved learning
at some stage. Without learning there is no intelligence.

JC

casey

unread,
Aug 14, 2011, 3:05:30 PM8/14/11
to
On Aug 15, 1:59 am, c...@kcwc.com (Curt Welch) wrote:
> I have no clue what I'm going to write in one of these posts,
> until I'm done writing. I make it up as I go. I can't
> predict what my brain is going to make my fingers do next.

That is only true in detail. I can predict in general what
you will write about. Dualism. RL.

> A strong reinforcement learning machine will make action
> choices by, in effect, doing a weighted average of all past
> experience, biased by how close each past experience was
> to the current context, and picking an action choice, based
> on the sum total of all that information.

I don't think it is a simple "sum total". Also there is
a threshold effect that prunes and selects, turning a
"maybe 0 or 1" into an actual 0 or 1, so the detail from
that point on isn't retained.

> Everything you have every done, or experienced, is likely
> having some effect, on everything you are doing right now.
>
>
> How could we possibly "predict" what the output of such
> a massive statistical process based on billions of stored
> parameters, is going to do next? We have no hope.

We can't predict that in detail, but we can in general;
otherwise the statistical processing wouldn't have any
purpose. Its purpose is to make general predictions.


> We can only predict the tip of the iceberg effects by
> doing a little high level philosophizing and hand waving.

Intuitive ideas are fine but they are not the only kind
of predicting we can do, and as religion has shown, without
a rigorous scientific process the intuitive predictions
(hand waving) can turn out to be completely wrong.


> TD-Gammon works in a way that is far more similar to how
> humans play games. It uses "gut instinct" to pick moves
> using a large statistical process that merges together
> all past game playing experience.
>
> What TD-Gammon doesn't do like humans, is to learn how
> to use language to talk to itself about what moves it
> should make.

Essentially it has no understanding of why its "gut instinct"
might work. It can't explain itself or use logic. This is
the weakness of an associative network. Association and doing
statistics isn't wrong, it just isn't sufficient by itself to
explain the powers of the human brain.

JC

Burkart Venzke

unread,
Aug 14, 2011, 3:10:28 PM8/14/11
to
On 13.08.2011 02:53, Curt Welch wrote:
> Burkart Venzke<b...@gmx.de> wrote:
>> On 08.08.2011 16:36, Curt Welch wrote:
>>> Burkart Venzke<b...@gmx.de> wrote:
>>>>> It's been said that the field of AI is still trying to define what AI
>>>>> is. There is still no general agreement on what the brain is doing,
>>>>> or how it's doing it.
>>>>
>>>> It is not necessary to know how the brain work if we define AI in
>>>> another way.
>>>
>>> Well, to me the "real AI" we are after is making machines that can
>>> replace humans at any task that currently only a human can do.
>>
>> Really any task?
>
> For sure.

My first question is: Do you really want this? A substitute for us?
My second question: Couldn't we have intelligence without a possible
(risk of) substitution?

>> Also as total substitution for us human? For example, I
>> think of love and other emotions, of getting children...
>>
>> For me, strong AI should have an intelligence comparable to ours but
>> without human emotions (which otherwise could aim to big problems such
>> as our replacement).
>
> No, I think that's a sci-fi fallacy.

Do you think of Data (Star Trek: The Next Generation)?
For me, I. Asimov is a good guide.

> I think it's not just important to
> add in emotions, I think it's impossible to create intelligence without it.

Ok, something like emotions is necessary, also for me.
But I think that it is not necessary to copy all human emotions.
Because of my first question above, I think of "emotions" only as needed to
fulfill "enough intelligence" - the intelligence we want, without problems
and risks. Do you want an AI that hates humans, or in the extreme becomes a
suicide attacker?

> Reinforcement learning machines are emotion machines. So if you write an
> RL algorithm, you have already built emotions into a machine.

Some specially chosen ones are ok.

> If it doesn't look like emotions to you, that's only because your learning
> machine isn't good enough yet, not because you have left something
> fundamental out.

How about AE = artificial emotions?

> What do you think love is other than a drive to be attracted to something?

Do we need loving AI (yet)?

> Reinforcement learning machines must assign value to everything in order to
> work. Every sensation, every action, has value assigned to it (aka every
> state or every state actuation pair). The assigned (calculated) values are
> what drive every action choice the machine makes. What makes us seek out
> the company of one person, and avoid another?

Should an AI (as a machine) avoid persons?
OK, one may be more helpful for it (seek out his company) but avoid...?
What would you say if computers don't want to be used by you any longer?

> What makes us eat one food,
> and avoid another? it's just the values our learning system has assigned
> to all these things.

Yes, finding good "food" and avoiding possible destruction is reasonable.

> Love is nothing more than a high value our system has assigned to a thing.

Human, natural love is too complex to be described with a single value.
For example you can love and hate one person.

We should be careful about what we talk about: a high value assigned to an
aspect/situation is fine; love in this sense is a special notion.

> Fear is just the prediction of pending loss of rewards (pending high
> probability of lower future rewards).

That is not human fear - or rather, human fear can be far more complex.

> All our emotions can be explained in terms of the values our reinforcement
> learning system is assigning to the things we can sense and the actions
> which are selected based on those values.
>
> It's impossible to build a reinforcement learning machine that is not
> emotion based.

Would you accept that these emotions are artificial emotions, not the same
as those humans have in quantity and quality?

>>> If a company has to
>>> hire a human to do a job, because no one knows how to make a machine
>>> that can perform the same job, then we have not yet solved the AI
>>> problem.
>>
I hope it won't end up that only the company owners have jobs and 90-95% of
the humans have none... at least as long as those company owners do not
spend a lot of money/tax on the 90-95%.
>
> Well, that's a different social problem, but one I think we will be facing
> big time this century. Assume by 2050, we have $1000 AI processors that
> are more intelligent than any human. They can be built into any
> machine, to make the machine intelligent (or control them remotely), and we
> can motivate and train these AI machines to do any job a human can do -
> only in most cases, better. And the AI brains don't need vacations. They
> don't need to be paid.

No vacation, not to be paid... If AIs have human-like emotions, won't
they need something like this?

> And when you train one for a job, you can clone it
> into as many other machines as you want with a copy operation. You can
> even create special configurations where long term memory (aka learning) is
> disabled, and only short term memory works, so it can perform the same
> boring job hour after hour, day after day, and have no concept of having to
> do this for the 2000 years it's been doing it.

I agree, this all could be possible.

> When these sorts of machine become available, the value of human
> intelligence, will drop to below the value of these machines (which is only
> the $1000 capital cost plus the pennies a day for the power to run them).
> Humans basically won't be able to find work.

Maybe... would you appreciate it?
Or we could have other work like controlling the machines.
No (forced) work would be acceptable if we have enough money to live
without work. But this needs another society where we have a real
democracy (without the gap between the rich and the poor, also with regard
to power).

> Not only that, we will get to the point, as consumers, that we won't want
> other humans to do these jobs, because they will suck at the jobs compared
> to the machines. Get a human to fix my car? No things, the jerk tries to
> rip me off, fails half the time to diagnose the problem correct, takes
> hours to fix something the machines can fix in minutes, and breaks
> something else in the process and makes me pay to fix it, because he claims
> it was already broken. And he makes me schedule the appointment days in
> advance. If I take my care to the AI dealer, they can fix even major
> problems, like replacing a transmission, in minutes, because the AI has 50
> special built arms for doing that work, and can work like 20 mechanics at
> once on the car. The cars end up being built in ways to shave money that
> means a human can't even do the work on them anymore.

OK.

> Human doctor doing surgery? No fucking way.

Then, doctors may have more time to speak with their patients, more about
their emotions (and problems with them) - AI as a machine should not be
better than humans (at least at first).

> Human cab driver? No way - the AIs have a near zero accident rate and are
> able to share "thoughts" with the other cars nearby so they all know each
> other's intentions instantly.

We will see perhaps.

> Human prostitutes? Not once you try an AI. Their skill is so far greater
> (and no chance of human diseases). Plus, the one you like so much,
> relocates into the body of local AI prostitute no matter what shop you go
> into, so you get the one you like the most no matter where you are - no
> need to "wait".

I don't know prostitutes, but when love is an aspect, a human should make
more sense.

> Humans won't be able to work - the entire notion of "working for a living"
> will go right out the window once these advanced AIs become dirt cheap.
> Whoever owns the most machines, will be the one with all the wealth. If
> you fail to get on board early, by buying and owning the first machines,
> you will be screwed. The guys that get in first, will use the machines to
> take over all markets - and will build bigger and smarter machines, to make
> all the investment decisions for them. The world will quickly become dominated
> by a few huge privately held, AI corporations, that don't have a single
> human working in them.

I hope we can stop such economical war(s)...

I have recently heard that the idea of the American dream ("dishwasher
to millinaire") is questions more that ever because of economical
problem e.g. of overindebted house owners.
Aren't you from the USA and can say if this is true?
(More critical people for a better society...)

> I suspect there will be a real danger that future generations will learn to
> like the AIs better than they like other humans. They might not even want
> to have other humans around them, when they could instead, have their "AI
> friends".

Also I. Asimov has predicted it but only partially (such a society may
die sooner or later).

> What happens when one of the richest AI barons decides he doesn't really
> like humans at all, and he's gained so much wealth and power, he just takes
> over the whole world with an AI army, and kills everyone except a handful
> of human slaves he keeps around in his "zoo"? Why share the resources of
> the planet with billions of other humans, when he can have it all for
> himself, and his 10 closest friends?
>
> To prevent a path like that from happening, society will have to make some
> changes.

Right. And we AI developers can influence it, e.g. by not giving the AI
the whole spectrum of emotions - and having the rest forbidden (like bad
weapons are).

>>> And in that definition, I choose to rule out the biological functions
>>> humans are hired to do, like donate blood, and only include the tasks
>>> that a machine with the right control system should, in theory, be able
>>> to do.
>>>
>>> We don't need to know how the brain works to solve this problem, but we
>>> do need to build a machine that is as good as the brain - and odds are,
>>> by the time we solve this problem, we will at the same time, have
>>> figured out most of how the brain works.
>>
>> I am not so sure about it. We are able to fly without knowing how a bird
>> or insect can do it.
>
> Yeah, but most of what makes a bird fly, we did figure out - mostly just
> airfoil fluid dynamics combined with power to weight ratios. What we
> didn't figure out, is their control system, but that's just the AI problem
> again.
>
> My real point there is I think the answer to the AI problem is actually
> very simple - and it's so simple, that once we do figure it out so we can
> build machines, it will become fairly obvious what the brain is doing and
> how it's doing it.

Who knows. As long as we have no proof of it, other ways also have a good chance.

>>>>> The reason I think we have made so little progress in all this time
>>>>> is because most people working on the problem don't (or didn't)
>>>>> believe human behavior was something that could be explained by a
>>>>> learning.
>>>>
>>>> You mean that they are working only on weak AI?
>>>
>>> Ah, I totally missed the fact that you used the word "strong AI" in
>>> your subject and that you might actually have been asking about the
>>> mind body problem.
>>
>> I don't think that we have a mind body problem.
>
> Well, I don't think there is one, but we have the problem that a good
> number of people are still confused by it - including some scientists
> studying the brain, and engineers, trying to build AI. It's causing some
> work effort to be misdirected.

A lot of people think that risky economics is fine... what can we others
do against it...
We can mainly try to make it better.

>>> I don't believe in the strong vs weak AI position. Humans are just
>>> machines we are trying to duplicate the function of.
>>
>> Strong and weak AI are not completely different for me. Weak AI is
>> something we already have, which uses normal computer programs.
>> Strong AI is the goal for intelligence which is quite as good as our
>> human intelligence but not necessarily in the same way.
>
> That's the AI definition of "strong AI" - which is fine by me. The original
> definitions of strong AI came from the philosophers and is a reference to
> the mind body problem. In that context "Weak AI" is a machine that acts
> exactly like a human, but isn't conscious - it has no subjective
> experience (a philosophical zombie). "Strong AI" is a machine that both
> acts like a human, and is conscious.

Hm, in the first lines of Wikipedia for "strong AI" I cannot find a hint of it:
"Strong AI is artificial intelligence that matches or exceeds human
intelligence — the intelligence of a machine that can successfully
perform any intellectual task that a human being can.[1] It is a primary
goal of artificial intelligence research and an important topic for
science fiction writers and futurists. Strong AI is also referred to as
"artificial general intelligence"[2] or as the ability to perform
"general intelligent action".[3] Science fiction associates strong AI
with such human traits as consciousness, sentience, sapience and
self-awareness.

Some references emphasize a distinction between strong AI and "applied
AI"[4] (also called "narrow AI"[1] or "weak AI"[5]): the use of software
to study or accomplish specific problem solving or reasoning tasks that
do not encompass (or in some cases are completely outside of) the full
range of human cognitive abilities."

Perhaps the usual definition has changed.

>>>>> The problem is that the type of machine the learning algorithm must
>>>>> "build" as it learns, is unlike anything we would hand-create as
>>>>> engineers. It's a machine that's too complex for us to understand in
>>>>> any real sense. So to solve AI, we have to build a learning
>>>>> algorithm, that builds for us, a machine, we can't understand.
>>>>> Building working machines is hard enough, but building a learning
>>>>> algorithm that is supposed to build something we can't even
>>>>> understand? That's even harder.
>>>>
>>>> Hm, you knows... I am not a fan of rebuilding the brain respectively
>>>> its neural structures where the details really cannot be understood in
>>>> every detail.
>>>> But you are right, it is not necessary to understand all details
>>>> precisely .
>>>>
>>>>> I think in the end, the solution of how these sorts of learning
>>>>> algorithms work, will be very easy to understand. I think they will
>>>>> turn out to be very simple algorithms that create through experience
>>>>> machines that are too complex for any human to understand.
>>>>
>>>> Could the be symbolic (in opposite to neural) in your mind?
>>>
>>> Well depends on what you mean by "symbolic". Digital computers are
>>> symbolic from the ground up (1 and 0 symbols) so everything they do,
>>> including neural nets, are symbolic at the core.
>>
>> That are not the symbols I think of.
>
> No, most people don't think that way. I'm weird.

1 and 0 are symbols on a very low level which is too far away from the
high abstract level of intelligence, of learning etc.

When I am driving a car I don't think of physical forces like the
centrifugal force (or even precise formulas for it), I only think of
consequences (and possible actions) for me (and the car and other
macroscopic things) when I drive through a curve quite fast.

>>> The "symbols" that make up our language (words) are not a foundation of
>>> the brain, they are a high level emergent behavior of the lower level
>>> processing that happens.
>>
>> OK, "symbols" has different meanings or intentions. Every word is a
>> lingual symbol with which we associate more or less (other) items (other
>> symbols, emotions etc.).
>>
>> I think about a "stronger" (than "weak") AI which can act with and learn
>> symbols like words. How far such a way to AI may work, I don't know.
>
> Right, to me a symbol is a class of patterns that can be detected by a
> sensory system, that is unique from other symbols, in a set.

Also "symbol" seems to be seen in different ways.
The usage I have learned is: in symbolic systems, knowledge is
represented explicitly, in contrast to subsymbolic/neural
systems/networks with (only) implicit knowledge.

Burkart

Burkart Venzke

unread,
Aug 14, 2011, 5:03:17 PM8/14/11
to
>> What kind of intelligence do you mean? Human? Natural (animals)? Or also
>> artificial, for example for a Turing test?
>
> I don't know that it is meaningful to say there *are* different kinds of
> intelligence. Though human intelligence does seem to differ from other
> animals, it only seems to do so by degree.

The problem is that we have natural (human and animal) intelligence
(though its definition is difficult or even impossible, something like a
top-down problem) whereas we try to create *artificial* intelligence (a
bottom-up problem). So we try to match (partly) our natural and our
artificial ideal of intelligence.

> I expect whatever is
> happening in neurons that expresses itself as intelligence is happening
> for all creatures, and could be made to happen in machines.

Hm, do computers have real neurons?
In my mind, "intelligence" should be interpreted as in the Turing test:
only the result of a black box should be relevant.

>>> Again, we are on the wrong path; going forward is not
>>> progress.
>>
>> Going forward cannot be wrong, only the kind of way on which you are
>> going can be wrong.
>
> If "forward" on your current path circles around to meet itself, it is
> wrong.

Sure.

> If "forward" on your current path leads you away from your
> desired destination, it is wrong.

And why do you expect us to be away from our desired destination?
Or don't you? (But what otherwise then?)

>>>> Can you imagine something intelligent which can not learn? I cannot!
>>>
>>> I can imagine something that can learn and is not intelligent.
>>
>> That is not the question, please review the question above!
>
> Please think about my answer more, too.

Learning is not everything for an intelligence, you are right, it is
(for me) only a necessary condition ("sine qua non" is leo.org's
translation for the German "notwendige Bedingung").

> To directly address your
> question, yes, I can imagine intelligence without learning, just as I
> can imagine that a system can contain true statements that are not
> provably true within the system. Whether or not intelligence requires
> learning, results in learning,

"Results in"... what "learning" do you speak of?

> or is equivalent to some forms of
> learning *still* depends on a good definition of intelligence.

What can be equivalent to learning? Learning means e.g. collecting and
processing new data, how could this be substituted?

Burkart

Burkart Venzke

unread,
Aug 14, 2011, 5:11:51 PM8/14/11
to
@Doc O'Leary:

I agree to casey and his arguments that learning is necessary for
intelligence.

Burkart

Curt Welch

unread,
Aug 14, 2011, 5:18:00 PM8/14/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110814104638.925$N...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > I can't imagine anything I would call intelligence without learning
> > (even trying to ignore my definition of the term).
>
> I agree it is foreign to our own way of thinking, but that doesn't mean
> we can quickly dismiss it. Consider that learning allows us to
> incorporate new information into our internal representation for
> processing. Now try to imagine a system with *so* many sensors that
> such an internal representation is unnecessary to its function; updated
> information is always readily available to it as easily as we recall a
> memory. It seems to me that such as system might still be able to
> intelligently process that direct input. In many ways, it is *more* the
> behaviorist-style intelligence you champion than anything involving
> learning.

Well, sure I can imagine such a machine. We can see it for example in a
machine that plays tic tac toe. It has full knowledge of the state of the
environment. Easy enough to also imagine god-like knowledge where
something knows the entire state of the universe.

But where is the intelligence in that? How is fully knowing the state of the
tic tac toe board intelligent?

It still must act. So what controls how it acts? It could have its
entire behavior set hard coded into its hardware - such as a tic tac toe
program that just had a big table that said for this board position, make
this move.
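Something like this toy sketch (the two table entries are made up - a real
one would need an entry for every reachable position):

  # A hard coded, non-learning tic tac toe "player": a fixed table
  # mapping each board position to a move. Same input, same output,
  # forever - nothing it experiences ever changes the table.
  MOVE_TABLE = {
      "         ": 4,        # empty board -> take the centre square
      "    X   O": 0,        # ...and so on for every position it can meet
  }

  def act(board):
      return MOVE_TABLE[board]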

Such a machine, built to look like a human, could act exactly like a human
for its entire life - assuming it had enough hardware to code that entire
life time of behavior. So externally, we could not tell the difference
between it, and a human. And of course, that would assume however the
system was coded (whoever coded it) was able to perfectly predict the
entire life of this human.

So from that perspective, you are right, that enough sensory perception,
combined with enough hard coded behavior, could make anything "look"
intelligent. It can even "look" like it has memory, when it has none. But
unless there are some really strange things going on in this universe,
there's no indication that humans are anything like that.

> > So a machine without any learning, is a machine with no long term
> > memory at all and no ability to change it's behavior based on past
> > experience. It will act exactly the same way, when presented with the
> > same environmental conditions, every time.
> >
> > I can't imagine any form of machine that is limited to type of
> > behavior, that I would consider intelligent.
>
> Think about it, though. Wouldn't intelligent behavior be *exactly*
> that? In the absence of learning, it *should* be doing the smartest
> known thing under identical conditions. This even explains the
> emergence of intelligence evolutionarily!

Sure. And some animals, like insects, seem to be mostly that - hard coded
little machines with little to no learning. They do "smart" things, because
evolution made them do it. But evolution is itself, a reinforcement
learning process that has long term memory - stored in the DNA. Evolution
itself is intelligence - just a very slow learning type of process.

So we explain why smart behaviors emerge from evolution, because it's a
reinforcement learning process with memory (descent with modification -
descent is "memory") - and all reinforcement learning processes are
intelligent.

This is why some people want to argue for intelligent design - it's because
humans were designed by an intelligence - the intelligence of evolution.

> > Though it could be
> > very human-like for the short term, the instant it showed it had no
> > memory of what was going on 10 minutes ago, it stops looking
> > intelligent to me.
>
> What, then, of humans who suffer similar memory problems? Are they
> necessarily unintelligent? Or, really, what of the countless normal
> people we encounter only briefly in our daily lives? I think there are
> many circumstances where intelligent behavior can be demonstrated
> without any learning.

Right. Remember what I'm arguing. I'm arguing that a reinforcement
learning process is the source of all intelligence. But a reinforcement
learning process is a behavior search process, that attempts to constantly
improve behavior - making it smarter and smarter (seeking higher rewards).

So you can talk about a given behavior and how "smart" that behavior is, or
we can talk about the process that created the behavior.

Most AI research has been working in duplicating the "smart" behaviors,
instead of duplicating the process that gave rise to those behaviors in the
first place.

I argue that the behaviors alone, without the underlying process constantly
improving them, is not "true" intelligence, even though the behaviors are
very "smart". But that's just my definition of "true intelligence".

Machines without the underlying learning process to improve their
behaviors, would keep doing the same things forever. If they were built to
make spider webs, they would make spider webs the same way their entire
life. They would not improve their behaviors over time, and learn to build
a better spider web, or learn a better way to catch flies. To get the
creative component of intelligence, you have to have that underlying
process of constant improvement called reinforcement learning.

We can call making spider webs "smart", but without the ability to improve
what it was doing, I would not choose to call it intelligent.

Spiders didn't get "smart" because they are intelligent. Spiders got smart
because the process of evolution that built them is intelligent.

A chess program didn't get "smart" because it is intelligent. It got smart
because a process happening in a human brain was the intelligence that
created them.

TD-Gammon got smart on its own. Its ability to play a good game of
Backgammon was not programmed into it by an intelligent creator. Instead,
intelligence itself was programmed into TD-Gammon, and TD-Gammon used its
own intelligence, to learn how to make really good moves in the game of
backgammon.

Reinforcement learning processes are known for doing things their creators
never thought of doing. That's true intelligence. Most AI projects don't
have that property. Most AI projects only do the things their intelligent
creator thought up for it to do.

By creating a reinforcement learning process, the machine is doing what we
tell it to do of course, but what we are telling it to do, in that case, is
"be intelligent". It's figuring out how to act on it's own.

casey

unread,
Aug 14, 2011, 6:08:44 PM8/14/11
to
On Aug 15, 7:18 am, c...@kcwc.com (Curt Welch) wrote:
> So you can talk about a given behavior and how "smart" that
> behavior is, or we can talk about the process that created
> the behavior.

And they are not the same thing which is why I am critical
of your notion that learning = intelligence when we use the
words to describe two different things.

The word "intelligence" is applied to how "smart" it is not
to the process "learning" that created it. Intelligence is
the painting not the painter. To say learning = intelligence
just confuses the reader who knows others mean it to be
smart behavior. We may not know the actual source of the
smart behavior only that it was at some point in time the
result of a learning process.

The problem is the word "intelligence" has been given a
status it doesn't deserve and thus everyone wants to
"explain" it when there is nothing to explain. What is
to be explained are the self-organizing mechanisms that
produce behaviors we call intelligent.

I think the high status of the word is also related to
the fact many see it as meaning "being alive".


> Most AI research has been working in duplicating the
> "smart" behaviors, instead of duplicating the process
> that gave rise to those behaviors in the first place.

True. But that is itself a learning process.

> Spiders got smart because the process of evolution that
> built them is intelligent.

I would say it was a learning process. No need to equate
it with the end product, intelligent behavior.

> TD-Gammon got smart on its own. Its ability to play
> a good game of Backgammon was not programmed into it by
> an intelligent creator. Instead, intelligence itself
> was programmed into TD-Gammon, and TD-Gammon used its
> own intelligence, to learn how to make really good moves
> in the game of backgammon.

I would say a learning algorithm was built into td-gammon
that resulted in intelligent behaviors which remain smart
(intelligent) behaviors even though the learning algorithm's
ability to make it smarter has leveled out. The learning
system moved to some stable state (maximized reward) and
stopped. All behaviors can be reduced to systems moving
to stable states.

Learning systems have been around since the start of AI.
The first game playing learning system I think was Samuel's
Checker Player.

http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node109.html

JC

Curt Welch

unread,
Aug 15, 2011, 12:15:22 AM8/15/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110812185214.929$b...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:

> Again with the circular definitions. You have yet to tell us where the
> *intelligence* is in your precious learning.

Yes I have. You just don't understand how what I'm telling you could
possibly be true.

> What is the operation they
> accomplish that *necessarily* results in extracting order from chaos?

Well, you should give me a specific example of what you are talking about
when you say that. I'll then respond to your specific example. You first
need to explain to me what type of "order from chaos" you think is a
property of intelligence.

Humans tend to organize stuff. They line books up on a shelf, they put
things in neat piles. They put things in neat rows. They sort and classify
stuff. That's one example of "order" that emerges from human
intelligence. That sort of order happens simply because such arrangements
make life easier - it allows us to get higher future rewards. If I can't
find the thing I want, like my car keys, then I can't go get food, and my
future rewards are reduced. So we organize our lives, in order to maximize
our future rewards. Reinforcement learning machines should be expected to
do that, because they try to maximize rewards.

I don't know of any good examples of current reinforcement learning
algorithms that demonstrate anything near human organizing behavior, so I
don't have anything to show that a reinforcement learner can do such
stuff. But it certainly follows that if the learning system was "good
enough" we could expect such behavior. It's that magic "good enough" that
is currently missing.

> > You also seem to be using the word "behavior" in a way different from
> > me. And you are using it in a way that irritates me when I see people
> > do that.
>
> Sucks to be you. Maybe you should stop seeing demons in every shadow.

I see ghosts! :)

> Language is used to communicate, and most people use "behavior" to
> communicate outward physical action. If they want to talk about inward
> mental action, they use words like "think". The difference has nothing
> to do with dualism, which only you keep returning to.

The very fact that we have the word "behavior" that does not include
"thinking" is because of the illusion of dualism.

"thinking" is just as much a physical action as waving our hand, and it's
an action we are just as equally aware of hand waving. So why are these
two physical actions segmented into such separate domains in our language
when they aren't separate at all in the physical world? It's because of
the illusion of dualism.

The meanings of the words in our language are based on a foundational belief
in dualism. You can't speak English correctly, and talk as if you do not
believe in dualism. The language isn't structured correctly to allow us
to do that.

Now, the problem I've run into, is that even when people reject all the
dualistic nonsense like souls, and firmly believe in materialism, they
still have a nasty habit of thinking and talking dualistically - often with
no awareness they are doing it.

Such fundamental confusion, causes some people at times, to have very odd
views of things like self, and intelligence, and awareness, and what the
brain is likely doing. I don't know you well enough to understand what
sort of thinking you are doing yet. But your "we haven't uncovered the
root of intelligence" raises red flags for me that you might have ideas
that are tied up in this dualism confusion problem. So far however, how
you have responded to my questions, doesn't seem to indicate that to be true.

> > And how the body moves, is "it's behavior". To suggest that AI took
> > the wrong turn, by studying behavior, is to fail to grasp that the only
> > thing to study, is behavior.
>
> I never said it went wrong by studying behavior, but that it went wrong
> by studying the *products* of intelligence (learning, language, etc.)
> rather than the *source* of intelligence. Much like a Turing machine
> gives us the starting point for computability, AI needs a starting point
> for intelligibility. If you can't offer that, if most current research
> isn't moving in that direction, everyone is on the wrong path.

Yeah, well I think humans are reaction machines trained by
reinforcement, and that alone is the starting point for understanding
everything important about intelligence, and about human behavior.

> > The question to ask, is which aspect of behavior is the important part
> > to study. So is "winning the chess game" the import part of the
> > behavior, or the "how the move choices were selected" more of the
> > important part?
>
> It isn't even the "how" that matters! Intelligence is about the *why*
> of behavior.

Reinforcement learning is the why of human intelligent behavior. :)

> > > Nope. Science does not work that way.
> >
> > You seem to be mixing up the processes of science and engineering. AI
> > is mostly a engineering problem. Not science.
>
> Again, you demonstrate the fundamental misstep that has resulted in very
> little real progress for over 50 years.
>
> > I say Skinner was right, and the "science" was complete back then in
> > terms of understanding what type of machine we were dealing with.
>
> Anybody who thinks science is ever complete is not doing science.
> Anyone who take a position of unfalsifiability is not doing science. If
> you don't start from a scientific understanding, you are unlikely to get
> to an engineered solution.
>
> > > If you really are taking steps,
> > > you could show them. Like I said, where is your chess playing system
> > > that became *intelligent* about the game of chess?
> >
> > TD-Gammon. (for Backgammon)
>
> So you claim, but where is the intelligence?

It figured out, on its own, how to play backgammon at the level of the
best human players. It figured out things about the game that no human
had ever figured out, including the person who wrote TD-Gammon.

It did this because it was a reinforcement learning program. And
reinforcement learning programs search for behaviors and, at times,
find things that no one else has ever found. That's where creativity
comes from.

> I'll grant you that its
> play differed from commonly accepted strategies, and even played
> "better" in that regard. But you still have yet to define what about
> that difference was *intelligent*!

Again, if you look at the root cause of all examples of intelligence,
you find a reinforcement learning algorithm at the core. I claim that
the word "intelligence" has always been used to label the behavior of
reinforcement learning processes, but that people just didn't
understand that's what they were talking about.
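The core of TD-Gammon was temporal-difference learning. As a rough
illustration only (the real program used TD(lambda) with a neural
network over board features and trained by self-play; this toy uses a
plain table on a 5-position random walk), here is the bare update it
was built on:

    import random

    # TD(0) value learning on a tiny random walk: positions 0..4,
    # start in the middle, reward 1 for reaching the right end.
    N = 5
    V = [0.0] * N          # V[0] and V[N-1] are terminal and stay at 0
    alpha = 0.1

    for episode in range(5000):
        s = N // 2
        while s not in (0, N - 1):
            s2 = s + random.choice((-1, 1))
            reward = 1.0 if s2 == N - 1 else 0.0
            # Move V(s) toward the reward plus the estimate of the next state.
            V[s] += alpha * (reward + V[s2] - V[s])
            s = s2

    print([round(v, 2) for v in V])  # interior values settle near 0.25, 0.5, 0.75

TD-Gammon's "insight" into backgammon was nothing more exotic than
millions of updates of that general shape, applied to a network instead
of a table, with its own play supplying the experience.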

> > > Where is the
> > > followup step that is *intelligent* about board games in general?
> >
> > That step hasn't been taken yet. It needs to be taken. But my belief
> > is that we are one small engineering step away from a machine that will
> > demonstrate all the "next steps" you are waiting to see.
>
> Belief has no part in this. Show evidence. Make predictions. Or admit
> to being on the wrong path, and being no closer to really understanding
> intelligence than we were at the beginning of AI.

The prediction is that someday, before long, someone will create a new
reinforcement learning algorithm that will produce life-like and
intelligent-like behavior in our machines and robots. And not long
after that, the world will be filled with highly intelligent machines
working for us. And from that, the people doing brain research will
finally understand what they need to be looking for when they try to
figure out how the brain works (aka how it implements a reinforcement
learning system), and that will lead to great breakthroughs in
understanding the brain.

And, at the same time, it will become obvious to everyone, that
reinforcement learning algorithms are actually the core technology of all
types of intelligence, and once that is well understood, people will simply
start seeing "intelligence" as being "a reinforcement learning algorithm".

That's my prediction. It will prove my point if it comes true. If we
uncover the basis of intelligence, and it's not a reinforcement
learning algorithm, but something different, where reinforcement
learning is just some side-feature of intelligence, then my position
will be falsified.

My position is perfectly valid science, it makes predictions, and it's
falsifiable.

> > I have no issue acknowledging I might be on the wrong path. I just
> > assign such a low probability to that "might" that I feel it's valid to
> > act as if it were zero.
>
> It should be near 100%, or else you're just engaging in wishful
> thinking. Everything becomes a swamp of selection bias when you're so
> convinced that you're unlikely to be wrong.
>
> > > That is something *every* quack claims about their miracle cure or
> > > perpetual motion machine or whatever woo they're pushing.
> >
> > Yes, if you apply the quack-o-meter to the way I talk, I come out with
> > a very high score. But the same happens anytime that someone
> > understands some truth before it becomes socially accepted. The people
> > that talked about the earth being round when everyone knew it was flat
> > scored very high on the quack-o-meter as well.
>
> Not if they actually showed their reasoning. You, on the other hand,
> have been derisive when it comes to any sort of reasonable scientific
> approach to your sacred cow of learning. You continue to proclaim you
> know the truth, yet continue to refuse to show how you *know* it to be
> true. It is meta-ironic that the topic in question is intelligence.

I know because there's a huge wealth of evidence piling up to support it.

To start with, if we believe in materialism, and assume the brain is just a
biological machine controlling our actions (which I do), then we can just
look at humans as black boxes and do a little simple reverse engineering.
We know that humans are not born with hardware that allows them to play a
good game of chess. It's unreasonable to suggest that evolution gave us
a chess playing module (though it's perfectly reasonable to suggest, as
John likes to, that we have modules which evolved for other purposes
that are useful for playing chess as well). But for chess, and the
millions of other behaviors a modern human picks up in his lifetime
(like the very large set of behaviors needed to write Usenet posts
about AI), we know the brain had to reconfigure itself over its
lifetime to turn itself into a machine that could do all the things an
adult human can do in modern society.

We can ask, how does that work? How do you start with a black box that
does not have the hardware needed to drive a car, or speak English, or
type messages on a computer, or operate a toaster, and, given enough
time, the black box reconfigures itself to do these things?

So we know the machine can change itself over time, and it would be
nice to know how it does that (we can guess for example that it grows
new connections between neurons and changes the weights of the
connections - but how it does it is not important yet). We just know it
somehow can change itself.

So knowing this, another interesting question shows up. If the machine
has two options of how to change itself, which one does it pick? Does
it grow a connection from neuron A to neuron B? Or does it grow a
connection from A to C?

Without knowing anything about how it changes itself, we can simply ask,
how does it decide which way to change itself?

We know how to make machines that can change themselves. Our computers
can change their own code. But if we wanted to make a computer turn
itself into a chess playing machine, how on earth would it do that?
Humans do that - they start out not being chess playing machines, and
then they CHANGE THEMSELVES into chess playing machines.

In order to make a machine act like a human, this is a fundamental question
we have to answer - how does it decide how to change itself?

Why did the brain wire itself to play chess? Why did it not make the
decision to turn itself into a chess piece chewing machine? It could
wire itself to chew up, and spit out, every chess piece it could find.
But it didn't. Why?

We have figured out how to turn a computer into a chess playing machine.
But we make use of our own intelligence to do it. And it's a highly complex
process. A lot of very complex code gets written to make the computer play
chess.

How on earth does a brain turn itself into a chess playing machine? Is
there a little super smart man that knows all about how to play chess,
and is using his intelligence to rewire the brain? Not very likely.

There must be some mechanism in the brain that makes it change, and
that mechanism is what is responsible for the brain starting out as a
baby's and, years later, having been reprogrammed into a grand master
chess playing machine.

What sort of process could possibly do this? What sort of process,
which is probably simpler than the behavior the brain ends up creating,
could possibly do this? How can complex intelligent human behavior
emerge from something that started out simpler?

The answer to these questions is that something simple has to be
directing the change. It must be something simple that is able to
evolve complexity with purpose.

The only thing I know of that can do that is a reinforcement learning
algorithm.

To explain human intelligent behavior, you have to answer this
question. And the only answer I've ever found is a reinforcement
learning process.

This is one of the 1000's of points that makes me believe I'm right.
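To be clear about what I mean by "something simple directing the
change", here is a deliberately crude Python sketch. The two
"connections", the made-up reward function, and the numbers are all my
own invention, not a model of any real brain; it only shows how a dumb
rule ("keep random changes that raise the reward") can decide whether
to strengthen A->B or A->C, with no little man inside knowing which one
is "right":

    import random

    w = {'A->B': 0.5, 'A->C': 0.5}   # two candidate connections

    def reward(weights):
        # Stand-in environment: in this invented world only the A->B
        # pathway actually helps; the learner is never told this.
        return (2.0 * weights['A->B'] - 0.5 * weights['A->C']
                + random.gauss(0, 0.05))

    for t in range(2000):
        key = random.choice(list(w))
        trial = dict(w)
        trial[key] += random.gauss(0, 0.05)   # try a small random rewiring
        if reward(trial) > reward(w):         # keep it only if it pays off
            w = trial

    print(w)  # 'A->B' should have drifted up and 'A->C' down

The rule doing the deciding knows nothing about chess, or about A, B
and C; it only knows whether the reward went up.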


> > If you want to continue to explore and better understand all the
> > mysteries of the human brain, then that is an far more open ended
> > process that can continue long after this simpler AI goal is reached.
>
> I don't care about the human brain. I care about intelligence. That is
> present in all kinds of animal brains (to varying degrees), and
> certainly *seems* like it should be able to be created in machines (to
> varying degrees). All I'm looking for is an approach that defines
> intelligence such that it *allows* those varying degrees, instead of
> this haphazardly stabbing in the dark at replicating what a human thinks.

Intelligent behavior is the product of reinforcement learning
processes. There is an infinite variety of different reinforcement
learning processes, and of the quality of behaviors they can produce.

As I've said before, evolution itself is a reinforcement learning
process. It's a large intelligent process which created all the complex
life we know about. It makes the life forms do "smart" things (things
that help them survive). And it's an intelligence that created yet
another intelligence, the human brain. It built into the brain a
reinforcement learning process, and in doing that, one intelligence has
created another. And when humans build other reinforcement learning
processes, we are creating more intelligence.

I'm not aware of any of our reinforcement learning processes creating yet
another new reinforcement learning process. That would be fun to make
happen.

> > > No, I think the word "learning" means "learning", as in incorporating
> > > data to a system. I do not circularly define a procedure, *call* it
> > > learning, and then proclaim that context to be the only one valid for
> > > a discussion of "learning" or "intelligence".
> >
> > Lighten up dude. Words have different meanings to different people.
>
> Says the guy who gets irritated over the common usage of "behavior".

:)

> It
> remains an issue that you use circular definitions. Stop doing that and
> things will become downright light and breezy.

:)

> > > Again, no. I simply haven't defined it circularly, or pretended that
> > > it inherently has anything to do with intelligence. As a child, I
> > > learned a lot of things about Santa Claus. Absolutely none of that
> > > speaks to how I intelligently think about Santa Claus.
> >
> > I have made no circular definition. Stop being stupid.
>
> I should have done it sooner, but now is a good time to add ad hominem
> attacks to the list of logical fallacies you have engaged in during this
> discussion.
>
> If your definition weren't circular, you'd be able to say what it was
> about your precious learning that *necessarily* resulted in
> intelligence. If you can't demonstrate, why should anyone think you're
> on the right path to even weak AI?

If I'm talking to someone that doesn't know what intelligence is, how
exactly would I show them that a reinforcement learning process was an
example of the thing they don't know how to define?

Since you don't know what intelligence is, and claim you can only tell
it by looking at it, I can't logically explain it to you. All I can do
is wait until the required strong RL algorithm is created (by me or
someone else), and then demonstrate what sort of behavior it produces
to you, and let you decide if it "looks like intelligence" to you.

But since I don't have that yet, what can I do now for you? You tell me.

> you
> are unable to back up that claim with any hard (or even soft!) science.

Skinner did lots of science. He believed all human behavior was the
product of operant and classical conditioning. Is his science not at
least "soft" science to you? All his work backs up what I'm saying
here. Or more accurately, I'm just backing up what he thought (because
I agree with it 100%).

> By writing what I wrote, I was making the point that we (or at least I)
> can still discuss things we learned and then subsequently learned were
> not true. Somewhere in there is an intelligence that is independent of
> the learning.

You are using the word "learning" there in a fairly typical way we use it
informally, but not in a way that is at consistent with my user - which is
the far more precise concepts of operant conditioning, or reinforcement
learning.

In my use of the concept of "learning", everything you do, and think,
is a result of your learning. So if you "decide to talk about how the
box you thought was empty really wasn't empty", that entire act of
talking about that, in that way, is itself a learned behavior. None of
your behavior is "separate from" what you learn. Everything you do, and
every thought that happens in your mind, is a LEARNED BEHAVIOR that has
been shaped over your lifetime by operant conditioning.

To suggest there is an "intelligence" at work separate from your "learning"
is to fail to understand how I'm using the term "learning".

> That is what I'm after, while you seem content to spin
> your wheels for decades. In fairness, so has most of the other AI
> research, but that offers little comfort.

Yes, decades of wheel spinning is not very good evidence to support my
view. :)

> > I don't mind people having that view. People are ignorant, it's
> > normal. We all are. But you should be honest with yourself in the fact
> > you have no good tools to evaluate wither my idea holds water or not.
> > You reject it using your gut instinct, instead of educated reason.
>
> On the contrary. I'm the one who has offered up reason and science, and
> you are the one who championed blind faith and belief.

Rah rah blind faith and belief!

> Here is my tool:
> you have yet to offer any quantum of thinking (that's brain behavior to
> you :-) that differentiates intelligent learning from unintelligent
> learning.

Humans don't have any unintelligent learning in them. You have to show me
a specific example of what you consider to be unintelligent learning, and
then I can respond to that example.

> > > I understand science. I understand evidence. I don't understand
> > > hand waving. I don't understand "oooooh, you just wait and see how
> > > clever I am!"
> >
> > Sure you understand it. You just want to see evidence that I say,
> > can't and won't exist, until after AI is actually solved. You want to
> > see the plain fly, before you will believe it's possible for a plane to
> > fly, because you don't understand the flight dynamics of airfoils and
> > the issues of power to weight it implies.
>
> Funny. You have not offered up the equivalent of flight dynamics or
> power ratios. That is *precisely* what I'm asking for. Instead, you
> keep up with the hand waving.

I've offered up all the work of Skinner to support my position. That's not
"hand waving".

> > I understand thing about the problem domain of reinforcement learning,
> > that you don't understand (as shown below). And these thing about
> > learning I do understand, is what allows me to understand how close we
> > are, despite the fact that not a single machine looks "intelligent" to
> > a laymen yet.
>
> So you continue to claim has been the case for decades. If you really
> do understand so much, you would be able to make hard predictions rather
> than your continued "needle in a haystack" evasions. So make up your
> mind: do you understand how close we are, or are you just randomly
> grasping at nothingness?
>
> > You ignorance of the subject is showing. Your understanding of
> > learning seems to be limited to the highly naive school-boy view of
> > "filling up with facts".
>
> If your understanding is greater, you could correct me. That's all I've
> been asking you to do. Show me where intelligence is necessarily the
> result of what you call learning.

It's not. It IS INTELLIGENCE. It's not the result of it. :)

> From all you state, the impression
> you give is that you're just on a random walk over the entire problem
> space.

:)

> > The desire to build a generic reinforcement learning machine that on
> > it's own, is the foundation of intelligence at the human level, is a
> > desire backed by all the work of science that tells us that is exactly
> > what a human is.
> >
> > > More to
> > > the point, you haven't established that intelligence fundamentally
> > > requires the same processing rate and environment of a human.
> >
> > My definition of intelligence defines it NOT to need that. My
> > definitions says that the TD-Gammon program is intelligent. My
> > definition says that biological DNA based evolution is another example
> > of intelligence in this universe.
>
> Right; your definition is circular. The system learns the way you've
> circularly defined learning, and so it serves as your circular
> definition of intelligence. Now break that circle and tell me *why*
> TD-Gammon is intelligent. What, buried in all the learning, makes its
> behavior notably intellligent?

> Does it really understand Backgammon
> differently, or did it merely *play* the game differently? Why did its
> progress stall 20 years ago, and what does that say about the approach
> for direction future "generic" systems should take. Why haven't future
> systems become more generic?

No one has figured out how to make them more generic yet. They
understand in principle what it needs to do, but the only way they
currently understand it comes with the curse of dimensionality, which
makes it impossible to build. It's the same as understanding sorting,
but only as a bubble sort, but not knowing how to write a quick sort,
but knowing that to solve the problem, someone has to figure out how to
write a sort that runs faster.
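To put the analogy in code (the timings are only illustrative and will
vary by machine), both routines below produce the same sorted result,
but one of them is hopelessly slow at scale:

    import random, time

    def bubble_sort(xs):
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    data = [random.random() for _ in range(5000)]

    t0 = time.time(); slow = bubble_sort(data); t1 = time.time()
    t2 = time.time(); fast = sorted(data);      t3 = time.time()

    assert slow == fast   # same answer, wildly different cost
    print('bubble: %.2fs   built-in: %.4fs' % (t1 - t0, t3 - t2))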

Reinforcement learning is stuck in the same place right now. People
know the brain is using some magic "quick sort" technique to do
learning at a level of performance no one knows how to build a machine
to equal. We know it's a reinforcement learning machine, and we know
it's performing at levels we would otherwise think were impossible if
not for the fact that we can watch it do it.

So whether its "tick" is something we can implement in software, no one
knows, because know one knows what the trick is yet. Maybe it takes a very
different type of massively parallel machine to make it work at that
performance level, and our real problem, is wasting time trying to do the
impossible with our computers. People doing brain research are trying to
figure that out.

I'm fairly confident the "trick" can be implemented in computer software
and show substantial performance improvement over any of our current
systems. It may still require lots of parallel processors to duplicate the
power of the human brain, but I think whatever "trick" it is using to solve
learning problems in high dimension data streams, we will be able to use in
our computers.

How long will it take to figure out the trick? That's the question that
will tell us how long it will take us to build human level AI. I think
it's a trick that will be figured out and implemented in computers
shortly (decades). But if it's some trick implemented in a complex
molecular machine that really can't be implemented in electronics, then
who knows how long it will take to duplicate? 100 years? 500?

> Likewise, it is my contention that there is some minimum definition for
> intelligence beyond computabiity.

Meaning intelligence can't be implemented on a computer if that is true?
Or are you thinking something else there?

> Even though it may not be formalized,
> it is possible to ask ourselves if a learning system might meet such a
> definition. Absolutely nothing in what you've presented indicates that
> what you have is somehow *inherently* intelligent.

Well, what happens is that we look at humans and try to reverse
engineer how they work. When we do that, the "type" of machine we
determine is at work creating their intelligent behavior is a
reinforcement learning machine. It's just straight reverse engineering
with scientific methods - which is what Skinner did a long time ago,
and this is the conclusion he came to. He died believing he was
correct, despite the fact that a large part of society rejected his
theory.

And they rejected it for the same reason you are rejecting it: though
the theory seemed to fit the data, no one could use the theory to build
a machine that acted intelligent. People tried to, but their attempts
didn't work, and many took that failure to conclude that the theory was
invalid.

But that's not a valid conclusion (it's a fine hunch, but not a proven
conclusion).

And here we are, many years later, and a lot of people, other than just
myself, are starting to revisit the idea of reinforcement learning as the
foundation of all human intelligent behavior, because they are wondering if
it might after all be the answer.

There are people that agree with my view that what we are missing here
is not the right foundation, but just some engineering "trick" to
improve performance, like "quick sort" is an "engineering trick" to
improve the performance of sorting over a bubble sort.

Since we don't know what the "Trick is" (actually I think I have a clue,
but it's unproven), then we can't know for sure if we are headed in the
right direction.

> > I don't think we are dealing with hardware limitations. I think the
> > hardware to "solve AI" existed 50 years ago. The stuff we have today
> > is so fast, and so cheap, that once the algorithm is understood, we
> > will almost instantly blow past human intelligence with our machines.
>
> Another easy claim to make when you don't understand intelligence. It
> may *indeed* be the case that hardware is not the bottleneck, but that
> doesn't amount to a prediction when you have nothing to base the
> assertion on.
>
> > I've written 1000's of posts here over the past many years. I've
> > covered all these questions many many times in my past posts. Have you
> > read them?
>
> No. Honestly, you're normally in my kill file (since 11 Dec 2008) for
> being so unreasonable in your ramblings. I see little has changed.

:) Once you understand my position, there is really no need to read it
over and over and over and over again. :)

I don't remember you posting in the past. Is it just my faulty memory, or
are you posting under a different name? Or has it just been a long time
since you last posted here?

> > Do you really want me to write a million lines of text here to explain
> > it all to you?
>
> All I want is for you to start making sense. You could start by talking
> about intelligence coherently (in a philosophical sense, at the very
> least). It shouldn't take a book to informally give a perspective on
> thinking that distinguishes it from information, from data, or from
> learning.

Are you asking me to explain what I think "thinking" is? We haven't gone
there yet.

> That's all I'm looking for: the
> infinitesimally small quantum of difference that divides intelligent
> systems from unintelligent ones.
>
> > That's right. But I have done all the work and I'll I'm telling you
> > here, in this thread, this is where I ended up after all that work.
> > I'm not trying to explain 40 years of thinking on my part in one thread
> > here.
>
> But you should be able to. If there was any rational, scientific method
> at all behind your choice, it should be easy to drill down to the
> specific discoveries you have made that *does* define intelligence.

Everything Skinner discovered.

> Instead, you still admit to being in the dark, yet proclaim with
> confidence how far you've gotten. That's just wrong.
>
> > > Our mistake is in discarding examination of that
> > > path in the face of exponential technological growth. AI did, as you
> > > describe, become more about clever tricks of A rather than a sober
> > > examination of I. Funny thing, though, is that you're on the wrong
> > > side of that divide, but you fail to see it.
> >
> > No, I see it. I just happen to believe I have the right answer. You
> > think I just have yet another pointless clever trick.
>
> Not at all. Again, you may indeed be right, but until you can accept a
> definition of intelligence that is *distinct* from learning, you don't
> really *know* that you are right.

I'm trying to understand what type of machine a human brain is that
allows it to perform all these intelligent actions (and intelligent
thinking, since you might not realize that when I say "actions" I
include thinking as a type of action). The best machine descriptor I've
ever found to answer that question is "it's a reinforcement learning
machine". I will not accept another definition of "intelligence" unless
someone can show me the description of a class of machines which fits
better than the one I already have.

So far, no one has produced ANYTHING that comes close to explaining
human behavior, whereas the description "reinforcement learning
machine" fits all known facts about human behavior. It's not like we
have 5 contenders for the answer and I just have my favorite. We only
have one contender, and on the other side, a lot of people, like you,
walking around with no alternative.

The only piece missing from proving reinforcement learning is the right
answer is an engineering trick to boost learning performance in a high
dimension environment to that of the brain. If such a trick exists,
then reinforcement learning is a perfect fit. It explains what type of
machine the brain is and why that machine produces all the complex
behaviors it does.

> You will only have stumbled into the
> solution, but there is no reason to believe you have the foundation to
> recognize it as the correct solution before you stumble off in another
> direction. You're the stopped watch that is right twice a day. That is
> fundamentally the wrong approach.
>
> > Let me side track and explain what I see needs to be done. To create a
> > robot, that acts like a human (including with thinking power), it needs
> > to have high bandwidth parallel sensory inputs, and lower, but still
> > high, bandwidth parallel outputs controlling all its effectors.
>
> Does it? By that measure, a *human* couldn't act intelligently unless
> it had that kind of hardware.

You misread my point. I was actually saying "has the same sensory
bandwidth of a human".

> And yet I would wager that even in a
> crappy 8-bit Atari world, a human would be able to demonstrate
> intelligence far beyond your expected product of those inputs and
> outputs. I would wager that a human could drive a car by remote
> control, and do so better, by using *far* less data than is at the
> disposal of robotic cars today.
>
> Nor is it obvious to me that that outputs are necessarily lower
> bandwidth than inputs.

Human output is. That's a well known fact. The amount of information
flowing out of the brain to the muscles is far less than the
information flowing into the brain from the sensors. It's not a
requirement for intelligence in my view, it's just how humans are
built. This isn't hard to grasp. The information flowing into your eyes
and ears is far in excess of the information flowing out of your
fingers.

> One of the hallmarks of intelligence is to
> *create* information. I could easily envision a system that had a set
> of sensory inputs that were far smaller than its set of outputs.

Sure that's fine as well. Just not how a human is built. I was talking
about how to build a machine that duplicated HUMAN performance. We need to
duplicate their sensory bandwidth and motor bandwidth if we want to get
close to actually duplicating (and not exceeding) their performance.

> > The system is basically storing a complex sort of "average" of all past
> > learning experiences on top of each other. And the resulting behavior
> > that emerges, is the sum total of all past learning experiences (biased
> > by their similarities to the current context).
>
> I don't have an "average" idea of Santa Claus that guides how I think
> about him.

Of course you do. How many different versions of Santa Claus have you
seen in pictures or had described to you? Isn't your view some sort of
"average" of all that information? You aren't going to try and argue
that you have perfect photographic memory of every picture of Santa
Claus you have ever seen, so that your image of Santa Claus is actually
a perfect memory of the 285 pictures you have seen? Or are you going to
argue that you remember one of them perfectly, and that's what you
think of when you think of Santa Claus?
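Here's a crude sketch of what I mean by an "average" rather than a
stored copy. The features and numbers are invented purely for
illustration - pretend every picture you ever saw boiled down to a few
crude features (redness of suit, beard length, roundness):

    # Each "picture" reduced to invented features:
    # (redness of suit, beard length, roundness)
    pictures = [
        (0.90, 0.80, 0.70),
        (0.70, 0.90, 0.90),
        (0.95, 0.60, 0.80),
        (0.80, 0.85, 0.75),
    ]

    prototype = [0.0, 0.0, 0.0]
    n = 0
    for pic in pictures:
        n += 1
        # Incremental mean: nudge the stored prototype toward each new example.
        prototype = [p + (x - p) / n for p, x in zip(prototype, pic)]

    print(prototype)  # near the feature-wise mean; no single picture is kept

Your "Santa Claus" works something like that blended prototype, not
like a filing cabinet holding 285 separate photos.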

> There was never a point in time when his existence was even
> a 50/50 proposition. If one person tells me a ball is blue and another
> person tells me it is red, I don't "learn" that it is purple. What I do
> learn may in fact be quite independent of the particular data that
> either person gives me.
>
> > The complexity of the behavior is only limited by the effective "size"
> > of this "context sensitive memory look-up system". And humans have a
> > fairly large one, which is able to produce a very large set of learned
> > behavior, that we spit out, as we go though each day in our lives.
>
> A big database of context memory is not intelligent. If you claim that
> it is, you still need to *show* where the intelligence enters into the
> processing.

It can explain (from an engineering perspective) how we produce the large
array of behaviors we do - that is, how our "programming" is stored in the
brain, to make us do all these things.

And if we have explained from an engineering perspective _all_ our
behavior, we have explained our intelligence.

> Without that, all you have is undifferentiated feedback.
> If you want to figure out the color of the blue/red ball, you may have
> to make leaps of logic that go *beyond* just what the conflicting data
> tells you. Unless you can define that, all the hand waving about
> learning isn't going to get you an intelligent machine.

What you call "leaps of logic" can be described simply as the behaviors
that flow out from the large associative memory store.

This sort of conjecture is certainly hand waving. It makes up a
mechanism that performs magic, and then answers all questions by
pulling out the magic mechanism. But it's not totally unjustified
magic. There are plenty of reasonable parallels (ANNs that do the same
thing, just not as well).

So again, the only real "magic" we are talking about here, is the same
"magic" we have working in some limited domains, being extended to a wider
more generic domain.

If someone can create a generic, non domain specific, version of the same
magic that is in TD-Gammon, then such "magic" is a perfect explanation for
all human intelligent behavior.

The reason I believe it's correct is because it perfectly answers all
the questions as to what type of machine the human brain is. But it can
only work if someone can actually create it. But since it's a perfect
answer, I think the odds that it can be created are basically 100%, and
that the odds that this is the type of machine the brain is are also
100%.

> > I don't believe there is anything more to human "intelligence" than
> > that.
>
> And that is why you're not seeing significant results.

Could be. Or it could just be, that this is a hard engineering nut to
crack, even though we know exactly what type of nut it is we are trying to
crack.

> > Lets say we had a case in the past, where we hit a button 5 times, and
> > it worked 5 times. But then later, we hit again, and it didn't work.
> > What did we learn by that? We learned that some things stop working
> > after a while. We develop the concept of "wearing out".
>
> No. All you learned was that it didn't work the 6th time. It is the
> application of *intelligence* that has us coming to conclusions as to
> the potential reasons why that happened. Until you can define *that*
> aspect, there is no difference between coming up with concept of
> "wearing out" and the concept of "it is God's will".
>
> > By suggesting what a "truly intelligent" agent would do, you are
> > ASSUMING, it has a wide background of PREVIOUS EXPERIENCE (beyond what
> > I gave in the example). YOU CHANGED THE CONDITION OF THE EXAMPLE.
>
> It is *you* who is making the assumption that previous experience alone
> is what would guide the thinking of an intelligent agent. I have no
> previous experience with FTL travel, nor do I have any expectation that
> it is even possible, but without any material reinforcement at all I can
> *explore the idea* of many such impossible things.
>
> Likewise, it makes sense to me that, even in a world where an
> intelligent agent finds everything to be 100% reliable, it might
> *explore the idea* that what it has learned is not true. Therefore, I
> would assert that an intelligent agent would *never* conclude that
> something is 100% reliable, which directly refutes your description of
> how your learning system would operate.

No, it doesn't, because your conjecture is just that, pure conjecture,
not science. You have no evidence to support the idea that an
intelligent agent would "doubt the truth". The only evidence you have
is what you or a human might do. Humans will "doubt the truth", but
that's because THEY HAVE A LONG HISTORY OF PAST EXPERIENCE OF BEING
FOOLED!

If the ONLY experience an intelligence had was those three examples,
what would it conclude? We have no way to test an intelligence that
way, because we have no way to limit the experiences of any human to
such a small set. Your untestable conjecture DOES NOT REFUTE my
position. It's just something you made up that sounds good to you.
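If you want it concrete, here is what a learner with nothing but its
own history to go on would compute for the button example. The numbers
and the add-one smoothing are my choice for illustration, nothing more;
the point is that whether it ever says "100% reliable" depends entirely
on the machinery its history feeds, not on some extra ingredient:

    successes, failures = 5, 1   # worked five times, then failed once

    # Estimate from nothing but the observed history:
    raw = successes / (successes + failures)

    # Rule-of-succession style estimate: never exactly 0 or 1, so this
    # learner never concludes "100% reliable" however consistent its
    # history has been.
    smoothed = (successes + 1) / (successes + failures + 2)

    print(round(raw, 3), round(smoothed, 3))   # 0.833 0.75

Either way, everything it "concludes" comes from the handful of
observations it actually has.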

> More evidence that you are
> wrong,

Evidence? Really? :)

> but I'm sure you'll find a convenient way to hand wave it away.

Doesn't even need hand waving this time. I'll save that for next time.
That last point is just unsound.

casey

unread,
Aug 15, 2011, 5:20:35 AM8/15/11
to
On Aug 15, 2:15 pm, c...@kcwc.com (Curt Welch) wrote:
> [...]
> It's the same as understanding sorting, but only as a bubble
> sort, but not knowing how to write a quick sort, but knowing
> that to solve the problem, someone has to figure out how to
> write a sort that runs faster.

> Reinforcement learning is stuck in the same place right now.
> People know the brain is using some magic "quick sort"
> technique to do learning at a level of performance no one
> knows how to build a machine to equal.

A quick sort vs. a bubble sort isn't anywhere near the difference
between current RL programs and what the brain does.

> So whether its "trick" is something we can implement in software,
> no one knows, because no one knows what the trick is yet.

Or there is no single "trick". There may be many innate "tricks"
built up over an evolutionary time scale.

One of those "tricks" in predators is stereo vision. This will
appear in humans at about three to four months.

I would say that other abilities such as chasing a rabbit through
a real forest develop as a result of fine tuning innate motor
generators we are all born with.

Contrary to your bias toward believing this is easy enough to learn in
real time (less than an hour for an antelope or bush turkey), it is in
fact hard. Calculus is an easier skill to have; it only appears hard
because it isn't innate and has to be learned by a slow serial symbolic
thinking process, instead of the fast parallel hardware used for stereo
vision or running through a forest.

JC

Curt Welch

unread,
Aug 15, 2011, 9:27:16 AM8/15/11
to
Burkart Venzke <b...@gmx.de> wrote:
> Am 13.08.2011 02:53, schrieb Curt Welch:
> > Burkart Venzke<b...@gmx.de> wrote:
> >> Am 08.08.2011 16:36, schrieb Curt Welch:
> >>> Burkart Venzke<b...@gmx.de> wrote:
> >>>>> It's been said that the field of AI is still trying to define what
> >>>>> AI is. There is still no general agreement on what the brain is
> >>>>> doing, or how it's doing it.
> >>>>
> >>>> It is not necessary to know how the brain work if we define AI in
> >>>> another way.
> >>>
> >>> Well, to me the "real AI" we are after is making machines that can
> >>> replace humans at any task that currently only a human can do.
> >>
> >> Really any task?
> >
> > For sure.
>
> My first question is: Do you really want this? A substition for us?
> My second question: Couldn't we have intelligence without a possibile
> (risk of) substitution?

Well, yes, I want machines to be able to do even more than they can do for
us now. But I don't want them to replace my friends in my social life. I
want everyone to lead a retired (or super-rich) life style, so we can hang
out and have fun all day doing whatever we can dream up to do, while the
machines take care of all the work for us.

> >> Also as total substitution for us human? For example, I
> >> think of love and other emotions, of getting children...
> >>
> >> For me, strong AI should have an intelligence comparable to ours but
> >> without human emotions (which otherwise could aim to big problems such
> >> as our replacement).
> >
> > No, I think that's a sci-fi fallacy.
>
> Do you think of Delta (start trek next generation)?

Do you mean Data? Yes, I think the type of character they created there
is typical of the sci-fi idea that machines can be intelligent but
still won't have emotions. Our machines don't have emotions now because
they are not intelligent. I don't think you can make a machine
intelligent and not give it emotions at the same time. And that's not
because I'm just defining intelligence that way.

> For me, I. Asimov is a good guide.

Yeah, I think it's got a lot of important lessons to think about with
AIs in it. The three laws are bogus; that's not how we will motivate
them. But dealing with the potentially dangerous side effects of trying
to control something that is intelligent, for our own good, is a
complex process. That is, trying to enslave intelligence is tricky
business. Some people like Tim think it ultimately can't be done and
the AIs will end up as the dominant force.

> > I think it's not just important to
> > add in emotions, I think it's impossible to create intelligence without
> > it.
>
> Ok, something like emotions is necessary, also for me.
> But I think that it is not necessary to copy all human emotions.

Right, there will be lots of options to tune the personalities of the
AIs to create things that are very un-like humans, but yet still
intelligent. I strongly suspect most of the use of AI will be for
specialized machines that don't look or act anything like a human. The
AI that drives our car is more likely to act like a GPS than a human,
even if it has lots of intelligence built in.

Most use of AI in our technology will be far less intelligent than
humans. You don't need a big powerful human-like intelligence just to
wash a car or cut the grass - but you do need some intelligence for
many of these fairly simple tasks. We might have that human-like
house-AI that is the master butler/servant that takes care of all the
stuff related to the house for us - directing and controlling all the
simpler AI machines to do stuff like clean and maintain the yard, and
repair the AIs and house as needed.

I think most of society will look like it does now - filled with humans
using fancy toys and vending machines, not human-like AIs. We will just
have more "smart" machines that do stuff like drive our car for us, and
talk to us at the drive-through window, or at our table at the
restaurant. They can talk to us through a speaker at the table; they
don't need to have a body like a waiter and actually walk out to talk
to us. The food can be delivered on a cart, it doesn't need to be hand
carried by a biped walking machine (though it could).

I think for the most part, the AI machines will be in the background of our
society, as our machines are generally in the background now.

> Because of my first question above, I think of "emotions" to fulfill
> "enough intelligence", intelligence we want without problems and risks.
> Do you want an AI hating humans, extremely being a suicide attacker?

Nope I don't want that, and I think we can build them so that's not the
case. But that sort of thing is always a risk. But I think long before
the AIs take over all the work, the first thing we will do, is extensively
study the risks of using intelligent machines so we will understand how to
make use of them while keeping ourselves safe.

> > Reinforcement learning machines are emotion machines. So if you write
> > an RL algorithm, you have already built emotions into a machine.
>
> Some especially chosen are ok.
>
> > If it doesn't look like emotions to you, that's only because you
> > learning machine isn't good enough yet, not because you have left
> > something fundamental out.
>
> How about AE = artificial emotions?
>
> > What do you think love is other than a drive to be attracted to
> > something?
>
> Do we need loving AI (yet)?

You bet. One of the most attractive features is love. We love those
things that love us.

That's one of the big things about pets that make them so attractive to
people. They love us for taking care of them. There's going to be a huge
market for AI pets that truly love their masters.

Not to mention we like servants that love us. Ever been to a store where
the people remember who you are, and smile and call you by name when you
come in? Doesn't that make the place a lot more attractive to you? (they
are trained to do that exactly because it makes customers come back). If
we think the people there like us, or love us, it makes the place far more
attractive to us.

Do we want the machines that serve us to love us? You bet we will.

Imagine going to a drive-through McDonald's in a strange town, and the
AI taking your order actually remembers your name, remembers the last
chat it had with you, asks you about the thing you last talked about -
and generally acts like you just made its day better by stopping by to
visit. We are going to build the AIs to make them love us because we
like to be loved. I don't mean just "act as if they loved us", but
actually wire them to truly love us - they get rewards for taking care
of us and making us happy, and that makes us attractive to them.

The bigger complexity to deal with is the fact we might end up liking the
AIs more than other humans.

> > Reinforcement learning machines must assign value to everything in
> > order to work. Every sensation, every action, has value assigned to it
> > (aka every state or every state actuation pair). The assigned
> > (calculated) values are what drive every action choice the machine
> > makes. What makes us seek out the company of one person, and avoid
> > another?
>
> Should an AI (as a machine) avoid persons?
> OK, one may be more helpful for it (seek out his company) but avoid...?
> What would you say if computer don't want to be used by you any longer?

Some AIs I guess will probably be built to avoid humans - like the AIs
that do cleaning and maintenance work. We don't want our life bothered
by their work, so they will likely be programmed to sneak around and do
their work when there are no humans around, and to pack up and leave
the area when humans show up. But trying to program them that way might
be tricky, because you don't want them to see a human as a negative
reward (damn humans won't let me get my work done). That's the type of
thing that builds anger and resentment that could lead to an AI going
postal. We just have to be careful how we engineer a (don't bother the
humans) function into them.

> > What makes us eat one food,
> > and avoid another? it's just the values our learning system has
> > assigned to all these things.
>
> Yes, finding good "food" and avoiding perhaps destroying is reasonable.
>
> > Love is nothing more than a high value our system has assigned to a
> > thing.
>
> Human, natural love is too complex to described with a single value.
> For example you can love and hate one person.

Right. I was just being over simplistic to make the point.

Just because RL is driven by a single top level reward signal doesn't
mean that RL is a "single value" system. It's not. It's just the
opposite. It's a nearly infinite value system. Everything an RL system
can sense or do is assigned a DIFFERENT value. It's all based on the
system estimating the value of everything in its world. It would not
assign a single value to something like a person. The value of the
person would be dynamically changing based on what the person was
doing, or what they were wearing, or even the context of where the
person was in the environment. It could assign a very high value to a
person at one moment, and that value could drop quickly when the person
says "I want to break up". Or, you look at the person, and a high value
gets assigned, and then you start to have thoughts about that thing
they did that you didn't like, and the value drops (because of what you
thought about, not because of anything the person just did). The values
are a highly dynamic thing that is constantly changing based on the
context of the environment. But for someone you "love", the dynamically
changing values are generally higher than for someone you don't have
such attraction to.
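A trivial sketch of what "dynamically changing values" means in code -
the features and weights here are invented for illustration only, not
taken from any real system:

    # The "value" of the same person under different contexts.
    weights = {
        'person_present': 2.0,
        'said_break_up': -3.0,
        'remembering_bad_moment': -1.5,
    }

    def value(context):
        """Sum the learned weights of whatever features are active now."""
        return sum(weights[f] for f in context if f in weights)

    print(value({'person_present'}))                            #  2.0
    print(value({'person_present', 'said_break_up'}))           # -1.0
    print(value({'person_present', 'remembering_bad_moment'}))  #  0.5

Same person, three different values, because the value is computed over
the whole current context, not looked up from a single stored number.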

> We should care of what we talk about: High values assigned to an
> aspect/situation is fine, love in this sense a special notion.

I don't think love is anything but the behaviors that result in us
toward the things that end up with very high evaluations.

Also keep in mind that the physiology of humans is tied into their
intelligence in various ways. Happiness and sadness might be basic RL
functions (high and low expectations of future rewards), but in humans
these things are tied into us to make us smile and cry. Fear provokes
things like heart rate change. So even though fear might start as a
basic RL feature, it's tied into the body to make the body respond in
ways that are useful for survival. And when the body is wired to the
brain to respond in these ways, it creates a feedback path, so our
intelligence becomes aware of the changes in our body (we can sense we
are crying or smiling). And that in turn causes us to react to the fact
that we noticed our body just did that.

So a big part of our emotional response is not just the basic RL
feature I'm claiming emotions come from, but the full complexity of how
a human body reacts to such situations, and then how we, in turn, react
to how our body is reacting.

For example, we sense some fear, and we start to breathe differently.
We sense we are breathing differently, and that causes more fear to
develop (we scare ourselves).

So full human emotional reactions are far more complex than the RL
systems that start it all, because of the complex feedback that can
happen.

> > Fear is just the prediction of pending loss of rewards (pending high
> > probability of lower future rewards).
>
> That is not human fear resp. it can be far more complex.
>
> > All our emotions can be explained in terms of the values our
> > reinforcement learning system is assigning to the things we can sense
> > and the actions which are selected based on those values.
> >
> > It's impossible to build a reinforcement learning machine that is not
> > emotion based.
>
> Would you accept that this emotions are artifical emotions, not the same
> humans have in quantity and quality?

No, they are the same as in humans. Except, as I talked about above,
human emotions are also tied into all these complex physiological
changes. So an AI that duplicated a lot of human emotions would also
have to duplicate stuff like crying (or things similar to it).

Have to stop here, I'll try and respond to the rest of the post later...

Doc O'Leary

unread,
Aug 15, 2011, 12:40:47 PM8/15/11
to
In article <20110814115915.239$8...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> >
> > I can't pretend to be as smart about intelligibility as Turing was when
> > it came to computability. All I know is that seems to be some
> > small-but-extraordinarily-important difference between how an
> > intelligent system changes data into information. Maybe that seems
> > obvious, but maybe it so obvious that we haven't really thought about
> > how that difference should be impacting how we explore building AI
> > systems.
>
> Well, you are skirting on dualistic ideas again there.

You are the one who keeps bringing up dualism. My point is only that we
have some unknown factors that aren't being addressed. Just because I'm
seeing the gaps doesn't mean *you* have to go trying to fill them with
God.

> I just wanted to throw that out because I think it causes a lot of
> confusion in people, and might be causing some of that "mystery" in your
> thought process when asking yourself just what is "intelligence".

No. As I keep saying but you keep failing to understand, your dualism
"mystery" is merely an unknown. But it is a *critical* thing to examine
if you have any hope of advancing AI. If you are unable to define
intelligence in such a way that you can *explain* the
small-but-extraordinarily-important difference, you haven't got the full
answer. You can continue to hand wave and champion RL all you like, but
you have given *no* results that show you have any new insight into what
fundamentally defines an intelligent system.

> The "eureka" effect, just comes from the fact that we have almost no idea
> why we do the things we do. We rationalize all the time, to try and
> justify and explain our behavior, but for the most part, we have no idea
> and can not predict what we will do with any fine grain of accuracy.

Clearly you have never experienced a "eureka" yourself. And, no, I
don't mean that in any mystical way. I mean that in an introspective
way that allows the person to explain how things "clicked" at that
moment of discovery.

> So the first part of the "eureka" effect is simply the fact that our brain
> will do something, we didn't see coming - and it does it, by leveraging an
> entire life time of experience.

That in no way explains anything. You continue to hand wave where
someone with answers would have real answers. I'm after that underlying
*mechanism* of intelligence that seems to be most powerfully
demonstrated by a eureka moment. Whether we saw it coming or not, a
system full of data somehow got triggered into turning it into
*profoundly* useful information. If you can't explain that in a
detailed way, you're on the wrong path.

> But we recognize the value of
> out thoughts

How? That is what you continue to hand wave away, but it seems
critically important when it comes to defining intelligence. In the
face of volumes of data, much of it conflicting, you have to explain
*how* we recognize the value of our thoughts, and why that assigned
value *necessarily* leads to intelligent behavior.

> > And so you understand why I'm not sold on learning as being equivalent.
>
> Sure. But I don't think you have thought through all the ways that
> reinforcement learning explains every little last aspect of human ability
> and human behavior, like I have. Nor do I suspect you have spent as much
> time studying the problem of reinforcement learning itself, as I have.

No, I am indeed not engaged in your kind of biased cherry-picking. Your
sacred cow is worthless to me if it can't explain what intelligence
fundamentally is. Clearly intelligence is not just learning (and
learning may not even be an immediate factor).

> So you don't see the massive number of parallels between human behavior,
> and the expected behavior of a strong reinforcement learning machine.

Neither do I see any advancements from your camp after decades of work.

> > Just as there are systems that are equivalent to a Turing Machine when
> > it comes to computability, there *may* be learning systems that are
> > equivalent to intelligibility. We won't know until we get at the
> > nitty-gritty of what intelligence itself really is, though.
>
> Yeah, true. But as it happens, I have gotten to the nitty-gritty is of
> what intelligence is so I'm already there. You just don't have enough
> knowledge yet to understand why this is the nitty-gritty of intelligence.

Hardly. You continue to offer nothing more than a circular definition
of intelligence. If you actually had a clue, you'd offer a definition
that allowed for equivalent non-RL implementations.

Doc O'Leary

unread,
Aug 15, 2011, 1:19:08 PM8/15/11
to
In article
<f344c5ce-7357-4cad...@d8g2000prf.googlegroups.com>,
casey <jgkj...@yahoo.com.au> wrote:

> So even if a person loses their memory or an animal is 100%
> innate in its sensible actions the behavior itself has been
> the result of a learning (or evolutionary) process.

Ah, yes, but then we *again* get to the difference between learning and
intelligence! If I grant you that evolution is a form of learning, you
still have to explain what is *intelligent* about it. Darwin did that,
to a degree, by developing the theory of natural selection. I even see
it in the social/world brain that you've brought up, where "selection"
happens by cultural, economic, and other forces. But I'm not so certain
we really understand how that selection process happens at the human
level.

> So without learning no new behaviors we might call intelligent
> can ever arise.

I'm not sure that matters. Just like Darwin's evolution only explains
the Origin of Species and *not* the origin of life, the fact that
intelligence exists seems like it has little to do with the way learning
happens in the system. I can easily imagine that an alien creature
could exist that acts intelligently at a neural level, but only learns
at a genetic level. That could very well be the roots of human
intelligence, too.

> So in that sense all intelligent behavior involved learning
> at some stage. Without learning there is no intelligence.

Again, I think that is the kind of perspective that has really hampered
the advances in AI. Yes, raw data has to become useful information at
some point, but isn't intelligence more the meaningful things we then
*do* with that information? Isn't there an *origin* of learning, and
wouldn't intelligence (in some form) necessarily precede that?

Doc O'Leary

unread,
Aug 15, 2011, 1:55:53 PM8/15/11
to
In article <20110814171800.065$P...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> But where is the intelligence in that? How is knowing the state of the tic
> tac toe board fully intelligent?

As I said, I understand how it's a foreign concept, but let it sink in.
It's *not* "knowing the state" that is intelligent, it is what the
system *does* with that knowledge that, to me, should better serve as an
indication of intelligence. The intelligence is in the processing, not
in the data being processed.

> It still must act. So what controls how it acts? It could have it's
> entire behavior set hard coded into it's hardware - such a a tic tac toe
> porgram that just had a big table that said for this board position, make
> this move.

Right, just like modern chess programs have databases of opening and end
game moves. So, as I have argued from the start, intelligence needs to
be defined in a way that *does* address the issue of "what controls how
it acts".

> Such a machine, built to look like a human, could act exactly like a human
> for its entire life - assuming it had enough hardware to code that entire
> life time of behavior. So externally, we could not tell the difference
> between it, and a human. And of course, that would assume however the
> system was coded (whoever coded it) was able to perfectly predict the
> entire life of this human.

Indeed, that is the proposition of the Turing Test. And I would agree
with that *as a test*, but that is in *no* way useful in trying to
determine how to actually design the system in the first place!

> Sure. And some animals, like insects, seem to be mostly that - hard coded
> little machines with little to no learning. They do "smart" things, because
> evolution made them do it. But evolution is itself, a reinforcement
> learning process that has long term memory - stored in the DNA. Evolution
> itself is intelligence - just a very slow learning type of process.

But "learning" on the evolutionary level doesn't seem like it needs to
be at all tied to intelligence in an individual creature. Species
survive and reproduce with wildly different degrees of cognitive
abilities. It may be that what we dismiss as instinct *should* properly
be studied as intelligence without learning.

> I argue that the behaviors alone, without the underlying process constantly
> improving them, is not "true" intelligence, even though the behaviors are
> very "smart". But that's just my definition of "true intelligence".

Sure, a smarter, "true" intelligence is a preferred end goal, but nobody
is going to get there without understanding what intelligence really is.
The whole notion that it is just learning is plainly wrong, because
nowhere in the hand waving do you describe how raw data turns into
useful information.

> Spiders didn't get "smart" because they are intelligent. Spiders got smart
> because the process of evolution that built them is intelligent.
>
> A chess program didn't get "smart" because it is intelligent. It got smart
> because a process happening in a human brain was the intelligence that
> created it.

You conveniently side-step how humans got intelligent rather than just
being "smart" like a spider. My approach actually addresses that,
because I'm looking for intelligence as a property that is on an
independent scale.

Curt Welch

unread,
Aug 15, 2011, 2:01:24 PM8/15/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110814115915.239$8...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> > >
> > > I can't pretend to be as smart about intelligibility as Turing was
> > > when it came to computability. All I know is that there seems to be some
> > > small-but-extraordinarily-important difference between how an
> > > intelligent system changes data into information. Maybe that seems
> > > obvious, but maybe it is so obvious that we haven't really thought about
> > > how that difference should be impacting how we explore building AI
> > > systems.
> >
> > Well, you are skirting on dualistic ideas again there.
>
> You are the one who keeps bringing up dualism. My point is only that we
> have some unknown factors that aren't being addressed. Just because I'm
> seeing the gaps doesn't mean *you* have to go trying to fill them with
> God.

Dualism is caused by an illusion explained by classical conditioning. That's
what I "fill" it with (when it turns out it needs filling), not "God". :)

> > I just wanted to throw that out because I think it causes a lot of
> > confusion in people, and might be causing some of that "mystery" in
> > your thought process when asking yourself just what is "intelligence".
>
> No. As I keep saying but you keep failing to understand, your dualism
> "mystery" is merely an unknown. But it is a *critical* thing to examine
> if you have any hope of advancing AI.

I have examined it and explained it. It's an illusion. The model the
brain builds of reality has an error (the model fails to match the
environment) due to a lack of sensory data needed to explain the error.

> If you are unable to define
> intelligence in such a way that you can *explain* the
> small-but-extraordinarily-important difference, you haven't got the full
> answer.

I have explained it. I believe I have the (mostly) full answer.

> You can continue to hand wave and champion RL all you like, but
> you have given *no* results that show you have any new insight into what
> fundamentally defines an intelligent system.

Right, I need to create the missing piece of technology. All I have so
far, are lots of clues indicating the technology exists in the brain, but
no proof that it does exist in the brain, or that it is something that
could be programmed into a computer. But the clues are compelling (at
least to me, and to a few others).

> > The "eureka" effect, just comes from the fact that we have almost no
> > idea why we do the things we do. We rationalize all the time, to try
> > and justify and explain our behavior, but for the most part, we have no
> > idea and can not predict what we will do with any fine grain of
> > accuracy.
>
> Clearly you have never experienced a "eureka" yourself. And, no, I
> don't mean that in any mystical way. I mean that in an introspective
> way that allows the person to explain how things "clicked" at that
> moment of discovery.

Well there's no way for me to know if what I've experienced is what you are
talking about for yourself. But I've certainly had moments of sudden
insight and clarity that I consider a eureka moment.

> > So the first part of the "eureka" effect is simply the fact that our
> > brain will do something, we didn't see coming - and it does it, by
> > leveraging an entire life time of experience.
>
> That in no way explains anything. You continue to hand wave where
> someone with answers would have real answers.

Yes, from the very beginning of this thread (and for the past many years I've
posted to cap), I have made it clear I did not have the working technology
to SHOW YOU. I only have insight into what TYPE of technology I believe we
need to create in order to build an "intelligent" machine.

My position has not changed in any of these messages, so why you think I
have something more to tell you, I have no clue. Do you not understand my
position, or do you?

If you don't, I can keep answering questions and writing stuff to explain
my position. But the position is not going to change. I think the core
technology we need to create to make an intelligent machine is a
reinforcement learning algorithm that operates in the high dimension real
time space the brain operates in.

> I'm after that underlying
> *mechanism* of intelligence

Yes, and so am I. And after looking for that underlying mechanism, I think
I understand what TYPE of mechanism it is. I think it's a reinforcement
learning algorithm. I know what it needs to do, but I don't know how it
manages to reach the performance level it does, so I don't know how to
build it. And that's the missing part I look for and try to figure out.

> that seems to be most powerfully
> demonstrated by a eureka moment. Whether we saw it coming or not, a
> system full of data somehow got triggered into turning it into
> *profoundly* useful information. If you can't explain that in a
> detailed way, you're on the wrong path.

Well, I can explain it in far more technical detail than the high level
hand waving I did above. But the basic message doesn't change when I take
the time to do that, because there is still the little "and magic happens
here" part that remains unresolved.

> > But we recognize the value of
> > our thoughts
>
> How? That is what you continue to have wave away, but seems critically
> important when it comes to defining intelligence.

Do you actually understand how any real reinforcement learning algorithm
works?

> In the face of
> volumes of data, much of it conflicting, you have to explain *how* we
> recognize the value of our thoughts, and why that assigned value
> *necessarily* leads to intelligent behavior.

Do you understand how reinforcement learning algorithms calculate values
and use them to create "smart" behaviors? If you do, that is the answer to
your question. If you do not, then do you want me to teach you about
reinforcement learning algorithms? Here's a good book for beginners:

http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html

If you don't, you need to learn that if you actually want to understand my
position. If you do know it, then I can answer your question based on how
reinforcement learning algorithms work.
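
As a minimal sketch of what I mean (a toy example invented here for
illustration - the environment, states, and numbers have nothing to do with
the brain, and only echo the kind of tabular methods the book opens with),
this is roughly what a reinforcement learner looks like. The agent only ever
sees states, its own actions, and a scalar reward, and the value table it
builds up is the only thing driving its "smart" choices:

import random
from collections import defaultdict

def run_q_learning(env_step, actions, episodes=300, alpha=0.1, gamma=0.9, epsilon=0.2):
    Q = defaultdict(float)                    # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit current value estimates, sometimes explore
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env_step(state, action)
            # temporal-difference update: nudge the value toward
            # reward + discounted best future value
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q

# Toy chain world (invented): move "right" three times from state 0 to reach the
# rewarding state 3. The learner is never told this rule; it only sees the reward.
def chain_step(state, action):
    next_state = min(state + 1, 3) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 3 else 0.0
    return next_state, reward, next_state == 3

Q = run_q_learning(chain_step, ["left", "right"])
print(max(["left", "right"], key=lambda a: Q[(0, a)]))   # should print "right"

Nobody hands it a rule or a heuristic; after enough trials the values alone
should make it prefer "right" in every state.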

> > > And so you understand why I'm not sold on learning as being
> > > equivalent.
> >
> > Sure. But I don't think you have thought through all the ways that
> > reinforcement learning explains every little last aspect of human
> > ability and human behavior, like I have. Nor do I suspect you have
> > spent as much time studying the problem of reinforcement learning
> > itself, as I have.
>
No, I am indeed not engaged in your kind of biased cherry-picking. Your
sacred cow is worthless to me if it can't explain what intelligence
fundamentally is. Clearly intelligence is not just learning (and learning
may not even be an immediate factor).

Yes, but my answer could be the answer. And then where are we? You are
stuck with someone (who you don't respect) showing you the answer, but you
are just unable to accept it, so you reject the truth without ever
understanding why it's the truth.

> > So you don't see the massive number of parallels between human
> > behavior, and the expected behavior of a strong reinforcement learning
> > machine.
>
> Neither do I see any advancements from your camp after decades of work.

Very little advancement for sure. Which is a big part of why most people
have stayed away from it - it looks like a dead end based on how much
advancement has (not) happened.

> > > Just as there are systems that are equivalent to a Turing Machine
> > > when it comes to computability, there *may* be learning systems that
> > > are equivalent to intelligibility. We won't know until we get at the
> > > nitty-gritty of what intelligence itself really is, though.
> >
> > Yeah, true. But as it happens, I have gotten to the nitty-gritty of
> > what intelligence is, so I'm already there. You just don't have enough
> > knowledge yet to understand why this is the nitty-gritty of
> > intelligence.
>
> Hardly. You continue to offer nothing more than a circular definition
> of intelligence. If you actually had a clue, you'd offer a definition
> that allowed for equivalent non-RL implementations.

If it's not RL, it's not intelligent. I can't allow for a non-RL
implementation because if it's non-RL, you have taken the intelligence out
of it.

What's a car? Can we create a high level definition for the essence of
what a car is? Let's say it's a self-propelled machine with a mechanical
power source, that uses that power to turn wheels, and make the car move
forward.

Should we allow for implementations of cars which don't have a power
source or wheels? Is a car body without an engine and without wheels still
a self-propelled vehicle? Not in my book.

Asking me to conceive of intelligence without RL is like asking me to try to
conceive of a self-propelled vehicle without an engine. Yes, I can imagine
a car moving around with no engine in it, but I know enough about physics
to know such a thing is impossible.

I believe the same thing is true of intelligence. If you take out the RL,
you have taken out the key component that makes it intelligent. Yes, I
can imagine a human acting intelligently without any RL engine in it, but I
know enough about this to believe (like with the non-engine car) such a
thing is impossible.

Clearly, you don't buy my story, but you must, by now, understand what my
position is right?

My position is speculation based on clues. It won't be proven true, until
the RL machine that acts like a human is actually built by someone. It
won't be proven false, until someone builds an intelligent machine that is
not RL based at the core.

Doc O'Leary

unread,
Aug 15, 2011, 2:20:00 PM8/15/11
to
In article <j29d6m$hd9$1...@news.albasani.net>,
Burkart Venzke <b...@gmx.de> wrote:

> >> What kind of intelligence do you mean? Human? Natural (animals)? Or also
> >> artifical for example for a turing test?
> >
> > I don't know that it is meaningful to say there *are* different kinds of
> > intelligence. Though human intelligence does seem to differ from other
> > animals, it only seems to do so by degree.
>
> The problem is that we have natural (human and animal) intelligence
> (though its definition is difficult or even impossible, something like a
> top down problem) whereas we try to create *artificial* intelligence (a
> bottom up problem). So we try to match (partly) our natural and our
> artificial ideal of intelligence.

Right, so I'm saying we *must* do that grunt work matching in the first
place in order for us to have any reasonable chance at solving the
problem. I want to get away from it being an issue of top-down vs.
bottom-up and treat it *independently* as a fundamental property of a
system in the same way that computability is a fundamental property.

> In my mind, "intelligence" should be interpreted as in the turing test,
> the result of a black box should be relevant.

It is relevant inasmuch as we *don't* have an AI that passes the Turing
Test. So we can continue along the road of shortcuts and tricks that
"cheat" the test, or we can turn back and properly get to the roots of
what defines an intelligent system (human or otherwise) in the first
place.

> And why do you expect us to be away from our desired destination?

That is my observation of results-oriented AI research. They pick a
sub-topic of AI to make their sacred cow, and then throw a lot of human
intelligence and hardware at it. They *do* solve that limited problem,
but in a way that results in the "AI effect", and very little real
progress when it comes to understanding fundamental intelligence.

> Learning is not everything for an intelligence, you are right, it is
> (for me) only a necessary condition ("sine qua non" is leo.org's
> translation for the German "notwendige Bedingung").

And my argument is that it is *intelligence* that is necessary to make
learning meaningful. Without the selection pressure it applies,
"learning" is nothing but an unsatisfying random walk.

> > To directly address your
> > question, yes, I can imagine intelligence without learning, just as I
> > can imagine that a system can contain true statements that are not
> > provably true within the system. Whether or not intelligence requires
> > learning, results in learning,
>
> "Results in"... what "learning" do you speak of?
>
> > or is equivalent to some forms of
> > learning *still* depends on a good definition of intelligence.
>
> What can be equivalent to learning? Learning means e.g. collecting and
> processing new data, how could this be substituted?

I'm talking about system equivalence, in the same way that universal
computability can be achieved by different systems. Clearly not *all*
learning systems are intelligent (or even achieve universal
computability), so an independent definition would have to be in place
before a researcher would be able to even say *any* learning system is
equivalent, let alone make the grand assertion that learning is the
source of intelligence.

casey

unread,
Aug 15, 2011, 4:03:06 PM8/15/11
to
On Aug 16, 4:01 am, c...@kcwc.com (Curt Welch) wrote:
[...]
> Well there's no way for me to know if what I've experienced
> is what you are talking about for yourself. But I've
> certainly had moments of sudden insight and clarity that
> I consider a eureka moment.

[...]

> ... why it is you think I have something more to tell you,
> I have no clue.

Like Paul on the road to Damascus, not like Archimedes’
discovery of water displacement, because he did have a clue.

>> I'm after that underlying *mechanism* of intelligence
>
>
> Yes, and so am I. And after looking for that underlying
> mechanism, I think I understand what TYPE of mechanism it is.
> I think it's a reinforcement learning algorithm. I know what
> it needs to do, but I don't know how it manages to reach the
> performance level it does,

By leveraging itself on evolved mechanisms. Evolution has
produced some complex bodies and can produce some complex
innate brain solutions not possible in the lifetime of
an individual.

> ... the little "and magic happens here" part that remains unresolved.

Because you are biased to look for a single solution that will give
instant results and nature indicates no such solution exists except
to someone that wants to cherry pick the "evidence".

It might be unresolved because there is no magical instant solution.

> You are stuck with someone (who you don't respect) showing you the
> answer, but you are just unable to accept it, so you reject the
> truth without ever understanding why it's the truth.

Religious rhetoric. An intuitive belief is not truth or science.

My bone of contention is not your belief in RL as a mechanism
for learning but in your belief it can be done in real time
on a blank slate. You want that to be true so AI can suddenly
appear given the "right kind of Curtian blank slate" when it may in
fact require the hard slog of an evolutionary path to produce
all those complex (or hard to find by chance) connections.


JC

J.A. Legris

unread,
Aug 15, 2011, 4:13:05 PM8/15/11
to
On Aug 15, 2:20 pm, Doc O'Leary <droleary.use...@3q2011.subsume.com>
wrote:
> In article <j29d6m$hd...@news.albasani.net>,

I'm afraid a definition of computational intelligence is doomed from
the outset. It is a contradiction in terms because intelligence is a
property of organisms, period. Even the simplest organisms have a
smattering of it, whereas even the most impressive computational
demonstrations (my current favourite is Watson) are somehow lacking.
The missing juice is quite simple to specify yet impossible to
reproduce, something that every organism has and every machine lacks:
a phylogenic history - the trial by fire and ordeal that every extant
species has undergone to gain a foothold on this planet. Rerun that
story using an alternative cast of characters (e.g. inorganic
materials and processes) and you'll find your elusive A.I. Call it
neovitalism.

--
Joe

Doc O'Leary

unread,
Aug 15, 2011, 9:43:11 PM8/15/11
to
In article <20110815001522.437$O...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
>
> > Again with the circular definitions. You have yet to tell us were the
> > *intelligence* is in your precious learning.
>
> Yes I have. You just don't understand how what I'm telling you could
> possibly be true.

It is always the convenient position of the quack to insist the world
just doesn't understand their genius.

> Well, you should give me a specific example of what you are talking about
> when you say that.

No, I shouldn't have to. If you really had a grasp on the truth, you
would have been able to give the *general* principles by which an
intelligent system operates.

> I'll then respond to your specific example. You first
> need to explain to me what type of "order from chaos" you think is a
> property of intelligence.

OK, here we go again:

Jack says the ball is red.
Jill says the ball is blue.

What are the kinds of things you might expect an intelligent system to
*think* after learning that information? Tell me how your sacred cow
incorporates such data in a way that results in some measure of
intelligence.

> "thinking" is just as much a physical action as waving our hand, and it's
> an action we are just as equally aware of hand waving. So why are these
> two physical actions segmented into such separate domains in our language
> when they aren't separate at all in the physical world? It's because of
> the illusion of dualism.

Only to you and other people who can't seem to leave dualism in the
past. Everyone else uses different words because, shockingly, the
actions are distinctly different. Everyone else also readily uses words
like "push" and "pull", when I'm sure someone like you would insist that
they all must instead *only* talk about exerting a force on an object.

> Now, the problem I've run into, is that even when people reject all the
> dualistic nonsense like souls, and firmly believe in materialism, they
> still have a nasty habit of thinking and talking dualistically - often with
> no awareness they are doing it.

No, it's still just you who is failing to learn that the meaning of
words has moved on. You don't instill much confidence in your arguments
for learning when you seem unwilling or unable to learn yourself.

> reinforcement learning is the why of human intelligent behavior. :)

Jack says the ball is red.
Jack will give you $100 if you say the ball is red.
What color is the ball?

> > > TD-Gammon. (for Backgammon)
> >
> > So you claim, but where is the intelligence?
>
> It figured out, on its own, how to play backgammon at the level of the
> best human players. It figured out things about the game, that no human had
> ever figured out, including the person who wrote TD-Gammon.

You are being too generous. It was *designed* to play backgammon, just
like chess programs are similarly designed. Just because it played well
or even learned to play better does *not* establish any intelligence in
general, in relation to games, or in relation to backgammon itself.

Neither does novel behavior necessarily imply any sort of intelligence. A
random walk around a problem space can also result in new discoveries.
You *still* have yet to demonstrate any fundamental level of
intelligence.

> > I'll grant you that its
> > play differed from commonly accepted strategies, and even played
> > "better" in that regard. But you still have yet to define what about
> > that difference was *intelligent*!
>
> Again, if you look at the root cause of all examples of intelligence, you
> find a reinforcement learning algorithm at the core.

No, I don't. *You* certainly do, but that has gotten you no closer to
defining intelligence, never mind implementing an AI.

> I claim, that the
> word "intelligence" has always been used to label the behavior of
> reinforcement learning processes, but that people just didn't understand
> that's what they were talking about.

Only you are satisfied talking circularly.

> The prediction is that someday, before long, someone will create a new
> reinforcement learning algorithm, that will produce life-like, and
> intelligent-like behavior in our machines and robots.

That is not a scientific prediction.

> My position is perfectly valid science, it makes predictions, and it's
> falsifiable.

"My position is impossible to prove wrong."
<http://groups.google.com/group/comp.ai.philosophy/msg/3b078ce4e5888f00>

> So knowing this, another interesting question shows up. If the machine has
> two options of how to change itself, which one does it pick? Does it grow
> a connection from neuron A to neuron B? OR does it grow a connection from
> A to C?
>
> Without knowing anything about how it changes itself, we can simply ask,
> how does it decide which way to change itself?

And that is all I'm asking for in a definition of intelligence. Not
just in *how* it changes, but *why* it favors one change over the other.
That is what Darwin's why of natural selection added to the how of
evolution. Clearly you've fixed on reinforcement as the why for the how of
learning, but you've yet to offer the same kind of explanatory power
that Darwin managed. By my measure, your reinforcement still needs an
underlying intelligence to evaluate the choice between B and C.
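
To be clear, I already know what the textbook RL answer to that looks like -
something along the lines of this minimal sketch (the option names and reward
probabilities are invented for illustration), where the "evaluation" is
nothing but accumulated reward statistics per option:

import random

reward_prob = {"A->B": 0.3, "A->C": 0.7}     # hidden from the learner (invented numbers)
value = {"A->B": 0.0, "A->C": 0.0}           # running estimate of each option's payoff
counts = {"A->B": 0, "A->C": 0}

for step in range(1000):
    # explore occasionally, otherwise favour the option whose rewards have been higher
    if random.random() < 0.1:
        option = random.choice(list(value))
    else:
        option = max(value, key=value.get)
    reward = 1.0 if random.random() < reward_prob[option] else 0.0
    counts[option] += 1
    value[option] += (reward - value[option]) / counts[option]   # incremental average

print(max(value, key=value.get))   # ends up favouring "A->C" on reward statistics alone

My question is what, beyond that bookkeeping, earns the label "intelligence".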

> If I'm talking to someone that doesn't know what intelligence is, how
> exactly would I show them that a reinforcement learning process was an
> example of the thing he doesn't know how to define?

By doing what I've asked and offering a definition of intelligence that
is *independent* from your precious RL, just as the definition of
computability is independent from a Turing machine. If you can't do
even that little bit, my contention is that you're on the wrong path.

> Skinner did lots of science. He believed all human behavior was the product
> of operant and classical conditioning. Is his science not at least "soft"
> science to you?

No. He experimented and offered useful theories, but I see no reason to
conclude he was 100% right. Likewise, Turing made solid discoveries in
computability and offered a useful test for AI, but that doesn't mean AI
was suddenly a solved problem.

> To suggest there is an "intelligence" at work separate from your "learning"
> is to fail to understand how I'm using the term "learning".

Or it shows your failure to understand what intelligence fundamentally
is. Look, I get that you *want* to pigeon-hole everything into your
selective definitions so that you can circularly say you've already
solved the problem. There is no need to keep repeating or rephrasing
that. You simply haven't got a convincing argument, so I maintain that
you're likely not on the right path to AI.

> Humans don't have any unintelligent learning in them. You have to show me
> a specific example of what you consider to be unintelligent learning, and
> then I can respond to that example.

Are you kidding? Humans have been, and continue to be, boiling bags of
inconsistency and falsehoods. Again, I completely get that you're going
to try to argue that anything I name is behavior based on reinforced
learning and is therefore intelligent. It's a tiresome game to run in
circles.

> > Likewise, it is my contention that there is some minimum definition for
> > intelligence beyond computabiity.
>
> Meaning intelligence can't be implemented on a computer if that is true?
> Or are you thinking something else there?

No, meaning that computability is a necessary but not sufficient
component of intelligence. I'm actually even open to *that* not being
true. Since "intelligence" seems to be embodied in an ongoing process,
it might not be precisely meaningful to say it completes in a finite
amount of time. We mostly *do* expect particular answers from an
intelligent system in a finite amount of time, though, so I think it is
fair to say that it is at least a step beyond computability.

> I don't remember you posting in the past. Is it just my faulty memory, or
> are you posting under a different name? Or has it just been a long time
> since you last posted here?

I post when my interest is piqued. Usenet only does that rarely these
days, which is sad, because I still find it a much better discussion
system than web forums or social networks.

> > Not at all. Again, you may indeed be right, but until you can accept a
> > definition of intelligence that is *distinct* from learning, you don't
> > really *know* that you are right.
>
> I'm trying to understand what type of machine a human brain is that allows
> it to perform all these intelligent actions (and intelligent thinking, since
> you might not realize when I say "actions" I include thinking as a type of
> action). The best machine descriptor I've ever found to answer that
> question is "it's a reinforcement learning machine". I will not accept
> another definition of "intelligence" unless someone can show me the
> description of a class of machines which fits better than the one I
> already have.

And so you miss out on what we can learn from intelligence at the
evolutionary level, and what we can learn about it from the global
level. Nor do I think it serves us to isolate human intelligence from
other animal intelligence. My position remains that at *all* levels,
there is some underlying mechanism that we should be able to tease out
as an independent definition for intelligence, and thus not only achieve
AI, but unify the work of a good number of fields.

> So far, no one has produced ANYTHING that comes close to explaining human
> behavior, whereas the description "reinforcement learning machine" fits all
> known facts about human behavior. It's not like we have 5 contenders for the
> answer, and I just have my favorite. We only have one contender, and on
> the other side, a lot of people, like you, walking around with no
> alternative.

Because it's the wrong path to take. It's like you're demanding a
better system for explaining the wild motion of stars around the center
of the Universe that is our Earth. I can't give you that; nobody who is
on the path to really understanding the Universe could. Just like they
needed to throw out the notion of a geocentric Universe, so too do you
need to throw out the notion of humanocentric intelligence.

> > > Let me side track and explain what I see needs to be done. To create a
> > > robot, that acts like a human (including with thinking power), it needs
> > > to have high bandwidth parallel sensory inputs, and lower, but still
> > > high, bandwidth parallel outputs controlling all its effectors.
> >
> > Does it? By that measure, a *human* couldn't act intelligently unless
> > it had that kind of hardware.
>
> You misread my point. I was actually saying "has the same sensory
> bandwidth of a human".

No, you misread my point. What I'm saying is that humans seem quite
capable of acting intelligently with even less sensory bandwidth than is
commonly the case. Blind people can be intelligent. Deaf people can be
smart. Helen Keller was both and certainly seemed to be pretty darn
intelligent. By my measure, we could learn quite a bit more about
intelligence by putting humans in a low-bandwidth environment instead of
striving to keep giving machines more and more data to process.

> > > The system is basically storing a complex sort of "average" of all past
> > > learning experiences on top of each other. And the resulting behavior
> > > that emerges, is the sum total of all past learning experiences (biased
> > > by their similarities to the current context).
> >
> > I don't have an "average" idea of Santa Claus that guides how I think
> > about him.
>
> Of course you do.

No, I don't.

> How many different versions of Santa Claus have you
> seen in pictures or have you had described to you?

Countless (but countable :-).

> Isn't your view some
> sort of "average" of all that information?

No.

> You aren't going to try and
> argue you have perfect photographic memory of every picture of Santa Claus
> you have ever seen, so your image of Santa Claus is actually a perfect
> memory of the 285 pictures you have seen?

No. Neither would I argue that memory is a very good reflection of
reality.

> Or you aren't going to argue
> that you remember one of them perfectly, and that's what you think of when
> you think of Santa Claus are you?

No. I'm going to argue that . . . send the kids out of the room . . .
Santa Claus is not real! That's not a deflection, either. It's a
starting point for you to think about what kind of mental model you
really think is created for things like that.

It's like inquiring into my guess at the length of a unicorn's horn. It
certainly isn't some kind of calculated average representation. I could
*maybe* refer to it as a prototypical representation, but that still
doesn't fully capture all the contradiction that is inherent in
discussing such a nonsense creature.

> > It is *you* who is making the assumption that previous experience alone
> > is what would guide the thinking of an intelligent agent. I have no
> > previous experience with FTL travel, nor do I have any expectation that
> > it is even possible, but without any material reinforcement at all I can
> > *explore the idea* of many such impossible things.
> >
> > Likewise, it makes sense to me that, even in a world where an
> > intelligent agent finds everything to be 100% reliable, it might
> > *explore the idea* that what it has learned is not true. Therefore, I
> > would assert that an intelligent agent would *never* conclude that
> > something is 100% reliable, which directly refutes your description of
> > how your learning system would operate.
>
> No, it doesn't, because your conjecture is just that, pure conjecture, not
> science. You have no evidence to support the idea that an intelligent
> agent would "doubt the truth". The only evidence you have is what you or a
> human might do. Humans will "doubt the truth" but that's because THEY HAVE
> A LONG HISTORY OF PAST EXPERIENCE OF BEING FOOLED!

But in *any* system of imperfect information, an intelligent agent would
necessarily be correcting "FOOLED" information all the time. Do you
really have an intelligent system if it concludes something is 100%
reliable and then *never* considers what to do if it is wrong? How does
it learn to evaluate unreliable information if it has no mental model
for that possibility?

> We have no way to test an intelligence that way because
> we have no way to limit the experiences of any human to such a small set.
> Your untestable conjecture DOES NOT REFUTE my position. It's just something
> you made up that sounds good to you.

On the contrary, we have *exactly* the kinds of technology that can be
brought to bear in giving humans a smaller set of information to work
with and then examining their behavior for signs of intelligence. It is
only you who makes untestable proclamations.

For example, we're at the point of discovering a way for computers to
drive by throwing tons of sensor data and processing power at the
problem, but where are the experiments that show how a *human* can drive
with some minimal amount of information? How well could I pilot a
vehicle with just a 32x32 pixel video feed and a 4 bit control? If I can do
at least as well as a high-bandwidth artificial driver, we haven't
gone down the path of intelligence.

Doc O'Leary

unread,
Aug 16, 2011, 12:43:54 PM8/16/11
to
In article <20110815140124.352$w...@newsreader.com>,
cu...@kcwc.com (Curt Welch) wrote:

> Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> >
> > Clearly you have never experienced a "eureka" yourself. And, no, I
> > don't mean that in any mystical way. I mean that in an introspective
> > way that allows the person to explain how things "clicked" at that
> > moment of discovery.
>
> Well there's no way for me to know if what I've experienced is what you are
> talking about for yourself. But I've certainly had moments of sudden
> insight and clarity that I consider a eureka moment.

But how introspective were they? That is to say, when I have had such
moments, they did not simply tweak an "average". They changed my
fundamental view of the world, and cascaded to affect many *very*
specific ideas in my mind, both related and unrelated to the incident.
It wasn't about any abstract notion of rewards or reinforcement that
accomplished that, it was some fundamental mechanism of the brain that
determined that one idea was pivotal when other, largely similar, ideas
were not. That's the same sort of selection mechanism that seems to
have parallels in evolution and social groups, and it leads me to think
that the definition of intelligence is wrapped up in determining exactly
what it is.

> Yes, from the very beginning of this thread (and for the past many years I've
> posted to cap), I have made it clear I did not have the working technology
> to SHOW YOU. I only have insight into what TYPE of technology I believe we
> need to create in order to build an "intelligent" machine.

I'm not asking you to show me a machine, I'm asking you to show me you
understand intelligence by demonstrating the path you're on necessarily
creates it. But you can't even give me an independent definition of
intelligence that you're starting from. Instead, you just keep
admitting you're wandering around in the dark, hoping *someone* will
discover *something* at *sometime* in the future, and that we all just
need to have faith that you're in possession of The Truth.

> My position has not changed in any of these messages, so why you
> think I have something more to tell you, I have no clue. Do you not
> understand my position, or do you?

I do, but what amuses me is that *you* don't fundamentally understand
your own position. Try as I might, I can't get you to see how
unscientific and circular it is.

> But the position is not going to change.

That is not . . . intelligence.

> > No, I am indeed not engaged in your kind of biased cherry-picking. Your
> > sacred cow is worthless to me if it can't explain what intelligence
> > fundamentally is. Clearly intelligence is not just learning (and learning
> > may not even be an immediate factor).
>
> Yes, but my answer could be the answer. And then where are we? You are
> stuck with someone (who you don't respect) showing you the answer, but you
> are just unable to accept it, so you reject the truth without ever
> understanding why it's the truth.

No, we would be at the point of a stopped watch being correct. What I
think is more likely the case is that you'll end up being wrong, but
insist on moving the goal posts in a desperate attempt to prove you were
right all along. As I said, it may indeed be the case that some
learning system could be shown to be equivalent to an (as yet undefined)
intelligent system, but *you* won't have done that work. So, no, I
can't respect your position if all you can give me is the
stopped-watch-equivalent "I could be right!"

> > Hardly. You continue to offer nothing more than a circular definition
> > of intelligence. If you actually had a clue, you'd offer a definition
> > that allowed for equivalent non-RL implementations.
>
> If it's not RL, it's not intelligent. I can't allow for a non-RL
> implementation because if it's non-RL, you have taken the intelligence out
> of it.

Pi is impressed with how precisely circular you are.

Doc O'Leary

unread,
Aug 16, 2011, 1:12:14 PM8/16/11
to
In article
<d505751c-c40e-4639...@l9g2000prd.googlegroups.com>,
casey <jgkj...@yahoo.com.au> wrote:

> By leveraging itself on evolved mechanisms. Evolution has
> produced some complex bodies and can produce some complex
> innate brain solutions not possible in the lifetime of
> an individual.

I don't know that that's necessarily true for an AI, though. I'll
certainly grant you that our brains have a privileged starting point
that is rooted in evolution, but it seems possible to me that, if we
generalized our understanding of intelligence, we might be able to go
back farther to some kind of "stem cell" or even "proteins and amino
acids" base that can scale up to the human level.

> when it may in
> fact require the hard slog of an evolutionary path to produce
> all those complex (or hard to find by chance) connections.

Not just the same kind of path, but the same kind of *mechanisms*.
Darwin's breakthrough idea was not evolution, but how natural selection
made it work. AI is missing that kind of explanatory power, and I think
the result is that we're also missing out on how that underlying concept
of intelligence would tie those differing complex systems together.

casey

unread,
Aug 16, 2011, 5:09:30 PM8/16/11
to
On Aug 17, 3:12 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
wrote:

casey <jgkjca...@yahoo.com.au> wrote:
>> By leveraging itself on evolved mechanisms. Evolution has
>> produced some complex bodies and can produce some complex
>> innate brain solutions not possible in the lifetime of
>> an individual.
>
>
> I don't know that that's necessarily true for an AI, though.
> I'll certainly grant you that our brains have a privileged
> starting point that is rooted in evolution, but it seems
> possible to me that, if we generalized our understanding of
> intelligence, we might be able to go back farther to some
> kind of "stem cell" or even "proteins and amino acids" base
> that can scale up to the human level.

Which I believe would amount to building evolutionary
networks rather than a simple learning network.

Humans have a number of different and, I believe, essentially
innate skills, such as walking or seeing in stereo, which are
fine tuned by experience. Stereo vision actually appears as
soon as the two eyes are working properly, at about 4 months
in a human child. Maturation, a process that appears like
learning, is something Curt ignores when looking for "evidence"
for his views.

However, I believe an efficient network (in terms of use of
resources) that is fine tuned to do walking will be different to
one that is fine tuned to see in stereo, and any general
solution that can do both will fail in the biological world
because of the need for more resources and an impossibly
long time to develop those skills using any kind of general
purpose evolutionary net in the individual's lifetime. It will
fail to compete with those organisms that have a tool box
full of already working skills that only need adapting to
a situation using learning.

But I don't dismiss what I think Curt alludes to. I just
see it as a slower evolutionary way to learn. If a system
is already "near" an answer (a good set of weights) then
it will learn faster and compete better than a completely
blank slate RL network. It will however be less likely
to come up with novel solutions. It will have traded
variety in what it can learn for speed of learning.
I suspect the human brain has compromised between both
extremes, with some innate wiring between different areas
(failures of which show up as things like autism) and a
"blank slate memory system".

To use Curt's analogy I see him as being stuck with a
monolithic approach to the problem, a bubble sort. To
move toward a faster solution, a quick sort, we need
more evolved limits on the learning system, such
as modules, to solve the high dimension problems.
They have to be evolved, or built in; they cannot be
learned in the lifetime of an individual.

Curt thinks that once he, or someone else, hits upon the
right set of units and network configuration, they will
have a network that can learn anything (within the limits
of the size of the network). That amounts to an evolutionary
network in my view and although it might learn "anything"
it will not learn much in a short time. Researchers are
already developing such evolutionary networks to solve
particular problems, ANY particular problem given to them.
I doubt there will arise an evolutionary network that can
explode into human intelligence just by being exposed to
the environment, even an evolved human social environment.
We come equipped to learn from such an environment,
IMHO.

>> ... when it may in fact require the hard slog of an
>> evolutionary path to produce all those complex (or hard to
>> find by chance) connections.
>
>
> Not just the same kind of path, but the same kind of
> *mechanisms*. Darwin's breakthrough idea was not evolution,
> but how natural selection made it work. AI is missing that
> kind of explanatory power, and I think the result is that
> we're also missing out on how that underlying concept of
> intelligence would tie those differing complex systems
> together.

Well I think natural selection or reinforcement learning
explains how a system can become smarter unless you are
alluding to something mysterious like sentience.

Intelligence is a *description* of behaviors we call "smart".
Intelligence is not *something to be found*. We know that
smart behaviors can appear from a selective process.

In answer to the question posed in this thread, the reason
we don't have strong AI (human level?) is because it takes
time and there is no single magical answer to find.


JC

Burkart Venzke

unread,
Aug 16, 2011, 5:56:36 PM8/16/11
to
> I'm afraid a definition of computational intelligence is doomed from
> the outset. It is a contradiction in terms because intelligence is a
> property of organisms, period.

It is *your* definition of intelligence, nothing more and nothing less.
Centuries ago, "flying humans" were also such a contradiction.

> Even the simplest organisms have a smattering of it,

Sorry, I don't understand "smattering (of it)".

> whereas even the most impressive computational
> demonstrations (my current favourite is Watson) are somehow lacking.

Yes, they are still lacking because they are not intelligent; they cannot
learn (well), for example.

> The missing juice is quite simple to specify yet impossible to
> reproduce, something that every organism has and every machine lacks:
> a phylogenic history - the trial by fire and ordeal that every extant
> species has undergone to gain a foothold on this planet. Rerun that
> story using an alternative cast of characters (e.g. inorganic
> materials and processes) and you'll find your elusive A.I. Call it
> neovitalism.

That's again in a way your definition of intelligence.

Do you expect a plane to flap its wings?
I don't expect an AI to be a second human; it should "only" become quite
intelligent by (artificial) learning so that it can fulfill a lot of the
tasks a human can.

Burkart

Burkart Venzke

unread,
Aug 16, 2011, 6:38:45 PM8/16/11
to
On 15.08.2011 20:20, Doc O'Leary wrote:
> In article<j29d6m$hd9$1...@news.albasani.net>,
> Burkart Venzke<b...@gmx.de> wrote:
>
>>>> What kind of intelligence do you mean? Human? Natural (animals)? Or also
>>>> artifical for example for a turing test?
>>>
>>> I don't know that it is meaningful to say there *are* different kinds of
>>> intelligence. Though human intelligence does seem to differ from other
>>> animals, it only seems to do so by degree.
>>
>> The problem is that we have natural (human and animal) intelligence
>> (though its definition is difficult or even impossible, something like a
>> top down problem) whereas we try to create *artificial* intelligence (a
>> bottom up problem). So we try to match (partly) our natural and our
>> artificial ideal of intelligence.
>
> Right, so I'm saying we *must* do that grunt work matching in the first
> place in order for us to have any reasonable chance at solving the
> problem. I want to get away from it being an issue of top-down vs.
> bottom-up and treat it *independently* as a fundamental property of a
> system in the same way that computability is a fundamental property.

Intelligence as a fundamental property? What do you expect of, or combine
with, a "fundamental property"?
We must at least be able to discuss intelligence, all the more because we
have no common definition.

>> In my mind, "intelligence" should be interpreted as in the turing test,
>> the result of a black box should be relevant.
>
> It is relevant inasmuch as we *don't* have an AI that passes the Turing
> Test. So we can continue along the road of shortcuts and tricks that
> "cheat" the test, or we can turn back and properly get to the roots of
> what defines an intelligent system (human or otherwise) in the first
> place.

I hope that an AI will pass the Turing Test some day without any trick.
Perhaps we should define/try something like a "(little) children" Turing
Test where we don't compare the system with an intelligent adult but
rather with a child.

>> And why do you expect us to be away from our desired destination?
>
> That is my observation of results-oriented AI research. They pick a
> sub-topic of AI to make their sacred cow, and then throw a lot of human
> intelligence and hardware at it. They *do* solve that limited problem,
> but in a way that results in the "AI effect", and very little real
> progress when it comes to understanding fundamental intelligence.

Yes, that is often a problem. Perhaps at least a result is better than
none...

>> Learning is not everything for an intelligence, you are right, it is
>> (for me) only a necessary condition ("sine qua non" is leo.org's
>> translation for the German "notwendige Bedingung").
>
> And my argument is that it is *intelligence* that is necessary to make
> learning meaningful.

The two notions are connected somehow, however exactly.
I think that learning can be defined more easily, whereas
intelligence is less clear.
If you base something on an unclear notion, the whole thing is at least
more difficult to solve or explain.

> Without the selection pressure it applies,
> "learning" is nothing but an unsatisfying random walk.

Not necessarily. Learning can be directed by one or more humans (as (a)
teacher(s)).
As an AI is a machine, *we humans* want to be satisfied.

>>> To directly address your
>>> question, yes, I can imagine intelligence without learning, just as I
>>> can imagine that a system can contain true statements that are not
>>> provably true within the system. Whether or not intelligence requires
>>> learning, results in learning,
>>
>> "Results in"... what "learning" do you speak of?
>>
>>> or is equivalent to some forms of
>>> learning *still* depends on a good definition of intelligence.
>>
>> What can be equivalent to learning? Learning means e.g. collecting and
>> processing new data, how could this be substituted?
>
> I'm talking about system equivalence, in the same way that universal
> computability can be achieved by different systems. Clearly not *all*
> learning systems are intelligent (or even achieve universal
> computability), so an independent definition would have to be in place
> before a researcher would be able to even say *any* learning system is
> equivalent, let alone make the grand assertion that learning is the
> source of intelligence.

Right. Good criteria for comparing different learning systems are a good
idea. I am not sure if there exist some that are good enough for you ;)

I think that learning is something beginning on a (small) basis and
climbing up in abstractness with more and more data/knowledge.

Burkart

Curt Welch

unread,
Aug 16, 2011, 9:43:08 PM8/16/11
to
casey <jgkj...@yahoo.com.au> wrote:
On Aug 14, 6:21 am, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> >> Humans have no difficulty in translating their neural weights
> >> used to play chess into a set of rules or heuristics so there
> >> is clearly a mechanism for doing it.
> >
> >
> > Apples and oranges John. Apples and oranges. You are talking
> > about two unrelated things and you don't even realize it.
>
> > The neural weights don't get "translated" into rules or heuristics.
> > They are just used.
>
> That they are "just used" doesn't mean they can't also be translated
> into rules of heuristics.

That's true. And the idea that you could create some heuristics from the
weights is to be expected. But the idea that you could get a small simple
set of heuristics to 100% duplicate the behavior of the neural net is what
is not to be expected. The whole point of having a million weights in a
large neural network is for the purpose of creating great complexity.
"Heuristics" are by their very definition great simplicity.

It's like assuming that any photograph you take could be represented by a few
simple line drawing rules. Yes, you can translate any photograph into a
line drawing, but the line drawing is a gross simplification which leaves
out 95% of the information.

I don't really grasp why you don't get this. You keep trying to imply we
don't need a big complex brain and that all the complexity actually
translates down to a few simple heuristics. Why would you expect that to
be the case? Total human behavior is not simple exactly because it is
driven by a large complex brain. Like with the line drawing metaphor, we
can make up simple abstract rules to talk about our behavior, but those
rules don't even scratch the surface of the true complexity behind human
intelligent behavior.

> > Sometimes, the weights make us say things like "maybe I should
> > protect the queen", or "apply more pressure to the center of the
> > board". But those heuristics are not in any sense the same ones
> > the neural net is actually using.
>
> I would disagree. The weights and the resulting neural firing is
> the neural code and some of it can be translated into the code
> we use to communicate with each other. There is at least a "many
> to one" translation from a neural code to a language code. For any
> verbal behavior there is a neural code substrate.

Yeah, it's like the line drawing metaphor I talked about. We can take the
great complexity of a photograph, and translate it into a line drawing which
represents the same image. But we throw out most of the information in the
picture when we do that.

The neural code and operation of the brain is not a line drawing. It's the
actual digital image with its full complexity. The "line drawing" is just
some high level (and mostly lacking) abstraction we use to attempt to
communicate externally what is happening in our brain.

The underlying complexity of what the brain is actually doing (and our
total inability to describe it fully with heuristics or language of any
type) is why no one has ever made any good progress trying to duplicate
intelligence by building "line drawings" (their vague understanding of what
think their brain is doing).

> > If translating the neural nets in our brain into heuristics was so
> > easy, anyone would write a chess program to play chess at the
> > exact same level they played it at. Not only that, you would
> > write the program to play EXACTLY the same way you did. So after
> > translating the heuristics in your head, into code, your program
> > would perfectly mirror every move you made in any chess game.
>
> I am not suggesting we can translate everything into language as
> we know some things happen that we cannot translate (they are said
> to be unconscious processes).

I think "unconscious processes" is a misnomer because it implies we are
conscious of some processing. I don't think we are conscious of any of it.
All we are conscious of, is what the brain has done after it does it. And
then we use our silly little "line drawing" words to talk about what we saw
the brain do after the fact and pretend, in how we talk, that we have some
understanding of why it happened.

All we really understand, is sensory probabilities. If we see brain
behavior B follow behavior A, we claim A caused B. But the true cause, the
complexity of the processing at work in the brain, is totally hidden to us.
All we recognize in our behavior (or the behavior of others), are the
patterns that are predictable. For the behavior which we can't see
predictable correlations, we call "unconscious processing".

There is most likely no difference in what the brain is doing between the
behaviors we see correlations in, and the behaviors we don't see the
correlations in. Conscious and unconscious processing are not two different
types of brain processing. It's just the behavior we have some weak ability
to predict, vs the behavior we have little to no ability to predict.

Some of our behavior is very simple, but only because in our environment,
that type of simple behavior is what leads to the highest rewards. So the
brain, which is able to produce highly complex behavior, learns to produce
simple behaviors when those work best. If we need to press a button every
time the light flashes to get a reward, then we produce the highly simple
behavior of pushing the button. And when we see those simple behaviors
emerge from the complexity of total human behavior, we can describe them
with very simple heuristics (he pushes the button when the light flashes).

But for all the behaviors that end up being not so simple, we are totally
at a loss to describe it with simple language, because the behavior is not
simple, it's some complex function of a million weights.

> And the translation is not always
> correct as the translator found in the left hemisphere will make
> things up. But one way of following a neural process while someone
> is thinking is asking them to talk aloud about their thoughts.

Yes, but that is not following the real neural process John. At best, all
they can do, is produce through language a pale line drawing of the full
complexity at work in the brain. They don't have any awareness of how or
why the brain is selecting what to do next. They are only aware of what did
happen, after it happened. So you give someone a puzzle to solve, and he
starts thinking about it. And you ask them to talk about what they are
"thinking".

"Thinking" is not "what the brain is doing" any more than "hand waving" is
what the brain is doing. It's all just behavior controlled by the brain.
If you give someone a physical puzzle to solve, and video tape them, you
can watch the physical motions they followed to solve it. But the video
tape doesn't give you any real insight into why they brain selected that
course of action.

If you get someone to talk about what they are thinking as they work on a
mental puzzle, you are getting the same sort of information (only poorly).
You just document what they did in their mind, and you get nothing about
how the brain selected that sequence of thoughts.

If we have the source code of some computer program, and trace the
execution of the code, step by step, we can understand exactly why the
program is doing what it is doing. But when we document what a human is
doing with their body, or their mind, we have no data on _why_ the brain
selected that course of action. We only have an after-the-fact recording of
what it happened to do in that case.

Unlike our computer programs, everything the brain does is a function of
millions of neural weights "voting" on what to do next. If we perform the
sequence of events A B C (with our body, or our thinking process), it
doesn't mean there's some simple circuit that is hard wired to produce that
sequence. It's a huge learning machine that is calculating, based on lots
of past experience, what to do next, given the large and complex current
context. And in the current context, that large complex map selected the
simple sequence of actions, A B C.

But we have no real way to understand that selected sequence, unless we
fully understand the brain's idea of what the current context is - and
that's something that would require millions or billions of data points
about the states of millions or billions of neurons just to collect the
data to start with.

I strongly believe our learned behaviors (which includes our learned
patterns of thinking), are produced by a big generic sequence generator
that is programmed by reinforcement. It creates a highly complex mapping
from current state, to next best action to perform in that state.

Just like TD-Gammon uses a mapping function to map from the current board
position, to the next best move. Its mapping function only uses a few
hundred parameters, but to create human like behavior, we need a
programmable mapping function based on billions of parameters. And for
everything we as humans do, millions and millions of those
parameters are being used to calculate what output to produce next - what
to do next. Most of the time, the complexity of this function with a million
parameters is too complex for us to understand. But sometimes, it does
produce simple behavior, that could have been described with 4 parameters.
When it happens to learn to produce something simple, we can describe it,
but all the other times, we have no hope of describing it. And we have no
hope of reverse engineering such complexity by studying the behavior
produced.
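(For concreteness, here is a minimal sketch of what such a parameterized
mapping function looks like - a tiny TD-Gammon-style evaluator that maps a
board-feature vector to a value and picks the candidate position it rates
highest. The feature count and layer sizes are illustrative assumptions,
not TD-Gammon's real architecture, and the weights here are untrained.)

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the real TD-Gammon used ~198 inputs and ~40 hidden units.
N_FEATURES, N_HIDDEN = 24, 10
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))   # input-to-hidden weights
W2 = rng.normal(scale=0.1, size=N_HIDDEN)                 # hidden-to-output weights

def value(features):
    """Map a board-feature vector to an estimated chance of winning."""
    hidden = np.tanh(W1 @ features)
    return 1.0 / (1.0 + np.exp(-(W2 @ hidden)))           # sigmoid output in [0, 1]

def pick_move(candidate_positions):
    """Choose the successor position the learned mapping rates highest."""
    return max(candidate_positions, key=value)

# Example: three hypothetical successor positions as feature vectors.
candidates = [rng.random(N_FEATURES) for _ in range(3)]
best = pick_move(candidates)

Every number in W1 and W2 participates in every decision, which is the
point: even a few hundred parameters like this are already opaque, and the
brain's version runs to billions.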


> > That of course never happens. Doesn't even come close to happening.
> > People have NO CLUE what the neural weights in their head will make
> > them do, until after they have done it.
>
> I never suggested they know anything about their neural weights.
> What is happening is the neural weights form high level patterns
> that can be translated into sentences.

Sometimes, they form simple behaviors, but most actions cannot be
translated.

Why did I type the words I typed in the above sentence? I made a few
typing mistakes, and had to correct it as I typed. Why did I make the
mistakes in the places I made them? This is the full complexity of true
human behavior - and we have no hope of ever translating such behavior
down to "he makes a typing mistake every 20 letters". We are so used to
ignoring most of the complexity of our behavior, we seem to forget it's
even there.

I can't begin to explain why I used the words I did in the sentence above,
and if people carefully studied everything I have posted to Usenet, all
they could do is say things like, "his probability of using the word
"simple" is much higher than for other people" (and such things like
that). There are no simple heuristics to explain what I will type next.
Even though there are some high level probabilities we do know, like "Curt
is likely to produce a shit load of hand waving rambling and never get to
his point in the next Usenet message".

> A picture is an array of
> pixel values but the picture itself is a high level representation
> and it is that representation we are talking about when we say
> it is a picture of Bill Jones.

NO it's not. The HIGH LEVEL REPRESENTATION you see in the picture (such
as "that's a picture of Curt") is something that exists IN YOUR HEAD, NOT IN
THE PICTURE. It's behavior your brain produced, with its billions of
parameters in its mapping function, in response to the complexity of the
picture.

I point my "John" machine at the picture data, and out of the machine comes
"That's a picture of Curt".

The image is just nothing more than a lot of pixels. It's only the mapping
function in your brain that translates that high dimension sensory input
to a simple language description of the image.

> > If translating heuristics were so damn easy, we would have solved
> > all of AI 50 years ago John.
>
> I never said it was easy but they are working on it.
>
> > People can't translate them, which is exactly why we have made so
> > little progress in 50 years.
>
> Yeah and 500 years ago you would have said people can't fly that is
> why they haven't made any progress in 50 years.
>
> > Every time someone things they have the right "heuristics" to
> > explain aspects of human behavior, it has failed to be very
> > intelligent.
>
> Rubbish. Chess programs are very good at playing chess. Programmers
> have been very good at embodying their methods in code. When you
> play chess you use all the things you have learnt about chess and
> programs can also use all the things programmers know about chess.
> There is no reason I can see that the same couldn't work for an ANN.

Rubbish. Yes chess programs play good chess. But programmers have NOT
been good at translating how we play chess, into code. They have just
gotten good at writing chess programs and that is all there is to it.

99.999999% of what a chess player "knows" about playing chess is encoded in
billions of neurons in his head. Chess players don't have a clue what that
information is. All they can tell you is, they look at the board, search
out part of the game tree, and pick the move that "looks" best. Can they
tell you why they checked some part of the game tree, while ignoring
other parts? No of course they can't, other than to say "those other paths
were bad options so I didn't look at them".

The way humans play chess has almost nothing in common with how our
programs play chess exactly because humans have no hope of understanding,
let alone programming, the system at work in the brain, into the computer.

Humans have their great and powerful "value" mapping function in their head
which they have carefully trained by playing chess for millions of hours.
That function lets them know instinctively, which board positions, and
which moves, are better than others. And with that, they can do a deep and
very narrow game tree search.

None of our chess programs have yet duplicated such a strong "value"
function. And as such, they are forced to use bad value functions, which
means they must all do wide, and not very deep, tree searches. They only
keep up with humans because they can search the tree millions of times
faster.

It's the lack of a good trainable evaluation function that keeps the chess
programs from playing the same way humans play. But that same lack is
what keeps our computers from acting intelligent on all the tasks we try to
apply them to.
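(A rough sketch of the difference, assuming placeholder helpers for move
generation and evaluation rather than any real engine's API: with a strong
evaluation function you can prune to a handful of promising moves and
search deep and narrow, the way people do.)

def narrow_search(position, depth, evaluate, legal_moves, apply_move, beam=3):
    """Depth-limited search that only expands the `beam` moves the
    evaluation function rates highest - deep and narrow, which is only
    safe when the evaluator is good.  All helpers are assumed."""
    if depth == 0:
        return evaluate(position), None
    ranked = sorted(legal_moves(position),
                    key=lambda m: evaluate(apply_move(position, m)),
                    reverse=True)
    best_score, best_move = float("-inf"), None
    for move in ranked[:beam]:                   # prune to the top few candidates
        score, _ = narrow_search(apply_move(position, move), depth - 1,
                                 evaluate, legal_moves, apply_move, beam)
        score = -score                           # negamax: flip sign for the opponent
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

A weak evaluator forces the opposite shape - a wide beam at reduced
effective depth - which is the trade-off described above.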

> > Neural networks like TD-Gammon which only have a few hundred
> > weights, are way beyond our comprehension. A neural network
> > on the scale of a human brain with trillions of weights would
> > be so far beyond our comprehension it's silly to even consider
> > there was some possibility of "understanding the simplicity
> > behind the weights".
>
> Opinions based on intuition like the earth is flat, prove it.

Prove what? That we can describe ALL the complexity of a billion parameter
function with a few simple words?

The fact that we can describe a minute, tip-of-the-iceberg fraction
of the great complexity with a few simple words does not in any way mean
"we can translate it to simple heuristics", which is what you keep saying.
It only means we can describe an insignificantly small amount of the
behavior with a few simple words - which is not useful or important.

Your position is like trying to say we can explain how Word works by saying
things like "it prints the document when you hit the print button". Those
10 words are a simple high level heuristic that describes an aspect of the
behavior of the Word program. But they have nothing useful to do with the
million lines of code that was actually required to make the print function
work in Word. A true description of how Word works, is the millions and
millions of lines of source code, not the "it prints when you hit the print
button".

> > We can recognize some high level patterns, in the networks,
> > but we can't understand what the networks will do, or explain
> > why one specific weight is the value it is.  We can build tools
> > to do some calculations on these sorts of things for us, but
> > we can't in any sense, as a human, "understand" the full set
> > of weights and what they represent.
>
> Are you playing with the word "understand" here? I may not
> "understand" French but I can understand an English translation
> and that is what I am suggesting.

I can understand what an AND gate does. I can understand the function y =
3x^3 + 2x - 18.

If you write a math function that takes 100,000 pages to print all the
weights for the formula, and which takes a million-number input and
translates it to 1000 output values, I have NO CHANCE IN HELL to
understand that function. It's way too complex for me to make any
predictions about what it might output for a given input.

If that complex function actually reduces to something simple, like it
always outputs zero for every input, then I can understand it, and
describe it. But the brain's huge mapping function does not reduce to
something simple like that. If it did, we would have been able to code the
function, and create AI, long ago. It's not just so complex that it's
"hard", it's many orders of magnitude past being impossible for a human to
understand.

> >> http://www.scholarpedia.org/article/Td-gammon
> >>
> >> "An examination of the input-to-hidden weights in this network
> >> revealed interesting spatially organized patterns of positive
> >> and negative weights, roughly corresponding to what a knowledge
> >> engineer might call useful features for game play."
> >
> >
> > Notice the words "Roughly corresponding". That means, they "see
> > patterns" but HAVE NO CLUE WHY THOSE PATTERNS are the way they
> > are, or what they mean.
> >
> >The next sentence was:
> >
> >
> >"Thus the neural networks appeared to be capable of automatic
> > "feature discovery," one of the long-standing goals of game
> > learning research since the time of Samuel."
> >
> >
> > The point they were making is that the network _LOOKED_ as
> > if it were doing feature discovery, NOT, as you try to claim,
> > anyone had a clue how to describe, or make use, of those
> > features (outside of the neural network code of TD-Gammon).
>
> Other researchers are developing techniques to translate the
> patterns into rules. The point is; at this time you have offered
> no proof it is impossible just said such a belief is stupid.

It's stupid because it shows you don't have even a basic understanding
of what we are facing here. Human behavior is not simple, and can not be
described with simple language - PERIOD. What I believe we can do,
however, is describe with VERY simple language the system that creates the
complexity.

> Computer source code can be translated to a neural network of
> weights. A simple example is a suitably weighted neuron that can
> act as an AND gate.
>
> P = Q AND R
>
>         +-----+
>  Q ---> |     |
>         | AND |----> P
>  R ---> |     |
>         +-----+
>
> And the process can be reversed.

Yes. And what does that have to do with trying to reduce 100 billion bits
of data (the complexity of the human brain after a life time of training)
down to 100,000 bits (your simple heuristic description of it)?
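(The quoted AND-gate point itself is easy to show, for what it's worth - a
minimal sketch, with one workable choice of weights and threshold:)

def neuron(inputs, weights, threshold):
    """Fire (1) when the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def and_gate(q, r):
    # Weights of 1 each with a threshold of 2 only fire when Q and R are both 1.
    return neuron([q, r], weights=[1, 1], threshold=2)

assert [and_gate(q, r) for q in (0, 1) for r in (0, 1)] == [0, 0, 0, 1]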

I don't know what the researchers you mention are trying to do, but if they
are trying to fit an elephant into a teacup like you are doing, they
are idiots as well. Most likely, they are not idiots, and whatever they are
doing, they aren't as confused about it as you are.

They might be trying to automatically translate a complex photo down to a
line drawing, to create a high level abstract description. But doing that
is not the same as "translating it to something simple" as you keep saying.

Creating a high level abstraction automatically is not "translating".
"Translating" implies you keep all, or most, of the information. Creating
an abstraction, throws most the information away.

Curt Welch

unread,
Aug 17, 2011, 12:38:30 AM8/17/11
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 14, 12:53 pm, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> > Well, the brain must work with a "simplified" representation
> > since it's impossible to have a full representation of the
> > universe in our brain.
> >
> > But I suspect you might be talking more about the simple list
> > of words that we might say as we talk to ourselves (such as
> > the symbol "ball" being a simplification of our actual sensory
> > perception of real ball. However, there's no indication that
> > the meaning our brain assigns to a simple symbol like the word
> > "ball" is anything simple at all.
> >
> >
> > Meaning seems to be highly complex - probably encoded with the
> > firing and stimulation of millions of neurons to represent
> > something "simple" like the meaning of the word "ball".
>
> There are thousands of logic gates turning on and off when
> a computer program recognizes a ball but that doesn't mean
> the "meaning is highly complex".

That's because computers are built that way, John. If they were too
complex to understand, we wouldn't be able to make them do what we want
them to. As such, a BIG part of hardware and software engineering is
finding ways to keep the workings of the machine simple enough for the
engineers to understand, while still allowing it to do fairly complex
functions.

It's not natural that they are simple. It's a limitation we are forced to
cope with because humans are limited in the level of complexity they are
able to understand (just like the 7±2 limitation). Our brain has limits,
and if the machines we try to build get too complex, we can't make them
work anymore. When they do something we don't like, we can't fix the "bug"
without accidentally adding more bugs at the same time (without even
realizing we added the other bugs). That's what happens when we let our
machine design get too complex to understand.

Evolution doesn't "understand" what it's doing. It's a pure dumb trial and
error approach. Make a change, see if it works. Keep the things that
works, throw away everything that doesn't.

Reinforcement learning is the same thing. It's trial and error learning.
It doesn't have to understand why the complex machine it has built works,
it just keeps what works, and changes what doesn't. Such systems are able
to produce things that work, but which are way too complex for a human to
"understand".

> > The firing of a million neurons is far simpler than the real
> > world ball itself, but it's not what I would ever call a
> > "simple" internal representation.
>
> How the brain represents the world is a big topic and I can't
> summarize the current theories here. But that there may be
> millions of neurons involved when the brain is representing
> something like a "ball" isn't what I mean by highly complex.

If a million bits of data is "simple" to you, then what do you mean by
complex?

The point is that it's not just one fixed million bit pattern that means
"ball", there are billions of different million bit patterns that mean
"ball", and every one of them is a slightly different type of "ball". That
is not complex to you? Billions of different types of "balls" is not
complex?

> > I was talking about writing the code to make a ROBOT actually
> > chase and catch something like a real rabbit, in a real forest,
> > full of real obstacles like rocks and trees and rivers and dirt
> > holes the rabbit can hide in.
>
> I understand your intuitive belief in the need for a complex
> solution to such apparently complex situations but based on my
> experience trying to solve vision problems that evolution had
> to solve, or a system would have to learn to solve, I hold a
> different view.
>
> I would add that a simplified representation may still require
> intensive parallel computations which the brain is good at.

What you are familiar with is CODE WRITTEN BY HUMANS. Such code is ALWAYS
simple, because humans ARE NOT ABLE to make complex code work. Anything
that is within your range of understanding is, by requirement, not complex
(in the way I've been using the word complex for this issue).

Have you written a vision system that works as well as the human vision
system? If you had, you would have solved AI.

Engineering works like evolution. We take a step at a time, making small
changes, and expanding our range of understanding. The machines keep
getting more advanced as this evolution process continues. But unlike
evolution, we don't do blind trial and error (very much - because it's far
too slow). We only modify what we can understand. As human engineers, we
are forced to stay under that tiny roof of complexity that we are able to
understand and manipulate into new designs.

This is the core reason AI has failed. They tried to hand-code, by
engineering, a system that was too complex for them to ever understand.
They made the exact same error you are making - living on the false hope
that there is actually "simplicity" at work that they just haven't figured
out yet.

It's not simplicity - it's a billion parameter mapping function created by
the huge neural mesh that is way beyond any human understanding. Such
complexity can not be hand coded by an engineer. All we can understand,
and code, is the tip of the iceberg, which is why projects like CYC were
given up as a solution: no matter how far down you tried to reach to
add more complexity, the amount of complexity missing was always orders of
magnitude more than what had been coded (they called what was missing
"common sense").


> >> What your brain actually has to process is dramatically reduced
> >> and much of the detail you think you see "out there" is a
> >> constructed illusion.
> >
> >
> > Yes, Duh again. You and I know this.
>
> I don't think you do appreciate just how reduced it is.

Well, despite your ability to understand, TD-Gammon, with its highly
trivial state signal (a reduced representation of the board - not even a
complete board description), is still a function too complex to understand.
The "reduction" you are talking about has nothing to do with how complex the
mapping function is that works with the data after the fact. And the
TD-Gammon state signal is many orders of magnitude smaller than the
"reduced data" you are talking about that the brain works with.

> > But the data it is processing, is still a data flow in the many
> > Mbits of information per second. It's not only high dimension,
> > it's extra super high dimension.
>
> All the vision data I use in my visual recognition programs are
> "high dimension" in that sense but I don't see the issues you
> seem to have with regards to how a system makes use of that data.
>
> I think actual examples are required to see where you find an issue.

Well, I'm losing context of which issue I might have been talking about
here.

> Just as with the tic tac toe example below, which you didn't
> follow why I mentioned it, with vision problems I started out
> with a simple input of a 5x7 binary matrix that could hold
> the patterns for the standard ASCII set. Given that such an
> input has 2^35 = 34,359,738,368 possible combinations what
> methods could a program use to learn to recognize (or map
> to an output) any of those possible input patterns.
>
> In my experience people at first think that a particular vision
> problem requires a complex solution because the input is so
> complex. Just recently someone was trying to solve a pattern
> extraction problem the hard way and when I to showed him there
> was a simple solution he exclaimed "Wow, thanks, that algorithm
> is amazing!" It wasn't amazing of course but it is one of the
> many times I see a newbie overestimate how complex a solution
> has to be.
>
> I think evolution would tend to find the simpler solutions first
> because a simpler solution is faster and uses less resources
> giving it a reproductive advantage over those organisms that
> use a more complex solution.

Yes, I think that logic is valid. If there is a simple solution, I think
evolution would tend to find it first. If there are many solutions, the
simpler one would have a much higher probability of being found first.

And like I said, I've now lost what the context of the above thread was.

But, human behavior is not hard coded. Babies are not born being able to
write Usenet posts and drive cars and program computers. So we aren't
dealing with a hard coded solution. We are dealing with evolution creating
a learning machine.

I've always argued that the learning machine (the design of it) is actually
very very simple. None of my arguments about complexity was an argument
saying the design of the learning machine was complex. So evolution never
had to find a complex solution. It only had to find the very simple
learning machine design. And that fits your logic as well, because if it
can solve the problem by hard coding a very simple learning machine, then
it will find that before it figures out how to hard-code humans so they can
be born knowing how to drive cars.

My argument is that the machine the learning machine builds, as it
learns, is too complex for a human to understand. So if you make the
mistake of trying to hand-code the work of the learning machine, you will
be screwed. It can't be done. The machine is too complex for humans to
understand and build. So, if all this is true as I believe it is, it means
the only way for us to create human intelligence is to do the same thing
evolution did - build a simple learning machine, and let it evolve the
complexity as it learns.

Now in terms of the high dimension issue, that just makes building the
learning machine hard. It's easy to write code that deals with high
dimension data. I never said it wasn't. Our computers deal with very
high dimension data all the time.

The only problem with high dimension data is writing a strong generic
learning algorithm that operates on high dimension data. We know exactly
how to do generic learning on low dimension data. People have even proven
mathematically that the algorithms are guaranteed to converge on the
optimal solution to the problem. But no one has figured out a good generic
algorithm that works on high dimension data. All the current low dimension
algorithms suffer the curse of dimensionality - they can't scale past low
dimension problems. The only programs that deal with high dimension data
are not generic solutions, they are only limited domain solutions.

The brain implements some sort of generic solution to the learning problem
on high dimension data.
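(For reference, the kind of provably convergent, generic, low dimension
learner I mean is ordinary tabular Q-learning. A minimal sketch, assuming a
small environment object with reset/step/actions - that interface is made
up for illustration, not any particular library's API:)

import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: generic, and provably convergent on small
    problems, but it stores one value per (state, action) pair."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q

The table is exactly where it dies: with n binary sensory inputs there are
2^n possible states, so the same algorithm that handles tic-tac-toe easily
is hopeless against a megabit-per-second sensory stream. That scaling wall
is the curse of dimensionality in one sentence.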


> [...]
>
> >> The number of possible combinations for tic tac toe is also
> >> too high for a small system -unless it makes use of constraints
> >> that *can* be found in the game by a non-standard RL that
> >> doesn't just use an exhaustive search algorithm.
> >
> >
> > Huh? The entire game state space is less than 20K states
> > (without any help of reflections rotations, or eliminating
> > invalid board positions). What computer these days can't
> > track 20K variables? You have to look pretty hard these
> > days to find a computer that is too small to do that.
>
> I think you missed the reason for the statements.
>
> I was making a relative comparison where we can do it both
> ways to see how even system too small to memorize all the
> game states could still learn to play the game and where
> those techniques, such as using temporal differences and
> ANNs, can be easily illustrated to show techniques for
> dealing with problems bigger than a system can solve by
> an exhaustive search.

ok.

> The fact that a modern computer can solve the tic tac toe
> learning by brute force by playing every combination wasn't
> the point being made. Have you forgotten the email exchanges
> we had when I wrote such a program to explore the use of
> temporal differences and then went on to explore the use of
> an ANN? It is better to start with a simple example to see
> if you do understand these things which means you can code
> them. I was going to try and duplicate td-gammon but didn't
> have the time.

Yeah, though I'm not sure if tic-tac-toe is really the right "example"
space to work in if you are trying to move beyond what TD-Gammon did.
Though there might be something useful to learn there.
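(For anyone who wants to repeat that exercise, the temporal-difference core
of it is tiny. A minimal sketch of a TD(0) value table over tic-tac-toe
after-states - the board encoding and move generation are left out and
assumed to exist elsewhere:)

import random

# V maps a board state (e.g. encoded as a string like "X.O......") to an
# estimated value.  Unseen states start at a neutral 0.5.
V = {}
ALPHA = 0.1

def td_update(state, next_state, reward):
    """The TD(0) rule: move V(state) toward reward + V(next_state)."""
    v_s = V.setdefault(state, 0.5)
    v_next = V.setdefault(next_state, 0.5)
    V[state] = v_s + ALPHA * (reward + v_next - v_s)

def choose_move(candidate_states, epsilon=0.1):
    """Pick the after-state with the highest learned value,
    exploring at random a fraction epsilon of the time."""
    if random.random() < epsilon:
        return random.choice(candidate_states)
    return max(candidate_states, key=lambda s: V.get(s, 0.5))

Swapping the V table for a small network, TD-Gammon style, is the obvious
next step once the tabular version works.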

> > You really need to learn to use the term "high dimension".
> > You failed yet again to get it right.
>
> Well perhaps we need to flesh out what the difference is
> when I talk about "high dimensions"? I often read others
> write about it the way I do. They all seem to me to talk
> about reducing the data to a manageable level for any
> given system.
>
> jc

Curt Welch

unread,
Aug 17, 2011, 2:08:10 AM8/17/11
to
Burkart Venzke <b...@gmx.de> wrote:
> Am 13.08.2011 02:53, schrieb Curt Welch:

> > And the AI brains
> > don't need vacations. They don't need to be paid.
>
> No vacation, not to be paid... If AI have human like emotions, won't
> they need something like this?

I think probably not. If we build them just like us, then yes, they will
need the same things we need, like a vacation, (and sleep).

The trick is that we will control their motivations, and we will give them
special motivations that are optimized for whatever work we want them to do
for us. Humans are motivated to for things like protecting their body, and
eating, and having sex. That's what they want to do. But they end up
having to do things they don't like to do, in order to get more of they do
want. We get up and go to work, when we would rather stay in bed, eat and
have sex, because we know that if we don't get up and go to work, the food
will soon stop coming, and the bed will also go away, and all sort of other
bad things can happen. Vacation is one of these treats we give ourselves
for all the hard work. It's part of the payout at the end of the road, for
doing all that stuff we didn't want to do.

But if we build an AI that is actually motivated to love cutting the
grass, it won't need to escape from grass cutting to "go do something fun".
Cutting the grass is what that robot sees as fun. Cutting the grass, to that
robot, is like having sex. It loves the fact that it gets to do it 24 hours
a day, every day.

So just because they are intelligent doesn't mean they need to have the
same motivations humans do. We don't want them to have the same
motivations at all. We want them to love being our slaves - and we will
build them just that way - as machines that love us for allowing them to
work for us.

> > When these sorts of machine become available, the value of human
> > intelligence, will drop to below the value of these machines (which is
> > only the $1000 capital cost plus the pennies a day for the power to run
> > them). Humans basically won't be able to find work.
>
> May be... would you appreciate it?
> Or we could have other work like controlling the machines.
> No (forced) work would be acceptable if we have enough money to live
> without work. But this needs another society where we have a real
> democracy (without the gap between the rich and the poor, also regarding
> to power).

Yes, I think when we get to that point, human society will transform into a
socialistic democracy where our only job, is telling the machines what we
want them to do for us (well, that and voting on the laws that control the
society). We become full time consumers and turn the production role over
to the Machines. The machines use capitalism to optimize their production
to fit the demand of the humans, and each human gets an equal allowance to
spend every day. They buy goods and services from the machines with their
money. The machines are taxed, but the tax money just goes back to the
humans to spend again. So the robots form a big intelligent production
system. There is no "working for a living" in this new society. It's
socialism done correctly, where humans don't have to be both efficient
producers and consumers at the same time.

> > Human doctor doing surgery?  No fucking way.
>
> Then, doctors may have more time to speak with their patients, more about
> their emotions (and problems with them) - AI as a machine should not be
> better than humans (at least at first).

Yeah, at first. But as technology develops, the AIs will be better at
everything, including giving psychological advice to humans.


> I don't know about prostitutes, but when love is an aspect, a human
> should make more sense.

I think you overrate human emotions as being "special" in ways they really
aren't. The only things the AIs won't be good at, is the biological stuff.
They won't be good blood or kidney donors. But they will figure out how to
grow and make artificial replacement kidneys for us.

Humans will have an edge at first for having the experience of what it is
like to be a human. It will take some time for the AIs to learn what it is
like to be a human, so they can give humans good advice, but in the end,
the AIs will understand us better than we understand ourselves. What
happens when the AI has been around for 1000 years and has been giving 50
generations of humans the same advice over and over? Will a human that's
only been alive 30 years have any hope of being a better adviser to humans
than an AI that's been doing it for 1000 years? Not likely.

> > Humans won't be able to work - the entire notion of "working for a
> > living" will go right out the window once these advanced AIs become
> > dirt cheap. Whoever owns the most machines, will be the one with all
> > the wealth. If you fail to get on board early, by buying and owning
> > the first machines, you will be screwed. The guys that get in first,
> > will use the machines to take over all markets - and will build bigger
> > and smarter machines, to make all the invest decisions for them. The
> > world will quickly become dominated by a few huge privately held, AI
> > corporations, that don't have a single human working in them.
>
> I hope we can stop such economical war(s)...

I do too. But it's already happening and true AI isn't even here yet. Many
of the richest people in America got that rich by building smart machines
and letting the machines do all the work for them (facebook, google). The
super rich are becoming more and more about who owns the best machines.
Factory automation has been advancing for centuries. It will only get
worse as true AI is developed and companies like McDonald's and Starbucks
have to go to Google to buy the AIs to automate their stores. All the low
end labor jobs start to dry up and if you don't have an advanced
engineering or management degree you won't be able to find work or make a
living for yourself. The poor get poorer, the rich keep getting super
richer. Society keeps advancing, but more and more people are being left
farther and farther behind. It's going to be a rocky transition because
people don't like, or understand, change.

> I have recently heard that the idea of the American dream ("dishwasher
> to millionaire") is questioned more than ever because of economic
> problems, e.g. overindebted house owners.
> Aren't you from the USA - can you say if this is true?
> (More critical people for a better society...)

Yes, I'm from the US. Hard to say what's really at work in the short term
here. It's hard to chase the American Dream in a bad economy. Lots of
people are just lucky to be able to eat.

But certainly with the housing boom, people were "cheating" to get the
American Dream (trying to get rich off the housing bubble), and now they
are paying for their mistakes.

There can certainly be too much expectation of an "American Dream" which
makes people feel worthless if they don't get rich, which makes them take
risks they should never be taking, which in the end, causes everyone
problems. Perhaps there has been too much of that happening at all levels
that led to this current economic depression, and to our nation's large debt
numbers.

> > I suspect there will be a real danger that future generations will
> > learn to like the AIs better than they like other humans. They might
> > not even want to have other humans around them, when they could
> > instead, have their "AI friends".
>
> Also I. Asimov has predicted it but only partially (such a society may
> die sooner or later).
>
> > What happens when one of the richest AI barons decides he doesn't
> > really like humans at all, and he's gained so much wealth and power, he
> > just takes over the whole world with an AI army, and kills everyone
> > except a handful of human slaves he keeps around in his "zoo"? Why
> > share the resources of the planet with billions of other humans, when
> > he can have it all for himself, and his 10 closets friends?
> >
> > To prevent a path like that from happening, society will have to make
> > some changes.
>
> Right. And we AI developers can influence it e.g. by giving the AI not
> the whole spectrum of emotions - and let forbid the others (like bad
> weapons).

Yes, but controlling AI is hard. It's not as easy as what Asimov suggested.
You can't just put an English language statement "don't harm humans" in
the hardware. Intelligence is an RL machine - and they are not controlled
directly, but only indirectly, by how you wire their motivations.
Controlling a machine indirectly through motivations is tricky business that
can backfire on you. The AI you motivated to love cutting the grass
killed you in your sleep because you decided you wanted the grass replaced
with stones, and he saw you as murdering the thing he loved most in the
world. But I'm sure, once the AI technology is developed, we will figure
out how to control it and minimize those sorts of bad side effects.

Just keep reading further down...

http://en.wikipedia.org/wiki/Strong_AI

read the section:
Origin of the term: John Searle's strong AI

Yes, but your "thinking" is actually happen at the level of neurons firing
pulses. You don't say you "think with your neurons" only because you
aren't aware that is what you are doing.

Just like the computer is doing its work at the level of 0's and 1's.
How _we_ think about the machine is at a much higher abstract level of
code execution, but what the machine is actually doing is flipping bits on
and off and allowing energy to flow in different paths through the machine.
The computer "thinks" in 1's and 0's.

> >>> The "symbols" that make up our language (words) are not a foundation
> >>> of the brain, they are a high level emergent behavior of the lower
> >>> level processing that happens.
> >>
> >> OK, "symbols" has different meanings or intentions. Every word is a
> >> lingual symbol with which we associate more or less (other) items
> >> (other symbols, emotions etc.).
> >>
> >> I think about a "stronger" (than "weak") AI which can act with and
> >> learn symbols like words. How far such a way to AI may work, I don't
> >> know.
> >
> > Right, to me a symbol is a class of patterns that can be detected by a
> > sensory system, that is unique from other symbols, in a set.
>
> Also "symbol" seems to be seen in different ways.

For sure.

> The usage I have learned is: In symbolic system, knowledge is
> represented explicitly in opposite to subsymbolic/neuronal
> systems/networks with (only) implicit knowledge.

I guess I don't know what explicit and implicit knowledge is supposed to
be. :)

> Burkart

Curt Welch

unread,
Aug 17, 2011, 10:35:02 AM8/17/11
to
casey <jgkj...@yahoo.com.au> wrote:
> On Aug 15, 2:58 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
> wrote:
> > [...]

> > What, then, of humans who suffer similar memory problems?
> > Are they necessarily unintelligent? Or, really, what of
> > the countless normal people we encounter only briefly in
> > our daily lives? I think there are many circumstances
> > where intelligent behavior can be demonstrated without any
> > learning.
>
> Intelligent behaviors are the result of learning, be it the
> result of changes in the brain as a result of experiences
> or as a result of changes in brains due to the selective
> process of evolution.

>
> So even if a person loses their memory or an animal is 100%
> innate in its sensible actions the behavior itself has been
> the result of a learning (or evolutionary) process.
>
> The behavior of a chess playing program is the result of a
> learning process in the brains of the programmer and the
> chess players who provide information to those programmers.

>
> So without learning no new behaviors we might call intelligent
> can ever arise. And learning is the result of a feedback
> process involving an evaluation to select the changes.

>
> So in that sense all intelligent behavior involved learning
> at some stage. Without learning there is no intelligence.

Sounds good to me! :)

> JC

Doc O'Leary

unread,
Aug 17, 2011, 11:33:14 AM8/17/11
to
In article <j2erhm$vrd$1...@news.albasani.net>,
Burkart Venzke <b...@gmx.de> wrote:

> Am 15.08.2011 20:20, schrieb Doc O'Leary:
> >
> > Right, so I'm saying we *must* do that grunt work matching in the first
> > place in order for us to have any reasonable chance at solving the
> > problem. I want to get away from it being an issue of top-down vs.
> > bottom-up and treat it *independently* as a fundamental property of a
> > system in the same way that computability is a fundamental property.
>
> Intelligence as a fundamental property? What do you expect of or combine
> with a "fundamental property"?

I don't understand the question.

> I hope that an AI will pass the Turing Test some day without any trick.
> Perhaps we should define/try something like a "(little) children"-Turing
> Test where we don't compare the system with an intelligent adult but
> more with a child.

In a way, that's what I'm trying to get at. I'm actually going to the
extreme end of "little", by asking the question of what *is* that
(potentially) infinitesimally small division that separates intelligent
behavior from unintelligent behavior.

> I think that learning can be defined easier and on the other hand
> intelligence is less clear.
> If you base something on an unclear notion, the whole think is more
> difficult to solve or explain at least.

Which, again, is why I see the main issue to be clearing up what we mean
when we say something is intelligent.

> > Without the selection pressure it applies,
> > "learning" is nothing but an unsatisfying random walk.
>
> Not necessarily. Learning can be directed by one or more humans (as (a)
> teacher(s)).
> As AI is a machine, *we humans" want to be satisfied.

Intelligence is not merely learning what the satisfies the teacher.
Sometimes the teacher is wrong. An intelligent system should be able to
detect and correct that.

> Right. Good criteria for comparing different learning systems are a good
> idea. I am not sure if there exist some that are good enough for you ;)

The problem is not that I'm asking for any extraordinary "good enough"
level. The problem is that the "learning" camp here hasn't even begun
to look into it. Without that starting point, there is no reason to
conclude that any form of learning is equivalent to intelligence.

Curt Welch

unread,
Aug 17, 2011, 12:20:34 PM8/17/11
to
casey <jgkj...@yahoo.com.au> wrote:
> On Aug 15, 1:59 am, c...@kcwc.com (Curt Welch) wrote:
> > I have no clue what I'm going to write in one of these posts,
> > until I'm done writing. I make it up as I go. I can't
> > predict what my brain is going to make my fingers do next.
>
> That is only true in detail. I can predict in general what
> you will write about. Dualism. RL.

Exactly. Our brain "understands" by looking for temporal correlations in
sensory data. You can't predict exactly when I will next write about
dualism or RL, but from past experience, you have seen a high correlation
by posts with the "Curt Welch" name on it, and endless drivel about people
being misled by Dualism and RL being the foundation of intelligence.

When we look at human behavior, we do spot correlations. But they are only
the tip of the iceberg of the full complexity of human behavior. It's like
what we see when we look at clouds. We can spot outline shapes, but we
can't understand the full complexity of every detail of the cloud.

The brain doesn't need to fully understand anything. If it can find any
correlations (constraints) it can make use of those, to gain higher future
rewards. The correlations the brain does find, and make use of, we call
"our understanding". The stuff we don't understand we call "the noise we
ignore". We get so used to knowing what we know, and pretending everything
we don't know doesn't even exist, it can leave us believing we know
everything there is that's worth knowing.

But when trying to build a machine that produces human behavior, you can't
just ignore that 90% of behavior which is under the water and pretend it's
not there. We have to explain how all behavior is generated, including all
the mistakes we constantly make.

> > A strong reinforcement learning machine will make action
> > choices by, in effect, doing a weighted average of all past
> > experience, biased by how close each past experience, was
> > to the current context, and pick an action choice, based
> > on the sum total of all that information.
>
> I don't think it is a simple "sum total".

I don't either. It can't be. It's stored in some advanced way that makes
the stored information context sensitive. What "answer" comes out depends
on the context.

> Also there is
> a threshold effect that prunes and selects turning a
> perhaps 0 or 1 into an actual 0 or 1 so the detail from
> that point on isn't retained.

Right, that's just the feature extraction. All the raw data is classified
into different sets. We look at a cloud and it comes out as "cat", or "tea
pot on top of an elephant". Though my examples are the high level end
result of the low level classification happening, the point is the same,
the brain has to determine how to react to the data, and that reaction is
the feature extraction system at work (aka behavior selection system at
work).

> > Everything you have ever done, or experienced, is likely
> > having some effect, on everything you are doing right now.
> >
> >
> > How could we possible "predict" what the output of such
> > a massive statistical process based on billions of stored
> > parameters, is going to do next? We have no hope.
>
> We can't predict that in detail

Right, but if we can't predict the details, then we don't have FULL
understanding of it. I can predict all the details of the function f(x) = x
+ 2. I can predict exactly what the output will be for any input. I don't
just have some high level understanding of it; I fully understand that
function. We don't, and can't, fully understand something as complex as
the type of statistical process needed to produce human-like intelligent
behavior. It's too complex.

> but we can in general
> otherwise the statistical processing wouldn't have any
> purpose. Its purpose is to make general predictions.

Yes, the general prediction it makes is which classification of sensory
context data into behavior output data is most likely to generate the
highest future rewards. It doesn't need to make perfect behavior choices
to be useful. If it's able to find some statistical correlations, and use
them to gain a little higher rewards, then it's done something useful.

But, when we look at the behavior of a human, we can't fully understand why
the human is acting that way, because our statistical process, is not able
to spot ALL the correlations that exist in the behavior of the other human.
That's because it's too complex. The correlations are there. We just
don't see them. There's likely a correlation between what I might have
read this morning, and what I ended up writing in this post. The stuff I
read this morning, was some small part of the very complex chain of
causality that led to me witting what I did now. But so is a million other
things I've experienced in my life. You, watching my behavior, can't see
most of the correlations because they are too complex (and because you
don't know what I read this morning, so you don't even have access to the
data needed to see the correlation).

My complex human behavior has a cause. Every little last bit of my complex
human behavior has a cause. I'm just a physical machine, reacting to a
complex physical environment. But you have no hope in seeing and
understanding that cause, because the physical system that makes this
happen is way way way too complex for us to understand directly.

We can only understand the high level principles at work (like RL). But
understanding enough of the high level principles will likely be enough
for us to build a similar sort of learning machine. How it reacts to a life
time of experience will, again, make its behavior too complex for us to
understand, but we can understand how to build the machine, since we don't
have to specify all the learned weights - it will set those on its own
through learning.

> > We can only predict the tip of the iceberg effects by
> > doing a little high level philosophizing and hand waving.
>
> Intuitive ideas are fine but they are not the only kind
> of predicting we can do and as religion has shown without
> a rigorous scientific process the intuitive predictions
> (hand waving) can turn out to be completely wrong.

Right. But science is just an extension of what the brain is already doing.
It's not that the "intuition" process is flawed, as much as it's just
limited in scope. The rigors of science are there just to extend the scope
of the process further. For example, our intuition is based on a fairly
short term memory. It can't spot correlations in events that are spread
too far apart in time. But we can fix that, by recording events on paper,
and then studying them, by bringing the events close together in time so our
intuition system can spot it. We record data for a year, and then spend 10
minutes looking over all the numbers at once, and when we do that in a 10
minute span, our brain is able to see correlations it wasn't able to see
in real time.

Our intuition system is the core of our science, not what science is used
to "fix". Science is used to extend that core technology to a wider and
higher resolution scope.

> > TD-Gammon works in a way that is far more similar to how
> > humans play games. It uses "gut instinct" to pick moves
> > using a large statistical process that merges together
> > all past game playing experience.
> >
> > What TD-Gammon doesn't do like humans, is to learn how
> > to use language to talk to itself about what moves it
> > should make.
>
> Essentially it has no understanding of why its "gut instinct"
> might work. It can't explain itself or use logic. This is
> the weakness of an associative network. Association and doing
> statistics isn't wrong it just isn't sufficient by itself to
> explain the powers of the human brain.

It fully explains all of it.

Our logical thinking process is created by that underlying intuition
system. Logical thinking is not a separate module that is used in addition
to our intuition. It's something the intuition module learned to do as a
useful behavior.

We talk informally about intuition and logic being two different things,
but that talk is just about patterns of behavior. Sometimes our intuition
will choose to produce rational language-based behavior, and allow
our actions to be controlled by the results of that process, and sometimes
our intuition will choose to allow our actions to be controlled by our gut
instinct.

For example, we are driving, we are lost, and we come to an intersection.
Which way do we turn? We have a gut instinct telling us to go straight.
But then we start to talk to ourselves, and say something like, "we were
headed north, but then we were forced to make one right turn, and two left
turns", so we need to turn right now to get back headed north again
(assuming all the streets are straight and the turns were 90 deg turns).

How do we act? Do we follow the results of that rational language based
logic, and turn right, or do we use our gut feeling and go straight? The
logic might be wrong because we don't know if the streets were all really
straight - they curved a bit, but did the net effect of the curves work out
to straight or not?

Our _intuition_ decides for us whether to trust our intuition, or trust the
results of that little language exercise.

At the same time, it was our intuition, that decided to produce the
language exercise in the first place because it's a behavior tool the
intuition system has learned to be useful in these sorts of contexts.

The underlying intuition system is driving all of it. It's deciding what
the brain is going to do next. It decides whether it should produce some
language in its head, or step on the gas and drive straight.

TD-Gammon is on the right track, because all its behavior is driven by
an intuition system like that.

It's not as advanced as the human intuition system, because the only
behavior options that little 400 node network had to pick from, were
backgammon board moves. It, unlike our brain, didn't have the behavior
option of "produce rational English speech about how many turns you have
made".

This talking to ourselves is a very powerful tool, but before any AI could
use that tool to help it make better action choices, it must have enough
learning power to learn to talk.

We learn behaviors because they have been shown useful in some context.
But once we have them as a learned behavior, the intuition learning system
can refine its understanding of when it's best to use that behavior. It
can learn that the behavior which was learned in one context is also
highly useful in some other different, but similar, context.

When we learn the physical motions needed to search for our baby bottle,
and grab it with our hands, we can then later learn to re-use those same
learned behaviors, when we need to search for our car keys.

Searching by turning our head and scanning with our eyes becomes a useful
tool in our behavior toolbox that the intuition system learns to take
advantage of for other contexts. That way, it doesn't have to re-learn,
from the ground up, how to search for car keys. It can re-use a lot of
what was learned when searching for the baby bottle.

Rational language and thinking is just more learned behavior, that our low
level behavior selection hardware (our intuition system) learns to re-use
in appropriate situations.

TD-Gammon is on the right track, because its core is a statistical
(intuition driven) learning system. It can't equal the level of
"understanding" the brain has, only because it can't equal the level of
complexity in behavior the brain produces - it's not big enough to learn to
make its lips move and produce language. Without all those behavior tools
to work with, especially without the behavior tool set of language, it
won't have the same level of "understanding" of the game of Backgammon we
do. But it's only a 400 node network vs a 100 billion node network in the
brain, so how many advanced behavior tools can you expect it to learn?

The point is, even though humans produce lots of different types of
behavior, from car driving, to rational language use, it's all just learned
behavior, and all learned behaviors, must be selected by a single, large,
underlying, reward trained, statistical system that decides which of all
those behaviors, it should "pull out of the toolbox" at each instant of
time.

It's _ALL_ just one big association and statistics problem.

Doc O'Leary

unread,
Aug 17, 2011, 12:37:43 PM8/17/11
to
In article
<ede179bf-f440-4f0f...@e35g2000yqc.googlegroups.com>,
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 17, 3:12 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
> wrote:
> >
> > I don't know that that's necessarily true for an AI, though.
> > I'll certainly grant you that our brains have a privileged
> > starting point that is rooted in evolution, but it seems
> > possible to me that, if we generalized our understanding of
> > intelligence, we might be able to go back farther to some
> > kind of "stem cell" or even "proteins and amino acids" base
> > that can scale up to the human level.
>
> Which I believe would amount to building evolutionary
> networks rather than a simple learning network.

But, abstractly, there is no difference. How the network gets formed is
a computable problem. The mechanisms might vary, but if we can subsume
them all under a general understanding of "intelligence", the AI would
then embody whatever complexity was necessary to get there.

> However I believe an efficient network in terms of use of
> resources that fine tune to do walking will be different to
> one that is fine tuned to see in stereo and any general
> solution that can do both will fail in the biological world
> because of the need for more resources and an impossible
> long time to develop those skills using any kind of general
> purpose evolutionary net in the individuals lifetime. It will
> fail to compete with those organisms that have a tool box
> full of already working skills that only need adapting to
> a situation using learning.

Sure. That's sort of a "solved problem" intelligence that evolution
passes on. But I see no reason that it necessarily needs to be encoded
in a "generational" form like DNA; an AI need not be so limited.

> > Not just the same kind of path, but the same kind of
> > *mechanisms*. Darwin's breakthrough idea was not evolution,
> > but how natural selection made it work. AI is missing that
> > kind of explanatory power, and I think the result is that
> > we're also missing out on how that underlying concept of
> > intelligence would tie those differing complex systems
> > together.
>
> Well I think natural selection or reinforcement learning
> explains how a system can become smarter unless you are
> alluding to something mysterious like sentience.

Without a solid definition for intelligence, I'm not even going to make
the leap to sentience. Not all "learning" turns raw data into useful
information. Even ambitious projects like Cyc have shown limited
success in turning information into any useful form of intelligence.

> Intelligence is a *description* of behaviors we call "smart".
> Intelligence is not *something to be found*. We know that
> smart behaviors can appear from a selective process.

That is the misstep I keep talking about. So long as we choose to see
intelligence that way, I think we're missing something critical about it
that keeps us from creating an AI. I think we need to get away from the
grand notions of "smart" and get back to, like computability, a basic
notion of what makes data/computation meaningful in such a way that a
system built with those principles can be described as smart.

So just join me for a second on my thought experiment. Take a step back
and say "I don't really know what intelligence is". Is it necessarily a
big, complex system? If not, how small a thing can it be? What is the
*minimal* description of "behaviors" that can be called "smart"? What
properties can we abstract from those behaviors to simplify/generalize
our description? What other systems exhibit those behaviors?

Curt Welch

unread,
Aug 17, 2011, 1:20:10 PM8/17/11
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 15, 7:18 am, c...@kcwc.com (Curt Welch) wrote:
> > So you can talk about a given behavior and how "smart" that
> > behavior is, or we can talk about the process that created
> > the behavior.
>
> And they are not the same thing which is why I am critical
> of your notion that learning = intelligence when we use the
> words to describe two different things.

Our use of the word "intelligence" in our society is VERY SLOPPY. The
reason it's very sloppy, is because no one really understands what they are
talking about. But, once we understand that intelligence is the result of
a reinforment learning process, we can choose to be less sloppy, which is
what I suggest, when I say a better definition of intelligence is
"reinforcement learning".

I'm NOT trying to say my definition is a perfect match for the sloppy way
people currently use the word. It's certainly not consistent with how
people tend to use it, because my definition makes TD-Gammon clearly
intelligent, whereas most people would say that TD-Gammon was not an
example of intelligence. Many would say it has NO intelligence at all.

You are being critical because my "new and improved" definition of
intelligence doesn't fit the sloppy use. You missed the point by being
critical like that. My point was to improve and refine the concept of
"intelligence", not to just accurately describe sloppy use by others.

I like the term "smart behavior" to talk about the behavior learned,
whether it was learned by evolution hard coding it into an insect, or
learned dynamically by a learning brain. Our behaviors indicate how
"smart" we are, but our ability to learn and apply smart behaviors
indicates how intelligent we are. That's how I like to use those words.

> The word "intelligence" is applied to how "smart" it is not
> to the process "learning" that created it.

Yes, I know how sloppy people are with the word. I'm creating a new and
better definition that is a better fit with the underlying process that
creates it.

> Intelligence is
> the painting not the painter.

Smart is the painting, intelligent is the painter (by my new and improved
definitions).

People do at times, actually use the words as I define them. They just
happen to also use them other ways.

> To say learning = intelligence
> just confuses the reader who knows others mean it to be
> smart behavior.

I only say that in the context of these discussions John. It's not like
I'm trying to teach 3rd graders what the "Welch approved" definition of
"intelligence" is.

You should be able to use your context sensitive brain to know that when I
write the word here in cap, it has a different meaning than when most other
people write the word - and even when I use the word outside the context of
these types of discussions.

> We may not know the actual source of the
> smart behavior only that it was at some point in time the
> result of a learning process.
>
> The problem is the word "intelligence" has been given a
> status it doesn't deserve and thus everyone wants to
> "explain" it when there is nothing to explain. What is
> to be explained is the self organizing mechanisms that
> produce behaviors we call intelligent.

Yeah well I agree. That's part of the problem of defining "intelligence".
If you try to define it only based on how it's used, you can't create a
good definition because its use is sloppy. But if you tweak its use a
little bit to fit the hardware that is the source of intelligence, then we
can create a very simple and clean definition, aka "intelligence is a
reinforcement learning process".

> I think the high status of the word is also related to
> the fact many see it as meaning "being alive".

DUALISM! I get to bring DUALISM into the post again! :)

Yes, and not only do they see it mixed with "being alive", they see all
this confusion over dualism mixed in there as well. They see it as
connected with "being conscious". It's one big confusing ball of wax known
as "being human" that people don't know how to untangle.

> > Most AI research has been working in duplicating the
> > "smart" behaviors, instead of duplicating the process
> > that gave rise to those behaviors in the first place.
>
> True. But that is itself a learning process.

Yes, true. But a learning process that doesn't have the power to learn
what it's trying to learn (human behavior is beyond human understanding).

> > Spiders got smart because the process of evolution that
> > built them is intelligent.
>
> I would say it was a learning process. No need to equate
> it with the end product, intelligent behavior.

I was just refusing to be sloppy and keeping "intelligent" to mean
"learning process", and "smart" to mean, "what was learned". Follow the
context - this is Curt speaking and I speak in a language called curt-ese,
not English.

> > TD-Gammon got smart on it's own. It's ability to play
> > a good game of Backgammon was not programmed into it by
> > an intelligent creator. Instead, intelligence itself
> > was programmed into TD-Gammon, and TD-gammon used its
> > on intelligence, to learn how to make really good moves
> > in the game of backgammon.
>
> I would say a learning algorithm was built into td-gammon
> that resulted in intelligent behaviors which remain smart
> (intelligent) behaviors even though the learning algorithm's
> ability to make it smarter has leveled out. The learning
> system moved to some stable state (maximized reward) and
> stopped. All behaviors can be reduced to systems moving
> to stable states.

Humans don't ever stabilize. I don't act the same way today that I did
yesterday. That's because the environment is too complex to fully
understand. The environment keeps changing in ways we can't predict, so we
have to keep re-adjusting to the new environment. When a small system
tries to learn a much larger and far more complex system, it will never
stabilize. We are unstable learning machines because the environment is
far too large for us to stabilize on any fixed set of behaviors.

We certainly do learn (change) much faster when we are young, because our
behaviors are really bad when we are young (bad in the sense of being poor
at getting us rewards), and they do tend to change more slowly as we reach
our limits of learning. But once the environment shifts, like when we move
to a new town and get a new job, we suddenly have lots of re-learning
taking place.

> Learning systems have been around since the start of AI.
> The first game playing learning system I think was Samuel's
> Checker Player.

Might be. It's certainly the best known early work on learning applied to
games.

> http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node109.html

Yes it was even a big subject in the early days. They understood how
important it was. It even took up a big part of Turing's famous "Turing
test" paper in 1950. It was the subject of Marvin Minsky's PhD thesis I
believe (his SNARC neural network learning machine). The perceptron was a
big part of early AI.

But the problem is that no one figured out how to do learning correctly,
and no one had any real clue how to make it work, so they moved on to
different approaches and gave up on learning. I think a lot of people
ended up with the belief that generic learning was not just hard, but
impossible. And that the only possible path was innate modules to do all
the heavy lifting, with learning features to "tune" the heavy lifting work
done by the innate modules.

In those days, people didn't even understand how to train a multilevel
network because back propagation hadn't become well known yet. It took
them from around 1950, when the SNARC and Perceptron were being created as
single level learning networks, to around 1980 for people to understand
at least one approach to making multilevel networks learn. And that was
supervised learning, not reinforcement learning. It wasn't until the mid
90's that TD-Gammon and the like showed how these multilevel networks could
be used to build a reinforcement learning system that showed hints of real
intelligence developing. So we are looking at 20 year gaps between each of
these small but important steps in how to build strong generic
reinforcement learning systems. And we are not at the end of the road yet,
because the current neural networks, like the one used by TD-Gammon,
really aren't the correct solution. They don't do feature extraction
correctly in my view.

Thinking in terms of these steps, I believe we are really just one more big
step forward in learning networks away from hitting the "jackpot" and
finally having generic learning systems that work as well as the brain.
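
To make the "single level learning network" idea concrete, here is a rough
sketch of the classic perceptron update rule in Python (the toy data and
numbers are made up for illustration; this is just the general kind of
one-layer rule from that era, not anyone's actual historical code):

  # Minimal perceptron sketch: one layer of weights, nudged a little
  # after every mistake until the outputs match the targets.
  def train_perceptron(samples, epochs=20, lr=0.1):
      # samples: list of (input_vector, target) with target in {0, 1}
      w = [0.0] * len(samples[0][0])
      b = 0.0
      for _ in range(epochs):
          for x, target in samples:
              out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
              err = target - out      # -1, 0 or +1
              w = [wi + lr * err * xi for wi, xi in zip(w, x)]
              b += lr * err
      return w, b

  # Toy example: logical AND is linearly separable, so one layer is enough.
  data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
  print(train_perceptron(data))

A single layer like this can't learn something as simple as XOR, which is
roughly why things stalled until back propagation made multilevel networks
trainable.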

Curt Welch

unread,
Aug 17, 2011, 1:28:45 PM8/17/11
to
casey <jgkj...@yahoo.com.au> wrote:

> > On Aug 15, 2:15 pm, c...@kcwc.com (Curt Welch) wrote:
> > [...]
> > It's the same as understanding sorting, but only as a bubble
> > sort, but not knowing how to write a quick sort, but knowing
> > that to solve the problem, someone has to figure out how to
> > write a sort that runs faster.
>
> > Reinforcement learning is stuck in the same place right now.
> > People know the brain is using some magic "quick sort"
> > technique to do learning at a level of performance no one
> > knows how to build a machine to equal.
>
> A quick sort vs. a bubble sort isn't anywhere near the difference
> between current RL programs and what the brain does.
>
> > So whether its "trick" is something we can implement in software,
> > no one knows, because know one knows what the trick is yet.
>
> Or there is no single "trick". There may be many innate "tricks"
> built up over an evolutionary time scale.
>
> One of those "tricks" in predators is stereo vision. This will
> appear in humans at about three to four months.
>
> I would say that other abilities such as chasing a rabbit through
> a real forest develop as a result of fine tuning innate motor
> generators we are all born with.
>
> Contrary to your bias toward believing this is easy enough to
> learn in real time (less than an hour for an antelope or bush
> turkey)

Don't be silly. Those aren't learned. They are mostly innate with a
little learning added for tuning.

Humans take a year to learn to walk. That's what happens when it's mostly
by generic learning hardware instead of by innate hardware.

> in fact it is hard and calculus is an easier skill to
> have and only appears hard because it isn't innate and has to
> be learned by a slow serial symbolic thinking process instead
> of the fast parallel hardware used for seeing in stereo vision or
> running through a forest.

The fast parallel GENERIC LEARNING HARDWARE. We don't have any serial
symbol learning hardware. We use our fast parallel hardware to learn the
simple serial tasks. :)

> JC

casey

unread,
Aug 17, 2011, 2:48:02 PM8/17/11
to
On Aug 18, 2:20 am, c...@kcwc.com (Curt Welch) wrote:

> It's _ALL_ just one big association and statistics problem.

All brains capable of learning make associations and do statistics
but I don't believe that is sufficient to explain how the high
dimensional data is handled or that the solution is in the form
of a 100% simple blank slate monolithic network.

I have come to different conclusions, based on reading the work
of others and on seeing, by trying to duplicate them (learn to do
them myself), the problems a learning system would have to contend
with and solve for itself.

Abstraction amounts to throwing away the details that you seem
to think we must "fully understand" and this reaches its extreme
in the abstract world of the scientist. Abstraction means taking
out only what is salient to the problem to be solved and leaving
all the other details behind.

Evolution learnt what to throw away (or keep) by trial and error, in order
for the first neural systems to have any chance of improving the
reproductive success of their owners using touch, sound and vision.

To handle a complex world small insect brains have to be highly
specialized although when needed they have developed some very
sophisticated albeit specialized learning abilities. Larger brains
with spare computing power are capable of implementing more innate
skills and generalized learning but I do not believe they have
any simple single solution to the high dimension problem anymore
than an insect does.


JC

casey

unread,
Aug 17, 2011, 2:53:32 PM8/17/11
to
On Aug 18, 3:28 am, c...@kcwc.com (Curt Welch) wrote:

And what complex behavior they have if you agree it is innate.
So what is the issue with it being innate and fine tuned in humans?

> Humans take a year to learn to walk. That's what happens when it's mostly
> by generic learning hardware instead of by innate hardware.

Maturation, not learning alone. Humans are born early due to the need to
grow a bigger brain after being born.


> > in fact it is hard and calculus is an easier skill to
> > have and only appears hard because it isn't innate and has to
> > be learned by a slow serial symbolic thinking process instead
> > of the fast parallel hardware used for seeing in stereo vision or
> > running through a forest.
>
> The fast parallel GENERIC LEARNING HARDWARE.  We don't have any serial
> symbol learning hardware.  We use our fast parallel hardware to learn the
> simple serial tasks. :)

Language is serial even if it uses parallel hardware.

Read Dennett if you don't understand what that means.


casey

unread,
Aug 17, 2011, 3:37:33 PM8/17/11
to
On Aug 18, 3:20 am, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
>> Intelligence is the painting not the painter.
>
>
> Smart is the painting, intelligent is the painter (by my new
> and improved definitions).

Disagree. Smart is an action not a thing. Paintings aren't smart.
The person who does the painting may be called smart but that is
because they are capable of *doing* something smart like painting.

You have always had trouble with this abstraction of the action
from the thing doing the action, haven't you? Where are those
"powers of abstraction" you claim to have?

>> The learning system moved to some stable state (maximized
>> reward) and stopped. All behaviors can be reduced to systems
>> moving to stable states.
>
>
> Humans don't ever stabilize.

But they slow up their rate of learning. Sure you can learn
some new things each day but most of your core learning took
place when you were still young.


> I don't act the same way today, that I did yesterday. That's
> because the environment is too complex to fully understand.
> The environment keeps changing in ways we can't predict, so
> we have to keep re-adjusting to the new environment. We are
> unstable learning machines because the environment is far
> too large for us to stabilize on any fixed set of behaviors.
>
> But once the environment shifts, like we move to a new
> town and get a new job, we suddenly have lots of re-learning
> taking place.

There is much truth in that but it applies to a network as
well. If you change the input from backgammon to some other
game the learning system will start moving again. However
most of the changes in our life are not a completely new
game so the moves really only adjust a little. The bigger
the change the less able an older person is to adapt.

Most of what we know is applicable to a new town and new job.
The big learning events like learning a language are the same,
the walking is the same, navigating the buildings is the same,
human interactions are the same. Any change is small.

The changes might be in some skill or new knowledge but even
that isn't learnt as well as in the past. We appear to learn
more but it is all leveraged on the bulk of what we learn
early in life.


> I think a lot of people ended up with the believe that generic
> learning was not just hard, but impossible. And that the only
> possible path, was innate modules to do all the heavy lifting,
> with learning features to "tune" the heavy lifting work done by
> the innate modules.

We still await your magic net to prove otherwise ;)

jc

casey

unread,
Aug 17, 2011, 3:42:18 PM8/17/11
to

casey wrote:
>> Intelligence is a *description* of behaviors we call "smart".
>> Intelligence is not *something to be found*. We know that
>> smart behaviors can appear from a selective process.
>
>
> That is the misstep I keep talking about. So long as we choose
> to see intelligence that way, I think we're missing something
> critical about it that keeps us from creating an AI.

Well I think we have created AI but not at a human level.

> I think we need to get away from the grand notions of "smart"
> and get back to, like computability, a basic notion of what
> makes data/computation meaningful in such a way that a system
> built with those principles can be described as smart.

It is a self organizing process (learning/evolution) that enables
the organism to behave in such a way it can solve problems that
enhance its reproductive success.

> So just join me for a second on my thought experiment. Take
> a step back and say "I don't really know what intelligence is".

But I do know what it looks like. If you can solve problems I
can't then you have more intelligence than I, that is, you can
solve more problems than I can.

> Is it necessarily a big, complex system? If not, how small
> a thing can it be?

I don't see any "THING" in an intelligent system.
All I see is neurons in brains and logic gates in computers.
They can *behave* in ways we call intelligent.

JC

Curt Welch

unread,
Aug 17, 2011, 4:05:19 PM8/17/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110814171800.065$P...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > But where is the intelligence in that? How is knowing the state of the
> > tic tac toe board fully intelligent?
>
> As I said, I understand how it's a foreign concept, but let it sink in.
> It's *not* "knowing the state" that is intelligent, it is what the
> system *does* with that knowledge that, to me, should better serve as an
> indication of intelligence. The intelligence is in the processing, not
> in the data being processed.

I've lost the context of what I wrote above. So I don't even know what
point I was making at the time. :)

But, in relation to your reply, I think building a strong RL machine is
really two problems in one. The first is building a good internal model
that efficiently represents the state of the environment from the raw
sensory data. The second is doing the reinforcement learning using that
state model. The reinforcement learning is responsible for assigning
values to the states.

The first step is what allows the system to "understand" the true state of
the environment from all the chaos of the raw sensory data. But the first
step doesn't answer the question of what the agent needs to do in the
environment. That is answered by the reinforcement learning, which gives
the machine purpose, and a high level goal - maximize the reward signal.

Without the second part, the machine has no goal and no way to "act
intelligently". But without the first part, the quality of the actions
would be nearly worthless. If it doesn't have a good internal model to
learn from, then it can't learn. A bad model makes it "blind" to the true
state of the environment.

The model building is what causes classical conditioning. The reward
maximizing is operant conditioning. Both have to be solved correctly
before we will see machines acting like humans. I think the second is
already well understood, and it's the first step that has eluded everyone
for so long.

I think both are equally important for creating "intelligence". If the
machine can't recognize the light is on, it will have no hope of learning
to push the button.

Currently, reinforcement learning works well if some intelligent human
first figures out how to hard code some mapping from sensory data to an
internal model. The structure of the internal model is conceived, and
implemented, by the human. RL is then applied to that model, and if the
model is good, the learning is good.

But for general intelligence, that model has to be built automatically and
change dynamically. When we learn to play a new game, for example, the
brain learns to construct an internal model to represent the important
state details of the game. Without that high quality automatic model
construction at work, reinforcement learning has no good understanding to
work from.
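
Just to make that two-part split concrete, here's a toy sketch (the
encode_state() function below is a hand-coded stand-in for the hard,
unsolved model-building step; nothing here is meant as a claim about how
the brain or any real system actually does it):

  from collections import defaultdict

  def encode_state(raw_observation):
      # Part 1: the "internal model". Here it just rounds the raw numbers
      # into a coarse state. Building this mapping automatically is the
      # part nobody knows how to do well yet.
      return tuple(round(x, 1) for x in raw_observation)

  values = defaultdict(float)   # Part 2: RL assigns a value to each state
  alpha, gamma = 0.1, 0.9

  def update_value(raw_obs, reward, next_raw_obs):
      s, s2 = encode_state(raw_obs), encode_state(next_raw_obs)
      # plain TD(0): move V(s) toward reward + discounted V(next state)
      values[s] += alpha * (reward + gamma * values[s2] - values[s])

If encode_state() throws away the wrong things, the value learning on top
of it is working blind, no matter how good the RL algorithm is.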

> > It still must act. So what controls how it acts? It could have it's
> > entire behavior set hard coded into it's hardware - such a a tic tac
> > toe porgram that just had a big table that said for this board
> > position, make this move.
>
> Right, just like modern chess programs have databases of opening and end
> game moves. So, as I have argued from the start, intelligence needs to
> be defined in a way that *does* address the issue of "what controls how
> it acts".

Sure. And you don't think that "its actions are controlled by a
reinforcement learning process" starts to address that question?

> > Such a machine, built to look like a human, could act exactly like a
> > human for its entire life - assuming it had enough hardware to code
> > that entire life time of behavior. So externally, we could not tell
> > the difference between it, and a human. And if course, that would
> > assume however the system was coded (whoever coded it) was able to
> > perfectly predict the entire life of this human.
>
> Indeed, that is the proposition of the Turing Test. And I would agree
> with that *as a test*, but that is in *no* way useful in trying to
> determine how to actually design the system in the first place!

Right. It tells us what it has to do, but not how to structure the
machine.

A lot of machines we design and build have a fairly direct mapping from
internal machine feature to external behavior. So we build these machines
by first figuring out what we want them to do, then we build internal
features that control each external behavior. To reverse engineer this
type of machine, we identify the classes of behavior, and build features to
implement each behavior. You want the machine to walk, you add a walking
module. You want the machine to reach and grab something, you add the
"reach and grab" module.

I think the brain, however, is a type of machine where the external
features don't look much of anything like the internal mechanism. And
this difference is one of the things that makes reverse engineering AI so
hard. What it does looks nothing like the hardware that makes it happen.
There are almost no hints about how to structure the internal hardware by
looking at the external behavior.

We can test the external behavior of any machine we build to see if it
matches the typical behavior of a human, and that will tell us if we got
the internal hardware correct. But such a test doesn't tell us what we
got wrong when the behavior fails to match. So the Turing test tells us
when we got the answer correct, but it gives us no insight into what we
got wrong. That, combined with the hardware not mirroring the behavior,
makes it a very long and slow trial and error search to try and figure out
the right design for the hardware. We only know if we are on the right
path after we are done, and see it acting correctly.

> > Sure. And some animals, like insects, seem to be mostly that - hard
> > coded little machine with little to no learning. They do "smart"
> > things, because evolution made them do it. But evolution is itself, a
> > reinforcement learning process that has long term memory - stored in
> > the DNA. Evolution itself is intelligence - just a very slow learning
> > type of process.
>
> But "learning" on the evolutionary level doesn't seem like it needs to
> be at all tied to intelligence in an individual creature.

Well, humans have dynamic intelligence in that we learn and create new
things after birth. Most of the intelligence in our behavior was learned
after birth, because we have so little "intelligent behavior" pre-wired
into us.

Other animals, have lots of pre-wired smart behaviors in them, and that
"intelligence" was created by evolution, not created by a learning process
that happened after birth.

> Species
> survive and reproduce with wildly different degrees of cognitive
> abilities. It may be that what we dismiss as instinct *should* properly
> be studied as intelligence without learning.

Yeah, I think that's correct. At least as to the "no learning after
birth" part. But not really "intelligence without learning". The learning
is there, it just happened over a million years of evolution instead of in
the small window of the life of a single member of the species.

If we look at all the members of a species currently alive, we can see that
entire population of animals, as one big distributed learning machine.
It's a big learning machine that keeps throwing away parts, and building
new ones, to make the entire machine stronger over time. The entire
population of any species can be seen as one big, long lived, reinforcement
learning machine.

> > I argue that the behaviors alone, without the underlying process
> > contantly improving them, is not "true" intelligence, even though the
> > behaviors are very "smart". But that's just my definition of "true
> > intelligence".
>
> Sure, a smarter, "true" intelligence is a preferred end goal, but nobody
> is going to get there without understanding what intelligence really is.

Of course. Which is why the first stop on my way to that goal was to
figure out what intelligence really is - the behavior of a reinforcement
learning machine. (Have I said that too many times yet?)

> The whole notion that it is just learning is plainly wrong, because
> nowhere in the hand waving do you describe how raw data turns into
> useful information.

Have you seen the design of my pulse sorting networks? I've talked about
them for years here. That is how raw data is transformed into information.

But they don't do a good job of that first step - the step of transforming
raw data to a good internal model. And the fact they get that wrong, means
the quality of the "information" they put out, really sucks.

> > Spiders didn't get "smart" because they are intelligent. Spiders got
> > smart because the process of evolution that built them is intelligent.
> >
> > A chess program didn't get "smart" because it is intelligent. It got
> > smart because a process happening in a human brain was the intelligence
> > that created them.
>
> You conveniently side-step how humans got intelligent rather than just
> being "smart" like a spider. My approach actually addresses that,
> because I'm looking for intelligence as a property that is on an
> independent scale.

I don't side step it. It's just not important. Do I really need to fully
and correctly document every step of evolution from the big bang to humans
before we can explain how a human works? Of course not. The evolution is
not important, as long as we understand where it ended up (which we don't
yet). The evolution can help us to understand where it ended up, but it's
not important, it simply might be useful.

I can guess how humans got intelligent, but it's only a wild guess. Hard
wired circuits had to be replaced with generic learning circuits - which is
not an easy step. The hard wired circuits are likely structured very
differently from generic learning circuits, so we have to come up with an
answer as to how this could happen.

I would guess it happened because of the fact that the hard wired circuits
have to be grown from a single cell. How might that work? Something in
the DNA had to control how the cells grew, and wired themselves together
into a larger circuit. So not only did the hard wired circuits have to
evolve, some mechanism for making them grow and wire themselves had to
evolve at the same time.

I suspect the technology that controlled how the neurons wired themselves
into a fixed circuit, had adaptive growing features show up first. So
though the growth regulation system would produce a fairly standard
circuit, it would allow that circuit to adapt itself to body features while
it formed - such as adapting the "walking" circuit, to deal with different
sized legs. Such adaptive growth features, would allow evolution more
flexibility to experiment with changes to the body, like the length of the
legs, without having to evolve in perfect lock step, the wiring of the
control system.

Once adaptive feedback systems were added, it was a learning system. But
it was biased to add learning on top of a fairly fixed function module.
But then, it could just gradually drift to a more adaptive system, and less
fixed function, until you get to the point where almost all the circuits
were adaptive.

So the system that regulated the growth of the body, and which would
probably "switch off" to some extent once the body was fully grown, could
have morphed into a generic learning control system that allowed the
control network to keep adapting all its life.

And once you have that adaptive control system, you have intelligence
implemented in the individual life forms, instead of only at the species
level.

Curt Welch

unread,
Aug 17, 2011, 4:33:39 PM8/17/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <j29d6m$hd9$1...@news.albasani.net>,
> Burkart Venzke <b...@gmx.de> wrote:

> > Learning is not everything for an intelligence, you are right, it is
> > (for me) only a necessary condition ("sine qua non" is leo.org's
> > translation for the German "notwendige Bedingung").
>
> And my argument is that it is *intelligence* that is necessary to make
> learning meaningful. Without the selection pressure it applies,
> "learning" is nothing but an unsatisfying random walk.

Reinforcement learning is not a random walk. It's a goal directed walk
that comes with selection pressure. The goal is reward maximizing. That's
what creates the selection pressure to push the walk down a specific path
towards "smarter" behaviors. Other learning, like supervised learning, is
just a "follow the leader" approach. It can only learn what something
"smarter" shows it. Reinforcement learning is unique in that it doesn't
need something "smarter" to show it what to do. It finds the smarter
behaviors on its own.
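
A tiny sketch of that difference, using a made-up two-option "bandit"
problem (nothing here is specific to any real system):

  import random

  payout = {"A": 0.2, "B": 0.8}        # hidden from the learner
  estimate = {"A": 0.0, "B": 0.0}      # learned reward predictions
  counts = {"A": 0, "B": 0}

  for step in range(1000):
      # A pure random walk would just be random.choice(["A", "B"]) forever.
      # The reward estimates are what supply the selection pressure:
      if random.random() < 0.1:                   # occasional exploration
          action = random.choice(["A", "B"])
      else:                                       # otherwise exploit
          action = max(estimate, key=estimate.get)
      reward = 1 if random.random() < payout[action] else 0
      counts[action] += 1
      estimate[action] += (reward - estimate[action]) / counts[action]

  print(estimate)   # drifts toward the true payouts, and "B" wins out

No teacher ever tells it "pick B"; the bias toward B comes entirely from
the reward signal.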

> > What can be equivalent to learning? Learning means e.g. collecting and
> > processing new data, how could this be substituted?
>
> I'm talking about system equivalence, in the same way that universal
> computability can be achieved by different systems. Clearly not *all*
> learning systems are intelligent (or even achieve universal
> computability), so an independent definition would have to be in place
> before a researcher would be able to even say *any* learning system is
> equivalent, let alone make the grand assertion that learning is the
> source of intelligence.

There's a HUGE difference between reinforcement learning and the other
types of learning. I never said "learning" (all of it) is intelligence.
I said only reinforcement learning is intelligence (a very specific and
well defined type of learning).

Training a neural net with back propagation, for example, is NOT
reinforcement learning, and it is not intelligence to me. It can be part
of a larger reinforcement learning system, as is done in TD-Gammon, but
back propagation learning alone is not intelligence.

That type of learning is just mimic learning. It attempts to transform a
neural network into a function with a given output property. Something has
to tell it what the correct "output" is before it can be trained. In
TD-Gammon, a reinforcement learning algorithm is training the neural
network to be a reward predictor that is used in the reinforcement learning
algorithm.

Intelligence requires goal seeking behavior. Reinforcement learning is
the most generic type of goal seeking possible. All other types of goal
seeking are sub-problems of the larger generic reinforcement learning
problem.
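
A rough way to see that difference in code, boiled down to a single number
being learned (the real TD-Gammon network is obviously far more involved
than this):

  # Supervised (mimic) learning: a teacher hands over the correct output.
  def supervised_update(prediction, teacher_target, lr=0.1):
      return prediction + lr * (teacher_target - prediction)

  # Temporal-difference learning, as in a TD-Gammon-like setup: no teacher.
  # The training target is built from the reward signal plus the system's
  # own prediction for the next position.
  def td_update(prediction, reward, next_prediction, lr=0.1, gamma=0.9):
      target = reward + gamma * next_prediction
      return prediction + lr * (target - prediction)

Same kind of weight-nudging underneath, but in the second case the "correct
answer" is manufactured by the reward-seeking machinery itself.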

Curt Welch

unread,
Aug 17, 2011, 4:54:19 PM8/17/11
to
casey <jgkj...@yahoo.com.au> wrote:

> On Aug 16, 4:01 am, c...@kcwc.com (Curt Welch) wrote:


> My bone of contention is not your belief in RL as a mechanism
> for learning but in your belief it can be done in real time
> on a blank slate. You want that to be true so AI can suddenly
> appear given the "right kind of Curtian blank slate" when it may in
> fact require the hard slog of an evolutionary path to produce
> all those complex (or hard to find by chance) connections.

None of the learning really happens by random chance. It happens by hill
climbing. When it's done correctly, it becomes a straightforward
multi-dimensional hill climbing problem. That's why humans don't learn by
waiting for the brain to randomly wire itself correctly. It's a slow hill
climbing transformation from one set of learned behaviors to a new,
improved set of learned behaviors.

The trick is that learning is not driven just by the rewards, John. You
don't have to have someone hit the reward button for you every time you do
something right.

Strong RL requires that the system evolve a reward prediction function as
part of the solution. That reward prediction function then learns how to
transform all the sensory clues into a probability of a reward. You don't
have to eat the cookie to get the reward; all you have to do is see the
cookie jar, and the reward function produces a jolt of "predicted reward".
If some behavior manages to get the lid of the jar off, the prediction
function gives another jolt of predicted reward.

It's the use of these reward prediction functions that turns behavior
search from a random walk that would take a billion years, into a hill
climb problem that happens quickly.

Of course, it still wouldn't work if not for the fact the environment is
filled with clues that act as predictors of rewards. But our environment
is, so the learning system leverages that fact, using all those predictors
to guide its continual learning process. The reward predictor output is
constantly jumping up and down in response to every little move we make.
And it's constantly training us in the process. Every little eye motion,
head nod, and body movement is being trained by the reward predictor -
which is also training itself at the same time.

You think this learning problem is impossible, when it's actually easy,
when it's done correctly.

Burkart Venzke

unread,
Aug 17, 2011, 6:28:14 PM8/17/11
to
On 17.08.2011 17:33, Doc O'Leary wrote:
> In article<j2erhm$vrd$1...@news.albasani.net>,
> Burkart Venzke<b...@gmx.de> wrote:
>
>> Am 15.08.2011 20:20, schrieb Doc O'Leary:
>>>
>>> Right, so I'm saying we *must* do that grunt work matching in the first
>>> place in order for us to have any reasonable chance at solving the
>>> problem. I want to get away from it being an issue of top-down vs.
>>> bottom-up and treat it *independently* as a fundamental property of a
>>> system in the same way that computability is a fundamental property.
>>
>> Intelligence as a fundamental property? What do you expect of or combine
>> with a "fundamental property"?
>
> I don't understand the question.

I understand that you see intelligence as a fundamental property.
If so, what consequences does this have for you?

>> I hope that an AI will pass the Turing Test some day without any trick.
>> Perhaps we should define/try something like a "(little) children"-Turing
>> Test where we don't compare the system with an intelligent adult but
>> more than with a children.
>
> In a way, that's what I'm trying to get at. I'm actually going to the
> extreme end of "little", by asking the question of what *is* that
> (potentially) infinitesimally small division that separates intelligent
> behavior from unintelligent behavior.

I think you know that there is no clear boundary, don't you?
Call it "fuzzy". Better: some behaviour may be seen as intelligent by
some persons, while others may think that it is not intelligent.
It's "only" a question of individual definition/thinking.

>> I think that learning can be defined easier and on the other hand
>> intelligence is less clear.
>> If you base something on an unclear notion, the whole think is more
>> difficult to solve or explain at least.
>
> Which, again, is why I see the main issue to be clearing up what we mean
> when we say something is intelligent.

So you don't want to speak about aspects of intelligence? Or what else?

>>> Without the selection pressure it applies,
>>> "learning" is nothing but an unsatisfying random walk.
>>
>> Not necessarily. Learning can be directed by one or more humans (as (a)
>> teacher(s)).
>> As AI is a machine, *we humans" want to be satisfied.
>
> Intelligence is not merely learning what the satisfies the teacher.

But exactly that can be an *artificial* intelligence if we want it that
way.

OK: What is intelligence then?

> Sometimes the teacher is wrong. An intelligent system should be able to
> detect and correct that.

Right, it should be able to if it has learned enough to do so - but only
then. No young child will question "1+1=2" (I do in some sense).

>> Right. Good criteria for comparing different learing systems are a good
>> idea. I am not sure if there exist some that are good enough for you ;)
>
> The problem is not that I'm asking for any extraordinary "good enough"
> level. The problem is that the "learning" camp here hasn't even begun
> to look into it.

I think that the steps that come before the development of learning have
not been completed sufficiently.

> Without that starting point, there is no reason to
> conclude that any form of learning is equivalent to intelligence.

Equivalent? That is too much for me; learning is only one (albeit
important) aspect.

Burkart

casey

unread,
Aug 17, 2011, 7:18:03 PM8/17/11
to
On Aug 18, 6:54 am, c...@kcwc.com (Curt Welch) wrote:
>
> You don't have to eat the cookie to get the reward, all you have
> to do is see the cookie jar, and the reward function produces a
> jolt of "predicted reward". If some behavior manages to get the
> lid of the jar off, the prediction function gives another jolt
> of predicted reward.

I understand secondary reinforcers; that doesn't nullify anything
I have written.

> It's the use of these reward prediction functions that turns
> behavior search from a random walk that would take a billion
> years, into a hill climb problem that happens quickly.

I understand hill climbing. I don't know why you are going on
about random walks to find a solution that would take forever
for anything but a very small state space.

> Every little eye motion, and head nod, and body moment, is
> being trained by the reward predictor - which is also,
> training itself at the same time.

Only in your imagination. You have not shown any reward predictor
at work in a newborn child that could result in all the
changes in behavior observed. We don't look for a complicated
solution when a simpler solution would explain it. Human babies
are born early due to a full sized brain being incompatible with
the size of the birth canal. The brain matures after the child is born,
and you see this maturation process as learning because that would
fit your bias, not because it is the correct explanation.


> You think this learning problem is impossible, when it's
> actually easy, when it's done correctly.

You haven't shown there is any "correct" way of doing it,
by which I understand you to mean a blank slate network
"solving" a high dimensional problem, that would explain
human level intelligence.

JC

Curt Welch

unread,
Aug 17, 2011, 8:52:19 PM8/17/11
to
Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> In article <20110815001522.437$O...@newsreader.com>,
> cu...@kcwc.com (Curt Welch) wrote:
>
> > Doc O'Leary <drolear...@3q2011.subsume.com> wrote:
> >
> > > Again with the circular definitions. You have yet to tell us were
> > > the *intelligence* is in your precious learning.
> >
> > Yes I have. You just don't understand how what I'm telling you could
> > possibly be true.
>
> It is always the convenient position of the quack to insist the world
> just doesn't understand their genius.

I told you I scored high on the quack-o-meter didn't I!

> > Well, you should give me a specific example of what you are talking
> > about when you say that.
>
> No, I shouldn't have to. If you really had a grasp on the truth, you
> would have been able to give the *general* principles by which an
> intelligent system operates.

I can. And I did. Reinforcement learning.

> > I'll then respond to your specific example. You first
> > need to explain to me what type of "order from chaos" you think is a
> > property of intelligence.
>
> OK, here we go again:
>
> Jack says the ball is red.
> Jill says the ball is blue.
>
> What are the kinds of things you might expect an intelligent system to
> *think* after learning that information?

Whatever it has been conditioned to do. Generally intelligent systems can
be conditioned to respond in any way to any stimulus. We could, for
example, train the intelligent agent to pull out a gun and shoot Jack and
Jill when they say that. And such a response would be "intelligent" for
that agent, because of the training it had received in its lifetime.

> Tell me how your sacred cow
> incorporates such data in a way that results in some measure of
> intelligence.

That "data" is part of the context of the environment. A strong
reinforcement learning machine, can be trained to respond in any way, to
any context (withing the perceptual and action limits of the AI).

But lets assume my cow was trained in a typically human environment, and
ran into Jack and Jill who told him these things.

My AI would incorporate all that sensory data to update its internal model
of the state of the environment. It would use far more than just the words
they spoke. It would use every subtle clue in the environment, and update
the model based on past experience. The end result would be some internal
model that included information about a colored ball, with some probability
as to whether the ball was real or not, whether both Jack and Jill were
talking about the same ball, and whether the ball, if real, and if one
ball, was more likely to be red, or blue, or neither. The large learning
matrix that took in the data and updated the internal model would use clues
like who spoke first, how tall Jack and Jill were, what they looked like,
what clothes they were wearing, and what the rest of the context of the
environment was in which this communication happened. All that sensory
information would be used to create a probability-based understanding of
what the message might mean.

So, if right after this, Bob asked the AI, what color do you think the ball
is, it would respond based on how it was trained to respond to humans,
which might be something like, "Gee, I doubt there even is a ball, I think
Jack and Jill are just part of an AI experiment".

But again, the basic force at work is that the AI is just conditioned to
respond the way it does, based on a lifetime of experience. And by
shaping the environment correctly, you can pretty much make it respond any
way you want. Especially if you can control its reward system as well
(which in RL is normally considered part of the environment relative to the
RL system).
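
A crude sketch of that kind of probabilistic update (made-up reliability
numbers, and a naive independence assumption; it's only meant to show
"clues in, probability out", not how a real brain or my networks do it):

  # Prior belief over the ball's color, assuming there is one ball at all.
  prior = {"red": 0.5, "blue": 0.5}

  # Assumed reliability, learned from past experience with speakers: the
  # chance a speaker names a color given the ball really is that color.
  def likelihood(report, true_color, reliability=0.8):
      return reliability if report == true_color else 1.0 - reliability

  def update(belief, report):
      posterior = {c: belief[c] * likelihood(report, c) for c in belief}
      total = sum(posterior.values())
      return {c: p / total for c, p in posterior.items()}

  belief = update(prior, "red")    # Jack says red
  belief = update(belief, "blue")  # Jill says blue
  print(belief)                    # back to roughly 50/50: conflicting testimony

In a real system the "reliability" numbers would themselves be learned from
how often such reports paid off in the past, and far more clues than the
two sentences would feed into the update.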

> > "thinking" is just as much a physical action as waving our hand, and
> > it's an action we are just as equally aware of hand waving. So why are
> > these two physical actions segmented into such separate domains in our
> > language when they aren't separate at all in the physical world? It's
> > because of the illusion of dualism.
>
> Only to you and other people who can't seem to leave dualism in the
> past. Everyone else uses different words because, shockingly, the
> actions are distinctly different. Everyone else also readily uses words
> like "push" and "pull", when I'm sure someone like you would insist that
> they all must instead *only* talk about exerting a force on an object.
>
> > Now, the problem I've run into, is that even when people reject all the
> > dualistic nonsense like souls, and firmly believe in materialism, they
> > still have a nasty habit of thinking and talking, dualisticly - often
> > with no awareness they are doing it.
>
> No, it's still just you who is failing to learn that the meaning of
> words has moved on. You don't instill much confidence in your arguments
> for learning when you seem unwilling or unable to learn yourself.
>
> > reinforcement learning is the why of human intelligent behavior. :)
>
> Jack says the ball is red.
> Jack will give you $100 if you say the ball is red.
> What color is the ball?

Red. Now where's Jack so I can go get my money!

> > > > TD-Gammon. (for Backgammon)
> > >
> > > So you claim, but where is the intelligence?
> >
> > It figured out, on it's own, how to play backgammon at the level of the
> > best human players. It figured out things about the game, that no human
> > had every figured out, including the person who wrote TD-Gammon.
>
> You are being too generous. It was *designed* to play backgammon, just
> like chess programs are similarly designed.

Yes, the framework is hard coded to play backgammon.

> Just because it played well
> or even learned to play better does *not* establish any intelligence in
> general, in relation to games, or in relation to backgammon itself.

Well, you keep saying all these things are not intelligent, but you have
not yet even once in all these messages explained what test for
intelligence you are using to back up your claim. All you have said is, "I
can call anything I want intelligent, and anything I want not-intelligent
and I don't know why I pick one over the other, but I just do".

I at least, have given you a test you can use.

> Neither does novel behavior necessary imply any sort of intelligence. A
> random walk around a problem space can also result in new discoveries.
> You *still* have yet to demonstrate any fundamental level of
> intelligence.

You have still yet to give even ONE definition of what your test of
intelligence is. Stop asking me to show you intelligence when you can't
define what IT is you are asking me to show you! You are wasting your time
asking nonsense questions of others. I, unlike you, have defined the word
intelligence in a way that YOU can test, to see if some machine is
intelligent by my definition.

> > > I'll grant you that its
> > > play differed from commonly accepted strategies, and even played
> > > "better" in that regard. But you still have yet to define what about
> > > that difference was *intelligent*!
> >
> > Again, if you look at the root cause of all examples of intelligence,
> > you find a reinforcement learning algorithm at the core.
>
> No, I don't. *You* certainly do, but that has gotten you no closer to
> defining intelligence, never mind implementing an AI.
>
> > I claim, that the
> > word "intelligence" has always been used to label the behavior of
> > reinforcement learning processes, but that people just didn't
> > understand that's what they were talking about.
>
> Only you are satisfied talking circularly.
>
> > The prediction is that someday, before long, someone will create a new
> > reinforcement learning algorithm, that will produce life-like, and
> > intelligente-like behavior in our machines and robots.
>
> That is not a scientific prediction.
>
> > My position is perfectly valid science, it makes predictions, and it's
> > falsifiable.
>
> "My position is impossible to prove wrong."
> <http://groups.google.com/group/comp.ai.philosophy/msg/3b078ce4e5888f00>

That's only true today. My position is falsifiable because when someone
discovers how the brain works, either its core technology will be a
reinforcement learning machine, or it will be something else. If its core
technology is a reinforcement learning machine, my position is proven
right. If it's shown to be something else, my position is proven wrong.

Today, no one knows the answer to what makes the human brain behave like it
does, so we don't have the data to prove my position right or wrong.

> > So knowing this, another interesting question shows up. If the machine
> > has two options of how to change itself, which one does it pick? Does
> > it grow a connection from neuron A to neuron B? OR does it grow a
> > connection from A to C?
> >
> > Without knowing anything about how it changes itself, we can simply
> > ask, how does it decide which way to change itself?
>
> And that is all I'm asking for in a definition of intelligence. Not
> just in *how* it changes, but *why* it favors one change over the other.
> That is what Darwin's why of natural selection added to the how of
> evolution. Clearly you're fixed the why of reinforcement for the how of
> learning, but you've yet to offer the same kind of explanatory power
> that Darwin managed. By my measure, your reinforcement still needs an
> underlying intelligence to evaluate the choice between B and C.

Then you don't understand the power of evolution, or the power of
reinforcement learning.

The choice between B and C is "which one has led to more rewards in the
past".

> > If I'm talking to someone that doesn't know what intelligence is, how
> > exactly would I show them that a reinforcement learning process was an
> > example of the thing he doesn't know how to define?
>
> By doing what I've asked and offering a definition of intelligence that
> is *independent* from your precious RL, just as the definition of
> computability is independent from a Turing machine. If you can't do
> even that little bit, my contention is that you're on the wrong path.

What form would that definition take in your mind? What on earth are you
even thinking intelligence might be?

> > Skinner did lots of science. He believed all human behavior was the
> > product of operant and classical conditioning. Is his science not at
> > least "soft" science to you?
>
> No. He experimented and offered useful theories, but I see no reason to
> conclude he was 100% right. Likewise, Turing made solid discoveries in
> computability and offered a useful test for AI, but that doesn't mean AI
> was suddenly a solved problem.
>
> > To suggest there is an "intelligence" at work separate from your
> > "learning" is to fail to understand how I'm using the term "learning".
>
> Or it shows your failure to understand what intelligence fundamentally
> is. Look, I get that you *want* to pigeon-hole everything into your
> selective definitions so that you can circularly say you've already
> solved the problem. There is no need to keep repeating or rephrasing
> that. You simply haven't got a convincing argument, so I maintain that
> you're likely not on the right path to AI.

That's a logical reaction to not being convinced. It doesn't, however,
make me wrong. It just makes me poor at convincing people of something
they aren't ready to hear.

> > Humans don't have any unintelligent learning in them. You have to show
> > me a specific example of what you consider to be unintelligent
> > learning, and then I can respond to that example.
>
> Are you kidding? Humans have been, and continue to be, boiling bags of
> inconsistency and falsehoods. Again, I completely get that you're going
> to try to argue that anything I name is behavior based on reinforced
> learning and is therefore intelligent. It's a tiresome game to run in
> circles.

It's a valid argument whether you like it or not. I can train anyone to say
"1 + 1 = 3". Does that make them a boiling bag of inconsistency? No, it
means they were trained to act that way. Logic and reason is itself a
trained behavior. Some people have been highly conditioned to try and act
rationally and double check all their beliefs with the language of logic,
while others have not.

> > > Likewise, it is my contention that there is some minimum definition
> > > for intelligence beyond computabiity.
> >
> > Meaning intelligence can't be implemented on a computer if that is
> > true? Or are you thinking something else there?
>
> No, meaning that computability is a necessary but not sufficient
> component of intelligence. I'm actually even open to *that* not being
> true. Since "intelligence" seems to be embodied in an ongoing process,
> it might not be precisely meaningful to say it completes in a finite
> amount of time. We mostly *do* expect particular answers from an
> intelligent system in a finite amount of time, though, so I think it is
> fair to say that it is at least a step beyond computability.

I'm a bit lost as to what you are suggesting by "computable".

Is a transistor radio an example of something that is "computable" to you?

I think the entire concept of computability doesn't even apply to AI. AI
is a reaction machine - something that interacts with an environment.
Computability is about the operation of a closed finite state machine that
doesn't interact with an environment (isn't it?).

> > I don't remember you posting in the past. Is it just my faulty memory,
> > or are you posting under a different name? Or has it just been a long
> > time since you last posted here?
>
> I post when my interest is piqued. Usenet only does that rarely these
> days, which is sad, because I still find it a much better discussion
> system than web forums or social networks.

Yeah, I have the same sadness. I really like facebook, for what it is, but
not for debates like these. And the web forums just suck and are too
scattered and unorganized. And the user interfaces just suck.

> > > Not at all. Again, you may indeed be right, but until you can accept
> > > a definition of intelligence that is *distinct* from learning, you
> > > don't really *know* that you are right.
> >
> > I'm trying to understand what type of machine a human brain is that
> > allows it to perform all these intelligent actions (and intelligent
> > thinking since you might not realize when I say "actions" I include
> > thinking as a type of action). The best machine descriptor I've every
> > found to answer that question is "it's a reinforcement learning
> > machine". I will not accept another definitions of "intelligence"
> > unless someone can show me the destitution of a class of machines,
> > which better fits than the one I already have.
>
> And so you miss out on what we can lean from intelligence at the
> evolutionary level, and what we can learn about it from the global
> level. Nor do I think it serves us to isolate human intelligence from
> other animal intelligence. My position remains that at *all* levels,
> there is some underlying mechanism that we should be able to tease out
> as an independent definition for intelligence, and thus not only achieve
> AI, but unify the work of a good number of fields.

I agree completely. It's called reinforcement learning and the concepts
span, and unify, many disparate fields. The key element is what I like to
call simply directed change. There must be change over time happening, and
there must be a consistent bias to the change forcing it in some direction.
Non-directed change is just a random walk. But directed change takes a
system down a path toward some goal. Evolution is directed change down the
path of survival in the face of competition. A reinforcement learning
machine like the brain or one of our computer programs directs the path of
change (learning) down a path to the goal of increased rewards (where
"rewards" are some arbitrarily defined internal signal).

Yeah, it's called a Skinner box and Skinner did learn a lot by doing just
what you suggest.

All my R&D work on this has been with trivially small bandwidth experiments
because it allows me to understand what's working, and what's not working.

We use language to do that sort of stuff. You seem to be a fairly
language-centric person because all your AI examples are language based.
Language is a behavior (you saw that coming, didn't you?), and the brain
must build internal models to support that behavior. Those models form as
we are trained to respond to, and produce, language behaviors. I don't
spend much time trying to figure out what type of models we form in order
to use language, because I see our strong language use as the last thing to
solve in AI - the thing that came last and only exists so strongly in one
animal. I think the foundation technology that produces all our
intelligent behavior is the same type of hardware, and if we can figure out
the simpler things it does first, like allowing us to learn to walk, then I
think making it learn and use language will be fairly straightforward, and
mostly just the right application of more of the same core "learning"
system configured to match the learning needs of language behavior.

> > > It is *you* who is making the assumption that previous experience
> > > alone is what would guide the thinking of an intelligent agent. I
> > > have no previous experience with FTL travel, nor do I have any
> > > expectation that it is even possible, but without any material
> > > reinforcement at all I can *explore the idea* of many such impossible
> > > things.
> > >
> > > Likewise, it makes sense to me that, even in a world where an
> > > intelligent agent finds everything to be 100% reliable, it might
> > > *explore the idea* that what it has learned is not true. Therefore,
> > > I would assert that an intelligent agent would *never* conclude that
> > > something is 100% reliable, which directly refutes your description
> > > of how your learning system would operate.
> >
No, it doesn't, because your conjecture is just that, pure conjecture,
not science. You have no evidence to support the idea that an
intelligent agent would "doubt the truth". The only evidence you have
is what you or a human might do. Humans will "doubt the truth" but
that's because THEY HAVE A LONG HISTORY OF PAST EXPERIENCE OF BEING
FOOLED!
>
> But in *any* system of imperfect information, an intelligent agent would
> necessarily be correcting "FOOLED" information all the time. Do you
> really have an intelligent system if it concludes something is 100%
> reliable and then *never* considers what to do if it is wrong? How does
> it learn to evaluate unreliable information if it has no mental model
> for that possibility?

Have you studied how reinforcement learning algorithms deal with this? The
answer to your question is to look at any of the current algorithms.

You are right, it never assumes anything is 100%. I was saying 100% to
make a simplifying point. Each experience allows it to adjust its
expectation, and if something behaves the same way every time, the RL
algorithm will typically converge toward a 100% belief, but never
actually reach 100%.
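For a concrete picture, here is a minimal sketch of the standard
incremental estimate update used in tabular reinforcement learning (a
constant step-size update straight out of the textbooks - an
illustration, not my own system):

def update(estimate, outcome, step_size=0.1):
    """Move the estimate a fraction of the way toward the new outcome."""
    return estimate + step_size * (outcome - estimate)

if __name__ == "__main__":
    belief = 0.0
    for trial in range(1, 101):
        belief = update(belief, outcome=1.0)  # the event occurs every time
        if trial in (1, 10, 50, 100):
            print(f"after {trial:3d} trials, belief = {belief:.6f}")
    # The belief climbs toward 1.0 but is asymptotic: it never hits 100%.

Even after 100 perfectly consistent trials the estimate sits just below
1.0, which is exactly the behavior I was describing: convergence toward
certainty without ever assuming it.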

> > We have no way to test an intelligence that way because
> > we have no way to limit the experiences of any human to such a small
> > set. Your untestable conjecture DOES NOT REFUTE my position. It's just
> > something you made up that sounds good to you.
>
> On the contrary, we have *exactly* the kinds of technology that can be
> brought to bear in giving humans a smaller set of information to work
> with and then examining their behavior for signs of intelligence. It is
> only you who makes untestable proclamations.
>
> For example, we're at the point of discovering a way for computers to
> drive by throwing tons of sensor data and processing power at the
> problem, but where are the experiments that show how a *human* can drive
> with some minimal amount of information? How well could I pilot a
> vehicle with just a 32x32 pixel video feed and a 4 bit control? If it
> is at least as well as a high-bandwidth artificial driver, we haven't
> gone down the path of intelligence.

Or, we have gone down the path, but just not very far yet. We might be at
the "bubble sort" point of intelligence, where humans are using the quick
sort solution.

Curt Welch

unread,
Aug 17, 2011, 11:02:49 PM8/17/11
to
casey <jgkj...@yahoo.com.au> wrote:
> On Aug 17, 3:12 am, Doc O'Leary <droleary.use...@3q2011.subsume.com>
> wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
> >> By leveraging itself on evolved mechanisms. Evolution has
> >> produced some complex bodies and can produce some complex
> >> innate brain solutions not possible in the lifetime of
> >> an individual.
> >
> >
> > I don't know that that's necessarily true for an AI, though.
> > I'll certainly grant you that our brains have a privileged
> > starting point that is rooted in evolution, but it seems
> > possible to me that, if we generalized our understanding of
> > intelligence, we might be able to go back farther to some
> > kind of "stem cell" or even "proteins and amino acids" base
> > that can scale up to the human level.
>
> Which I believe would amount to building evolutionary
> networks rather than a simple learning network.
>
> Humans have a number of different and, I believe, essentially
> innate skills such as walking or seeing in stereo which are
> fine tuned by experience. Stereo vision actually appears as
> soon as the two eyes are working properly at about 4 months
> in a human child. Maturation as a process, that appears like
> learning, is something Curt ignores when looking for "evidence"
> for his views.

Where is YOUR evidence that stereo vision is innate? It just _happens_ to
take 4 months of exposure to real data before it forms, and yet you argue
it's innate? Where is the evidence that this innate "tuning" is not the
very learning I've been talking about?
