
No easy road to AI


casey

Dec 6, 2009, 3:15:45 PM

Biological brains start with a default set of connections
which are then modified by experience.

There are those who want to skip the hard slog of inventing
those innate default circuits that allow a brain to convert
a complex sensory input into a simpler internal model of
the brain's environment.

My hunch is we are going to have to reproduce the innate
circuitry possessed by real brains. I suspect we are not
this general purpose machine some imagine we are simply
because we can do a lot of things. We will have to invent
these innate modules or evolve them.

Imagine if your robot had a visual module that converted
the complex changing pixel values from a video camera into
a description of a scene with objects, positions and actions.
I think adding the bit to allow it to drive a car would become
easier. It is the innate modules that took evolution millions
of years to develop that are difficult not the high level
symbolic reasoning on top.

We can program our computers to do logic and math because
it is simple compared with the complex sensory analysis
and complex motor synthesis done by biological brains.


JC


Wolf K

Dec 6, 2009, 7:37:30 PM

casey wrote:
> Biological brains start with a default set of connections
> which are then modified by experience.
>
> There are those who want to skip the hard slog of inventing
> those innate default circuits that allow a brain to convert
> a complex sensory input into a simpler internal model of

The number of connections at birth/hatch/etc varies enormously. The
amount of destruction and rebuilding that occurs at later stages (eg,
metamorphosis, puberty, etc) also varies enormously. What we know for
sure is that the networks are modified by inputs to the neurons,
including those supplied by the changes in the chemical bath that
surrounds them. Some of those inputs originate in the environment
outside the organism, some originate within the organism itself. We
label the former "learning" and the latter "development." But from the
POV of the neuron/neural networks, it's all one. IOW, our concepts are
mistaken. I suspect it is our bias towards consciousness as the essence of
self that causes the mistake.

> My hunch is we are going to have to reproduce the innate
> circuitry possessed by real brains. I suspect we are not
> this general purpose machine some imagine we are simply
> because we can do a lot of things. We will have to invent
> these innate modules or evolve them.

IOW, "learning" is modification of existing networks.

Just what I've been preaching all along. ;-)

> Imagine if your robot had a visual module that converted
> the complex changing pixel values from a video camera into
> a description of a scene with objects, positions and actions.
> I think adding the bit to allow it to drive a car would become
> easier. It is the innate modules that took evolution millions
> of years to develop that are difficult not the high level
> symbolic reasoning on top.

Quite so. But casualness wetware is not very good at programming itself
as a digital (== logic) engine, we _feel_ that logic is difficult, while
all those other things we learned in the first few years of life are
easy. Must be easy - an infant can do it! ;-)

> We can program our computers to do logic and math because
> it is simple compared with the complex sensory analysis
> and complex motor synthesis done by biological brains.

Simple/complex are at right angles to easy/difficult.

cheers,
wolf k.

Wolf K

Dec 6, 2009, 7:40:16 PM

Wolf K wrote:
> casey wrote:
[...]

>> Imagine if your robot had a visual module that converted
>> the complex changing pixel values from a video camera into
>> a description of a scene with objects, positions and actions.
>> I think adding the bit to allow it to drive a car would become
>> easier. It is the innate modules that took evolution millions
>> of years to develop that are difficult not the high level
>> symbolic reasoning on top.
>
> Quite so. But casualness wetware is not very good at programming itself
> as a digital (== logic) engine, we _feel_ that logic is difficult, while
> all those other things we learned in the first few years of life are
> easy. Must be easy - an infant can do it! ;-)

oops, that should read "But because wetware..." (dang spellchecker!)

wolf k.

zzbu...@netscape.net

Dec 6, 2009, 9:05:48 PM

A lot of that is because the cranks in AI have become so encrusted
with Turing Machine holdovers that they do network programming
and call it logic. So the people who know how logic actually works
work on digital books and desktop publishing rather than firmware,
and work on home broadband, HDTV, Blu-ray, distributed processing
software, and rapid prototyping rather than X Windows backward
compatibility. And work on holographic memory rather than IBM COBOL
short courses. And work on optical disk technology, compact
fluorescent lighting, cyber batteries, USB, GPS, data fusion, and
self-replicating machines rather than programmable GE optical
computers. And work on UAVs, drones, PV cell energy, hybrid-electric
energy, neo wind energy, solar energy, biodiesel, MP3, MPEG, digital
terrain mapping, self-assembling robots, and multiplexed fiber
optics rather than repeated GM junkets.

Curt Welch

Dec 6, 2009, 11:06:02 PM

casey <jgkj...@yahoo.com.au> wrote:
> Biological brains start with a default set of connections
> which are then modified by experience.
>
> There are those who want to skip the hard slog of inventing
> those innate default circuits that allow a brain to convert
> a complex sensory input into a simpler internal model of
> the brain's environment.
>
> My hunch is we are going to have to reproduce the innate
> circuitry possessed by real brains. I suspect we are not
> this general purpose machine some imagine we are simply
> because we can do a lot of things.

Well, your wording there can be a bit misleading.

It's not what a given person is able to do which is amazing about the
brain. It's what a human has the power to _learn_ that's amazing. And we
know that not by testing a single human, but by looking across the human
population as a whole and seeing how widely varied their behaviors are.
90% of what I have learned to do, you probably can't do, and 90% of what
you have learned to do, I can't do.

This is because each of us has adapted to our own environment and
developed a wide range of behaviors that work only in our own environment.
I know how to perform "go get a picture of my wife's sister Nancy" behavior
in my house but you wouldn't be able to respond to such a request. I know
how to drive around the streets of my home town - you probably don't know
how to drive around these streets because it's a set of behaviors you have
never learned.

We are obviously "general purpose" in the sense that we adapt to our own
environment. And even though we are both humans in a modern world, we learn
a huge number of behaviors we don't even realize we have learned.

One thing I learned on my trip to Australia was that not only do the people
there drive on the other side of the street, they walk on the "wrong" side
of the sidewalk, and walk on the "wrong" side of stairs as well. I was
walking around the town (Sydney I think it was) trying to keep to the right,
which is the custom in the US on a crowded sidewalk or stairs, and I kept
running into all these people walking on the wrong side of the sidewalk. I
felt like a salmon trying to swim upstream. I didn't even realize at
first what was going on. Then it hit me that I was not following the
correct local customs, and had to try and re-learn how to walk!

That is just one example of the stuff we are _conditioned_ to do by our
environment but, for the most part, never think about.

What's amazing about the brain is how it's able to learn this stuff through
interaction. Evolution didn't hard-wire the brain to make us walk on one
side of the sidewalk. It's one of a billion learned behaviors that are
conditioned into us through experience.

> We will have to invent
> these innate modules or evolve them.
>
> Imagine if your robot had a visual module that converted
> the complex changing pixel values from a video camera into
> a description of a scene with objects, positions and actions.
> I think adding the bit to allow it to drive a car would become
> easier. It is the innate modules that took evolution millions
> of years to develop that are difficult not the high level
> symbolic reasoning on top.

I think the issue is irrelevant and a HUGE cop-out.

Engineers have been hard-wiring fixed-function modules since the beginning
of humans doing engineering. In order to do learning, we have to start
with some fixed function features of the machine, and add learning "on top
of" that.

But learning is, and always will be, a TOTALLY SEPARATE PROBLEM from the
fixed-function hardware.

Most AI projects have been attempts to create fixed-function hardware to
duplicate some LEARNED behavior of a human. We create fixed-function chess
programs to try and duplicate the learned behavior of chess playing in a
human. The end result of that research is a very high quality
fixed-function machine that can perform that limited task better than any
human.

This story of building fixed-function hardware to duplicate learned
functions in humans has been the de facto work of AI for 50 years now. And
the end result is that none of these fixed-function machines are
intelligent - for the very reason that they are fixed-function machines and
can't learn anything on their own - like which side of the sidewalk to walk on.

To solve AI, we have to solve the learning problem.

But instead of working on learning, you try to argue we need to continue to
work on more and better fixed-function machines.

If the goal is to solve AI, we need to solve the learning problem and stop
wasting our time on more fixed function machines.

It absolutely makes NO DIFFERENCE AT ALL what the brain does here.
Whether the visual system is hard-wired to detect objects and motion, as in
your example, is not in the least bit relevant. If it is hard-wired as you
suggest, then that's the part we need to ignore, because what we need to do
is solve what no one in the past 50 years has solved - how to do the
learning.

If you are correct, and the visual system is a complex hard-wired
fixed-function circuit created by evolution that creates a simplified model
of the world, then you might try to argue that we have to build that first,
before we can solve the learning problem. But we don't. We can simply
simulate the "output" of your "objects and motion" modules, and use that
simulated data to drive our high level learning system. We can build an
entire simulation of a simple environment based on that high level model of
objects and motions, and use that simulated environment as a direct input
to our learning agent. By structuring our development system that way,
we completely bypass the need to work on, or build, the hard-wired
fixed-function vision system.

And then we can work on the learning problem, and create real intelligence
in the simulated environment without wasting a second of time building the
fixed function modules you think are so important.

They AREN'T IMPORTANT to the real problem of AI - which is building the
learning system. So if your goal is to create intelligence, then that hard
wired sensory data -> internal model is not important. And if you think it
is, then it's a cop-out and an excuse for you not to work on what really
needs to be worked on - which is the learning hardware - the hardware that
configures itself to adapt to the environment it's placed in like
humans do.

> We can program our computers to do logic and math because
> it is simple compared with the complex sensory analysis
> and complex motor synthesis done by biological brains.
>
> JC

The complex sensory and motor synthesis IS the problem with learning. It's
not made simple by building fixed-function "object and motion" detectors.
Learning is not happening on some dumbed-down simplification of the
environment.

When a learning agent learns on its own how to walk on the right (or walk
on the left if you live in John's part of the world), how does that work?
The concept of "which side of the street I'm walking on" is NOT AN innate
output of the low level innate sensory processing system created by
evolution. If the low level hardware doesn't give the learning system a
simple signal that indicates what side of the street it's walking on, then
how does the learning system learn this behavior?

The answer to this problem, and many others I can point out, is that the
learning problem and the sensory processing problem can't be solved
separately. Learning affects how the sensory processing works. You can't
have hard-wired fixed function sensory processing at one level, and
adaptive learning at a higher level, and still make it learn the type of
things humans can learn. Learning must shape our perception of reality
(which is a well known fact these days about human perception). Human
perception is not fixed function.

The reason AI research has not yet created intelligence is because most of
AI research was mis-directed to creating fixed-function machines instead of
creating learning systems. The researchers were blind to the important
distinction between the behavior vs how the behavior was learned. They
saw the behavior as being our intelligence, instead of seeing that our real
behavior, and our real intelligence, is the longer-term behavior of how we
change in response to our environment.

The reason AI has not worked more on learning is because no one has yet
figured out how the hell to do it. It's too hard a problem. So they
cop out, and work on the easier one first. If we can't figure out how to
make a program learn how to play chess on its own, then maybe we should
just start by trying to make it play chess and worry about learning later?

Well, later is here now, and it's long since past time to stop playing this
"learning is too hard, so let's work on the fixed function stuff first" game.

And that's exactly what you (John) are trying to argue here. You are trying
to argue that the learning systems you have tried to create didn't work
well, so maybe the way to solve that is to hard-code most of the solution
for it, so it has less to learn.

No, that's not the answer. The answer is that we have to find solutions to
building stronger learning algorithms. If the learning system can't solve
the perception problem at the same time it's trying to learn high level
behavior, it's never going to work.

It simply makes no difference how much of a real brain has been hard wired
by evolution. It's not the hard wired parts that are keeping us from
creating intelligent machines. We know how to build hard-wired
fixed-function robots. It's the stuff zz loves to list in every one of his
posts. Those aren't intelligent because they don't include the strong
learning powers that humans have.

If you want to move AI forward, stop worrying about what evolution did or
didn't hard code in the human brain, and start working on building better
self adapting systems.

For example, feed ASCII text to a computer, and make it learn to
communicate with us. With ASCII text, there is no "perception problem"
that needs to be hard coded because instead of a very high bandwidth audio
stream that has to be decoded into "words", the data is already in "words"
when it's fed as ASCII.

If letters are not good enough, then write a parser that breaks a message
into words and sends a code number for each word for the computer to
learn with. Such as "1" for the word "the", and "2" for the word "a", etc.
Then let it learn from that stream of word numbers you feed it. That takes
out the need for some sensory processing system, and puts the problem all
in the learning module. Write the learning system that can take that, and
learn to respond in the ways humans learn to respond when they interact
with their environment.
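
A minimal sketch of that kind of word-to-number parser (in Python; the
on-the-fly numbering here - codes handed out in order of first appearance -
is just my own illustration, a fixed vocabulary like "the"=1, "a"=2 works
the same way):

word_codes = {}                      # word -> integer code

def encode(message):
    codes = []
    for word in message.lower().split():
        if word not in word_codes:
            # each new word gets the next unused code number
            word_codes[word] = len(word_codes) + 1
        codes.append(word_codes[word])
    return codes

print(encode("the cat sat on the mat"))   # [1, 2, 3, 4, 1, 5]
print(encode("a cat"))                    # [6, 2] - "cat" reuses its code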

Solve that learning problem, and you have solved AI - and you don't have to
deal with any of the fixed function modules evolution created for us or
deal with the issue of where the "modules" end and the learning begins.
It's just not important to AI.

The only reason there is no easy road to AI is because 1) learning is not
easy and 2) there are too many people who think like you and refuse to
even work on learning because they keep making up excuses about why
something else has to be worked on first.

The longer you, and everyone in AI like you, keep rationalizing why you
don't have to work on learning, the longer the field of AI will continue to
fail to make anything intelligent.

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

casey

Dec 7, 2009, 5:39:04 PM

On Dec 7, 3:06 pm, c...@kcwc.com (Curt Welch) wrote:
> ...

> The concept of "which side of the street I'm walking on"
> is NOT AN innate output of the low level innate sensory
> processing system created by evolution.

I disagree. The two stimuli, walking on the left or walking
on the right, have to be discriminated for any associations
to take place.


> If the low level hardware doesn't give the learning system
> a simple signal that indicates what side of the street it's
> walking on, then how does the learning system learn this
> behavior?

I believe the innate hardware does provide such a signal.

> The answer to this problem, and many others I can point out,
> is that the learning problem and the sensory processing
> problem can't be solved separately.

I don't see the problem. If you lack the innate hardware to
discriminate between red and green you will never be able
to learn that red is associated with one thing and green
is associated with another thing.

JC

Curt Welch

Dec 7, 2009, 9:07:47 PM

casey <jgkj...@yahoo.com.au> wrote:

> On Dec 7, 3:06 pm, c...@kcwc.com (Curt Welch) wrote:
> > ...
> > The concept of "which side of the street I'm walking on"
> > is NOT AN innate output of the low level innate sensory
> > processing system created by evolution.
>
> I disagree. The two stimuli, walking on the left or walking
> on the right, have to be discriminated for any associations
> to take place.

I agree. Before we can learn which side to walk on, we must recognize
enough clues to determine where we are walking. That was not my point.

My point was on the question of whether such signal was produced innately.

But my position in that argument is weak, in the sense that who knows how
important it might have been, in general, to have good innate perception
circuits to aid navigation - circuits that might well have included features
that allowed the innate recognition of "sideness" throughout our history.

> > If the low level hardware doesn't give the learning system
> > a simple signal that indicates what side of the street it's
> > walking on, then how does the learning system learn this
> > behavior?
>
> I believe the innate hardware does provide such a signal.

Well, that's possibly valid. My view is that the innate hardware is
generic perception hardware that adapts to whatever data it is fed and
produces the best internal representation of the data possible by a
compression technique that removes as much redundancy and presents as much
usable information about the environment as possible with the least amount
of correlation between parallel signals as possible. That is the data the
innate system needs to produce so why hard-code any attempt to decode an
environment when it would be easier and better to learn the decoding on the
fly?

Anyhow, none of that is important to the main point of my argument - which
is the requirement that the learning problem must be solved no matter how
the perception system is working, and that it's trivial to create good
stand-in perception systems for toy environments today so we don't need to
even deal with the perception system in order to work on, and solve, the
learning problem - which is the only part still completely missing from our
AI programs.

> > The answer to this problem, and many others I can point out,
> > is that the learning problem and the sensory processing
> > problem can't be solved separately.

Which is a funny and well timed quote on your part based on what I just
wrote above (saying that learning can be solved without dealing with
perception).

> I don't see the problem. If you lack the innate hardware to
> discriminate between red and green you will never be able
> to learn that red is associated with one thing and green
> is associated with another thing.

There's a huge difference between having a low level color camera rather
than a low level B&W camera, and having the ability to discriminate which
side of the road you are on.

If the sensors don't give you the data, then that's the end of the story.
You have to have sensors to sense the data you want to learn to react to.

But the signal from a color video camera is a long way off from
learning to turn right when the sign says "turn right here if this sign is
red".

Starting at some video signal from a color video camera, we need lots of
processing to make a robot turn right in response to that stimulus (and
many others of similar complexity at the same time).

You suggest a lot of that processing is done by complex innate modules
designed by a long slow process of evolution and that learning starts later
in the chain at the high level. I suggest that almost all the work is done
by the learning module, which includes an innate ability to solve perception
problems - that what it learns is, in fact, entirely perception. It learns
to perceive when it's the correct time to lift its right hand. There are
many different patterns in the data that will classify as "the correct time
to raise the right hand", yet that's the class of patterns it must
learn in order to produce intelligent actions.

My main point is that we must solve the learning problem and that it's the
only major problem left unsolved in AI and is, and always has been, the
problem that should have been solved first. And had it been solved 50
years ago, we would have had intelligent machines around us 50 years ago.
This type of learning is HARD. That's why it hasn't been solved. It's so
hard, many people who have looked at it wrote it off as _impossible_ and as
such, convinced themselves that the solution to human intelligence must be
somewhere else - because they believe it's not possible to explain our
intelligent actions as being learned.

But it's not impossible because it's what the brain is doing.

It makes no difference what signals you send to the learning module. Even
if you do lots of pre-processing to give it a "you are on the left side of
the sidewalk" signal to learn from, its learning task is still _HARD_,
because it will be receiving millions of signals like that, and will still
have to solve a high dimension real time temporal pattern matching problem
to get the final result of knowing the huge and complex set of temporal
patterns that should make it "lift its right arm".

When I study the learning problem that must be solved, I find that lo and
behold, what it has to learn, is perception and nothing else! And if, in
solving the learning problem, we end up solving the perception problem,
then we will have the hardware needed to solve all our perception needs as
well, so there will simply be little to no need to hard-code the perception
system.

If you don't see (or believe) that, then feel free to work on the learning
problem and see if you can find a way to solve it without at the same time,
solving the perception problem. I'm open to any approach for solving the
learning problem you can find.

> JC

casey

Dec 8, 2009, 5:25:50 AM

On Dec 8, 1:07 pm, c...@kcwc.com (Curt Welch) wrote:
> ...
> My view is that the innate hardware is generic perception
> hardware that adapts to whatever data it is fed and produces
> the best internal representation of the data possible by a
> compression technique that removes as much redundancy and
> presents as much usable information about the environment
> as possible with the least amount of correlation between
> parallel signals as possible. That is the data the innate
> system needs to produce so why hard-code any attempt to
> decode an environment when it would be easier and better
> to learn the decoding on the fly?

Redundancy can be useful in a noisy communication channel.

My challenge before on this topic was that if you really
were performing lossless compression in your pulse sorting
network you should be able to convert the result back to
the original form.


JC

Curt Welch

Dec 8, 2009, 11:52:41 AM

casey <jgkj...@yahoo.com.au> wrote:

Sure, IF it were lossless. Which my networks are not. So there is no
"challenge" to be met there. It's obvious my network designs are not
lossless.

I've spent some time mulling over the issue of whether it should be, and
even if it could be, lossless (even at only the first levels) and haven't
come to any conclusions about that issue. I don't see how to prove it
can't be lossless, but at the same time can't see how it could possibly be
lossless and still do what needs to be done.

If the processes were lossless, then we would be able to fully predict what
a person was looking at (bit for bit) by their actions - which is clearly
not possible. The sensory->action process as a whole cannot be a lossless
compression process. The data rate of the input is far higher than that of
the output, and the output is not an attempt to compress the input. But
might it be helped along if driven by a lossless data process at the beginning of the
chain? I know it is helped along by a data normalizing process at the
beginning of the chain, but whether making it truly lossless is good, or
even possible, I'm still undecided about.

I do believe however that one of the major faults of my one input two
output node design and the associated chicken-wire (as we incorrectly call
it) network design is that it fails at the task of correctly normalizing
the data. The one input two output node process is fully reversible, but
it's a data expansion process not a data compression process. The way the
network then merges two signals into one is not reversible, and is clearly
a data loss process.

What I would like to create however, is some network process that could
take N input signals, and produce N (or greater) output signals, without
data loss, and which at the same time will be normalizing the data so that
there is less correlation between the output signals than there was
between the input signals. I've not yet figured out how to do that using
pulse signals and using the data sorting paradigm.

There are well known mathematical linear transforms for doing this with
sets of numbers however - such as PCA:

http://en.wikipedia.org/wiki/Principal_component_analysis

and that's the basic type of idea of what I believe must be done in these
networks. Except that type of transform is purely spatial. That is, the
transform only takes into account the current input values in order to
calculate the current output values. I believe what is needed here is a
temporal transform that is able to map a large set of past inputs into the
current output instead of just the current inputs. With that type of
transform, the outputs will be representing not just spatial patterns
across multiple current inputs, but temporal patterns across both multiple
inputs and across time (back in time that is).
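
For what it's worth, here is a small numpy sketch of that kind of spatial
(PCA-style) decorrelation on made-up signals; the temporal version I'm
describing would have to do something similar over windows of past inputs
as well:

import numpy as np

# Three parallel "sensor" signals, two of them strongly correlated.
np.random.seed(0)
base = np.random.randn(1000)
X = np.column_stack([base + 0.1 * np.random.randn(1000),
                     2.0 * base + 0.1 * np.random.randn(1000),
                     np.random.randn(1000)])

# PCA: project the centered data onto the eigenvectors of its covariance.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
Y = Xc.dot(eigvecs)

# Same information (the projection is invertible), but the transformed
# signals are now uncorrelated with each other.
print(np.round(np.cov(Y, rowvar=False), 3))   # off-diagonal entries ~ 0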

My current nodes only use the last two pulse inputs to make their pulse
sorting decision which means they are only a function of the last two
inputs, and not the last many inputs. My thought with trying that was
that multiple nodes like this working together in a network would then be
able to combine their results to create outputs that effectively represented
many past inputs to the network instead of just the last two. My networks do
that to some extent, but I don't think the approach works as well as I had
hoped. The effect only works well to extend back in time if you create an
expanding network topology. That is, say 10 inputs that expand to 1,000
nodes wide in a middle layer. But then the distance back in time the
network is able to represent is limited somewhat by that expansion factor.
For 10 to 1000, that's an expansion factor of 100, which means such a
network is able to represent roughly 100 times further back in time than
the individual nodes - i.e., 100 past pulses. But 100-pulse temporal
patterns for these types of networks really aren't very far at all time-wise
compared to the length of temporal patterns the human brain can recognize.
To get the length of temporal pattern recognition I think is needed, the
network would have to be what strikes me as unworkably wide in the middle
layers.

All this leads me to suspect that limiting the nodes to working with only
information from the last two input pulses won't work. Which makes me
think I should explore algorithms for the nodes that use some sort of
decaying average calculation of all past input pulses to make pulse sorting
decisions with. Such an approach is also closer to what real neurons are
doing than my gap sorting nodes. That would make a single node responsive
to a very long temporal pattern (though it would be weighted to be more
sensitive to the more recent pulses) and would make it possible for a large
network to be responsive to extremely long temporal patterns even without
making it fan-out to large numbers of internal nodes in the middle layers.
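
A toy sketch of the kind of decaying-average statistic I mean (just the
arithmetic, not the actual node algorithm): each new pulse updates a running
average of inter-pulse gaps that weights recent history more heavily than
old history, so one number summarizes an arbitrarily long past.

class DecayingAverageNode:
    # alpha near 1.0 -> long memory; alpha near 0.0 -> only the latest gap.
    def __init__(self, alpha=0.95):
        self.alpha = alpha
        self.avg_gap = None
        self.last_time = None

    def pulse(self, t):
        if self.last_time is not None:
            gap = t - self.last_time
            if self.avg_gap is None:
                self.avg_gap = gap
            else:
                self.avg_gap = (self.alpha * self.avg_gap
                                + (1.0 - self.alpha) * gap)
        self.last_time = t
        return self.avg_gap

node = DecayingAverageNode()
for t in [0.0, 1.0, 2.1, 2.2, 2.3, 5.0]:     # pulse arrival times
    print(node.pulse(t))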

I've been thinking this sort of node would be better suited for what I
think the network needs to do for a few years now, but haven't explored
implementing such nodes and experimenting with them yet. It's one of the
directions I would like to explore if I get the time to get back to that.

With those sorts of nodes, it might just be possible to create an
effectively lossless transform of N inputs to N outputs using pulse signals
and a pulse sorting paradigm.

The idea as always with my approach, is to make the nodes perform the
default transform by analyzing the data fed to them and adjusting the
transform to fit the real constraints of the data, but then re-shaping the
specifics of the transform by reinforcement. The default transform
effectively implements the high quality perception system (the one I
believe you are thinking should be done by innate modules coded by
evolution to fit our environment (and our sensors)), and the adjustments
applied by reinforcement are how the behavior is shaped to fit the actual
environment (which I believe is somewhat consistent with your idea that
learning is applied at a high level to the output of the innate perception
modules). The only implementation difference is that in my type of
network, the "low level" perception and "high level" learning is happening
in each node of the network, not in two different network modules.

casey

Dec 8, 2009, 3:57:54 PM

On Dec 9, 3:52 am, c...@kcwc.com (Curt Welch) wrote:
> ...
> On Dec 8, 1:07 pm, c...@kcwc.com (Curt Welch) wrote:
>>> ...
>>> My view is that the innate hardware is generic perception
>>> hardware that adapts to whatever data it is fed and produces
>>> the best internal representation of the data possible by a
>>> compression technique that removes as much redundancy and
>>> presents as much usable information about the environment
>>> as possible with the least amount of correlation between
>>> parallel signals as possible. That is the data the innate
>>> system needs to produce so why hard-code any attempt to
>>> decode an environment when it would be easier and better
>>> to learn the decoding on the fly?
>>
>> Redundancy can be useful in a noisy communication channel.
>>
>>
>> My challenge before on this topic was that if you really
>> were performing lossless compression in your pulse sorting
>> network you should be able to convert the result back to
>> the original form.

> It's obvious my network designs are not lossless.

But you seem to think they have to be as lossless as possible
whereas I don't see that as required in real brains and thus I
see it as very much overkill. Why make it harder than it
needs to be?

I believe you are trying to solve problems that don't exist
because you spend too much time with your imaginings rather
than with real systems that solve real problems. It makes
more sense to start with simple problems and from those
simple problems extract general principles as to how problems
are solved and then use the examples to demonstrate the
principles. Nature has plenty of working examples of brains
and although we don't know the details we do know some of
the basic design principles they use, most of which you deny.

What I understand you are trying to do is build a universal
input to output mapper that can deal with temporal as well
as spatial patterns. The learning being the connections made
between the output of the encoder and input of the decoder
as a result of a reward signal.


   +----+           +----+
   | 00 |----o o----| 00 |
-->| 01 |----o o----| 01 |-->
-->| 10 |----o o----| 10 |-->
   | 11 |----o o----| 11 |
   +----+           +----+
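
A toy sketch of that sort of reward-adjusted mapping (two-bit symbols as in
the diagram; the specifics here are my own illustration, not a claim about
how your actual nodes work):

import random

SYMBOLS = ['00', '01', '10', '11']

# The "switchboard": each input symbol is currently wired to one output.
mapping = {s: random.choice(SYMBOLS) for s in SYMBOLS}

def reinforce(inp, reward):
    # Keep the connection if it was rewarded, otherwise rewire it at random.
    if not reward:
        mapping[inp] = random.choice(SYMBOLS)

# A trivial environment that rewards one fixed wiring.
target = {'00': '11', '01': '10', '10': '01', '11': '00'}
for _ in range(200):
    inp = random.choice(SYMBOLS)
    out = mapping[inp]
    reinforce(inp, out == target[inp])

print(mapping)   # quickly settles on the target wiring for this toy case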


You suggested the temporal patterns required a memory [D].

         +----+           +----+
  +-[D]->| 00 |----o o----| 00 |
  |      | 01 |----o o----| 01 |-->
  |      | 10 |----o o----| 10 |-->
--+----->| 11 |----o o----| 11 |
         +----+           +----+

You realized the combinatorial explosion problem of such a
scheme and suggested that it is solved by replacing a complex
input with a single pulse representing that input.

And so on ...

These are things I thought about and read about in books
on AI and cybernetics a long time ago but the best clues
I have found are from biological brains.

You seem to have a bee in your bonnet with respect to any
"species learning" as if that can be replaced with learning
taking place in the lifetime of an individual.

Let's consider open ended learning which is what evolution does.

First there is no magic guarantee of a system getting smarter.

The species gets "smarter" via natural selection. Here
we have millions of individuals and their experiences over
millions of years to come up with solutions embodied in
their networks. The key here is inheritable variations.

Unlike other animals humans depend heavily on inheriting the
learning and discoveries of their ancestors not via their
genes but by the spoken word and writing. So even to the
extent we learn most of our behaviors these behaviors were
not all learned by the individual but rather by the species.
Without this inheritance your so called generic learning
would leave you running naked around the forest eating
berries and insects like any other dumb animal.


JC

Curt Welch

Dec 8, 2009, 7:01:13 PM

casey <jgkj...@yahoo.com.au> wrote:

> On Dec 9, 3:52 am, c...@kcwc.com (Curt Welch) wrote:
> > ...
> > On Dec 8, 1:07 pm, c...@kcwc.com (Curt Welch) wrote:
> >>> ...
> >>> My view is that the innate hardware is generic perception
> >>> hardware that adapts to whatever data it is fed and produces
> >>> the best internal representation of the data possible by a
> >>> compression technique that removes as much redundancy and
> >>> presents as much usable information about the environment
> >>> as possible with the least amount of correlation between
> >>> parallel signals as possible. That is the data the innate
> >>> system needs to produce so why hard-code any attempt to
> >>> decode an environment when it would be easier and better
> >>> to learn the decoding on the fly?
> >>
> >> Redundancy can be useful in a noisy communication channel.
> >>
> >>
> >> My challenge before on this topic was that if you really
> >> were performing lossless compression in your pulse sorting
> >> network you should be able to convert the result back to
> >> the original form.
>
> > It's obvious my network designs are not lossless.
>
> But you seem to think they have to be as lossless as possible

There is no "seem" about. It's required in order for RL to work.

If you throw out some data, how is the RL system going to learn to respond
to that data? What if the data you drop, is the only data it had to use?

It's like trying to learn to play tic tac toe when you randomly decide it's
ok to drop the contents of half the cells.

There's a different aspect of this issue I've discussed with you in the
past, but I doubt you ever really understood it. But I'll mention it again.
There's the question you ask about losses, which implies any transformation
you perform on the sensory data is reversible. I'm not sure if that's even
possible, let alone required. Then there's the other aspect, which is the
question of whether the information exists at all in the data after the
transform, so that it can be decoded over time using statistics. If the
transform removes the information completely from the data stream, then the
RL algorithm has no hope of learning to respond to it (it's not there to
respond to). But if the data in question is simply hidden in the data
because it's mixed with other data, then there are statistical techniques
for identifying, and extracting, the data.

What's important is that the data is not lost, so that it can be found by
the learning algorithm using statistical techniques over time.
That's what is not just important, but required.

> whereas I don't see that as required in real brains and thus I
> see it as very much an overkill. Why make it harder than it
> needs to be?

You don't seem to grasp what I'm talking about. It's not optional. It's
required or else learning can't work. Which is why I keep telling you to
forget what you think real brains are doing and start studying the learning
problem that needs to be solved. Once you understand the learning problem,
you will have a better appreciation of what real brains are in fact doing.

> I believe you are trying to solve problems that don't exist
> because you spend too much time with your imaginings rather
> than with real systems that solve real problems.

Here in c.a.p. all I do is theorize and imagine. In real life, for my
entire career, all I have done is build real systems that solve real
problems. Virtually every day of my life is spent doing a bit of that. I'm
more than a little experienced with what it takes to make "real systems"
work at all levels.

> It makes
> more sense to start with simple problems and from those
> simple problems extract general principles as to how problems
> are solved and then use the examples to demonstrate the
> principles.

As I have done for you and for myself many times.

> Nature has plenty of working examples of brains
> and although we don't know the details we do know some of
> the basic design principles they use, most of which you deny.

I deny that YOU understand how brains work.

> What I understand you are trying to do is build a universal
> input to output mapper that can deal with temporal as well
> as spatial patterns.

Yes. Very much so. One that will adjust its mapping by reinforcement.

> The learning being the connections made
> between the output of the encoder and input of the decoder
> as a result of a reward signal.

Well, kind of. But that seems oversimplified.

>    +----+           +----+
>    | 00 |----o o----| 00 |
> -->| 01 |----o o----| 01 |-->
> -->| 10 |----o o----| 10 |-->
>    | 11 |----o o----| 11 |
>    +----+           +----+
>
> You suggested the temporal patterns required a memory [D].

Logically yes. But the decoding approach you are suggesting in this
diagram is unworkable because it can't scale. Conceptually however, it
demonstrates the idea. But it also fails to explain how such a system
could learn from reinforcement.

>          +----+           +----+
>   +-[D]->| 00 |----o o----| 00 |
>   |      | 01 |----o o----| 01 |-->
>   |      | 10 |----o o----| 10 |-->
> --+----->| 11 |----o o----| 11 |
>          +----+           +----+
>
> You realized the combinatorial explosion problem of such a
> scheme and suggested that it is solved by replacing a complex
> input with a single pulse representing that input.

No, there is nothing I see about pulse signals that makes me believe they
help the combinatorial explosion problem. The advantage I see in using
pulse signals is that they seem to be a better way to express the data when
you need to process the temporal information in the data.

Often, when trying to solve a hard problem, using a different language to
solve the problem can make all the difference in how hard the problem is.
We can use a Fourier transform to change a time series into the frequency
domain without losing the data. Some problems are easy to solve when they
are represented in the time domain, and nearly impossible to understand
when represented in the frequency domain. Some problems are just the
inverse. Understanding how filters work becomes easy when you study them
in the frequency domain, and nearly incomprehensible in the time domain.
Using a different language allows us to look at a problem from a different
direction, and in doing so, we can see solutions that were hidden before.
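
As a concrete instance of that point (my own example, using numpy): a
low-pass filter is a one-line operation in the frequency domain, while the
equivalent time-domain operation is a much less obvious convolution.

import numpy as np

# A 5 Hz sine plus a 120 Hz component, sampled at 1 kHz for one second.
t = np.arange(0, 1, 0.001)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# In the frequency domain, filtering is just zeroing the unwanted bins.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=0.001)
spectrum[freqs > 30.0] = 0.0
filtered = np.fft.irfft(spectrum, n=len(signal))

# What's left is essentially the pure 5 Hz component.
print(np.max(np.abs(filtered - np.sin(2 * np.pi * 5 * t))))   # ~ numerical noise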

What strikes me as the apparent advantage of the pulse abstraction is that
it's inherently temporal in nature. When the task at hand is to process
the information that exists in the temporal domain, expressing the data in
a language that is native to that domain seems
like the correct approach. It's that belief that attracted me years ago
(and continues to attract me) to the use of the pulse abstraction in my
designs.

> And so on ...
>
> These are things I thought about and read about in books
> on AI and cybernetics a long time ago but the best clues
> I have found is from biological brains.
>
> You seem to have a bee in your bonnet with respect to any
> "species learning" as if that can be replaced with learning
> taking place in the lifetime of an individual.

I have no issue with species learning. I've talked many times about the
fact that an entire species can, and should be, looked at as a single large
reinforcement learning machine which stores what it has learned in the
collective DNA code of all currently living members of that species.

It's that learning machine that has "engineered" the design of the entire
human body. To solve AI, we have to duplicate the engineering that was
done by that learning machine. But since we are not building an entire
human body, we clearly don't have to duplicate all the engineering that was
done. In AI, we are just duplicating part of the control system that
regulates human behavior.

The point of contention between you and me is the question of just what
that part of the human brain is doing. I speculate that it's mostly the
entire neocortex that we need to duplicate and that the entire neocortex is
fundamentally a large generic learning machine. But that's pure speculation
on my part and not even important to my position.

My real point is simply looking at real human behavior, and trying to
speculate what type of machine we need to create it. From this direction,
the details of the brain are irrelevant. The question remains, what type
of machine is required to create the adaptive learning behavior which
humans have?

> Let's consider open ended learning which is what evolution does.
>
> First there is no magic guarantee of a system getting smarter.

I don't think we have any agreement on what you mean by "smarter", so I
suspect you are heading off into nonsense here.

> The species gets "smarter" via natural selection.

Do you mean that growing teeth so it can chew food is "getting smarter"?

Or do you mean "getting more complex" is "getting smarter"?

Or are you using "smarter" to mean "more intelligent" (which is some sort
of reference to brain power which we also can't agree on)?

> Here
> we have millions of individuals and their experiences over
> millions of years to come up with solutions embodied in
> their networks. The key here is inheritable variations.

Sure, we know for a fact that evolution has built lots of hard-coded
networks that got more complex in some species over time - just like our
machines and programs get bigger and more complex over time as engineers
add more features to them.

> Unlike other animals humans depend heavily on inheriting the
> learning and discoveries of their ancestors not via their
> genes but by the spoken word and writing.

yes.

> So even to the
> extent we learn most of our behaviors these behaviors were
> not all learned by the individual but rather by the species.

That doesn't fit how I use the word "learn".

Even if you _first_ learn how to pick an apple, I have to _learn_ it as
well. In one case, you learn it by exploring behaviors you suspect might
be useful until you find the good apple picking behavior, and in the other,
I adjust my list of "suspected good behaviors" based on what I see others
doing, so that I try (and learn) the "good" behavior far sooner. But in
both cases I learn it the same way using the same learning hardware of my
brain. It's just much easier to learn when there is someone else around to
give you hints about where the solution is.

> Without this inheritance your so called generic learning
> would leave you running naked around the forest eating
> berries and insects like any other dumb animal.

Yeah, but that's just stupid to say, John. It has nothing to do with the
debate you and I are having.

Our debate is only about the suspected structure of the machine that we
will agree solves the AI problem. In general, there is still no agreement
in the AI community about what that structure is, or how we will know when
we have found it.

I believe it will be almost 100% learning code with very little
non-learning code. The only non-learning code is just the stuff required
to connect the sensors and effectors to the learning module. Though I
believe there are reasons to add non-learning functions to a machine that
uses a strong learning algorithm, I don't believe _any_ of that code is
_required_ to make the machine act intelligent. And I also believe no
amount of non-learning fixed function code will ever make a machine seem
very intelligent to us. Which is why every time we write better
non-learning code, the result is still a machine that seems to have _no_
intelligence. The goal posts aren't moving. It's just that no one has yet
kicked the ball over them.

You seem to believe that the solution will require a lot of fixed-function
modules (90%?) and some learning modules (10%)? I believe it will be more
like 90% learning and 10% non-learning (where the non-learning is located
only in the module generating the reward signal). We have no data to
really answer this question at this point. It's the same debate as the
nature/nurture debate. There's no real way to know where the line is
drawn. Which is why I don't debate that line so much with you anymore.

My point to you, is that we know how to build non-learning systems. It's
what 99% of all engineering is about (both in AI and out of AI). What we
don't know how to do, is build a learning system that can learn as well as
humans. If you think the learning powers of the human are made "simple" by
the fixed function modules, you need to demonstrate this by showing us a
learning module, and showing us how it's made simpler by the addition of
the fixed function modules - and then explaining how various learned
behavior of humans - like car driving, or Usenet debating - is learned by a
machine of your suggested architecture. And very important, you need to
explain how your suggested architecture solves the combinatorial explosion
problem.

casey

Dec 9, 2009, 5:04:15 AM

On Dec 9, 11:01 am, c...@kcwc.com (Curt Welch) wrote:
> ...

> If you throw out some data, how is the RL system going to learn
> to respond to that data? What if the data you drop, is the only
> data it had to use?


If the data it drops is the only data it had to use to solve the
problem then clearly it would fail. A frog will apparently starve
if all it has to eat are dead insects as its visual system can
only see flying insects.


> It's like trying to learn to play tic tac toe when you randomly
> decide it's ok to drop the contents of half the cells.


Every cell value counts in tic tac toe but every pixel value
doesn't count in recognizing someone in a picture. The values
however are not dropped at the start they are dropped as part
of the transformation process. If the goal is to recognize
a persons face then only the data required to do that has to
get through to the recognizer. Often there is enough data in
a binarized image to recognize someone.


> What's important is that the data is not lost, so that it
> can be found by the learning algorithm using statistical
> techniques over time. That's what is not just important,
> but required.


That sounds fine in theory but in practice I don't see any
practical system having the capacity to hold all that detail,
let alone the capacity to process it.

There are real constraints in the world and you don't have to
store everything, for what is important keeps repeating over
time and a statistical abstract model of the environment can
be built up from that. The real world is also its own database,
so in that sense an organism has access to all the
data without having to store it all.


>> whereas I don't see that as required in real brains and thus
>> I see it as very much overkill. Why make it harder than
>> it needs to be?
>
>
> You don't seem to grasp what I'm talking about. It's not
> optional. It's required or else learning can't work. Which
> is why I keep telling you to forget what you think real
> brains are doing and start studying the learning problem
> that needs to be solved. Once you understand the learning
> problem, you will have a better appreciation of what real
> brains are in fact doing.


Well I simply disagree with you as to what needs to be solved.
I don't believe you have to store every detail that happens.

I see the brain as converting the input data into a simple
and hierarchical description of its world. The details may
be lost within the machine but not the general gist.

If you look around you will see a stable 3D world of objects,
surfaces and substances that behave in predictable ways. That
is what the brain converts those noisy retina images into.

You see it as a high dimensional reinforcement learning
problem whereas I see it as reducing it to a not so high
dimensional reinforcement problem.


>> Nature has plenty of working examples of brains and although
>> we don't know the details we do know some of the basic design
>> principles they use, most of which you deny.
>
>
> I deny that YOU understand how brains work.


I wrote we understand *some* of the basic design principles, which
I can elaborate for you if you want to contest them, and I also
wrote that we don't know how it works in detail, enough detail
to actually build one.


>> What I understand you are trying to do is build a universal
>> input to output mapper that can deal with temporal as well
>> as spatial patterns.
>
>
> Yes. Very much so. One that will adjust it's mapping by
> reinforcement.

See, I do understand what you are trying to do ;)

JC

Curt Welch

Dec 9, 2009, 2:40:56 PM

casey <jgkj...@yahoo.com.au> wrote:

> On Dec 9, 11:01 am, c...@kcwc.com (Curt Welch) wrote:
> > ...
>
> > If you throw out some data, how is the RL system going to learn
> > to respond to that data? What if the data you drop, is the only
> > data it had to use?
>
> If the data it drops is the only data it had to use to solve the
> problem then clearly it would fail. A frog will apparently starve
> if all it has to eat are dead insects as its visual system can
> only see flying insects.

Yes, and for any circuits created by evolution, we can assume evolution
managed to find what was needed and what wasn't. So we trust frogs have
the circuits they need to find the type of food they need to survive.

And we can assume lots of that happens with humans as well. The fact that
we have eyes that are sensitive to a very specific range of the EM spectrum
but blind to others (IR for example) is how we know evolution has tuned
our innate sensory systems to fit what worked well for our survival.

But at some point, somewhere, and somehow, in the brain's signal processing
paths, those innate highly tuned circuits end, and dump their processed
data into a generic learning system that must find, in that data, what is
important to respond to, and what is not. And no matter how much
pre-processing help the generic learning system gets from the
preprocessing, the learning task left to solve is still just as hard (order
of magnitude wise) as if there were no preprocessing at all.

If the visual data for example is pre-processed into a representation of
objects and 3D locations and textures and shapes and direction of movement
and velocity, that would be a big step forward. But the learning task left
still presents the same type of scaling problem as if none of that had been
done. You are still looking at a huge bandwidth flow of information to a
learning system which must map all that data to a very large set of
possible learned actions.

Lets assume we have some high quality visual code that does something like
the above. It takes a raw video signal and translates it into a large set
of objects and shapes etc. We assume this was created by evolution for us
based on the environment we have lived in, so the circuits are all well
tuned for decoding 3D representations from the 2D temporal data and
producing a complex internal representation of all the objects in our
visual field.

How much data do you think that would end up being in terms of the number
of signals, or some measure like Mbit per second of information?
Basically, as I look around the room, I can see all sorts of features with
my eyes. I don't just see "computer monitor", I recognize all sorts of
features about its size and shape and color and the way the light causes
shadows to show up.

Though the translation of the raw data into a compact internal
representation might create a large amount of compression, we are still
left with a data flow that is substantial. Maybe 1 Mbit per second? Maybe
100 Kbits per second?

Even if it's only 10 Kbits per second, how do we take that data stream, and
build a learning algorithm to map that to the required behavior outputs to
create human (or animal like) behaviors and allow that mapping to be
constantly adjusted by a reward signal so that the behaviors keep getting
better and better over time in terms of finding ways to get increasingly
higher rewards from the environment?

What makes this hard is not the bandwidth of the data stream as much as the
state space size/complexity of the environment that is ultimately the cause
of the signals.

Solving any reinforcement learning problem requires that the learning
system works with some internal model of the external environment. It must
statistically track (somehow) the rewards associated with state->action
pairs so it can determine how well a given action works for a given state.

For toy environments that have a very small state space, it's easy to find
ways to represent and track state->action rewards. Such as tic-tac-toe
where the state space is small enough that we can hold in memory a
statistical measure of worth (average rewards) associated with every move
from every board position.
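
A sketch of the kind of table I mean (just the bookkeeping, not a full
player; the string board encoding is my own choice): every (board, move)
pair the learner ever visits gets its own running average of the rewards
that followed it.

from collections import defaultdict

# board -> move -> [total reward, visit count]; a board is encoded here
# as a 9-character string such as "X.O......".
value_table = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))

def record(board, move, reward):
    entry = value_table[board][move]
    entry[0] += reward
    entry[1] += 1

def average_reward(board, move):
    total, count = value_table[board][move]
    return total / count if count else 0.0

record("X.O......", 4, 1.0)   # taking the center from here led to a win...
record("X.O......", 4, 0.0)   # ...and, another time, to a loss
print(average_reward("X.O......", 4))   # 0.5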

But the problem of making this work in the real world is that the state
space of the universe is far too large to track it all. That's true on
both sides of the problem. That is, the number of states represented by
the sensory inputs, as well as the number of possible actions represented
by a good sized set of actuators. There are too many states and too many
possible actions to statistically track all possible combinations. The
high-dimension sensory and effector systems creates a combinatorial
explosion problem in the number of possible state->action pairs there are
to explore when looking for "good" actions, as well as a combinatorial
explosion problem for just trying to statistically track them to find the
pairs.

This problem exists whether we are looking at the raw video stream, or the
highly pre-processed stream. It's still many orders of magnitude too large
to use any of the simplistic algorithms in RL that work for the toy domains
like tic-tac-toe.

We are left with the exact same learning problem, with or without the data
preprocessing. It's a learning problem that as far as I know, no one has
solved. And it's the learning problem the brain _has_ solved, and it's the
learning system that is responsible for creating in us, all our most
interesting intelligent behaviors.

No matter how much people work on building better visual perception
systems, it will make ZERO difference to the learning problem. This
learning problem can't be solved by preprocessing, no matter how good your
visual-preprocessing hardware is.

Far more interesting, in my view, is that I'm fairly sure that when you
do solve the learning problem, you won't need to hand-design or
hand-optimize that large, complex visual perception system, because the
stuff such a system is doing is the same stuff that must be done
generically to solve the generic learning problems that come later.

The problem here is that even a system as large as the brain, with its
billions of neurons and trillions of synapses, can in the end only
statistically track a finite number of variables at once. And even with
trillions of variables tracking trillions of features in that state->action
space, it's not putting a dent in what needs to be tracked.

That is, the state->action space that has to be statistically tracked for
making a robot interact like a human in the real world might be 10^100 in
size, but even a huge brain can only track on the order of 10^10 variables
at once, leaving essentially all of the 10^100 unmonitored.

What I'm getting at above is one possible way to attack this problem which
simply won't work. That's to track as much as you can, and once you
determine that some actions are of no use to track, you switch that
hardware to track other actions for a while. The idea is to use the
learning hardware you have to "study" (so to speak) as much of the state
space as you can, and as you determine some part of that space is of no
use, you re-allocate that hardware to study other parts of the state space.

Any approach like that simply won't work here because the state space is
too large. It would take billions and billions and billions of years to
study enough of the state space to find even the first halfway useful set
of actions.

We can see this in something as simple as the game of chess. How many
possible board positions are there, and how many moves from every board
position? Lots, of course. If we have learning hardware that can only
represent some limited number of board->move pairs to track the value of -
say 10^10 - we can make it play games, and record each board->move pair it
comes across and start to track it, until it fills up its memory with all
the board positions it's seen.

The problem here, of course, is that most positions will never be seen
again. So when it's played enough that its memory fills up, it will have
to start throwing out information. It will have to decide what information
to keep, and what to throw out. The state space of the problem is so huge
that even a machine as large as the human brain can't track enough of the
space to ever become much better at chess than picking moves at random,
using this approach.
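
Here's the same bookkeeping hitting the wall, sketched crudely. The
capacity number and the evict-the-least-visited rule are arbitrary
assumptions of mine, just to show the forced forgetting:

    # A bounded table of (board, move) statistics.  Once it is full, every new
    # position forces us to throw something away -- here, the least-visited
    # entry -- even though we have no idea yet whether it mattered.
    CAPACITY = 10_000_000        # tiny compared to the space of chess positions

    stats = {}                   # (board, move) -> [total_reward, visits]

    def record(board, move, reward):
        key = (board, move)
        if key not in stats and len(stats) >= CAPACITY:
            evict = min(stats, key=lambda k: stats[k][1])   # forget something
            del stats[evict]
        total, visits = stats.get(key, [0.0, 0])
        stats[key] = [total + reward, visits + 1]

Most of what gets recorded will never be looked up again, which is the
point.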

And even if you had a machine larger than the entire mass of the universe,
with enough memory to track every board position and every move in chess,
it would still take billions and billions of years to get as good at
playing chess as a human, simply because that's how long it would take to
visit every state enough times to learn which ones really work well and
which don't.

To solve the RL problem we are looking at here, we have to statistically
track the value of state->action pairs in a state space that is too large
to make it possible to track every pair. We need a different approach to
this class of problem, not just more hardware.

The type of pre-processing you keep talking about doesn't (and can't)
transform the problem into a simpler type of problem that can be solved
with these toy learning algorithms. No matter how much work evolution has
done for us, the learning problem remains just as hard. When we talk about
a state space this size, it makes no difference whether it's 10^100 or
10^50 in size. It's still many orders of magnitude too large, which means
we have to create a solution to these high-dimension learning problems no
matter what evolution has done for us.

The solution requires that we abstract out features of the environment
which are _important_ to our survival - which are _important_ to getting
higher rewards. The solution is that we throw out the data that's not
important, and use the limited hardware we have only to track the features
which are important.

I'll talk more to that after more comments to your ideas....

> > It's like trying to learn to play tic tac toe when you randomly
> > decide it's ok to drop the contents of half the cells.
>
> Every cell value counts in tic tac toe but every pixel value
> doesn't count in recognizing someone in a picture. The values
> however are not dropped at the start they are dropped as part
> of the transformation process. If the goal is to recognize
> a persons face then only the data required to do that has to
> get through to the recognizer. Often there is enough data in
> a binarized image to recognize someone.

Yes, we don't need to use all the data to find the stuff that's important
to us. Much of it can be (and is) ignored. But that's where the problem
lies: how do we know what to keep and what to ignore? (Don't just answer
"evolution solves that for us" - I'll show below why that answer can't
explain the power the brain has.)

> > What's important, is that the data is not lost so that it
> > can be found by the learning algorithm using statistical
> > techniques over time. That's what is not just important,
> > but required.
>
> That sounds fine in theory but in practice I don't see any
> practical system having the capacity to hold all that detail.
> let alone the capacity to process it.

Right, and this is where your logic falls apart. You don't see how it can
be solved, so you assume it's NOT solved. How weak of an argument is that?
Pretty damn weak.

> There are real constraints in the world and you don't have to
> store everything for what is important keeps repeating over
> time and a statistical abstract model of the environment can
> be built up from that. The real world is also its own data
> base so in that sense an organism has access to all the
> data without having to store it all.

That's all mostly true. But the "you don't have to store it" part is wrong.

First, even though our sensory data is full of what you call constraints,
that's just redundancy in the sensory data. Once you remove the redundancy
by compression, what you have left is sensory data that is reporting some
small part of the current state of the environment to us. The data we are
trying to get access to is the state of the environment around us - and
that state is FUCKING HUGE. No amount of compression of sensory data to
get at that raw state data is going to make this problem easy, because the
real state of the world around us is FUCKING HUGE. (Sorry for being a bit
blunt and crude here, but I'm trying to make a point that I've been trying
to make to you for 5 years and which you still haven't grasped.)

No amount of compression is going to make the data easy to deal with
because the state of the environment we are trying to sense simply is too
large to be made easy.

On the issue of not needing to store the state... that's true, because we
don't need to store state. We need to know how to _react_ to whatever the
current state is. And when we build fixed-function hardware, all we do is
build into the hardware those fixed reactions. We write code that defines
how we react, like:

    IF right bumper sensor on, and left bumper sensor not on, and right
    wheel turning forward, and left wheel not turning, THEN put right wheel
    into reverse.

Our "code" is a large list of hard-coded specifications of how to react to
the small part of the environment we can sense. When we right code like
that, we decide what part of the state we need to sense, we build sensor
systems that are able to detect that part of the state of the environment,
and we build effector systems that can produce the behaviors we need to
have produced, and we create the map that specifies which behaviors are
trigger in response to which sensory conditions. The whole package then
becomes our hard-coded behavior generation system. No where was the system
required to attempt to represent the entire state of the environment
internally. It used the real external environment and just sensed it.
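
A minimal sketch of that kind of hard-coded, non-learning reaction map,
with invented sensor and effector names (it stores no state of the world at
all, it just reacts):

    # Hard-coded behavior generation: the designers decided in advance which
    # bits of environmental state to sense and exactly how to react to them.
    # Sensor and wheel-command names are made up for illustration.
    def react(right_bumper, left_bumper, right_wheel_fwd, left_wheel_fwd):
        if right_bumper and not left_bumper and right_wheel_fwd and not left_wheel_fwd:
            return "right_wheel_reverse"
        if left_bumper and not right_bumper:
            return "left_wheel_reverse"
        return "no_change"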

But that's how non-learning systems are built. We aren't solving that
problem here. We have to solve the problem of how to build a learning
system.

And that is where storage becomes a problem. To solve our high dimensional
learning problem, we have two problems that need to be solved at the same
time. We not only need to create the mapping system that maps sensory data
to actions and defines our current set of behaviors; we also have to
statistically track lots of possible alternatives. It's not hard to see
how a system as large as the brain is able to encode our current behavior
map in its huge networks. But how does it do learning at the same time?
How does it statistically track all the possible things we could do, to
figure out what alternatives it should try so as to make our behavior work
even better in the future?

The storage problem for learning is the fact that it needs to _store_ the
statistical worth of potential state->action pairs. The large state space
makes the statistical tracking of potential actions a combinatorial
explosion problem.

The partial answer, as you and I both talked about above, is that we can't
track everything, so instead we track just what's important and useful to
track. But for the generic learning side of the problem, again, we are
faced with a combinatorial explosion. Of all the abstractions we can
create from the raw data, which ones are the ones that are useful to
track, and which ones can we ignore? And how do we find the ones that are
useful?

> >> whereas I don't see that as required in real brains and thus
> >> I see it as very much an overkill. Why make it harder than
> >> it needs to be?
> >
> >
> > You don't seem to grasp what I'm talking about. It's not
> > optional. It's required or else learning can't work. Which
> > is why I keep telling you to forget what you think real
> > brains are doing and start studying the learning problem
> > that needs to be solved. Once you understand the learning
> > problem, you have better appreciation of what real brains
> > are in fact doing.
>
> Well I simply disagree with you as to what needs to be solved.
> I don't believe you have to store every detail that happens.

WE KNOW what needs to be solved (at least I do). It's easy to PROVE what
needs to be solved here by looking at what humans can do. My only problem
is getting you to understand the true nature of the problem here. If you
read my words, and think about them, you should be able to understand that
what you are suggesting can't work. I'll continue to explain with more
examples below...

> I see the brain as converting the input data into a simple
> and hierarchical description of its world. The details may
> be lost within the machine but not the general gist.

Yes, I agree completely. That's exactly how it must be done.

> If you look around you will see a stable 3D world of objects,
> surfaces and substances that behave in predictable ways. That
> is what the brain converts those noisy retina images into.

That's fine. But as I talked about above, that's not enough. That's still
way too much data to do simplified learning with. That might convert a
problem space of size 10^100 into a problem of size 10^90 by doing the
100000000:1 data reduction you are talking about, but we still can't solve
it unless it's converted to something like a 10^6-sized problem space that
our toy learning algorithms can handle.

Start size: 10^100

Cleaned up size using your type of pre-processing (and I'm being over
generous here): 10^90

Size we need to get down to for current toy algorithms, even using every
computer in the world running in parallel: 10^8

You are still more than 10^80 times too large a problem space to make the
learning problem easy to solve.

Yes, these are made-up numbers, but if you like, I can create some real
numbers using the comparatively trivial environment of the game of chess
and show you how no amount of your type of "clean up of data" brings the
learning problem into the range where we can attack it with our current
types of learning algorithms.

We need a totally different type of learning algorithm, one that actually
works well in these high dimension learning spaces. Your approach seems to
be that you can't grasp how such a learning algorithm could even exist, so
you just ignore the issue and assume it's solved by pre-processing that
makes the learning easy enough to be solved. But I can show you, if you
pay attention, that that is impossible. And I can also show you
(conceptually) just how these high dimension problems _are_ solved.

> You see it as a high dimensional reinforcement learning
> problem whereas I see it as reducing it to a not so high
> dimensional reinforcement problem.

Yes, but that's impossible, and that's what I'll continue to show you. You
can't reduce it to a non-high-dimension learning problem. We MUST create a
learning algorithm that solves high dimension problems with small amounts
of hardware, in small amounts of time.

> >> Nature has plenty of working examples of brains and although
> >> we don't know the details we do know some of the basic design
> >> principles they use, most of which you deny.
> >
> >
> > I deny that YOU understand how brains work.
>
> I wrote we understand *some* of the basic design principles, which
> I can elaborate for you if you want to contest them, and I also
> wrote that we don't know how it worked in detail, enough detail
> to actually build one.

Yeah, fine, not important (my comment or yours).

> >> What I understand you are trying to do is build a universal
> >> input to output mapper that can deal with temporal as well
> >> as spatial patterns.
> >
> >
> > Yes. Very much so. One that will adjust it's mapping by
> > reinforcement.
>
> See, I do understand what you are trying to do ;)

Yes, but you don't yet grasp that it's trivial to prove that you CANNOT get
around the fact that it is a high dimension learning problem and that no
amount of help by evolution can change that fact.

So let me continue to explain why this is so.

As you pointed out before, in order to explain our behavior (ignoring our
ability to learn) we see that the sensory data has to be abstracted down to
just the gist (as you described) of the state of the environment. This
internal model is a highly simplified model, but it represents what is
_important_ to creating the behavior we need to create. There's nothing
hard to understand about this. We do the same thing all the time when we
build machines. If we build a car that can drive itself, we don't create
a sensory system that attempts to count the number of leaves on the
trees on the right side of the road and compare that to the number of
leaves on the trees on the left side of the road. We know that data is not
important to this problem, so we don't include hardware that can do that in
our design.

You often mix together in these arguments the idea of "using what is
important" with the idea of "the data is constrained, so there's actually
a lot less real data there than it seems".

As I hope I showed above, you shouldn't confuse these two things. They are
very different issues. Yes, the data is constrained, which just means it
can be compressed down to something a lot smaller, which (if we could do
perfect compression) would have no constraints left but would be a lot
less total data than we started with. This is just standard compression
theory and nothing hard to understand. If we compressed everything the
video sensors in our car were sensing, it would be a lot less data (but
the same amount of _information_), and it would still include enough
information to actually count the leaves on the trees we were driving past.

But that leaf count is not important to _this_ set of behaviors (driving a
car), so it can be tossed out; for these behaviors, that information is
not important.

The data compression that can be done because of the constraints, does not
make it easy to create the right behaviors, or to do learning, but it is
important and useful.

And we can assume that evolution built a system to reduce the data by some
type of compression, using the constraints to help make this all work
better. But that system, however it works, does not change the learning
problem into a low dimension learning problem.

We still need to throw out a lot more data before we get down to triggering
the right behaviors. So please, stop confusing these two data reduction
effects in these debates.

So, after it's all nicely compressed, what do we throw out and what do we
keep?

For anything that has been important to us for millions of years, we can
again turn to evolution and assume it has built custom hardware in our
brain to filter out the crap and keep the important stuff. It's been
valuable to us to recognize humans for millions of years, so it's valid to
argue that evolution has built custom face recognition circuits that keep
the important details needed to discriminate faces and ignore the data
that is not useful for discriminating faces. This same logic can be
applied to a wide range of items, and behaviors, that have been important
to us for millions of years - like eating, and finding food, and having
sex, and navigating in a 3D world with earth-like gravity using 2 legs and
2 arms, etc.

A lot of what we do, even in this very different modern environment, can
be tied back in some way to basic needs we have had for millions of years.
And as such, we can argue, as you do, that evolution built for us complex
special filters that pick out of the compressed sensory information the
features which are important, and filter out the features which are not.

But that argument has its limits. And it's easy to show those limits by
testing the learning powers of humans. We could create a series of
experiments to test a person's power to learn in a limited domain like
vision, for example. We show them a picture and ask them to press one of
two buttons. If they press the right button they get a reward; if they
press the wrong button, they get some punishment. It could be something
rather severe, like an electric shock versus no shock, or it could just be
a green or red light that lights up, with the subjects told to always try
to push the button that makes the green light show up. We could give them
a dollar every time the green light lights up to make sure they are really
motivated in this learning experiment. Whatever.

We then run the test, using some set of features in the image to determine
which button is correct. We could start simple, using only simple
geometric line drawings and making the button rule something like: if
there is only one figure in the image, press the first button; if there is
more than one, press the second. Then we run lots of pictures past lots of
subjects and see how well their learning works. Does the behavior improve
over time? For something simple like that rule, I would expect most people
to start scoring 100% accuracy before too long.

We could continue this experiment with lots of different subjects, using
lots of different pictures, with increasing amounts of detail and
complexity, and with increasingly complex rules to define which button to
push. What we would find is that some rules are so complex that none of
our subjects would learn them in the time frame we ran the experiment for.
For example, we could make a rule for pictures which were just numbers,
that said: if we multiply the number by PI, and the twelfth digit of the
product is a 3 or a 6 or a 7, push button one, else push button two. It's
unlikely anyone would figure that out after testing with a few thousand
pictures. That particular rule represents a feature that their brain just
isn't wired to normally sense in images.
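
Just to pin down what I mean by a hidden rule, here is one possible reading
of that PI rule, sketched in Python. The scoring details, and treating
"twelfth digit" as the twelfth digit of the product written out, are my own
arbitrary choices:

    # The experimenter's hidden rule: multiply the shown number by pi and look
    # at the twelfth digit of the product.  Subjects only ever see the number,
    # press a button, and get a green or red light.
    from decimal import Decimal, getcontext

    getcontext().prec = 50
    PI = Decimal("3.14159265358979323846264338327950288419716939937510")

    def correct_button(shown_number):        # shown_number: a positive whole number
        digits = str(PI * shown_number).replace(".", "")
        return 1 if digits[11] in "367" else 2

    def run_trial(shown_number, button_pressed):
        return "green light" if button_pressed == correct_button(shown_number) else "red light"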

But by doing lots of testing like this, we can collect information on the
range of features typical humans _can_ learn to recognize in the images,
and can learn to use their perception of that feature to control their
behavior (their button pushing behavior). We could even do some very long
term experiments like this for very hard features, where we had test
subjects come in for testing a few hours every week for 10 years, and see
just how their performance changed for a very hard to discriminate feature
over a very long time frame (assuming we could find the funding to do the
tests :)).

What we would end up with is some rough measure of the size of the feature
space that the human brain was effectively analyzing as it tried to improve
its button-pushing behavior in these experiments. And at the same time, by
noticing how fast the subjects as a whole learned, we would be getting a
measure of how fast their brains were searching the feature space and
finding the right answer.

What you will find, is that no learning algorithm we have yet created,
could come close to the performance of the human brain on this learning
task. And by "close" I mean, that even if we got every computer in the
world running in parallel trying to learn how to push these buttons, the
monster learning machine still wouldn't come close to the performance of
the humans at this "simple" classification task.

If we used features that humans might have specialized circuits to
recognize, like human faces, then we can argue that all the results show
is that humans have specialized hardware to spot those features.

But what we can do, to verify the point I'm making here, is make sure we
pick a wide range of features that have no known connection to human
survival over the past 10 million years, and see how well the humans
perform on those features.

For example, we could make a rule that said if the upper right part of the
picture was more RED in color than the lower left part, push button one,
else push button two. Or if the shape of the object in the upper right was
the same as the shape of the object in the center of the screen, then push
button one, else button two. Or if the object closest to the center has
more than 10 sharp points on it, push button one, else button two. Or if
there's an even number of animals in the picture, push button one, and if
an odd number, push button two. And on and on, making up specific features
and rules that have no evolutionary advantage to our survival. We then see
which rules the subjects in general can learn, and how many pictures on
average they have to be exposed to (training examples) before their
generic learning system is able to start producing the correct behavior at
some rate higher than random.

Now, many of the examples I talked about are easy to describe with
language (after all, I'm using language to describe the experiment here on
Usenet). And our subjects are likely to use their language skills to try
and help them with this task. That is, they will probably talk to
themselves about what they see in the picture, and then look for patterns
in their own language descriptions. When that technique works for them,
they are really using their well-trained skills for translating pictures
into descriptive language as a crutch to improve their performance on this
task. If we pick features of the pictures which are simple to describe in
language, then that crutch is likely to help them. And if they use that
crutch, then what we would be testing is their combined ability to
translate pictures to words and their ability to spot patterns in words.
That too is all part of our generic learning powers, but if we want to
prevent them from using those skills and instead try to get at their pure
lower-level image feature extraction system and behavior learning, we could
use only patterns that are easier to "see" than to "describe".

Either way, what we end up with, is a rough idea of how large the feature
space is that the person is searching as they learn this task. And we see
how fast they are able to search that space, and how fast they are able to
converge on the correct answer (or at least on an answer clearly better
than random).

And what you will find, when you compare those two numbers, is that there
is no learning algorithm known to man that can equal the learning speed of
a human at these generic learning tasks. These are learning tasks that
cannot be made easier by a million years of evolution building custom
filters to pick out which features are important and which are not -
because we would use only features that have no apparent value to humans
in any other aspect of their current lives, or in the millions of years
humans have recently been evolving.

Now, to really drive this point home, I would need to actually carefully
design this experiment, perform it, analyze the results, publish it, let it
be debated, and let others duplicate it, and then we would have the facts
of what I'm suggesting. So maybe, without this careful experimentation, you
will still doubt what I'm saying.

But my point is, we have the ability to learn to recognize features of our
environment through experience that have nothing to do with what was
important to us for the past million years. And if you look at the size of
this feature space, you find it's huge. It's so huge that you can't
explain, using simple algorithms, how it's searched so quickly, how the
"correct" features are so quickly found in the space, and how they are then
"wired" to the arm to make it push the right button in response to the
feature.

Each trial in such a test creates learning in the brain. When the wrong
light lights up, the subject knows he's made a mistake, and it's processed
as a minor "punishment" by the brain.

Now, when we try to use things like neural networks to do image
recognition, and we find they can't perform as well as humans at something
like face recognition, we can at first just write it off (as you seem to
like to do) as innate hardware tuned by evolution to pick out the important
stuff from the data stream. But much of what we have to learn to recognize
as "important" in this modern world has nothing to do with what was
important to our ancestors for the past million years. They didn't see a
50 dollar bill lying on the ground as "an important feature of the
environment". They didn't see a red traffic light as an important feature
of the environment. They might have learned to see a special type of flint
rock lying on the ground as a highly important feature of their
environment, however - whereas I see it as noise because I don't collect
flint rocks to make arrows to get my food.

After you factor out everything evolution could possibly have done to
simplify the learning problem, we are still nowhere near close enough to
calling the problem "easy". Even after reducing the sensory data by
compression to remove all the constraints, and after the filtering and
highlighting of "important" features by evolution based on a million years
of evolutionary learning, the amount of learning still left to be done is
huge. It's still, by a wide, wide margin, a high-dimension learning problem
that no one has yet created a solution for.

Because I've been exposed to English writing all my life, and because much
of what's written in English turns out to be highly important to my
survival, I've developed a highly tuned set of filters that takes
high-bandwidth, low-level information about lines and arcs and converts it
into highly abstracted, high-level information - ignoring all the features
of those lines and arcs which are not important, and using all the features
which are. And none of that filter system developed in me because
evolution hard-coded those filters over the past million years. That
complex set of filters that allows me to recognize the importance of, and
correctly classify, English signs was conditioned in me by a lifetime of
past reinforcement events.

How does such a complex English-sign-decoding network form in us (so
quickly) in response to being exposed to that data? It forms as a
result of the strong learning powers that exist in our brain, not because
of evolution building "helper" circuits to tell us what to filter out and
what to use.

What evolution builds for us, and what we have to duplicate to solve AI, is
a strong learning system that solves these high dimension learning problems
as well as the brain solves them. Working on better, and more, hard-coded
"helper" modules is POINTLESS for solving AI because that's not what we are
missing. That's the part we already understand how to build and have been
building for the past 60 years in AI - and getting no closer to AI than we
were 60 years ago, because we have been building the WRONG THING. We have
to stop building these systems by hand, using our own intelligence, and
instead build a system that has the power to do that work for us - for
itself. Only when we do that will we have created intelligence. To solve
AI, we have to stop building chess computers and instead build a computer
that can program itself to play chess. And that's what RL algorithms are.
Except, of course, none of them is yet good enough to learn chess - to
write its own "chess" program, that is, to re-configure itself into a
strong chess player through the experience it gains by playing chess.

I have workable ideas on exactly how these high dimension learning problems
can be solved. It's not all that hard. But it's too much to describe in
this overly long post so I'll post more if I can find the time ...

My main point in all the above is just to try and get you to understand
that no amount of "help" from evolution can remove the fact that humans do
today have a brain that solves high dimension learning problems that
couldn't possibly be explained by "help" from a million years of evolution
building custom filters for us because after you factor out everything
evolution could have done to help in that way, the power we have to solve
made up random high dimension learning problems is still far greater than
any learning algorithm anyone has yet created. Until we can build machines
to duplicate that aspect of our intelligence, we won't have come close to
making a machine intelligent. And until we have that solution in hand, and
know what it can and can not do, we don't know how much of these "helper"
systems are or are not needed to explain human behavior.

pataphor

unread,
Dec 10, 2009, 6:49:45 AM12/10/09
to
Curt Welch wrote:

[..]

> My main point in all the above is just to try and get you to understand
> that no amount of "help" from evolution can remove the fact that humans do
> today have a brain that solves high dimension learning problems that
> couldn't possibly be explained by "help" from a million years of evolution
> building custom filters for us because after you factor out everything
> evolution could have done to help in that way, the power we have to solve
> made up random high dimension learning problems is still far greater than
> any learning algorithm anyone has yet created. Until we can build machines
> to duplicate that aspect of our intelligence, we won't have come close to
> making a machine intelligent. And until we have that solution in hand, and
> know what it can and can not do, we don't know how much of these "helper"
> systems are or are not needed to explain human behavior.

A very interesting read. The way I think humans (and other intelligent
organisms) solve this problem is by having some neural layers that
extract features out of the incoming sensory input, while there are also
motor neuron activation patterns. Then, if a certain sequence of
activation of these feature detectors gives some pleasurable effect, the
brain tries to replay motor sequences that may lead to feature detector
state 2 when it sees feature detector state 1. If successful, the whole
sequence is completed until the pleasurable outcome repeats itself.
After a few successful trials the whole sequence is automated and can be
used as a building block for higher patterns. In this way the complexity
is overcome, because it builds on sequences of sequences.

But there is still the problem of what the goal of all this activity is.
Even if evolution selects for those biological entities that survive and
have offspring, it cannot predict what will be the best answer to each
problem the organism faces because there is too much variation in the
environment. So it makes general purpose feature detectors, connected to
more or less hardwired pleasurable outcomes, and a learning system that
chips away at the complexity by abstraction from patterns of patterns.
In humans, the patterns have become so complex that newborns are
unlikely to stumble upon the things their parents have found so we have
a long educational period.

However this still means the process is far from optimal. It will take
some time before we can produce an AI that from the moment it is started
optimally rewrites its future light cone by using the set of actions
that are available to it. Then the question arises, what would such a
complete solution of the problem look like? Suppose the AI has solved
its universe like it was some giant Rubik's cube; it is now in the
optimal state. No need to change anything. Is this death? And also, once
one can reach any state in theory, is there really any need to actually
do something, because the optimal end state is just as much a 'death' as
any other state.

P.

casey

unread,
Dec 10, 2009, 2:54:09 PM12/10/09
to
On Dec 10, 6:40 am, c...@kcwc.com (Curt Welch) wrote:
> ...
> You don't see how it can be solved, so you assume it's NOT
> solved. How weak of an argument is that? Pretty damn weak.

Some problems are not solved by finding a solution but rather
by a demonstration that the problem is wrongly conceived or
based on erroneous assumptions.

> No amount of compression of sensory data to get at that raw
> state data is going to make this problem easy because the
> real state of the world around us FUCKING HUGE.
>
> (sorry for being a bit blunt and crude here but I'm trying
> to make a point that I've been trying to make to you for 5
> years and which you still haven't grasped).


There is nothing to grasp. You are making a false assumption
because of the way you are viewing the problem.

We have circuits that can produce an answer regardless of what
combinational input is given to them. A circuit can compute
a particular response to a particular stimulus without having
ever received a reward in its lifetime for that particular action.

We may need a reward to initiate and maintain a learning
activity such as learning to recognize the characters used
in English but the actual circuits to process, store,
retrieve and form associations with such symbols can be
innate.

It is the circuit that is rewarded, by the reproductive success
of the organism, not by any stimulus-response reward during
the lifetime of the organism.


> If we build a car that can drive itself, we don't create
> an a sensory system that attempts to count the number of
> leaves on the trees on the right side of the road and
> compare that to the number of leaves on the trees on the
> left side of the road. We know that data is not important
> to this problem, so we don't include hardware that can do
> that in our design.
>
>
> You often mix in these arguments the idea of "using what
> is important" with the idea of "the data is constrained
> so the real amount of data there is actually a lot less".


I don't think I confuse anything. There are laws of physics
that restrict what can and cannot happen "out there". Even
so, it can be very "complex", such as the motion of atoms in
a gas, in which case we can't deal with it directly. Or we can find
a simplified model (the gas laws) which we can deal with.


> We could create a series of experiments to test a person's
> power to learn in a limited domain like vision for example.


Psychologists have done lots of experiments with learning in
humans (and other animals).

There are all sorts of input output experiments that reveal
things about how the brain "deals with data".


> ... what we can do, to verify the point I'm making here,


> is make sure we pick a wide range of features that have
> no known connection to human survival over the past 10
> million years, and see how well the humans perform on
> those features.


Such experiments on human learning, memory and perception have
been going on for decades. May I suggest you read some of them?


> I have workable ideas on exactly how these high dimension
> learning problems can be solved. It's not all that hard.
> But it's too much to describe in this overly long post so
> I'll post more if I can find the time ...


Why not have a web page which you can simply edit again and
again to fine tune your views and answer the critics rather
than write post after post?


JC


casey

unread,
Dec 10, 2009, 2:55:17 PM12/10/09
to
On Dec 10, 10:49 pm, pataphor <patap...@gmail.com> wrote:
> ...

> Suppose the AI has solved its universe like it was some
> giant rubik's cube, it is now in the optimal state. No
> need to change anything. Is this death?

If its only goal is to understand the Universe and if it
reaches that goal then yes there is nowhere else to go.
It is like a calculator. Give it a goal (make the system
unstable by pressing buttons) and it will run to a stable
state (solution) and stop.

Biological machines however are only dynamically stable
like a whirlpool or a flame. They require a continual
inflow of energy and matter to exist. They have to maintain
their essential variables. They have to keep finding food.


JC

Curt Welch

unread,
Dec 10, 2009, 3:04:23 PM12/10/09
to

Let me talk about this paragraph of your post here and I'll make other
replies to talk about your other points.

I think in general, I basically agree with what you are thinking above.
But there are lots of important specifics still missing from such a high
level abstraction - some of which I feel I can fill in, and some which I
can't yet. I was going to follow up with my ideas on how to attack these
high dimension learning problems anyway, and your ideas are a good place to
start.

To start with, the raw sensory inputs are already features of the
environment. So before we do any processing, we already have a large and
complex set of features to work with, in the form of parallel signals -
each signal representing one feature. From the eye, the signal from a
single nerve fiber will be a feature that roughly indicates the amount of
light of a given frequency range falling on one part of the retina. So
from this perspective, we don't have to build a network to create features.
We already have features.

So what then is the purpose of the processing? I believe it's best seen as
a data transform function, instead of as a feature extraction system
(though I too often talk in terms of a feature extraction system). An FFT
algorithm is an example of a transform. It doesn't add information to the
signal, or remove it. It simply transforms the data so we get a different
view of what's there. In one view, we have features that represent signal
magnitude at a point in time. In the transformed view, we have frequency
magnitude across the entire signal. It's just two different views of the
same data. But we can just as easily talk about it as extracting frequency
features from the input data set.
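
Here's that "different view of the same data" point in a few lines of
numpy, just as an illustration (nothing brain-specific about it):

    # An FFT neither adds nor removes information -- the inverse transform
    # recovers the original signal exactly (up to floating point).
    import numpy as np

    x = np.random.randn(256)          # a time-domain signal
    X = np.fft.fft(x)                 # frequency-domain view of the same data
    assert np.allclose(np.fft.ifft(X).real, x)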

I think the overall goal of the processing needed is to transform the
feature set input, into an output feature set. The output feature set is
the behavior produced by the system. That is, the signals sent to the
effectors to make us (or a robot) move.

On the issue of John's view that evolution has hard-coded feature
transforms in here for us, that's fine. We can assume evolution has
done that for us and figure out just what it did later. At this point I'm
only talking about the high dimension, real time, generic learning problem
that's left _after_ all the help evolution has given us to provide the best
signals possible to start this learning task with. It might even be that in
the brain these two functions are intermixed in the same networks, so that
the nodes evolution pre-wired are also built to support a generic learning
function at the same time. But those are all implementation details to be
figured out later. For now, I'm just talking about how we might tackle the
high dimension learning side of the problem.

From an information theory perspective, we can talk about the information
carried in each of the incoming signals. Taken alone, each signal has some
amount of information in it. But when we compare two signals (features)
from the set, side by side, we will find some redundancy in the signals.
We can calculate that mathematically as a correlation between the signals.
A correlation of 1 means the signals are carrying the same information -
100% correlated - effectively the same signal. If that were the case, we
could throw away one of the signals and not lose any of the information.
It's just redundant. At the other extreme, a correlation of zero indicates
no (linear) redundancy in the information in the two signals.

So, in the more general case, we should be able to apply a transform that
creates new signals from the old ones and, in the process, creates two new
signals that have a zero (or near zero) correlation. The idea is that the
information from the two input signals is used to create the output
signals (features), but in the process all the redundant information is
placed in the same signal, so as to create two new features with zero (or
at least less) correlation than we started with.
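
One standard, if simplistic, way to do that for a pair of signals is to
rotate them onto the eigenvectors of their covariance matrix (i.e. PCA).
I'm not claiming the brain does PCA; it's just the easiest concrete example
of a correlation-removing transform, with made-up signals:

    # Two sensors that both see the same "event in the world" are highly
    # correlated.  Rotating the pair onto the eigenvectors of their 2x2
    # covariance matrix yields two new signals carrying the same total
    # information but with (numerically) zero correlation.
    import numpy as np

    rng = np.random.default_rng(0)
    event = rng.standard_normal(10_000)             # shared cause out in the world
    a = event + 0.3 * rng.standard_normal(10_000)   # sensor 1
    b = event + 0.3 * rng.standard_normal(10_000)   # sensor 2

    pair = np.vstack([a, b])
    _, eigvecs = np.linalg.eigh(np.cov(pair))
    u, v = eigvecs.T @ pair                         # the transformed signal pair

    print(np.corrcoef(a, b)[0, 1])                  # ~0.9  (redundant inputs)
    print(np.corrcoef(u, v)[0, 1])                  # ~0.0  (redundancy separated out)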

Correlation exists in the sensory data for simple and obvious reasons. For
the eye, an example I use is turning the light on in a room. When you do
that, _every_ light sensor in the eye will suddenly register an increased
light level at the same time. It's a correlated effect that shows up across
every pixel in the image at the same time. But it's a single event which
happened out there in the universe. And that's typical of what happens.
There will be events in the universe that create correlated effects across
multiple sensory signals - not just across the pixels in a video stream,
but across sensory modalities as well. You smash a light bulb that was on,
and not only does it create a broad-spectrum sound (noise), it makes the
light go out. So we have one important event in the environment creating a
correlated effect across many sensory signals.

So the point of using transforms that remove correlation is not just some
generic "data clean up" process (though it can be thought of in those
terms). Its goal is to extract features that represent unique events in
the environment. That's why, and how, the correlations get created in the
sensory data to begin with: unique events in the environment have the
power to make changes to multiple signals at, or near, the same time.

If you have hardware that can take two signals as input and create two
signals as output with the correlation removed, then a large network of
these can be wired together to clean up the correlations that exist in any
size set of signals, two at a time, by using some form of cross-connection
topology in however many layers are needed. Such a network should, in my
view, produce high level signals (features) that are, in general, a good
representation of the current state of the environment with minimal
amounts of redundancy (correlation) in the output signals.

The other point about this transform is that information is not lost.
It's just routed to a different location in the signals. Just as with an
FFT, information is not lost, and yet, in the transform, changing one input
signal has the power to change every output signal of the FFT. Likewise,
in this correlation-removing network, the same effect is likely to happen:
any change to any input is likely to have some effect on every output.

The other aspect at work in these signals is the temporal problem. The
correlations don't just exist from signal to signal at the same point in
time (which is the standard way mathematical correlations are calculated).
I call that the spatial aspect of the signals. They exist across time as
well, where the value of a signal at one point in time can be correlated
with the same signal (or a different signal) a second later. This again
represents a redundancy that exists even in a single signal, and a
redundancy that also needs to be removed by the transformation process.
It's like an echo in the signal that isn't helpful. It's redundant data
that's not needed and only makes the generic learning problem harder to
solve. So the transform needs to be able to identify and remove temporal
correlations as well. I've not figured out how to do that, but
conceptually, it's easy to understand what needs to be done here. If you
have an audio recording with a 10 ms echo caused by a nearby wall, then
you can transform that signal into an audio stream without the echo, and a
second signal that represents the fact that there is a 10 ms echo in the
signal. The fact that there is an echo in the stream is information that
is important. But it's better represented internally as the data without
the echo plus a separate signal that indicates the nature of the echo as an
invariant characteristic of the data stream.
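
The echo case is easy to make concrete. A sketch, assuming the echo delay
and strength are already known (discovering them is, of course, the real
problem):

    # Temporal redundancy as an echo: y[n] = x[n] + a*x[n-d].  Given a and d,
    # a simple recursive inverse filter strips the echo back out, leaving the
    # clean stream plus the two numbers (a, d) that describe the echo.
    import numpy as np

    def add_echo(x, a, d):
        y = x.copy()
        y[d:] += a * x[:-d]
        return y

    def remove_echo(y, a, d):
        x_hat = y.copy()
        for n in range(d, len(y)):
            x_hat[n] = y[n] - a * x_hat[n - d]
        return x_hat

    x = np.random.randn(1000)
    y = add_echo(x, a=0.6, d=80)                    # 80 samples of delay
    assert np.allclose(remove_echo(y, a=0.6, d=80), x)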

When we remove the temporal redundancy from the signal and replace it with
signals indicating the type of redundancy we have removed, we create what
is known as an invariant representation. It's a feature that exists in the
input signals, that spans some period of time.

Motion detectors are one specialized view of the generic problem of
invariant temporal representation. Instead of the raw data showing an
object at different locations across the signals, the invariant
representation is a position-independent description of the object, along
with its current location and its velocity.

If you build the correct generic type of transform network, one that
removes redundancy both spatially and temporally in the transform process,
what you get as output is precisely things like motion detectors and
object detectors.
But in all this, I've still not touched on how learning works. :)

To understand how to solve learning with this sort of transform network, we
have to understand that in the transform, information is simply being
re-routed as it flows though the network, just as information in a time
signal is re-routed thought the FFT algorithm to produce the output.

If you have ever seen an FFT algorithm explained using a block diagram of
its data flows, you will see that it is in fact a neural-network-like
transform in terms of how the data flows through it:

http://www.commsonic.com/images/fft2.png

As I touched on above, I believe the entire problem of transforming sensory
data to output data can be seen as, and achieved in, one type of transform
network that transforms input data to output effector data. I don't
believe we need to (or even can) make it work with one transform network
doing the "perception" side of the work and another part of the network
doing the "motor control" side. I think it all has to be done at once to
explain the learning powers of a human. I won't make the argument here as
to why I believe this to be so.

So, all the transform work I talked about above, which addresses the
invariant representation problem and cleans up the signals by removing
redundancy, is really just the perception half of the problem. It's what
allows the network to create signals that represent high level events
happening out in the environment, from the events it started with,
happening in the sensors. They are the signals that should indicate things
like "there's a dog out there under the tree". No one has to hard-code a
special "dog" detector for this to work. It falls out automatically from
the generic action of a transform network that finds and removes spatial
and temporal correlations from the data. It's trained by the data that is
fed to it.

But, back to how we get from dog detectors to "arm motion" control - which
is what I see the learning problem to be.

Let me make up a simple robot control problem as an example. If we want
the robot to run towards its charging station, we first need to create a
sensor system that can recognize the charging station in the sensory data.
That's done by the type of transforms I talked about above. If the
transforms above are working, then there will be signals that represent the
fact that the charging station is in the room to the right of us, or in the
room to the left of us, etc. It might in fact be a large collection of 100
different signals that represent the station's location relative to us, but
for simplicity's sake, let's just say there are two simple signals. If the
station is to the right, we need to make the left wheel of our two-wheel
robot turn forward. If the station is to the left, we need to make the
right wheel turn forward. So now we have a transform problem to solve to
create this behavior. If we want the robot to run towards the charging
station, we have to transform those charger location signals into wheel
control signals. And if the charger location signals are somewhere in the
middle of a large multilayer network, that means we have to create
transforms that route the information in those signals to the correct
outputs which control the correct wheels. So to create the right behavior
for this problem, we have an information routing problem. Behavior
production is now an information routing problem. The network has to learn
where to send the information.

Though the above example is very simplistic, I believe it's representative
of all types of human behavior. If you have the right invariant
representation signals to start with, the problem of creating the right
behavior in response to the environment is simply an information routing
(and transforming) problem. It's a problem of combining, in the right
ways, the information from those signals.
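
In its most stripped-down form, the "routing" is nothing more than a weight
matrix from feature signals to effector signals. A deliberately crude
caricature (signal names invented, solution weights filled in by hand):

    # Behavior as information routing: a weight matrix from high-level feature
    # signals to effector signals.  The "solved" charger behavior is just the
    # routing that sends charger-on-right to the left wheel and vice versa.
    import numpy as np

    features  = ["charger_right", "charger_left"]
    effectors = ["left_wheel_fwd", "right_wheel_fwd"]

    W = np.array([[1.0, 0.0],      # charger_right -> drive the left wheel
                  [0.0, 1.0]])     # charger_left  -> drive the right wheel

    def act(feature_signals):
        return dict(zip(effectors, feature_signals @ W))

    print(act(np.array([1.0, 0.0])))   # charger to the right -> left wheel forward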

But what are the "right ways" of combining the signals? I don't know. But
I have lots of ideas. Basically, in addition to a fixed routing of one
signal to one output (which would be like switchboard wiring), there is
also the requirement of allowing one signal to control the routing of
another signal. Which means, to some extent, an "IF" statement. Such as:
IF signal-A is active, THEN route signal-B to the right wheel, otherwise
route signal-C to the right wheel. There are however many odd ways to
make that work that might not be so obvious at first. If, for example, you
were using a data transform unit that inherently had a signal balancing
function, then routing one signal to it might naturally cause it to reduce
the strength of other signals. Or such functionality might be implemented
through concepts of priority. Such as: "use signal A if it's active, but
if not, use the second-priority signal, which is signal B". These are the
sorts of implementation details that must be created to solve this problem,
and the stuff I spend time exploring.

But back, AGAIN, to the high level abstraction of how such a system can
configure itself through learning.

The learning problem, I believe, will in the end (when implemented
correctly) boil down to a problem of learning to route the correct
information from the inputs to the correct outputs. The transforms
required to remove redundancy are just the default routing the network
needs to use in order to clean up the signals. We add to this the global
reward signal (generated by custom hardware hard-coded by evolution to
motivate the system to learn good survival behaviors). The design of the
hardware creating the reward signal is complex, and full of special hacks
evolution has added and adjusted over millions of years to optimize our
ability to survive. But basically, in humans, it's the core definition of
good and bad for us - it defines what results are "good" (getting food in
our stomach) and what is bad (having our finger burned in a fire, etc.).
There is nothing "simple" or "generic" about the design of that hardware.

The learning hardware, however, is separate from that. It's just
configured to try to maximize some measure of expected _future_ rewards.
How good the hardware is at predicting the future has a lot to do with how
good it is at learning to act in advance to prevent future problems or
take advantage of future opportunities.

From the perspective of routing the information to the right place,
however, this learning problem isn't too hard to understand. The basic
strategy is to statistically track how much reward a given signal _path_ is
responsible for producing over time. When rewards are received, we credit
the signals throughout the entire network based on how much of a role they
have recently played in the behaviors produced. We design the network to
use active-high signals to make this work. That is, when they are low,
they have no effect on what the machine is doing. When they are high, they
create behavior, either by directly changing an effector, or by causing
other signals to be routed a different way. This allows us to correlate
that signal activity with rewards. This, in general, allows us to assign
values to signals, or more precisely, values to signal routes.

So when our network is performing a transform by moving information from
the input signals to the output signals, in each small node of the network
we add to that a statistical record of how well that routing is working for
us, in terms of how much credit it's been able to accumulate for the
behavior it ultimately caused.
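
A sketch of the kind of bookkeeping I mean, grafted onto the same sort of
toy routing matrix as before: every connection keeps a decaying trace of
how recently it carried an active signal into an active output, and the
global reward nudges every connection in proportion to that trace. This is
just a reward-modulated, eligibility-trace style update for illustration,
not a claim about the actual mechanism:

    # Statistically tracking the worth of signal routes.  Each connection keeps
    # a decaying "eligibility" record of its recent contribution (input activity
    # times output activity); a global reward credits or punishes every
    # connection in proportion to that record.
    import numpy as np

    n_in, n_out = 100, 100
    W = np.full((n_in, n_out), 1.0 / n_in)    # start fully mixed: a bit of everything everywhere
    trace = np.zeros((n_in, n_out))

    def step(inputs, reward, decay=0.9, lr=0.01):
        global W, trace
        outputs = inputs @ W
        trace = decay * trace + np.outer(inputs, outputs)
        W += lr * reward * trace              # reward strengthens recently used routes
        np.clip(W, 0.0, None, out=W)          # keep routes non-negative ("active high")
        W /= W.sum(axis=0, keepdims=True) + 1e-12   # each output stays a mix summing to ~1
        return outputs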

Now, the problem here is that initially the network is not configured for
optimal behavior. It will make the machine do lots of bad (stupid) things.
But that's OK, because it will sometimes still manage to do the right
thing for the right reason. And over time, statistically, the "right
reason" will emerge from the noise of all the wrong reasons in the
statistics. And as it does, the network will slowly transform itself to
do more things for the right reasons, and fewer for the wrong ones.

Let me show how that can work. Let me go back to the simplified example of
needing to route the "charger on right" signal to the correct wheel. To
make this learning work, the default way the network configures itself has
to be to send a little bit of that "charger on right" signal to _every_
output. Conceptually, that's like a neural network that's fully
cross-connected, where every output is a function of every input. For the
example, we can just say the way it combines the signals is a big sum
function. If in our example the correct solution is to send only that
single input signal to that single output, what we start off with is a
configuration where the input signal is going to the right output, but it's
also mixed with 100 other signals. But because we have a network that has
already transformed the data to remove redundancy, none of the signals
dominates the output behavior. They combine to effectively form random
noise. But even with all that noise added, when our "good" signal is high,
the odds of the robot moving towards the charger are higher than when the
good signal is low. This fact will emerge over time in the statistics that
are tracking the value of the network connections that route the good
signal to the good output. All the other signals won't correlate at all
with the rewards. It will be a random distribution in terms of how much
reward each of the other signals receives. They will have equal odds of
getting the reward when they were high as when they were low. Only the
good signal will show a correlation to the rewards. And that statistical
correlation is what allows the network to reconfigure itself to "do the
right thing".

To get the good input signal to the correct output connection might
require a path of 10 connections through the network (let's just say, for
an example). And there might be many different paths the signal could take
through those 10 layers to get to the right output. At first, the signal
is fanning out through many paths to get there. A little bit of the signal
is traveling through all the paths at first (sort of like a river delta).
And at first, all those paths are showing slightly higher rewards. But the
paths close to where the "good" signal originated in the network are
showing the most reward, because they contain a higher percentage of the
good signal than any of the other paths.

The idea is that as the statistical tracking of a reward increases over
time, that signal path needs to be made stronger. The "good" signal needs
to become more dominant in that path and the other signals need to become
less dominant.

Over time, after enough rewards, the signal which was fanning out and being
distributed to many outputs, gets more and more routed to only the one
output where it's doing good - where it's creating rewards. And at the
same time, the signals that are not doing good, will be pushed out of the
way, and sent to other places.
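
One way to picture that re-routing as code (again just a toy sketch, not a
claim about the real mechanism): keep a routing weight per outgoing path,
nudge each weight by how its activity correlated with doing better than
usual, and renormalize so that strengthening one path automatically pushes
the others out of the way.

  def update_routing(weights, activity, reward, baseline, lr=0.01):
      # weights: how much of this node's signal goes down each outgoing path
      # activity: how much signal each path actually carried this step
      # reward - baseline: positive when the machine did better than usual
      advantage = reward - baseline
      new_w = [max(1e-6, w + lr * advantage * a)
               for w, a in zip(weights, activity)]
      total = sum(new_w)
      return [w / total for w in new_w]    # renormalize: the paths compete

  # path 0 keeps carrying the signal when rewards arrive, so it dominates
  w = [0.25, 0.25, 0.25, 0.25]
  for _ in range(2000):
      w = update_routing(w, activity=[1.0, 0.1, 0.1, 0.1],
                         reward=1.0, baseline=0.5)
  print([round(x, 2) for x in w])    # roughly [0.77, 0.08, 0.08, 0.08]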

Signal paths that have a negative correlation with rewards (instead of just
no correlation) will get pushed out of the way even faster. Those signals
are not just noise in that case; they are actively harming the machine's
odds of getting rewards.

But the fact that information can be mixed together into a single signal,
and that mix of information can be tracked statistically, is a key element
of how the combinatorial explosion problem is solved here. And that's the
big point I've been building up to.

If you have 100 input signals, and 100 outputs, how many different ways can
all those signals be wired to all those outputs? Looking at this as a
simple switchboard problem creates a nearly infinite number of
possibilities to try (not infinite at all, but so large that it would take
the system eons to test different configurations one at a time to see what
produces the most rewards). It can't work by trying one configuration,
testing it, and then trying another.

Using a network that merges all the signals in different proportions
however, and then statistically tracking how well the merged results work,
allows you to test all the combinations in parallel at the same time.
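
Just to put a number on that 100-in, 100-out switchboard (simple
arithmetic, nothing more): if each of the 100 x 100 possible wires can
independently be on or off, there are 2^10000 configurations.

  configs = 2 ** (100 * 100)   # each of the 100*100 wires is either on or off
  print(len(str(configs)))     # 3011 - the count has over 3000 decimal digits
  # Testing even a billion configurations per second, one at a time, would
  # not make a dent in that number within the age of the universe.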

However, such an approach is worthless, if you don't first reprocess the
raw signals to remove the correlations. If the signals share correlations,
it reduces the effectiveness of the process - normally to the point of it
being unworkable.

Let me make it clear with an example why learning can't work if you don't
first remove the correlations from the signals. Let's say one input sensor
on our robot is detecting light levels to the right while a second one is
detecting light levels to the left. When we turn the lights on in the
room, both light sensors indicate an increased light level. When the lights
in the room are on, we want the robot to freeze - to not move at all. When
there's a flashlight on somewhere around the room, and the big overhead
room lights are off, we want the robot to seek out the flashlight.

For this behavior to work, when it sees light to the right, but not to the
left, it needs to go to the right. When it sees light to the left, but not
to the right, it needs to go to the left. When it sees light both to the
right and left, it needs to not move. Without pre-processing to remove the
correlations from these raw sensors, there's no way to wire these raw
signals to the motors and make these behaviors work. To do the seeking
behavior, the right side light sensor has to be wired to make the left
wheel turn forward. And the left side light has to be wired to make the
right wheel turn forward. But to learn the stop behavior, those signals
have to be wired to stop both wheels.

So we can train this with reinforcement to stop when the lights are on, in
which case it will learn to stop the wheels in response to light. Or we
can train it with reinforcement to seek the light. But once it's trained to
stop, it fails badly at the seek behavior. So we have to re-train the seek
behavior. But the process of training the seek behavior will un-train the
stop behavior. And re-training the stop behavior will un-train (make it
forget) what it learned about the seek behavior. This robot can't learn
both tasks - which is a common complaint about why reinforcement learning
is "too simple".

But the error is not in the attempt to use reinforcement to do the
learning. The error is in not transforming the raw signals into better
signals first - into signals that are less correlated which better
represent the state of the environment. With the correct transformations
(and it would require some temporal work to solve this problem as well),
you would end up with signals that better represented _unique_ features of
the environment such as "overhead room lights on" and "flashlight on"
and/or "flashlight on the right of us" and "flashlight on the left" etc.

Now, the larger the transformation network, the more unique features there
are which can be extracted from the input signals. You could have those
two light level inputs, and 10,000 outputs to represent a wide array of
unique features of those two inputs. There's no limit to how many features
you can abstract from the signals because you are extracting temporal
features (which indicate something about past signal values) as well as
spatial features. Larger networks can represent information extending
further and further back in time about the signals (such as "the room light
turned off 10 minutes ago", just to give a conceptual example).

A larger network allows more detailed features of the environment to be
represented, which means the learning system effectively has a better
perception of the environment to learn with. If there's a signal that
represents "the light went off 10 minutes ago", then _that_ network can
learn to do something 10 minutes after the light goes off. Without that
signal, the network can't learn that. If it can't "see" some feature of
the environment, it can't learn how best to react to it.
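
A temporal feature like that can be sketched the same way (a toy
illustration, using seconds instead of minutes, and not a proposal for how
a brain implements it): a little internal state tracks the time since the
event and exposes threshold signals the learner can treat as just more
inputs.

  class LightOffFeatures:
      """Turns one binary light signal into a pair of temporal features."""
      def __init__(self):
          self.seconds_off = None   # None until the light has gone off once

      def step(self, light_is_on, dt=1.0):
          if light_is_on:
              self.seconds_off = None
          elif self.seconds_off is None:
              self.seconds_off = 0.0
          else:
              self.seconds_off += dt
          t = self.seconds_off
          return {
              "off_10_to_20s": t is not None and 10.0 <= t < 20.0,
              "off_over_20s":  t is not None and t >= 20.0,
          }

  f = LightOffFeatures()
  for second in range(30):
      out = f.step(light_is_on=(second < 3))   # light on for 3 seconds only
  print(out)   # {'off_10_to_20s': False, 'off_over_20s': True}

Without a signal like off_over_20s somewhere in its input mix, the learner
simply cannot "see" that feature of the environment, and so can never
attach behavior to it.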

Many people have criticized reinforcement learning as being overly
simplistic and impractical for explaining complex human behavior (or even
for making robots learn semi-interesting behaviors). But the prototype
systems that demonstrate the weakness show that the error is not in using
reinforcement as the basis of learning, but in the fact that they failed to
create a strong perception system first. And I believe a hierarchical
network that transforms the signals so as to remove correlations from them,
without removing information, is key to creating just such a perception
system.

And though it seems natural to have a perception system pre-processing
the data first, and then a motor system where the learning takes place, I
believe these two functions have to be overlaid on top of each other
instead of keeping them separate. When you do that, what you ultimately
create is a system that adjusts its perception of what it has to do. At
that point, it's not decoding the sensory signals just to try and best
understand what's out in the environment. It's decoding the environment
based on what's _important_ to how it has to act to maximize rewards.

And the same effect I talked about, of combining signals and statistically
analyzing their combined value so the system can hone in on the part of the
signal that was truly useful, applies not just to the motor outputs, but to
all the perception signals as well - they become one and the same type of
signal. Which is why I talk at times of the motor outputs as being the
system's perception of what it should do. It "sees" in the environment
that "It must run now". :)

But on the combined perception systems, I can demonstrate what I mean. If
we have a sensory signal that means "the light went off over 10 seconds
ago", we can look at that as a combination of two more specific signals:
one that means "the light went off 10 to 20 seconds ago" and another that
could mean "the light went off over 20 seconds ago". The first signal is,
in effect, a combination of these two more specific signals. Likewise, we
can say the raw input signals from the sensors are a combination of many
more precise signals. A pixel signal which happens to represent a pixel
from a dog nose at the moment could be a combination of the "dog" signal
and the "dog nose" signal. If the raw signal is correctly transformed by
the network, the "dog" signal gets separated from the "dog nose" signal
later in the network.

And with the learning applied to adjust how that transform works, if "dog"
turns out to be very important information for triggering behavior for
getting rewards, then more of the network will get allocated to the "dog"
signal, and as more is allocated to the dog signal, then the network will
produce a more fine-grained decoding of the "dog" data, so that features
like "dog nose" and "dog tail" and "dog head" get formed in the network.
While at the same time, if trees are not important to getting rewards, less
of the network will be allocated to the decoding of the tree data and as
such, the system will have a less fine-grained perception of tree features.
This adjusting of how much of the network is allocated to which features
happens in response to the reward signal, as a natural fallout of the
transform from sensory data to effector actions. If you try to do motor learning
separate from perception, you don't get that advantage. Huge parts of the
perception network will be wasted decoding features of dirt spots on the
floor which are not needed for any of the useful reward collecting
behaviors the system is trying to learn.

So the high dimension learning problem is solved by first building a
network that can transform the data by removing correlations from both the
spatial (signal to signal) domain and the temporal domain (over time within
the same signal, and over time across signals). And that same network has
its data flows shaped over time by a reward signal to make it converge on
configurations that produce higher rewards. The combinatorial explosion
problem is solved because potential combinations are analyzed in parallel
for value. And at the same time, it's done in a multilayer network, so
that different combinations are being analyzed at the same time for value
(I didn't really touch on that above). The result is that it can converge
on better configurations without having to test the near infinite number
of possible combinations one at a time.

My current pulse sorting networks do all of the above. Just not very well.
So all the above conceptual ideas can be seen in action, in a working
network, today. What they fail to get right, is the single most important
part - the automatic transformation that creates the strong perception
system. My networks are doing a transform that removes some correlations,
but it's not getting the whole job done correctly. So the result is that
it can learn some things that are interesting, but most of what it would
need to learn to be really interesting, it can't do. Yet.

The conceptual approach however, I believe, is correct, and I believe it's
the only way to make RL work in the high dimension, real time, temporal
problem space that the human brain works so well in to create our
intelligent behavior.

Human intelligence looks intelligent not just because of the "smart" things
we can do, but, far more importantly, because of how we are able to quickly
transform our behavior to fit a new environment. We can write chess programs that
do something "smart" (play a good game of chess). We have built no end of
systems that do "smart" things in AI work. But what we have never done, is
built a system that can adapt so well, and so quickly, to a problem domain
the designer of the machine never thought about. And the fact that this
power to adapt (aka learn) is missing from our AI projects is why after we
see them working, we always say "that's really cool, but it still doesn't
look intelligent", because we don't see the system as having any power to
adjust to what happens to it. It shows no creativity at finding novel new
behaviors to allow it to do better in life like humans always do. If I
program a robot to find its charger it will keep finding its charger the
same way, forever. If I put the charger up on the table, and the code I
wrote for the robot can't deal with getting the charger off the
table, or the robot up on the table, the robot will forever fail to solve
this problem. But if we build strong learning into the same robot, and
leave its charger on the table, we won't be surprised to find the next day
that the thing has figured out how to get off the table on its own. And
when time after time, our robot solves problems like that on its own,
that's when we will start to see true intelligence in the machine.

Curt Welch

Dec 10, 2009, 3:24:30 PM
to
casey <jgkj...@yahoo.com.au> wrote:

> > On Dec 10, 6:40 am, c...@kcwc.com (Curt Welch) wrote:
> > ...
> > You don't see how it can be solved, so you assume it's NOT
> > solved. How weak of an argument is that? Pretty damn weak.
>
> Some problems are not solved by finding a solution but rather
> by a demonstration that the problem is wrongly conceived or
> based on erroneous assumptions.
>
> > No amount of compression of sensory data to get at that raw
> > state data is going to make this problem easy because the
> > real state of the world around us FUCKING HUGE.
> >
> > (sorry for being a bit blunt and crude here but I'm trying
> > to make a point that I've been trying to make to you for 5
> > years and which you still haven't grasped).
>
> There is nothing to grasp. You are making a false assumption
> because of the way you are viewing the problem.
>
> We have circuits that can produce an answer regardless of what
> combinational input is given to them. A circuit can compute
> a particular response to a particular stimulus without having
> ever received a reward in its lifetime for that particular action.

Sure, and such circuits can be and are very useful. But was that the point of
my post? I don't think so.

> We may need a reward to initiate and maintain a learning
> activity such as learning to recognize the characters used
> in English but the actual circuits to process, store,
> retrieve and form associations with such symbols can be
> innate.

I doubt it, but even if you are right, it doesn't change the strength of my
argument.

> It is the circuit that is rewarded by the reproductive success
> of the organism not by any stimulus response reward during
> the life time of the organism.

Yes, that's true. But again, totally off topic for my point, which like
always makes me wonder if you are even able to grasp what I'm saying.

> > If we build a car that can drive itself, we don't create
> > an a sensory system that attempts to count the number of
> > leaves on the trees on the right side of the road and
> > compare that to the number of leaves on the trees on the
> > left side of the road. We know that data is not important
> > to this problem, so we don't include hardware that can do
> > that in our design.
> >
> >
> > You often mix in these arguments the idea of "using what
> > is important" with the idea of "the data is constrained
> > so the real amount of data there is actually a lot less".
>
> I don't think I confuse anything. There are laws of physics
> that restrict what can and cannot happen "out there". Even
> so it can be very "complex" such as the motion of atoms in
> a gas in which case we can't deal with it. Or we can find
> a simplified model (the gas laws) which we can deal with.

Yes, and again, I both understand and agree completely. But again, NOT MY
POINT. So why is it that you keep bringing up facts I agree with and are
not at all relevant to the point I was making.

It's as if I'm trying to talk about the dangers of a tire blow-out and you
respond by saying "but the car is painted RED!".

> > We could create a series of experiments to test a person's
> > power to learn in a limited domain like vision for example.
>
> Psychologists have done lots of experiments with learning in
> humans (and other animals).
>
> There are all sorts of input output experiments that reveal
> things about how the brain "deals with data".

Yes. Of course. AGAIN, what does that have to do with the point I was
making????

I was describing a specific experiment that I hope you would be able to
roughly understand the results of even without me having to perform the
experiment that would support the point I was making.

> > ... what we can do, to verify the point I'm making here,
> > is make sure we pick a wide range of features that have
> > no known connection to human survival over the past 10
> > million years, and see how well the humans perform on
> > those features.
>
> Such experiments on human learning, memory and perception have
> been going on for decades. May I suggest you read some of them?

I have. I know some of the results. Which is why I don't need to do this
experiment to know the data would support my position if we did the
experiment.

> > I have workable ideas on exactly how these high dimension
> > learning problems can be solved. It's not all that hard.
> > But it's too much to describe in this overly long post so
> > I'll post more if I can find the time ...
>
> Why not have a web page which you can simply edit again and
> again to fine tune your views and answer the critics rather
> than write post after post?

I've kept notebooks for years (30 or so) where I write down my thoughts on
these sorts of subjects. They are just for brainstorming. I almost never
read them. I just write them. It's the act of writing them that helps me
learn, create, and refine my views.

I use these posts the same way. If you can't coherently translate abstract
ideas into words, then it's easy to see the ideas have issues. The simple
act of trying to explain them in words helps me understand my own ideas
better. Each time I re-write or re-explain the same ideas they become a
little clearer, and a little better defined.

If I had co-workers or friends around that cared about these ideas, I would
be spending hours bull-shitting and debating this stuff with them verbally
without ever writing anything down. But since this group is the closest
I've come to finding people willing to debate these sorts of ideas, this is
where my hours of bullshitting and debating takes place (when I've got the
time to do it).

Debating with someone who doesn't agree with your position is one of the
best ways to really understand your own position and its strengths and
weaknesses. When I debate with myself, I'm always right, and that doesn't
lead to much improvement in the thought process. :)

> JC

casey

Dec 10, 2009, 4:46:37 PM
to
On Dec 11, 7:24 am, c...@kcwc.com (Curt Welch) wrote:

>> JC wrote:
>> We have circuits that can produce an answer regardless of what
>> combinational input is given to them. A circuit can compute
>> a particular response to a particular stimulus without having
>> ever received a reward in its lifetime for that particular action.
>>
>>

>> It is the circuit that is rewarded by the reproductive success
>> of the organism not by any stimulus response reward during
>> the life time of the organism.
>
>
> Yes, that's true. But again, totally off topic for my point

> which like always makes me wonder if you are even able


> to grasp what I'm saying.

It is my thread so my points :)

What I was thinking about was your comment about any computation
being able to be replaced by a look up table.


> Yes, and again, I both understand and agree completely. But
> again, NOT MY POINT.

So what is your point?

> I was describing a specific experiment that I hope you would
> be able to roughly understand the results of even without me
> having to perform the experiment that would support the point
> I was making.

If you don't perform the experiment you don't know what the
outcome would be. Please direct me to the papers you claim
support your views.

> Debating with someone who doesn't agree with your position is
> one of the best ways to really understand your own position
> and its strengths and weaknesses. When I debate with myself,
> I'm always right, and that doesn't lead to much improvement
> in the thought process. :)

My impression is that when you debate with others you still
think you are always right.

JC


Curt Welch

Dec 10, 2009, 6:14:56 PM
to
pataphor <pata...@gmail.com> wrote:
> Curt Welch wrote:

> But there is still the problem of what is the goal of all this activity?

Survival. It comes from evolution. It's why we are here.

> Even if evolution selects for those biological entities that survive and
> have offspring, it cannot predict what will be the best answer to each
> problem the organism faces because there is too much variation in the
> environment. So it makes general purpose feature detectors, connected to
> more or less hardwired pleasurable outcomes, and a learning systems that
> chips away at the complexity by abstraction from patterns of patterns.

Right. But it keeps that learning system on a very tight leash. We can't
just do anything we want to. And since the adaption system can't afford to
test the adaptions by waiting for us to live or die like evolution at the
higher level does, it has to hard code a measure of quality of adaption
into the system. Such learning systems can only work if the measure of
quality can be hard wired into the system (which is no easy feat for
evolution to create).

The quality of the adaptations will be closely tied to the quality of the
reward signal generated by the built in critic. Our reward system is in no
way perfect. It leads us to all sorts of things that aren't good for the
survival of our species, like creating and using birth control, killing
babies, drinking alcohol, and eating too much sugar. These are all caused
by the motivations evolution built into us, which were at one time well
tuned to our environment, but currently are not so well tuned. Evolution
will fix that in time by changing our reward system, but it may take
thousands of years and hundreds of generations.

> In humans, the patterns have become so complex that newborns are
> unlikely to stumble upon the things their parents have found so we have
> a long educational period.

Yeah. Our learning brain has created yet another path for evolution. It's
created the evolution of memes which pass from person to person. Though
the structure of most of our body is heavily regulated by the genes that
come only from our two parents, the structure of our brains is passed from
person to person with unlimited inheritance across the entire species.
It's a part of our inheritance which is separate from our genes, and far
more powerful in many ways.

> However this still means the process is far from optimal. It will take
> some time before we can produce an AI that from the moment it is started
> optimally rewrites its future light cone by using the set of actions
> that are available to it.

We already have many machines doing this today. Every RL algorithm running
on a computer creates one such machine. What we don't have is one which
is able to "rewrite its future" (so to say) as well as the human brain
does it for our futures.

> Then the question arises, what would such a
> complete solution of the problem look like?

It's just a high quality reinforcement learning algorithm. Nothing really
special about it at all. The only difference between what we have now
and what we will have when we "solve" AI is the same sort of algorithm,
just better at recognizing patterns, and at making use of those
patterns to drive behavior, than our current algorithms.

The key point about RL machines is their motivation. They are not
motivated to survive, or to study and understand, or anything else like
that. They are motivated to maximize a reward signal and that's all they
do. What that translates to in terms of their behaviors and actions, all
depends on what hardware you connect it to for generating that reward
signal.

You could have the most advanced learning machine on the planet, one which
is 10 times better than any 100 humans at its power of learning. But if
you give it a button connected to its reward signal, and all it has to do
to get maximal rewards is push the button, then that's all this super AI
will ever do. It will learn very quickly that it must keep its finger on
the button, and never take it off, and for the rest of its life, that's
all it will ever do.

Only if you give it a more complex problem, will it in turn develop the
more complex behaviors needed to solve it. Staying alive is one such
problem. But we can't just tell it "try to stay alive". We have to build
hardware, that generates a reward signal, that motivates it to do the
things that best help it stay alive. If we believe keeping it's battery
charged is a good way for it to survive, then we build reward hardware that
monitors it's battery charge and uses that to generate the reward signal.
If we believe preventing harm to the body is good for survival, we add
sensors to the entire body that detect excessive forces or heat that could
be damaging to the body, and we make those also control the reward signal.
If killing humans is a good way for it to survive, we could add hardware to
recognize living and dead humans to it's reward system and give it rewards
every time it turns a living human into a dead human. If we build smart
robots with good adaptive RL algorithms and give it motivations like that,
we better watch out because we have just built a dangerous as hell human
killing machine. But since we aren't stupid, we will know the risks of
building such machines, and hopefully, we won't make the mistake of
building too many of them. :)

> Suppose the AI has solved
> its universe

The universe is not a puzzle. There's nothing about it to "solve". It's
just a thing like a rock.

Adaptive learning machines must have innate hard wired motivations. What
motivations get built into the machine define the "puzzle" it's trying to
solve. The "push the button" puzzle is an example of giving a very good
puzzle solving machine a trivial puzzle to solve.

The "puzzle" we as humans are trying to solve can be simply explained as
trying to maximize our happiness. Indirectly, this also aligns well with
surviving. But it aligns with surviving only because evolution has defined
and adjusted what makes us happy based on how well any definition of
happiness works to keep our species alive.

It's the job of evolution to adjust our "happy" goal to help our species
survive. That's not our goal. As RL machines, our goal is simply to solve
the puzzle given us - how to make ourselves happy, given the reward system
we were born with.

When we build better RL machines, we will define their goal when we design
them. Since our goal is to make us happy, we will give them our goal - to
make us happy (the best we can figure out how to do that).

> like it was some giant rubik's cube, it is now in the
> optimal state.

Some puzzles have optimal solutions. That is, the learning machine can
learn the perfect solution to creating maximal rewards and no change to the
behavior would give it more rewards - like with the button pushing problem.

But most real world problems have no perfect solutions. The AI is not smart
enough to create perfect solutions because its complexity of behavior is
limited by its brain size, while the complexity of behavior needed to deal
with a universe as large as ours is defined by the complexity of the
universe as a whole. There is no comparison between these two. The
universe is effectively infinitely more complex than any brain could
understand. Which means there is no perfect answer possible. There is
only "the best a limited intelligence can do given what it has to work
with".

A lot of people seem to have a problem with this idea that we can't hope to
understand the universe even with the help of lots of super AIs. By
"understand the universe" we mean "predict everything that will happen
before it happens so we can always do the right thing ahead of time to
maximize rewards".

There is so much we can't predict, we forget that it's even there. We
write it off as if it is just impossible to predict. Mostly that's not
true. Mostly, our little brains are just way too small to predict most of
this. It's why we are able to build big computers that can predict the
weather better than any human, or play chess better than any human. It's
because they are better predictors than we are. But yet they still can't
make perfect predictions of the weather or play the perfect game of chess.

No AI we will ever build will be able to perfectly solve all problems -
like should we schedule the picnic for Saturday or Sunday to get the best
weather? A system that could predict the weather on earth accurately out a
few weeks might have to be as large as a galaxy. There are many such
problems that simply will never be easy to predict, and no "super AI" that
lives here on earth will possibly have a "brain" large enough to solve
these.

> No need to change anything. Is this death?

No, death is when you pull the power cable.

> And also, once
> one can reach any state in theory, is there really any need to actually
> do something, because the optimal end state is just as much a 'death' as
> any other state.

There is no "state" to be reached in reinforcement learning. RL has no end
goal. There is only a constantly battle to try and get(and hold long term)
a higher reward signal. Since the environment we "fight" against is
effectively infinitely more complex than we can understand and predict, all
we can do, is look for short cuts to help us "cheat" the system (so to
say). We find simple rules of thumb to help us predict a few simple
aspects of the future, and use those to help make life better. We use
simple rules of thumb like - "do unto others as you would have them do to
you" - and "a place for everything and everything in it's place", "F=ma",
and "don't eat the yellow snow". These and a billion other things we learn
in life help us reach higher levels of rewards and we use these little
tricks every minute of our live as we constantly react to our environment
around us.

All human behavior is nothing more than a collection of behavior "tricks"
we have learned to make life better for us.

When we solve AI, all it will do, is give us some smarter machines to act
as our slaves. It will be no different than the millions of machines we
have already created to work for us. The only difference is these adaptive
machines will be better workers for us. Oh, and they will be "conscious"
as well, but that's really not important or significant.

These AIs will be like smart intelligent humans, but instead of being
motivated with survival instincts like all these humans we live with, they
will be motivated to help humans survive. They will _want_ to be our
slaves and it will make them happy being our slaves.

> P.

Curt Welch

Dec 10, 2009, 6:38:06 PM
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 11, 7:24 am, c...@kcwc.com (Curt Welch) wrote:
>
> >> JC wrote:
> >> We have circuits that can produce an answer regardless of what
> >> combinational input is given to them. A circuit can compute
> >> a particular response to a particular stimulus without having
> >> ever received a reward in its lifetime for that particular action.
> >>
> >>
> >> It is the circuit that is rewarded by the reproductive success
> >> of the organism not by any stimulus response reward during
> >> the life time of the organism.
> >
> >
> > Yes, that's true. But again, totally off topic for my point
> > which like always makes me wonder if you are even able
> > to grasp what I'm saying.
>
> It is my thread so my points :)

:)

> What I was thinking about was your comment about any computation
> being able to be replaced by a look up table.
>
> > Yes, and again, I both understand and agree completely. But
> > again, NOT MY POINT.
>
> So what is your point?

That no matter how much evolution has helped our intelligence with the
addition of hard-wired "helper" circuits, it's easy to prove that the
human brain has solved the problem of strong generic reinforcement learning
in a high dimension learning space, and that without this power, our
machines will never duplicate human intelligence. That was the point of
the entire long post.

> > I was describing a specific experiment that I hope you would
> > be able to roughly understand the results of even without me
> > having to perform the experiment that would support the point
> > I was making.
>
> If you don't perform the experiment you don't know what the
> outcome would be. Please direct me to the papers you claim
> to support your views.

The views I'm expressing are obvious for anyone that knows even the most
basic facts about human learning power. If you don't see it, I don't know
what's wrong with you.

For some reason, you can't seem to grasp the difference between hardware
that learns, vs hardware that's built with a fixed function. I don't
understand why this class of machine is so hard for you to understand. The
fixed function module we must built into our AI is generic reinforcement
learning. Until we build that fixed function into our AI we don't have AI.
This was something Skinner understood, and documented out the wazoo 50
years ago, and it's something that most people in AI still don't have a
clue about today. It's what the entire field of behaviorism has been
carefully documenting in both humans and animals for what 80 years now?

Humans have a very specific type of behavior that's hard wired into them.
It's called operant and classical conditioning in a high dimension, real
time, temporal, learning space. That's the fixed function we have to build
in our AIs to make them intelligent, it's the fixed function no one has yet
figured out how to build into a machine (or even how it works in the human
brain). It's the innate behavior that evolution built into us - learning.

> > Debating with someone who doesn't agree with your position is
> > one of the best ways to really understand your own position
> > and its strengths and weaknesses. When I debate with myself,
> > I'm always right, and that doesn't lead to much improvement
> > in the thought process. :)
>
> My impression is that when you debate with others you still
> think you always right.

Well, that's because I'm debating a point I believe I'm right about. Why
would I debate a point I believed I was wrong about????

In addition, when I write a long ass post carefully laying out an argument,
and you fail to make one cogent response to my position, and instead, bring
up issues unrelated to my argument in your response, why would that make me
think my argument was wrong? Oh yeah, John said hard wired circuits don't
need to learn so my position on the importance of learning in AI must be
wrong! Why didn't I see that obvious weakness in my argument! :)

People have at times, pointed out important issues I totally overlooked and
have at times, made me do a 180 on some issue I was arguing. You on the
other hand, have never done that. You have helped me to see some weakness
in my position and helped me see that some things I thought were obvious or
a strong argument were just personal bias hidden in poor rationalizations.
But you have never presented a strong counter argument that could shake my
position.

All you mostly respond with is, "I don't believe it" without giving any
strong counter evidence to contradict my position. You throw out lots of
circumstantial evidence that supports your own position, but nothing that
actually counters my position.

Of course a good bit of what we debate is nothing but educated guesses and
as such, they don't have a lot of strong evidence to shoot down in the
first place. So most of what we do is just say "these clues make me lean
this way" and you say "but these clues make me lean the other way".

pataphor

Dec 11, 2009, 6:15:19 AM
to

Once most of the universe is ordered according to its preferences the
only thing that is left untouched is its own reward system. But if it
touches that, the state the universe is in doesn't matter anymore in the
first place. A super intelligent entity (AI or other), confident it
could move the universe to each desired state, could realize that and
just do nothing.

Maybe you think that would be illegal tampering with the reward system.
But consider you could modify yourself to become more intelligent and
then you notice some of your more primitive instincts are wrong, even if
evolution 'designed' you to be this way. Would you oppose getting rid of
those faulty instincts?

P.


pataphor

Dec 11, 2009, 6:51:18 AM
to
Curt Welch wrote:

> Let me talk about this paragraph of your post here and I'll make other
> replies to talk about your other points.

Please do, but there is no way I can match your output if I carefully
place my comments under the appropriate sections of your text. The
problem is there are large reams of text and they are interesting to
read because they enable me to view things in a different light. However
there are some easy adaptations possible that would simplify matters a
lot. Since you seem to miss those there is no specific point of 'attack'
to your posts and I'll just have to try and introduce my concepts hoping
that you will then reread with that idea in the back of your mind and
see where you are making things complicated.

> To start with, the raw sensory inputs are already features of the
> environment. So before we do any processing, we already have a large and
> complex set of features to work in the form of parallel signals - each
> signal representing one feature. From the eye, the one signal for single
> nerve fiber will be a feature that roughly indicates the amount of light of
> a given frequency range failing on one part of the retina. So from this
> perspective, we don't have to build a network to create features. We
> already have features.

There may be features in the environment, however we don't know about
them except when we have created concepts of them. These concepts are
called 'things'. You know, tables, chairs, computers and so on. These
are the invariants you are talking about later.

The problems you have with linking motor activity to perception are
solved in biological organisms by neural areas concerned with attention.

There is a seductive desire for perfection that would lead one to think
everything is just a transformation of data. However, all action more or
less always implies losing some options, throwing away information, even
quantum mechanics tells us the act of choosing a way to measure things
itself determines what we will see and makes us lose some other options.

So losing information is a hard fact of life. Which means we don't know
what is really out there. We construct more or less arbitrary objects in
our perceptual environment. Maybe to some extent you are right, and
there are no real clean objects, they are always linked to some tendency
of action. I believe Heidegger wrote about that, that it is always about
what things mean to persons, instead of about the objects themselves.

But as humans we have language, and language is a process of abstracting
away from reality, making the objects (things) lose their connections to
individual preferences. As long as your robots don't talk to each other
this is not a problem, however it remains to be seen how smart they can
become while not talking to each other, or even to themselves in a way.

When I was a kid there was a strip I liked a lot where the protagonist
constantly saw objects other people didn't see and then pulled them out
of the air and solved problematic situations using them. I suppose an AI
could do the same, construe its environment in a different way than we
do, resulting in it seeing different objects, linking things together
that seem to be apart for us, or making distinctions where we don't. It
is a process of 'chunking' reality and I do not believe the way we do it
now is definite.

So, with this in mind, I read your text and it is just like you're
writing in matrix mechanical terms while I see it as wave functions. Or
maybe the other way round. Anyway, don't take this as a negative
reaction to your post, I very much enjoy reading stuff from this object
less point of view, I'm a sucker for perfectionism too, it's just that I
find it more effective to cut corners now and then.

P.

rs...@nycap.rr.com

Dec 11, 2009, 9:11:01 AM
to
On Dec 11, 6:51 am, pataphor <patap...@gmail.com> wrote:
> Curt Welch wrote:

We all drift about in a fog of preconceptions. So let me say that I am
much taken up by molecular cell biology, and its implications in
whatever is at hand.

> > Let me talk about this paragraph of your post here and I'll make other
> > replies to talk about your other points.
>
> Please do, but there is no way I can match your output if I carefully
> place my comments under the appropriate sections of your text.

Why try? Some would call it diarrhea of the keyboard.

> > To start with, the raw sensory inputs are already features of the
> > environment. So before we do any processing, we already have a large and
> > complex set of features to work in the form of parallel signals - each
> > signal representing one feature. From the eye, the one signal for single
> > nerve fiber will be a feature that roughly indicates the amount of light of
> > a given frequency range failing on one part of the retina. So from this
> > perspective, we don't have to build a network to create features. We
> > already have features.
>
> There may be features in the environment, however we don't know about
> them except when we have created concepts of them. These concepts are
> called 'things'. You know, tables, chairs, computers and so on. These
> are the invariants you are talking about later.

Curt's raw signal inputs are signals appearing at an n-input terminal
board. One input terminal is one feature. His entire universe is an
artificial nerve net. Your features are concepts, configurations of
human interneurons. I suppose there is some suitable matching, some
one-to-one mapping, probably an external reality.

> The problems you have with linking motor activity to perception are
> solved in biological organisms by neural areas concerned with attention.

Is this a problem? The genome constructs motor program generators.
These are based on the motor actions that resulted in the organism
surviving and reproducing in previous environments. A sensory input
triggers an MPG resulting in a motor action. Between sensory input and
trigger, various interneurons are activated. These we call percepts,
concepts. God knows what.

> There is a seductive desire for perfection that would lead one to think
> everything is just a transformation of data. However, all action more or
> less always implies losing some options, throwing away information, even
> quantum mechanics tells us the act of choosing a way to measure things
> itself determines what we will see and makes us lose some other options.

Here you introduce the soul(mind). I assume you mean subjective for
seductive.
I would rather say that the rejection of an option is the natural
integration of activated neurons in the thalamic reticular nucleus
resulting in the inhibition of the motor program as it passes through
the thalamus. Or not inhibiting!

> So losing information is a hard fact of life. Which means we don't know
> what is really out there. We construct more or less arbitrary objects in
> our perceptual environment. Maybe to some extent you are right, and
> there are no real clean objects, they are always linked to some tendency
> of action. I believe Heidegger wrote about that, that it is always about
> what things mean to persons, instead of about the objects themselves.

What is -really out there-! Did not Plato lay this out in the Cave.
Sometimes I think there is an exterior universe, and I know it
directly. Other times I am confused.

> But as humans we have language, and language is a process of abstracting
> away from reality, making the objects (things) lose their connections to
> individual preferences. As long as your robots don't talk to each other
> this is not a problem, however it remains to be seen how smart they can
> become while not talking to each other, or even to themselves in a way.

I prefer to think of language as motor activity. Starts at one to two
years of age when the MPG’s that produce phonemes are activated.

> When I was a kid there was a strip I liked a lot where the protagonist
> constantly saw objects other people didn't see and then pulled them out
> of the air and solved problematic situations using them. I suppose an AI
> could do the same, construe its environment in a different way than we
> do, resulting in it seeing different objects, linking things together
> that seem to be apart for us, or making distinctions where we don't. It
> is a process of 'chunking' reality and I do not believe the way we do it
> now is definite.

When I was a kid, Dagwood was dating Blondy. And Ignatz Mouse was
rejecting Krazy Kat. Very metaphysical. Different ships, different
long splices.

> So, with this in mind, I read your text and it is just like you're
> writing in matrix mechanical terms while I see it as wave functions. Or
> maybe the other way round. Anyway, don't take this as a negative
> reaction to your post, I very much enjoy reading stuff from this object
> less point of view, I'm a sucker for perfectionism too, it's just that I
> find it more effective to cut corners now and then.

Then I like to write in terms of proteins, and find truth there! The
truth of the genome.

So it goes.

Ray

Curt Welch

Dec 11, 2009, 10:04:14 AM
to
pataphor <pata...@gmail.com> wrote:
> casey wrote:
> > On Dec 10, 10:49 pm, pataphor <patap...@gmail.com> wrote:
> >> ...
> >> Suppose the AI has solved its universe like it was some
> >> giant rubik's cube, it is now in the optimal state. No
> >> need to change anything. Is this death?
> >
> > If its only goal is to understand the Universe and if it
> > reaches that goal then yes there is no where else to go.
> > It is like a calculator. Give it a goal (make the system
> > unstable by pressing buttons) and it will run to a stable
> > state (solution) and stop.
> >
> > Biological machines however are only dynamically stable
> > like a whirlpool or a flame. They require a continual
> > inflow of energy and matter to exist. They have to maintain
> > thier essential variables. They have to keep finding food.
>
> Once most of the universe is ordered according to its preferences the
> only thing that is left untouched is its own reward system. But if it
> touches that, the state the universe is in doesn't matter anymore in the
> first place. A super intelligent entity (AI or other), confident it
> could move the universe to each desired state, could realize that and
> just do nothing.

The issue of an AI modifying its own reward system is one we have debated
here. Tim Tyler, who has posted a lot here but hasn't been around lately,
has some interesting takes on the issue. We refer to it as the wirehead
problem, in reference to the idea of sticking a wire in your head to
stimulate your pleasure centers. He and I have argued in circles about
wire-heading. It seems to be a general issue of debate in the singularity
crowd as well.

I believe that the only way to create intelligence is to build a
reinforcement learning machine. With these machines, their purpose is
always defined indirectly though the hardware that defines and creates the
reward signal. So the learning system, when combined with a reward
generator, will learn behaviors that appear externally to have some purpose
- such as survival. But internally, the true purpose of the behavior
generating systems is not survival, it's just reward maximizing.

So, if my belief about intelligence being a reward maximizing RL machine
is correct, what does this mean about the future of man made super
intelligence?

What stops the AI from doing what it's built to do - that is, reward
maximizing - by directly stimulating its own reward center? I suspect
that will always put a real limit on super intelligence. That is, the
singularity idea is based on the belief that super AIs we build will start
to build even smarter AIs and there will be an exponential runaway growth
of intelligences that will leave the intelligence of man in the dust.

I think this wirehead problem will cause real issues with that idea because
once an AI becomes smart enough to fully understand what it is, and fully
understand its own design and construction to the point that it could
design and build even smarter AIs, it will understand what its true
purpose in life is - which is not to build more AIs, or try to control the
future of the universe, but simply to wirehead itself.

I do believe there could be some complex options to keep that problem under
control, but I believe it will always be a prime factor on how things can
evolve in the future, and on just what any intelligence is able to do.

And it's also an interesting question to explore the future of man himself
once he gets this Pandora's box fully open and everyone in society fully
understands what they are.

Tim takes the more abstract view that an AI would have a set of high level
goals, and that self modifications like that would be counter to the AI's
goals. As such, the AI won't do it.

My counter argument is that I believe there is no way to build true
intelligence with "high level goals". I believe the only way to build true
intelligence, is by building a machine with the prime low level goal of
maximizing some internal reward signal. Whether he, or I or both of ur our
right, all comes down to how an intelligence machine actually works. And
since we haven't built any of them yet (that is not equal to anything a
human can do) there is much still unknown about it's potential powers, or
limits. We really have to solve AI before we can know the answer to these
types of questions.

> Maybe you think that would be illegal tampering with the reward system.
> But consider you could modify yourself to become more intelligent and
> then you notice some of your more primitive instincts are wrong, even if
> evolution 'designed' you to be this way. Would you oppose getting rid of
> those faulty instincts?
>
> P.

Yes, but there's a flaw in that question. You ask what if an AI notices
its instincts are wrong. How does the AI judge right from wrong in the
first place? Where does right and wrong come from? It comes from his
primitive instincts! That's the machine's definition of right and wrong.
There is no other definition, and there is no universal right and wrong.
All our high level goals developed _from_ our low level drives and
motivations, not the other way around.

If our high level goals are incompatible with our low level motivations,
then it's the high level goals that in general end up changing.

But, if the AI developed high level goals that seem incompatible with his
low level drives, AND it has the power to change his low level
goals, it's certainly possible he might do so. But once he changes his own
low level drives and motivations (his reward system), that will cause his
high level goals to start changing on him, and then he could easily once
again change his mind about whether his current low level goals were
correct or not and make more changes. This spirals out of control with no
regulation and what sort of goals the poor thing ends up with is anyone's
guess.

The universe however has a built in high level goal that we can't change.
That's survival. Things that manage to survive, do survive, things that
don't manage to survive, don't survive. Unless there is some way to mess
with the basic laws of time and space, this survival game is one nothing in
the universe can escape from. But there's also no requirement that anyone
or anything care about playing it. It's quite easy to build an
intelligent machine whose goal is to lose the survival game. The smarter
that machine is, the sooner it will kill itself, so they don't tend to last
long, but the point is, we can build it. And its goal would be totally
counter to our own. There is nothing forcing machines to play to try and
win the survival game.

The facts of evolution simply show that the universe will tend to be filled
with the material structures that do the best job of surviving.

casey

Dec 11, 2009, 2:30:23 PM
to
On Dec 11, 10:15 pm, pataphor <patap...@gmail.com> wrote:

> ...


> Maybe you think that would be illegal tampering with the
> reward system. But consider you could modify yourself to
> become more intelligent and then you notice some of your
> more primitive instincts are wrong, even if evolution
> 'designed' you to be this way. Would you oppose getting
> rid of those faulty instincts?

My views I think are very much the same as those that Curt
expressed in his reply to your post.


JC

Curt Welch

Dec 11, 2009, 2:29:21 PM
to
pataphor <pata...@gmail.com> wrote:
> Curt Welch wrote:
>
> > Let me talk about this paragraph of your post here and I'll make other
> > replies to talk about your other points.
>
> Please do, but there is no way I can match your output if I carefully
> place my comments under the appropriate sections of your text.

Just do what everyone does. Ignore most of it, cherry pick the stuff you
want to talk about and comment on that (as you have done).

> The
> problem is there are large reams of text and they are interesting to
> read because they enable me to view things in a different light. However
> there are some easy adaptations possible that would simplify matters a
> lot. Since you seem to miss those there is no specific point of 'attack'

Sure, if you think I've missed something important, attack that!

> to your posts and I'll just have to try and introduce my concepts hoping
> that you will then reread with that idea in the back of your mind and
> see where you are making things complicated.
>
> > To start with, the raw sensory inputs are already features of the
> > environment. So before we do any processing, we already have a large
> > and complex set of features to work in the form of parallel signals -
> > each signal representing one feature. From the eye, the one signal for
> > single nerve fiber will be a feature that roughly indicates the amount
> > of light of a given frequency range failing on one part of the retina.
> > So from this perspective, we don't have to build a network to create
> > features. We already have features.
>
> There may be features in the environment, however we don't know about
> them except when we have created concepts of them.

Right. We talk about them as concepts in our language of mental events. I
believe they happen as the result of the brain building pattern detectors
for them. So when people talk about concepts, I think in terms of pattern
detection hardware being created.

> These concepts are
> called 'things'. You know, tables, chairs, computers and so on. These
> are the invariants you are talking about later.

Yes, I see it that way as well (mostly). There are however important
implementation questions about what form that internal
representation takes in the hardware. We don't need a single "table"
neuron or a "table" feature signal (in the terms I was using in the last
post), for example, to have the system recognize and understand tables.

> The problems you have with linking motor activity to perception are
> solved in biological organisms by neural area's concerned with attention.

Yeah, the question of attention is an interesting subject. I don't
believe it works as most people seem to describe it however.

> There is a seductive desire for perfection that would lead one to think
> everything is just a transformation of data.

Yeah, I'm highly seduced by simple and elegant designs. I'm far more
likely to err on the side of too simple instead of too complex.

> However, all action more or
> less always implies losing some options, throwing away information, even
> quantum mechanics tells us the act of choosing a way to measure things
> itself determines what we will see and makes us lose some other options.

Yes, measurement implies a loss of information, and you can extend that,
as you did, to say that all action creates a loss of information. But yet,
we can
transfer information around the world in computer networks without any
loss! How is that possible if all action creates a loss! Don't you think
there's something you might be missing there?

The answer is that we use digital circuits that, in their raw hardware,
carry a huge amount of information in their actions (like the information
carried in the raw voltage levels of a digital signal). But of all the
information in that raw signal, we apply a huge amount of redundancy so
that we use it to carry (represent) a very small amount of information (one
bit). And though the output of an XOR gate has lost most of the information
from its inputs, it didn't lose the few little bits that we wanted
it to carry.

So though all action does lose information, it's still quite easy to build
signal processing circuits that preserve all the information we are
attempting to represent (at least with a very high level of certainty if
not a 100% level).
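
To make that concrete with a toy sketch (this is only an illustration in
Python with made-up numbers - nothing specific to brains or to my network):
represent one logical bit redundantly as 8 raw bits, corrupt a few of them
in transit, and the one bit we cared about still comes through.

import random

def encode(bit):
    # Represent one logical bit redundantly as 8 raw bits.
    return [bit] * 8

def decode(raw):
    # Majority vote: keep only the one bit we care about.
    return int(sum(raw) > len(raw) / 2)

random.seed(1)
bit = 1
raw = encode(bit)
# A "lossy action": a few raw bits get flipped in transit.
noisy = [b ^ 1 if random.random() < 0.2 else b for b in raw]
print(decode(noisy) == bit)   # True - the represented bit survived the loss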

And in the case of building AI, we very much are building a signal
processing system, so there's really no conflict with the fact that it
_could_ use some internal system that preserves the information it is
processing. Whether it does or not, is still an unanswered question. But
it's not in any sense impossible for it to do so as you were trying to
imply by associating it with the Heisenberg uncertainty principle.

> So losing information is a hard fact of life. Which means we don't know
> what is really out there.

Well, yes and no. You are really taking it too far by talking like that.

When we take a photo, the digital image does not capture _all_ the
information that was entering the camera in the light. It might end up
with 10 Mbits of data about the light. But the light itself, if we could
calculate how much information was really in it, might have carried billions
of times more. So of all that information in the light, all we ended up
with was 10 Mbits of data.

So, since that hardware is limited to 10 Mbits, that's all it can "know"
about what's "out there". But it does in fact "know" those 10 Mbits of
information about what's out there. To say it "doesn't really know what's
out there" is to imply we don't know anything. And that's what I say is
taking it too far. We do in fact know what's out there. We just only know
a little bit about what's out there.

Talk like that also leads to these odd views of reality that make us
believe we are trapped inside our head, that we don't really know what's
going on out there, and that life is all an illusion and crap like that.
That's all just pure metaphysical bullshit. The fact is, I do know what's
"out there". I know my fingers are "out there" typing on a keyboard at the
moment. This is a fact, not some odd-ass illusion I'm having.

There are billions of atoms in my fingers and I don't know what they are
doing right now. That's the stuff I don't know because of the limited
sensory and data processing ability of my body (brain/eyes/CNS etc). I
don't know for sure what's happening out in the hall outside my office here
at home either. That's just more stuff I don't know. What we do know is
almost insignificant compared to what's out there to know about, but
what we do know is real, and there's nothing odd and metaphysical about
any of it. It's the exact same thing that happens when any instrument
takes a reading, or records some data, or any sensory system creates an
internal signal that is a representation of some other event.

Yes, _most_ of the information is lost by the sensor, but what is not lost
is very much real knowledge about what is out there.

> We construct more or less arbitrary objects in
> our perceptual environment. Maybe to some extent you are right, and
> there are no real clean objects, they are always linked to some tendency
> of action. I believe Heidegger wrote about that, that it is always about
> what things mean to persons, instead of about the objects themselves.

Well, there are two issues there. The first is what John likes to talk
about as the constraints in the sensory data. These are the correlations
that allow us to do data compression. Those correlations exist because of
what's really "out there" and the way those things create the
correlations in the data. We can talk about building hardware which finds
those correlations and uses them as the basis for defining and identifying
the "things" of the universe. Such a process I believe is in theory totally
independent of the hardware. It's not biased by the needs of the machine,
OR by the techniques it uses to find the correlations.

I suspect however that there are many different _practical_
implementations for identifying those correlations - just as there are many
different compression algorithms that look for those sorts of correlations
when, for example, they look for repeating strings in a data stream. So any
practical machine for locating and labeling those correlations will have
some bias from the technique it uses to find them. But I believe that in
theory, the correlations are independent of the techniques used to find
them. But I'm not 100% sure about that.

However, the second point is the need of a reinforcement learning machine
to identify, and make use of, those "things" to control its behavior for
its own needs. In that case, I believe what we have to build for this
class of high dimension learning problems are machines that will adjust
their view of the world to fit their own needs.

But just because these types of machines do that, I don't think it is
reason to believe that all parsing of the universe into "things" must be
done against a context of need.

In other words, I believe it's valid to say that things exist totally
independent of our desires and needs.

> But as humans we have language, and language is a process of abstracting
> away from reality, making the objects (things) lose their connections to
> individual preferences. As long as your robots don't talk to each other
> this is not a problem, however it remains to be seen how smart they can
> become while not talking to each other, or even to themselves in a way.

Yes, the entire subject of language adds a very interesting dimension to the
whole question of AI. A good bit of our power to manipulate the world
comes from the fact that we use our language skills to guide our actions.
No AI is going to work as well as a human if it doesn't also have some
similar language skills.

But there's this running debate about whether language skills are
inherently a different type of skill from all our other non-language
skills, or whether they're just more of the same. Do we have language
hardware in our brain which is a fundamentally different type of hardware
than the hardware that we use to perform all our other non-language
intelligent behaviors? I fall on the side of saying all the hardware is
the same, and that our language behaviors are just one more set of learned
behaviors we use. This is the old debate that goes back to Skinner and
Chomsky. I fall on the Skinner side of the debate.

> When I was a kid there was a strip I liked a lot where the protagonist
> constantly saw objects other people didn't see and then pulled them out
> of the air and solved problematic situations using them. I suppose an AI
> could do the same, construe its environment in a different way than we
> do, resulting in it seeing different objects, linking things together
> that seem to be apart for us, or making distinctions where we don't. It
> is a process of 'chunking' reality and I do not believe the way we do it
> now is definite.

Yes, I think that's all true. I think there are some absolute facts about
the "chunks" of reality that are independent of the observer, but practical
implementation issues, along with the needs and motivation of the system,
bias how that system tends to chunk - or maybe more accurately - which
"chunks" draw the attention of that observer (to pull the concept of
attention back into the subject).

Let me step back to the compression example to make my point. An
algorithm like LZW works by building a tree of the byte strings it finds in
the data stream it's trying to compress and uses that tree to create a new
language to describe those strings. Each string in the tree is given a
short "name" which is what gets output by the algorithm. As such, the
strings (aka the things) that the algorithm chooses to use and to label (how
it "sees" the data stream if you will) are very much a function of the
specifics of that algorithm. A different compression program is very
likely to "chunk" the data into different strings.

So if we have a data stream like this: oixoixoioi we can chunk the
characters many different ways, such as "oix" repeated twice and "oi"
repeated twice, or "o", followed by "ixo" repeated twice, followed by
"ioi". But, independent of any choice of how to chunk the data are the
absolute facts of what's there. There are 4 o's, 4 i's, 4 oi's, 2 ix's,
1 io, 2 xo's, etc. These statistical facts (the constraints of the data
stream) are there independent of how we might choose to view the stream.
And it's these sorts of statistical truths about the universe that create
the universal truth about what's "out there" independent of how we might
represent those truths in the signal systems of our brain (in the language
of our brain). Or at a much higher level, in the language behaviors that
come out of our fingers to label those truths of the universe.
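
To make those counts concrete, here's a tiny sketch that just tallies the
toy stream above (purely illustrative - it's not part of LZW or any
compression algorithm):

from collections import Counter

stream = "oixoixoioi"

# Statistics that exist in the stream no matter how we later chunk it.
chars = Counter(stream)
pairs = Counter(stream[i:i + 2] for i in range(len(stream) - 1))

print(chars)   # o: 4, i: 4, x: 2
print(pairs)   # oi: 4, ix: 2, xo: 2, io: 1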

> So, with this in mind, I read your text and it is just like you're
> writing in matrix mechanical terms while I see it as wave functions. Or
> maybe the other way round.

:)

> Anyway, don't take this as a negative
> reaction to your post, I very much enjoy reading stuff from this object
> less point of view, I'm a sucker for perfectionism too, it's just that I
> find it more effective to cut corners now and then.
>
> P.

On the "wave function" idea.

Our perception of our own consciousness normally leaves us believing that
thought and perception is a continuous process - not one which is digitized
or operating in discrete steps in any way. Our thoughts have no indication
of "ticking" but rather one of a continuous analog flow.

As such, when CDs came out, there was a great uproar from some of the
audiophiles claiming that no digital system could capture the _smooth_ flow
of the analog LP record. It was counterintuitive to them that digital
music could in any sense be "good" or "the real thing". Of course, the
movement of the cone in the speaker which is driven by a digital audio
system is itself very much analog - at least as analog as anything gets at
the lowest levels of our universe.

I see this same effect spilling over into AI and mind/consciousness debates
and discussions at times. The idea that a digital computer could possibly
have a conscious mind equal to a human's doesn't sit well with lots of
people. And I believe a good bit of that is hidden in these feelings that
our thought process seems very continuous and even "wave-like" instead of
being broken into any sort of discrete units or steps.

I strongly suspect those feelings are all a big illusion and have little to
do with the facts of how the brain really works. Certainly the firing of a
neuron is very much a discrete event, and certainly those events play a big
role in our thought processes - but we can't detect through
self-introspection that our thoughts are really just a lot of neurons
spiking. There's no sign of that being true at all.

But when we perform careful psychological tests on people, we find that
what they think is happening is often at odds with the test data. There's
plenty of evidence showing that there's a lot going on that we are not,
normally, aware of. Or that what we believe to be happening isn't what
actually happens. This of course has caused no end of problems for AI I'm
sure. People believe they have some grasp of what is happening in their
own mind, and so they go off and try to build AI systems to duplicate what
they think is going on there. Seldom do AI systems built that way produce
anything like human behavior in the machine.

I think it's wise, when working on AI, to try and forget everything you
think you know about yourself. Just throw it all out the window, and work
on the problem as if you weren't a human, but instead, were just trying to
reverse engineer the control systems for creating animal behavior and
ultimately, human animal behavior. That's my normal approach to the
problem.

As such, I have no trouble believing that using digital control systems is
a fine way to proceed and that if we get the right digital signal systems
(right algorithms if you will) that it will have the same type of
consciousness and illusion of continuity that we have, even though there is
nothing continuous or wave-like at the lowest levels of the machine.

As you can guess from my other posts, my basic belief on how to build such
a control system requires us to build a system that can have its behavior
adjusted by reinforcement - which is no small trick. The way we tend to
structure robotics code and control systems doesn't lend itself to being
reconfigured by a reward signal. We pick a structure and a design to solve
a very specific problem when we hard-code some behavior into a machine. To
create an architecture that can be adjusted by a reward signal I believe
requires a totally different architecture.

That architecture I believe more closely resembles a big lookup table that
maps sensory conditions into actions, than the typical architecture we
might create in a computer program of lists of step-by-step
instructions to follow.

Of course, our programmable computers are also structured at the lower
level as a big lookup table which feeds the current context into the memory
(the memory address) and receives from memory the next behavior to produce
(the next instruction). They are structured that way because they _are_
programmable - so that it's easy to change their behavior by reconfiguring
the look-up table. In this way, I think all computers share a basic
architecture feature with the way I think our AIs will have to be
structured.

The main difference however is that because of the need to _slowly_ adjust
the behavior in response to conditioning, the basic instructions of the
learning machine have to be somewhat different than the instructions of our
computers. Change one bit in a normal computer instruction and you get
major changes to the resulting behavior. To make reinforcement learning
work in this domain, we have to find low level behaviors that can be
changed in smaller steps.
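
Here's a minimal sketch of the kind of thing I mean (a toy of my own
invention for illustration - the names, numbers and the reward rule are
placeholders, not my actual design): a lookup table whose action weights
drift in small steps with reward instead of flipping all at once.

import random

class RewardTable:
    """Toy lookup-table controller: condition -> action weights, nudged by reward."""
    def __init__(self, actions, step=0.05):
        self.actions = actions
        self.step = step
        self.table = {}      # condition -> list of action weights
        self.last = None

    def act(self, condition):
        w = self.table.setdefault(condition, [1.0] * len(self.actions))
        # Weighted random choice: behavior is a distribution, not a fixed rule.
        i = random.choices(range(len(w)), weights=w, k=1)[0]
        self.last = (condition, i)
        return self.actions[i]

    def reward(self, amount):
        # Nudge only the weight of the last action, and only a little,
        # so behavior drifts in small steps rather than flipping at once.
        condition, i = self.last
        w = self.table[condition]
        w[i] = max(0.01, w[i] + self.step * amount)

agent = RewardTable(["left", "right"])
a = agent.act("wall ahead")
agent.reward(+1.0 if a == "right" else -1.0)   # toy reward rule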

I also see the "memory" of the look-up table being implemented with a
multilayer data transform instead of the simple function RAM performs in
our computers. But I've already touched on that.

The architecture paradigm I've been playing with for a few years now in an
attempt to find a workable solution is to use pulse signals as the basic
signal format, and to use pulse sorting through a network as the basic
information paradigm. My network is configured to make routing decisions,
instead of just "firing" decisions. It's similar to a network made of
nodes that fire like neurons, but it's regulated so that one, and only one,
downstream neuron will fire as the signal passes through the network. It
does that for reasons of information preservation, as well as for reasons
of eliminating one element of the combinatorial explosion problem that
exists in classic neural networks.

With the pulse sorting however, my nodes can be sorting on average 68% of
the pulses that flow through in one direction, and then in response to a
reward, adjust that to sort 67% of the pulses instead. The result is that
the network can have its behavior slowly adjusted by rewards, instead of
suffering the way a one-bit change makes a computer act completely
differently at the highest levels. However, some of my recent designs do
have issues of changing their behavior too drastically too quickly in
response to a single pulse, so I suspect other aspects of my implementation
are just wrong in that way at this point. But that's just part of the
ongoing search for a workable implementation of these higher level
architecture concepts.
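
For what it's worth, a single sorting node along those lines could be
sketched roughly like this (a guess at a minimal implementation, not my
actual code; the numbers are placeholders):

import random

class SortingNode:
    def __init__(self, p_left=0.68, step=0.01):
        self.p_left = p_left   # fraction of pulses sorted to the left path
        self.step = step       # how much one reward nudges that fraction
        self.last = None

    def route(self, pulse):
        # Every pulse goes to exactly one downstream path - none are dropped.
        self.last = "left" if random.random() < self.p_left else "right"
        return self.last

    def reward(self, amount):
        # e.g. a +1 reward after a "left" routing moves 0.68 toward 0.69,
        # a -1 reward moves it toward 0.67 - small steps, not bit flips.
        delta = self.step * amount
        self.p_left += delta if self.last == "left" else -delta
        self.p_left = min(1.0, max(0.0, self.p_left))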

Let me return to this whole issue of information loss. There was one more
point I never got to in my other long message that I see as important to
this approach.

I already pointed out that I think there's a need not to toss out
information because of the basic requirements of learning. The system
never knows what information might be needed in the future, so there's
great danger in throwing anything out. The system I believe _must_
_always_ be tracking the statistical worth (like always in terms of
rewards) of _all_ the information.

It solves the combinatorial explosion problems by chuntering - by taking
all the information, and mixing it together in different clusters or
classifications, and then tracking the statistical worth of the cluster as a
whole.

But then we get to the point that for a given problem, some of the
information in the data is highly useful, and some is basically crap that
the system just never needs to use. So what does it do with the crap if it
can't throw anything out? That's the interesting question, but that's
also where I have an interesting answer. The answer is that when you build
a transform based on information preservation, and you make the transform
cluster some set of "useful" data into one signal (so you can use it as you
want), what happens to the rest of the data? The answer is that it gets
spread out evenly across all the signals and becomes background noise in
the signals. When you do it that way, instead of actually removing it from
the signals by filtering, then the data is still there and can, if things
in the future change, be found again by the statistical systems and made
use of. If you filter it out, and stop tracking it statistically in any
sense because you have removed it, then the learning system will never
notice that it's once again become useful.

So let me demonstrate this concept with some mathematics.

Let's start with 4 signals, which we will represent as 4 variables: A, B, C
and D. We can think of these as sequences of real numbers being fed to us
in real time (though I don't use this sort of signal format in my network,
people are trained to think about signals as this type of time sequence of
values so I'll use it for the example).

We can use a 4x4 data transform matrix to produce 4 output signals from
these 4 input signals, and if we make sure the matrix is invertible, we can
be sure we haven't lost any of the information in the transform. This just
means there will always be another transform to reverse the transform we
are using.

But for fun, we can also assume the signals have been normalized before we
got them, so that their means are 0, the standard deviations are the same
for all four signals - say 1 - and the correlations between any two
signals are zero.

And though my math is a bit weak here, and I've not thought about this a
lot yet, I believe, we can also make sure our transform is maintaining the
average and SD requirements - but will not maintain the correlation
requirement because we want the outputs (our behavior) to be full of
correlations.

So what we are going for is to define the outputs as 4 linear equations of
the 4 input variables. Let's say the outputs are W, X, Y and Z. So one
possible transform for this example could be:

W = .5 A + 0 B + 0 C + .25 D

X = 0 A + 1 B + 0 C + .25 D

Y = 0 A + 0 B + .5 C + .25 D

Z = .5 A + 0 B + .5 C + .25 D

Which is the matrix:

0.500 0.000 0.000 0.250
0.000 1.000 0.000 0.250
0.000 0.000 0.500 0.250
0.500 0.000 0.500 0.250

Which has the inverse:
(calculated by http://www.bluebit.gr/matrix-calculator/calculate.aspx )

0.000 0.000 -2.000 2.000
-1.000 1.000 -1.000 1.000
-2.000 0.000 0.000 2.000
4.000 0.000 4.000 -4.000

So what happens in that transform is that all the input data is
distributed across the outputs without any loss of total "energy" (so to
say) because the sum of each column is 1. The transform divides up the data
and sends some of each input to the different outputs.

Input A is divided in half, with half going to W, and half going to Z, and
none to the other two output signals.

Input B goes all to output X.

Input C goes half to Y and half to Z.

Those transforms represent the idea of sending the data to the place it
needs to be sent in order to create the behavior we want, so as to get
better rewards.

But input D is the "crap" input. It's of no use to us currently. But
instead of throwing it away by setting all the input weights to zero (which
we could do), we keep it. We distribute it across all the outputs as extra
noise.

But since the noise components are distributed across all the outputs, they
have very little effect on anything. In this small example the noise has a
fairly large effect (.25), but in a larger example you can see that if the
noise is distributed across 100 outputs, its effect on any one output is
minimal.

And if the input data is correctly statistically biased, such as in this
case to have a mean of 0 and SD of 1, then adding this noise to the signals
doesn't change the mean, or the SD, of any of the output signals either.
If the input signals have a correlation of zero, then these outputs will
show a correlation. Because input A was sent to both W and Z, that fact
will cause W and Z to show some correlation.

However, the inverse matrix shown above is the matrix you would use to
_remove_ those correlations (since it is the reverse operation). So taking
the signals W, X, Y and Z, we can put those signals through the second
transformation matrix to return to a set of signals with no correlations.
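
For anyone who wants to check the arithmetic, here's a short numpy sketch
that just verifies the two matrices above really are inverses, so nothing -
including the "crap" input D - is actually lost:

import numpy as np

M = np.array([[0.50, 0.00, 0.00, 0.25],
              [0.00, 1.00, 0.00, 0.25],
              [0.00, 0.00, 0.50, 0.25],
              [0.50, 0.00, 0.50, 0.25]])

Minv = np.array([[ 0.0, 0.0, -2.0,  2.0],
                 [-1.0, 1.0, -1.0,  1.0],
                 [-2.0, 0.0,  0.0,  2.0],
                 [ 4.0, 0.0,  4.0, -4.0]])

print(np.allclose(Minv @ M, np.eye(4)))   # True - the matrices are inverses

abcd = np.random.standard_normal(4)       # one sample of A, B, C, D
wxyz = M @ abcd                           # forward transform
print(np.allclose(Minv @ wxyz, abcd))     # True - D is recoverable, nothing lost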

I think conceptually the above idea is the foundation of how such a machine
must work. That is, it takes sensory input signals that happen to be full
of correlations (constraints per John's language) caused by the features of
the environment, and by default will converge over time on a transform
that removes the correlations from the data (or removes as much as is
possible). The output of this sort of transform would basically look like
random noise. But it would not be random noise. It would be a large set
of the unique micro features of the "things" that exist out in the
environment.

To that default transform, which tends to make our machine seem to act
totally randomly (we can't see any purpose in its actions), we apply the
reinforcement learning, which makes it change into a transform more like the
first matrix - where the required signals are being sent to the place they
are needed.

In the real solution however, it happens in layers, not in one transform
like the one matrix above. And the effects of the reinforcement
training would apply backwards through the layers - affecting the last
layers the most at first, and affecting the previous layers less and less.
What this would tend to create is a network where the layers nearest the
sensory inputs were mostly affected by the need to remove correlations from
the data. The first layers would be "more about" data clean up and "thing
extraction" than about "purpose". But the closer you get to the outputs,
the more the wiring of the transforms is affected by "purpose" and the
less it is affected by the need to normalize the data.

In a well trained network of this type, we expect to find the input layers
having lots of constraints, the middle layers having very few
constraints, and the output layers, again, having lots of constraints.
Real world sensory inputs, and the behaviors we have to produce to deal
with the environment, will be full of constraints. But the middle layers
of the transform creates an abstract constraint-free high resolution
representation of the current state of the environment.

BTW, the simplified matrix solution in the example can't work for this
problem because it's a purely spatial solution. It doesn't in any way
address the temporal aspect of the problem. The real solution has to base
the output of the transform not just on the _current_ inputs, but on a
large number of past inputs as well - which again creates yet another
combinatorial problem to be solved with a clever implementation of this
approach.

For completeness, let me throw in one more missing design detail of how
this sort of learning network would be used to drive real time behavior.

You can't just hook up sensors to effectors through a learning network like
this and explain how the system is able to trigger a sequence of behaviors
(trigger a motor program generator in Ray's terms). But a simple extension
can explain it.

If you have 1000 sensor inputs, and 100 effector outputs, you need a basic
network that can transform the 1000 inputs to the 100 outputs. But to
explain how it's able to learn to trigger behavior _sequences_ you have to
add feedback. That can be done by simply feeding the 100 outputs (full
of constraints) back to 100 other inputs. So we use a network with 1100
inputs and 100 outputs, and feed the 100 outputs back as 100 new sensory
inputs as well as feeding the 100 outputs to the effectors they will
control.

What this does is allow the network to "see" what it's doing. It
effectively lets it sense its own actions (without having to do it
indirectly by using its eyes to watch its hands for example - which also
happens). With such a configuration, the system is able to "calculate"
what outputs to produce based not only on the external sensory inputs, but
also on "what" it's been doing. So to create a motor program, like
the repetitive pattern needed to make the body walk, the network uses that
feedback path as the primary control path for generating the complex
repetitive sequence - mostly free from external sensory conditions.
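
A bare-bones sketch of that wiring (the dimensions are the ones from the
example; the random weight matrix and the tanh are placeholders, not a
claim about what the real transform looks like):

import numpy as np

n_sense, n_out = 1000, 100
W = np.random.standard_normal((n_out, n_sense + n_out)) * 0.01

def step(sensors, prev_out):
    x = np.concatenate([sensors, prev_out])   # 1100 inputs: 1000 sensory + 100 fed back
    return np.tanh(W @ x)                     # the next 100 outputs

out = np.zeros(n_out)
for _ in range(5):                            # a few time steps
    sensors = np.random.standard_normal(n_sense)
    out = step(sensors, out)                  # outputs go to effectors AND back in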

Behaviors that are a direct response to some external stimulus condition
are instead triggered mostly through the other side of the net, from the
sensory inputs to the outputs.

Such a system has no hard dividing line between the part of the network
which is processing the sensory inputs and the part that's processing its
own output signals - except at the input to the network. Otherwise, all
the signals start to intermix in the two normal processes that are at work
shaping the transform of the network - the data normalizing process which
extracts the "things" from the data, and the reinforcement learning which
makes the network collect together, and send, the right data to the right
place, to reproduce the behaviors that have in the past worked best to
collect rewards.

I strongly suspect, but have zero real proof, that the above is the basic
architecture of the human neocortex. The entire thing is acting as a
reinforcement trained data transform network that also attempts to
normalize the data by removing correlations from the signals. And in the
process of removing correlations, it forms a feature extraction network at
the same time.

The motor cortex is not, as people normally describe it, "driving" the
motor outputs. Instead, the entire cortex is "driving" the motor outputs,
and the motor cortex is just more sensory cortex, that is busy sensing what
the entire system has been making the body do. The motor cortex then is
where we "recognize" what it is we are doing (in a odd twist of how it's
all described). But yet, when a network forms for producing a motor
sequence, it forms in the motor cortex. But it forms, because we first
recognize where we are in our walking "gait" and use that recognition to
drive what we do next in the gait. But that's just random speculation
about the brain without a lot of study on my part of brain research.

My real interest is in finding a practical and workable algorithm to solve
this high dimension real time temporal non-Markov learning problem which I
believe we must find to solve AI. And the above ideas are the general high
level architecture of how I think it must be structured to solve this
problem.

And if the above ideas are correct, they say a lot of interesting things
about what we, as humans, are as well. But that's putting the cart before
the horse (which I do with abandon). :)

casey

unread,
Dec 11, 2009, 5:32:06 PM12/11/09
to
On Dec 12, 6:29 am, c...@kcwc.com (Curt Welch) wrote:
> ...
> And in the case of building AI, we very much are building
> a signal processing system, so there's really no conflict
> with the fact that it _could_ use some internal system that
> preserves the information it is processing. Whether it
> does or not, is still an unanswered question.

The problem with conserving information is one of resources
in memory, computing power and usefulness. The smaller the
system the less it can conserve no matter how compressed the
data is. There would have been an advantage in data selection
for reproductive success to minimize the amount of data to
be saved or time spent processing stuff that was not useful.

> The fact is, I do know what's "out there". I know my fingers
> are "out there" typing on a keyboard at the moment. This is
> a fact, not some odd-ass illusion I'm having.

I don't believe you understand what is meant when people
say the world "as we see it" is an illusion. It doesn't mean the
world out there doesn't exist or isn't real in any way, all it
means is what we see is determined by the way we represent it.
We represent (experience) a table as solid even though we now
believe this is an "illusion", it is mostly space between very
small things we call atoms. The table however is still real,
the illusion is not of something that doesn't exist rather the
illusion is the way we represent it as not really corresponding
to the way it is "out there". We experience a rainbow as being
made up of color stripes when in fact this representation is a
result of the way we code color. There are no stripes in a
rainbow only a continuous change in the frequency of the
electromagnetic waves.


> Our perception of our own consciousness normally leaves us
> believing that thought and perception is a continuous process
> - not one which is digitized or operating in discrete steps
> in any way. Our thoughts have no indication of "ticking" but
> rather one of a continuous analog flow.

One of the many mental illusions which can be shown by making
objective measurements. The classical example being the illusion
that the eyes move continuously over this text. In fact they move
in jerks called saccades and the eye has to fixate for at least
0.2 seconds to read the text. Faster readers simply fixate at
fewer points in the text.


> I think it's wise, when working on AI, to try and forget
> everything you think you know about yourself.


But that doesn't mean throwing out objective measures about the
operations of brains, only the subjective thoughts people have
about themselves.


> The way we tend to structure robotics code and control systems
> don't lend themselves to being reconfigured by a reward signal.

I wouldn't assume all reconfigurations require a reward signal.

For example if you lightly touch the siphon of Aplysia it will
withdraw its gills. Do it again and it will only partly withdraw
its gills. Do it again and it will not respond at all. We say
it has learned the touch is not harmful. Physically we now know
that the connection between the touch sensitive neuron and the
gill withdrawal motor neuron weakens with such stimuli. It starts
with a useful default connection and adjusts it to the current
situation. However that same default connection can be changed
if the light touch is followed by a shock to the tail. The pain
sensory input will trigger an intermediary neuron that will make
the connection stronger and eventually cause a permanent change.

(siphon)--(T)------------->(M)----->[gill]
                       ^
                       |
(tail)--(P)----->(I)---+

You can find more detailed versions of this circuit on the internet.
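
A rough numerical sketch of that story (the numbers are made up, only the
directions of the weight changes matter):

w = 1.0                         # strength of the touch -> gill connection

def touch(shock=False):
    global w
    response = w                # size of the gill withdrawal
    if shock:
        w = min(2.0, w + 0.75)  # tail shock, via the interneuron, strengthens it
    else:
        w = max(0.0, w - 0.25)  # repeated harmless touch weakens it
    return response

print([touch() for _ in range(3)])   # [1.0, 0.75, 0.5] - habituation
print(touch(shock=True))             # 0.25 - weak response, but the shock restores w
print(touch())                       # 1.0 - the withdrawal is back at full strength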

Another interesting circuit being studied is the central pattern
generator used by Aplysia to control ingestion and egestion.


> We pick a structure and a design to solve a very specific problem
> when we hard-code some behavior into a machine. To create an
> architecture that can be adjusted by a reward signal I believe
> creates a totally different architecture.


The evidence is that the modules identified in brains are not fixed
in their default behaviors but rather are modified by experience.


> That architecture I believe more closely resembles a big lookup
> table that maps sensory conditions into actions, than the typical
> architecture we might create in a computer program of creating
> lists of step by step instructions to follow.


At the high level I think something like "programming" can take
place in the brain however the modules are not "lists of instructions"
but rather actual circuits which you might emulate with "lists of
instructions". The advantage a circuit has over a look up table
is it doesn't have to insert data to look up the response, it simply
computes the response. Circuits can be modified by experience by
the reward signal changing the strength and number of connections.

You asked before what replaces the Java programmer. The short answer
is evolution. It builds the modules including the modules that control
the other modules.


> The system never knows what information might be needed in the
> future, so there's great danger in throwing anything out.


This is the reason some people never throw their stuff out, because
one day they might need it. But like I wrote above, in order to
compete, an organism that can save time and resources by having
some successful criteria for throwing stuff out will have a
reproductive advantage.


> It solves the combinatorial explosion problems by chuntering -
> by taking all the information, and mixing it together in different
> clusters or classifications, and then tracking the statistical worth
> of the cluster as a whole.

I suspect you would have a combinational explosion of possible
clusters to be considered.


JC

Curt Welch

unread,
Dec 11, 2009, 11:54:12 PM12/11/09
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 12, 6:29 am, c...@kcwc.com (Curt Welch) wrote:
> > ...
> > And in the case of building AI, we very much are building
> > a signal processing system, so there's really no conflict
> > with the fact that it _could_ use some internal system that
> > preserves the information it is processing. Whether it
> > does or not, is still an unanswered question.
>
> The problem with conserving information is one of resources
> in memory, computing power and usefulness. The smaller the
> system the less it can conserve no matter how compressed the
> data is. There would have been an advantage in data selection
> for reproductive success to minimize the amount of data to
> be saved or time spent processing stuff that was not useful.

None of that really applies to what I'm talking about there. I'm basically
talking about the type of functions the neural network is performing on the
data that's already there and nothing else. I'm not talking about adding
memory to store a record of past data, or adding more processing power.
I'm just talking about what type of function the network is computing.
Some functions preserve the input information in the output, and others
don't. That's all I'm talking about here.

> > The fact is, I do know what's "out there". I know my fingers
> > are "out there" typing on a keyboard at the moment. This is
> > a fact, not some odd-ass illusion I'm having.
>
> I don't believe you don't understand what is meant when people
> say the world "as we see it" is an illusion. It doesn't mean the
> world out there doesn't exist or isn't real in any way, all it
> means is what we see is determined by the way we represent it.

Sure. But most of how we represent it is NOT an illusion. It IS HOW IT
IS. Illusions happen when the representation is in error, not when it's
correct. Most of our brain's representations are actually very accurate.

> We represent (experience) a table as solid even though we now
> believe this is an "illusion", it is mostly space between very
> small things we call atoms.

When I look at a table, I don't see something that has no gaps at the
microscopic level. I simply can't see the microscopic level and as such,
don't make such assumptions about how much space there is, or isn't at that
level.

When I call a table solid, I mean that when I put a coffee cup on it, the
coffee cup doesn't fall through it to the ground. It means if I pour water
on it, the water won't mix with the table. It means when I touch the table,
it doesn't change shape to fit my finger like a liquid does. None of this
is an illusion. It's simply the way the universe really is.

If someone takes these characteristics they can sense about the table, and
makes the ASSUMPTION that the table is solid even at levels they are unable
to sense, then there is no illusion at work there. They are just being
stupid and making assumptions about things they can't sense.

What we can sense is, for the most part, the way the world really is. No
illusions involved.

> The table however is still real,
> the illusion is not of something that doesn't exist rather the
> illusion is the way we represent it as not really corresponding
> to the way it is "out there".

> We experience a rainbow as being
> made up of color stripes when in fact this representation is a
> result of the way we code color. There are no stripes in a
> rainbow only a continuous change in the frequency of the
> electromagnetic waves.

Yes, that's true. But are not the colors we see, generally speaking, the
colors that are actually there? Meaning when I see red, it only means that
my sensory system is detecting a given EM frequency range. Is that
frequency range not really there? Of course it is (except adjusting for
the true illusionary effects of the signal systems of the eye, which do a
type of white balancing and adjust what they represent based on the
surrounding illumination).

> > Our perception of our own consciousness normally leaves us
> > believing that thought and perception is a continuous process
> > - not one which is digitized or operating in discrete steps
> > in any way. Our thoughts have no indication of "ticking" but
> > rather one of a continuous analog flow.
>
> One of the many mental illusions which can be shown by making
> objective measurements. The classical example being the illusion
> that the eyes move continuously over this text. In fact they move
> in jerks called saccades and the eye has to fixate for at least
> 0.2 seconds to read the text. Faster readers simply fixate at
> fewer points in the text.
>
> > I think it's wise, when working on AI, to try and forget
> > everything you think you know about yourself.
>
> But that doesn't mean throwing out objective measures about the
> operations of brains, only the subjective thoughts people have
> about themselves.

Right. Objective evidence good - subjective beliefs bad.

> > The way we tend to structure robotics code and control systems
> > don't lend themselves to being reconfigured by a reward signal.
>
> I wouldn't assume all reconfigurations require a reward signal.

There are many types of learning and only one - RL - requires a reward
signal.

I even suggest that to solve the AI problem, we have to first add a type of
learning to the system that doesn't use a reward signal - one that will
adapt to the data sent to it and learn to transform it so as to reduce the
correlations in the data.

But I don't in general think all these other types of learning are
intelligence. I think RL is the only type of learning that can create
intelligence. And with RL, there's always a reward signal - one way or
another.

> For example if you lightly touch the siphon of Aplysia it will
> withdraw its gills. Do it again and it will only partly withdraw
> its gills. Do it again and it will not respond at all. We say
> it has learned the touch is not harmful. Physically we now know
> that the connection between the touch sensitive neuron and the
> gill withdrawal motor neuron weakens with such stimuli. It starts
> with a useful default connection and adjusts it to the current
> situation. However that same default connection can be changed
> if the light touch is followed by a shock to the tail. The pain
> sensory input will trigger an intermediary neuron

That's a reward signal John. Bad example if the point was to demonstrate a
system without a reward signal.

> that will make
> the connection strong and eventually cause a permanent change.
>
> (siphon)--(T)------------->(M)----->[gill]
>                        ^
>                        |
> (tail)--(P)----->(I)---+
>
> You can find more detailed versions of this circuit on the internet.
>
> Another interesting circuit being studied is the central pattern
> generator used by Aplysia to control ingestion and egestion.
>
> > We pick a structure and a design to solve a very specific problem
> > when we hard-code some behavior into a machine. To create an
> > architecture that can be adjusted by a reward signal I believe
> > creates a totally different architecture.
>
> The evidence is that the modules identified in brains are not fixed
> in their default behaviors but rather are modified by experience.

Yes, they are learning modules. Do they all learn the same way? If so,
then they are not "modules" - they are one large learning system. :)

> > That architecture I believe more closely resembles a big lockup
> > table that maps sensory conditions into actions, than the typical
> > architecture we might created in a computer program of creating
> > lists of step by step instructions to follow.
>
> At the high level I think something like "programming" can take
> place in the brain however the modules are not "lists of instructions"
> but rather actual circuits which you might emulate with "lists of
> instructions".

Yes, my abstraction is one of a signal processing circuit but yet I create
it using lists of instructions in a computer.

> The advantage a circuit has over a look up table
> is it doesn't have to insert data to look up the response, it simply
> computes the response.

Well, that's just a matter of perspective John. When you use circuits like
that to create a look up table, you do very much have to "insert the data".
You just do that when you build the hardware. It's just as valid to say
that when you store data in ram to create a look-up table in a computer
that you are doing the same thing - that is "building" the hardware by
modifying the electrons in some memory cell.

The difference here is not important to our abstract understanding of what
type of process we are dealing with. It's only important in terms of the
the implementation details we choose to use when we build it.

> Circuits can be modified by experience by
> the reward signal changing the strength and number of connections.

Yes, all neural nets work that way John. We all understand that.

> You asked before what replaces the Java programmer. The short answer
> is evolution. It builds the modules including the modules that control
> the other modules.

No, John, evolution replaces the computer designer. The Java programmer is
replaced by the learning machine that evolution built.

When we build a learning machine, no one needs to program it. It programs
itself.

> > The system never knows what information might be needed in the
> > future, so there's great danger in throwing anything out.
>
> This is the reason some people never throw their stuff out because
> one day they might need it.

My entire family seems to have a serious hoarding disease. :)

> But like I wrote above, in order to
> compete, an organism that can save time and resources by having
> some successful criteria for throwing stuff out will have
> a reproductive advantage.

Yes that's just fine. But that's not the type of hoarding I'm talking about
in these discussions of the type of processing I believe has to happen in
the learning system that gives rise to our strong adaptive powers. The way
I'm trying to describe this is clearly causing you to get confused.

> > It solves the combinatorial explosion problems by chuntering -

Chuntering????

I think my fingers must have mixed together the words chunking and
clustering. :)


> > by taking all the information, and mixing it together in different
> > clusters or classifications, and then tracking the statistical worth
> > of the cluster as a whole.
>
> I suspect you would have a combinational explosion of possible
> clusters to be considered.

That IS the combinatorial explosion problem that IS solved by the approach
I've recently outlined here (yet again).

The solution works because instead of having to try and test every
possible combination, it picks a combination at random, and then starts
testing it by statistically tracking its performance. It's able to
iteratively improve the combinations without having to try every combination
separately. It works because a combination, even when very wrong, can be
tested for "closeness" to being right. So as small iterative changes are
made, the system can sense when the change is taking it in the right
direction, or the wrong direction. This allows it to converge on the right
answer, without ever having to find the right answer in a one-by-one search
of all combinations. The key is in the implementation that gives it a
measure of closeness to make it converge on the right answer from anywhere
in the answer space.
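
A toy sketch of that convergence idea (purely illustrative - here the
closeness score is computed against a known target, which a real learner
obviously doesn't have; the point is only that small changes plus a graded
score converge without enumerating all the combinations):

import random

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]        # the "right answer" (unknown in real RL)
score = lambda c: sum(a == b for a, b in zip(c, target))   # graded closeness, not pass/fail

cand = [random.randint(0, 1) for _ in target]  # pick a combination at random
while score(cand) < len(target):
    trial = list(cand)
    trial[random.randrange(len(trial))] ^= 1   # one small change
    if score(trial) >= score(cand):            # keep changes that move closer
        cand = trial
print(cand == target)                          # True, without searching 2**10 combinations one by one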

casey

unread,
Dec 12, 2009, 2:24:11 AM12/12/09
to
On Dec 12, 3:54 pm, c...@kcwc.com (Curt Welch) wrote:

> casey <jgkjca...@yahoo.com.au> wrote:
>> The advantage a circuit has over a look up table is it doesn't
>> have to insert data to look up the response, it simply computes
>> the response.
>
>
> The difference here is not important to our abstract understanding
> of what type of process we are dealing with. It's only important
> in terms of the implementation details we choose to use when
> we build it.

Perhaps you could show me how you would implement a look up table
to transform an image into a list of blobs?

> It [Curt's theory] works because a combination, even when very
> wrong, can be tested for "closeness" to being right.

Major breakthrough then? Shame you can't get anyone to understand it.

I see the combinational explosion problem as unsolvable; it can only
be circumvented by simplification and the invention of heuristics.


JC

pataphor

unread,
Dec 12, 2009, 7:18:07 AM12/12/09
to
Curt Welch wrote:

> The issue of an AI modifying its own reward system is one we have debated
> here. Tim Tyler who has posted a lot here but hasn't been around lately
> has some interesting takes on the issue. We refer to it as the wirehead
> problem in reference to the idea of sticking a wire in your head to
> stimulate your pleasure centers. He and I have argued in circles about
> wire-heading. It seems to be a general issue of debate in the singularity
> crowd as well.

I've been looking around in the singularity crowd memeosphere (and even
seen Tim there) so maybe I got it from there. The general background
philosophy there seems to be consequentialism which I find hard to get
used to because in my opinion it breaks continuity, which is about
equivalent to messing with the reward system. But maybe they found some
way to quantum tunnel themselves into more agreeable worlds, they have
many worlds to choose from.

My own general attitude in that group reminds me of the way I felt when
I was in a group of Python programmers and then some C programmer came
along who could not get used to dynamic typing. Only I am now in the
position of that C programmer. So I am inclined to cut them some slack,
but the things you warn about, the confusion resulting from messing with
the reward system, is all over the place.

> What stops the AI from doing what it's built to do - that is reward
> maximizing - by directly stimulating its own reward center? I suspect
> that will always put a real limit on super intelligence. That is, the
> singularity idea is based on the belief that super AIs we build will start
> to build even smarter AIs and there will be an exponential runaway growth
> of intelligences that will leave the intelligence of man in the dust.
>
> I think this wirehead problem will cause real issues with that idea because
> once an AI becomes smart enough to fully understand what it is, and fully
> understand its own design and construction to the point that it could
> design and build even smarter AIs, it will understand what its true
> purpose in life is - which is not to build more AIs, or try to control the
> future of the universe, but simply to wirehead itself.

Yes, but this goes for humans too. Will we disappear into virtual
worlds, forgetting about where we came from?

> If our high level goals are incompatible with our low level motivations,
> then it's the high level goals that in general end up changing.

I don't think so. That way of thinking is like refusing to use
teleporting devices (once they become available) because you don't
believe the clone on the other side is really you.

> But, if the AI developed high level goals that seem incompatible with his
> low level drives, AND it has the power to change his low level
> goals, it's certainly possible he might do so. But once he changes his own
> low level drives and motivations (his reward system) that will cause his
> high level goals to start changing on him and then he could easily once
> again change his mind about whether his current low level goals were
> correct or not and make more changes. This spirals out of control with no
> regulation and what sort of goals the poor thing ends up with is anyone's
> guess.

Yes. That's what I think the singularity crowd is doing. But maybe it's
just progress.

> The facts of evolution simply show that universe will tend to be filled
> with the material structures that does the best job of surviving.

But now we're having a memetic evolution instead of a physical one. If
Wolfram is right then we can't even predict the way simple structures
will develop. So what will happen if we put our thought creations back
into the universe is already anyone's guess, even with very simple structures.

P.

Curt Welch

unread,
Dec 12, 2009, 9:08:25 AM12/12/09
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 12, 3:54 pm, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> >> The advantage a circuit has over a look up table is it doesn't
> >> have to insert data to look up the response, it simply computes
> >> the response.
> >
> >
> > The difference here is not important to our abstract understanding
> > of what type of process we are dealing with. It's only important
> > in terms of the implementation details we choose to use when
> > we build it.
>
> Perhaps you could show me how you would implement a look up table
> to transform an image into a list of blobs?

Well, that's a good question. Nope, I don't know how to hand-create such a
function. I'm fairly sure it could be done however, and studying that
problem would be one of many approaches to gaining a better understanding
of this problem and how to solve it.

Pictures however are not temporal data. They are purely spatial data. As
such, wasting a lot of time solving only that problem would not be good
because when you are done, you would have an algorithm that is unworkable
for the domain we need to solve - which is in the temporal domain.

Humans have networks that decode 2D visual data. But I don't believe that
network was formed either by evolution, or by exposure to lots of pictures.
It was formed by exposure to the temporal data that comes from continuous
interaction with a 3D world. When we look at a picture, we "see" the 3D
objects it represents (assuming it's a photo or drawing that does have the
correct shapes to represent 3D objects). But that ability I believe didn't
come from looking at pictures. It came from looking at real 3D objects and
how they change over time when we move relative to the object. How that
data changes is constrained by what happens when a 3D object is mapped to a
2D representation in the eye. It's from those constraints that the design
of the network evolves as our visual cortex wires itself. And once
wired to correctly identify expected correlations in real images, the eye
can also use that same transform to identify 3D objects in a picture.

> > It [Curt's theory] works because a combination, even when very
> > wrong, can be tested for "closeness" to being right.
>
> Major breakthrough then? Shame you can't get anyone to understand it.

It's not a major breakthrough. It's basically how lots of RL algorithms
work - even though they tend to use a similar technique in a different way.
It's basically how TD-Gammon works. TD-Gammon faces the same sort of
combinatorial explosion problem in terms of the number of games it would
have to play to get enough experience but yet it solves it - which is why
TD-Gammon is such an interesting RL algorithm. It shows that the problem
you believe can't be solved, can be solved.
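
For reference, here is the core temporal-difference update in its simplest
tabular TD(0) form (TD-Gammon itself uses TD(lambda) with a neural network
over hand-picked board features; this is only the skeleton of the learning
rule):

def td_update(V, state, next_state, reward, alpha=0.1, gamma=1.0):
    # Nudge the value of `state` toward reward + the value of its successor.
    td_error = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * td_error
    return V

V = {}
V = td_update(V, "mid_game", "near_win", reward=0.0)
V = td_update(V, "near_win", "won", reward=1.0)   # a win backs value up toward earlier states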

This clustering of features that gets adjusted, which I talked about, is
also known as an abstraction of the sensory data. It's an abstract feature
of the data. TD-Gammon was not good at creating optimal abstract features -
at adjusting the way the data was clustered to reveal the maximal amount of
information about how to make a move. It worked instead because the human
that designed it used his own intelligence, and his own experience designing
Backgammon games, to pick a clustering that he thought would work
well in Backgammon - and he was right.

So TD-Gammon can learn using abstractions, but what it fails at is doing a
good job of finding its own abstractions. To solve the general problem,
we have to build a machine that can not only abstract by transforming the
data, but which can also identify, and converge on, GOOD abstractions for
whatever problem it is working on. And the approach I'm talking about is an
attempt to do that.

The point of all this posting is because there are unsolved implementation
questions here that I would like to find solutions for. I've been working
on this problem, ALONE for decades as a fun hobby. I came to c.a.p. years
ago hoping to find someone to brainstorm with about how to solve this
problem.

Instead of finding anyone to brainstorm with, I've found lots of people
that don't even understand what the problem is that needs to be solved. So
before I can get anyone to do some interesting design brainstorming with, I
have to educate the world (it feels like at times) about what the problem
is. Though I keep trying, it's not yet worked. :) So I continue to
brainstorm about how to tackle this problem on my own (often in long posts
that might otherwise appear to be me trying to describe something to
people).

> I see the combinational explosion problem as unsolvable and can only
> be circumvented by simplification and the invention of heuristics.

Your "circumvented by simplification" is what drives me up the wall. Most
of your suggestions along that direction are obviously impossible. What
you are mostly doing there is denying the problem exists and pretending
it's solved in ways that it can't be solved and then you just use those
rationalizations to not work on the real problem (not even think about how
to solve it because you have convinced yourself it's unsolvable).

If "invention of heuristics" means "invent an algorithm that solves it"
then that's exactly what I'm working on and searching for. If it means,
"invent a trick so we don't have to solve it", then I'm back to being
driven up a wall. :)

The simple black-box problem I work on is the problem of taking multiple
parallel sensory inputs to the box and having the box produce multiple
parallel output signals. The box has a reward signal input, and must
adjust how those outputs are computed from the inputs to make the
reward signal get higher over time - with no a priori knowledge of what
environment it's interacting with outside the box.

This black box problem is not intended to be the complete and full picture
of what the solution to AI will look like. But it's the key "magic box"
that's missing from all AI work currently. The complete solution might
include lots of custom modules outside the box, and might include the
algorithm inside the box being optimized to the problem. But until someone
solves this specific black box problem, no one is going to make any real
progress in AI, or in brain research. Understanding how to solve this
specific black box problem is the foundation of understanding how the
brain creates intelligent behaviors and how we are going to build machines
to duplicate that intelligence.
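
Stated as an interface, the black box looks something like this (my own
minimal phrasing of it, with placeholder names - the whole unsolved problem
is a workable implementation of step()):

from abc import ABC, abstractmethod
from typing import List, Sequence

class RewardLearner(ABC):
    @abstractmethod
    def step(self, inputs: Sequence[float], reward: float) -> List[float]:
        """Given the current parallel sensory inputs and the latest reward,
        return the parallel output signals, while adjusting internal state
        so the reward tends to rise over time - with no prior knowledge of
        the environment on the other side of the box."""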

Do you grasp that no one has a workable algorithm to put in that box to
solve that class of problems? The fact that no one yet has such an
algorithm is why AI still hasn't been solved in 60 years. The fact that so
few people are working on this problem is why progress in AI is moving so
slowly.

This generic learning black box problem is the only problem I've been
interested in for the past 30 or so years in AI and it's the only puzzle of
AI I'm going to be interested in until it's solved (or until I die).

Curt Welch

unread,
Dec 12, 2009, 9:55:19 AM12/12/09
to
pataphor <pata...@gmail.com> wrote:
> Curt Welch wrote:
>
> > The issue of an AI modifying it's own reward system is one we have
> > debated here. Tim Tyler who has posted a lot here but hasn't been
> > around lately has some interesting takes on the issue. We refer to it as
> > the wirehead problem in reference to the idea of sticking a wire in
> > your head to stimulate your pleasure centers. He and I have argued in
> > circles about wire-heading. It seems to be a general issue of debate
> > in the singularity crowd as well.
>
> I've been looking around in the singularity crowd memeosphere (and even
> seen Tim there) so maybe I got it from there. The general background
> philosophy there seems to be consequentialism which I find hard to get
> used to because in my opinion it breaks continuity, which is about
> equivalent to messing with the reward system. But maybe they found some
> way to quantum tunnel themselves into more agreeable worlds, they have
> many worlds to choose from.

Tim believes the AIs we create will replace us as the natural path of
evolution. Not so much in some takeover event or war, but just naturally
in time they will out-evolve us. And I think he sees AIs engineering other
AIs as the basic "improved" form of evolution that will emerge and dominate
the future (which is basically the singularity movement's idea).

I have real doubts that reproduction by intelligent engineering can work as
a replacement for the non-intelligent reproduction that we use now.

When we reproduce, we produce a new machine motivated for its own
survival. Such machines are very dangerous. They can (and do at times)
kill their parents. We do it not because it's a good way to survive, but
because we are genetically pre-disposed to do it. We were built to
reproduce these dangerous off-spring. We "like" to do it because we are
genetically built to "like" to do it.

For this to work with AIs building more AIs, they have to be designed to
"like" to design and build new and better AIs which will then go off and
find highly clever ways to steal resources from their creator. But I think
the same intelligence that has to be trained with the knowledge of how
to design and build new and better AIs will at the same time not be
"tricked" by its reward system into building machines that will take
resources from it. It creates an inherent conflict in their goals. That
is, any low level goals that motivate an AI to survive will be counter to
the AI building AIs that also want to survive. What we, and they, are
motivated to do is build AIs to help US survive, and not AIs that want to
work to maximize their own survival.

So I think the whole singularity idea of reproduction by intelligence is
flawed. And it's flawed because they don't really know what intelligence
is.

But, I also have great faith in the power of evolution. Which means, if
there is a way to make it work, evolution will find it and that's what will
live on into the future. My idea that it won't work is betting against
the power of evolution to find a way to make it work. And that's never a
good bet.

I don't think we are smart enough to make any viable predictions about how
these aspects of the future will actually play out. But it can be fun to
speculate.

> My own general attitude in that group reminds me of the way I felt when
> I was in a group of Python programmers and then some C programmer came
> along who could not get used to dynamic typing. Only I am now in the
> position of that C programmer. So I am inclined to cut them some slack,
> but the things you warn about, the confusion resulting from messing with
> the reward system, is all over the place.
>
> > What stops the AI from doing what it's build to do - that is reward
> > maximizing - by directly stimulating it's own reward center? I
> > suspect, that will always put a real limit on super intelligence. That
> > is, the singularity idea is based on the belief that super AIs we build
> > will start to build even smarter AIs and there will be an exponential
> > runaway growth of intelligences that will leave the intelligence of man
> > in the dust.
> >
> > I think this wirehead problem will cause real issues with that idea
> > because once an AI becomes smart enough to fully understand what it is,
> > and fully understand it's own design and construction to the point that
> > it could design and build even smarter AIs, it will understand what
> > it's true purpose in life is - which is not to build more AIs, or try
> > to control the future of the universe, but simply to wirehead itself.
>
> Yes, but this goes for humans too. Will we disappear into virtual
> worlds, forgetting about where we came from?

I don't know. We might.

But again, evolution and its reward of survival is always the top dog in
the universe. Anyone that chooses a path that doesn't maximize their
survival won't be around in the future, and whoever finds a way to continue
to best fight that survival game are the ones that will be here. That
doesn't make survival "Right" by any means. It just means that if you
don't choose that, you won't be here to debate the issue in the future. And
that's all it means.

> > If our high level goals are incompatible with our low level
> > motivations, then it's the high level goals that in general end up
> > changing.
>
> I don't think so. That way of thinking is like refusing to use
> teleporting devices (once they become available) because you don't
> believe the clone on the other side is really you.

Well, two issues here. One, our learned beliefs took a lifetime to
create, and can take another lifetime to change. They don't change
quickly. So if some deep fundamental belief drives you to the conclusion
you should not use transporters, you might not change your mind until
almost your entire stack of beliefs gets changed. And that may take a
second lifetime.

Second, it's unclear in humans how possible it is to teach an old dog new
tricks. And by that, I'm suggesting the brain might be built to actually
disable some of its learning ability over time so it locks on to some
beliefs formed early in life and will never let go of them. If an AI is
built like that, then it won't be able to develop new high level beliefs if
the mid-level beliefs that support them have all been locked into memory so
they can't change.

But if there are no mechanical systems at work preventing the continued
learning, then the high level beliefs will, given enough time, change when
the environment changes enough to force it to happen.

> > But, if the AI developed high level goals that seem incompatible with
> > his low level drives to him, AND it has the power to change his low
> > level goals, it's certainly possible he might do so. But once he
> > changes his own low level drives and motivations (his reward system)
> > that will cause his high level goals to start changing on him and then
> > he could easily once again change his mind about whether his current
> > low level goals were correct or not and make more changes. This
> > spirals out of control with no regulation and what sort of goals the
> > poor thing ends up with is anyone's guess.
>
> Yes. That's what I think the singularity crowd is doing. But maybe it's
> just progress.

Well, again, any such system that messes with its innate goals and ends up
with goals that are not compatible with survival will, of course, be taken
care of by evolution, leaving only the ones that happened to pick good
survival goals around to talk about what's right.

> > The facts of evolution simply show that universe will tend to be filled
> > with the material structures that does the best job of surviving.
>
> But now we're having a memetic evolution instead of a physical one. If
> Wolfram is right then we can't even predict the way simple structures
> will develop. So what will happen if we put our thought creations back
> into the universe is already anyone's guess with very simple structures.

Well, Tim and I went around in circles over the issue of memetic evolution.
He liked to believe memes have their own power of survival separate from
the medium they existed in. As such, he believed they would jump from
humans to AIs and leave the humans behind as if they were a worn out old
pair of shoes.

I don't think it's valid to believe they are so separate. I think memes
exist only as long as they help their "old shoes" exist. Human memes will
jump to AIs if those memes also help the AI exist. (do unto others etc is
one such meme that is easy to see how it could jump from humans to AIs).
But the ability of the meme to survive is always tied to how it helps the
brain it's carried in survive. The human meme of "Don't let the AIs get
too much power" is not a meme that will survive well in the AIs because
that meme doesn't do a good job of helping the AIs survive. They would
tend to develop the "Don't let the humans get too much power" meme instead.

Memes pass from humans to human quickly because humans are all part of one
big survival machine - the human race - which we need to look at as a large
single distributed machine at times (despite the selfish gene effects). We
act in ways to help the survival of the entire machine, because doing that,
helps our own survival (and I can be talking about "our own" either from
the perspective of a single human or a single gene). We create mechanisms
to allow the free flow of memes (Usenet for example and all the other
communication inventions of humans) because sharing memes inside this
large survival machine is beneficial to the machine's survival.

But if we get to the day that there is one big AI survival machine building
more AIs, alongside the human race survival machine, then there will be
conflict. Humans won't want to share memes with AIs if that meme will help
the AI out-survive the humans. And the same thing in reverse. AIs and
humans would tend to keep secrets from each other, blocking the free flow
of memes - blocking the meme's ability to survive by jumping from humans to
AIs.

I just don't see the memes as being a separate evolution path that is free
from the needs and constraints of the medium it exists in.

Tim Tyler

unread,
Dec 12, 2009, 12:13:43 PM12/12/09
to
Curt Welch wrote:

> The issue of an AI modifying it's own reward system is one we have debated
> here. Tim Tyler who has posted a lot here but hasn't been around lately
> has some interesting takes on the issue. We refer to it as the wirehead
> problem in reference to the idea of sticking a wire in your head to
> stimulate your pleasure centers. He and I have argued in circles about
> wire-heading.

Time for a little more of that, by the sound of it!

> Tim takes the more abstract view that an AI would have a set of high level
> goals and that self modifications like that would be counter to the AIs
> goals. As such, the AI won't do it.
>
> My counter argument is that I believe there is no way to build true
> intelligence with "high level goals". I believe the only way to build true
> intelligence, is by building a machine with the prime low level goal of
> maximizing some internal reward signal. Whether he, or I, or both of us are
> right, all comes down to how an intelligent machine actually works. And
> since we haven't built any of them yet (that is, one equal to anything a
> human can do) there is much still unknown about its potential powers, or
> limits. We really have to solve AI before we can know the answer to these
> types of questions.

I am less pessimistic about other approaches. I do think this is a
challenging and not-convincingly-resolved philosophical question.
However, IMO, there are many things that can be said about the issue
before we have machine intelligence.

One approach comes from examining the behaviour of companies.
I think you and I would both grant that companies act like intelligent
agents with their own goals - which usually prominently involve
making money for their shareholders.

Companies can self-modify and hack into their reward signals. You
once gave an example of a wireheading company - Enron.

Governments are much the same. They can print money - thus hacking
their own utility systems. Governments do exactly that sometimes -
in an operation known as "quantitative easing" - e.g. see:

http://www.infiniteunknown.net/2009/11/07/bank-of-england-extends-quantitative-easing-to-200-billion/

We have examples of wireheading companies and wireheading governments -
suggesting we may see wireheading machine intelligences as well.

Yet these behaviours are not terribly common. An understanding of how
companies and governments avoid wireheading might well apply to machine
intelligences as well.
--
__________
|im |yler http://timtyler.org/ t...@tt1lock.org Remove lock to reply.

Tim Tyler

unread,
Dec 12, 2009, 12:39:44 PM12/12/09
to

Those making machines that compete with humans benefit from it.
The makers of the weaving machines did not put themselves out of
work - they got rich! It was *other* humans that lost their jobs.

I don't think there will be much human-machine conflict. The man-machine
civilisation's opposition will exist, but it will come from terrorists.

> Well, Tim and I went around in circles over the issue of memetic evolution.
> He liked to believe memes have their own power of survival separate from
> the medium they existed in. As such, be believed they would jump from
> humans to AIs and leave the humans behind as if they were a worn out old
> pair of shoes.

Most memes already live and reproduce in machines on the internet.
They are digital - and most of the copying of them is done by machines.

> I don't think it's valid to believe they are so separate. I think memes
> exist only as long as they help their "old shoes" exist. Human memes will
> jump to AIs if those memes also help the AI exist. (do unto others etc is
> one such meme that is easy to see how it could jump from humans to AIs).
> But the ability of the meme to survive is always tied to how it helps the
> brain it's carried in survive.

Computer viruses? Selection pressures on cultural fragments need not
have anything to do with them getting into brains. Many computer
viruses propagate without human involvement at all.

> Memes pass from humans to human quickly because humans are all part of one
> big survival machine - the human race - which we need to look at as a large
> single distributed machine at times (despite the selfish gene effects). We
> act in ways to help the survival of the entire machine, because doing that,
> helps our own survival (and I can be talking about "our own" either from
> the perspective of a single human or a single gene).

It sounds like species-level selection.

> But if we get to the day that there is on big AI survival machine building
> more AIs, along side the human race survival machine, then there will be
> conflict. Humans won't want to share memes with AIs if that meme will help
> the AI out-survive the humans. And the same thing in reverse. AI's and
> humans would tend to keep secretes from each other blocking the free flow
> of memes - blocking the meme's ability to survive by jumping from humans to
> AIs.

If human genes regularly faced competition from memes, they might have
some adaptations that functioned as a defense against them.

Instead, we have millions of years of history during which memes were
more beneficial than destructive. We like memes, overall.

I don't foresee civilisation dividing in the manner you describe. I
essentially think that we will stay as one unified civilisation, once
we fully coalesce. It's just that the "machine" elements will become
more common - while the "man" elements become relatively less
common.

> I just don't see the memes as being a separate evolution path that is free
> from the needs and constraints of the medium it exists in.

I don't see a "separate evolution path" for memes. Evolution is
one big process - consisting of various different replicators. Some
replicators are made of DNA, and other ones are stored in databases.

Each successful replicator alters the environment the others find
themselves in.
Information flows between the various media in meme-gene coevolution.

IMO, the memes have a big long-term advantage. They evolve using
intelligent design and directed mutation. They exploit inductive and
deductive reasoning as well as natural selection. We can control their
size, cost, longevity, reliability, random-access capabilities, etc.

We have seen this phenomenon before - in historical genetic takeovers.
DNA did well for a while - but now it too will be replaced. May the best
set of inheritance media win the battle to carry the world's information.

Tim Tyler

unread,
Dec 12, 2009, 12:43:40 PM12/12/09
to
pataphor wrote:
> Curt Welch wrote:

>> The facts of evolution simply show that universe will tend to be filled
>> with the material structures that does the best job of surviving.
>
> But now we're having a memetic evolution instead of a physical one.

Memes are surely just as physical as DNA-genes are.

In one case you have a nucleic acid storage medium, in the other
you have datatbases, or whatever. Basically both types of inheritance
are forms of information - and information needs a physical representation
in order to exist.

pataphor

unread,
Dec 12, 2009, 1:22:57 PM12/12/09
to
Tim Tyler wrote:
> pataphor wrote:
>> Curt Welch wrote:
>
>>> The facts of evolution simply show that universe will tend to be filled
>>> with the material structures that does the best job of surviving.
>>
>> But now we're having a memetic evolution instead of a physical one.
>
> Memes are surely just as physical as DNA-genes are.
>
> In one case you have a nucleic acid storage medium, in the other
> you have datatbases, or whatever. Basically both types of inheritance
> are forms of information - and information needs a physical representation
> in order to exist.

If you think a chess match is just as physical as a wrestling match,
you're in a situation analogous to Curt's when he thinks objects are just
learned behavioral invariants.

P.

casey

unread,
Dec 12, 2009, 3:17:51 PM12/12/09
to
On Dec 13, 1:08 am, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
>> Perhaps you could show me how you would implement a look up table
>> to transform an image into a list of blobs?
>
>
> Well, that's a good question. Nope, I don't know how to hand-create such
> a function. I'm fairly sure it could be done however and studying
> that problem would be one of many approaches to gaining a better
> understanding of this problem and how to solve it.

Until then I will stick with my view of how AI will be implemented.

> Pictures however are not temporal data. They are purely spatial
> data. As such, wasting a lot of time solving only that problem
> would not be good because when you are done, you would have an
> algorithm that is unworkable for the domain we need to solve -
> which is in the temporal domain.

You really have a fixation on the temporal element of space time.

A series of pictures taken at regular intervals are temporal data.

In working on the robot visual problem I come up with solutions.
I suggest that evolution working on the visual problem probably
came up with similar solutions.


> ... once wired to correctly identify expected correlations in real


> images, the eye can also use that same transform to identify 3D
> objects in a picture.


How much seeing is the result of learning and how much is innate
must be determined by objective observations, not Curtian subjective
self-observations or the Curtian temporal creed; otherwise you are
in the position of the Catholic Church, which declared there were
no moons around Jupiter because it didn't fit their theory.

We know that sensory input contains information required by the genes
to construct neural circuits. What circuit it constructs, however,
I would suggest is innate. If the potential is not there to "see"
it will not happen no matter how much sensory input occurs. If you
lack the genes to construct a circuit to extract stereo vision from
two eyes no amount of exposure will change that. However this is
to be confirmed or refuted by science.


> It's basically how TD-Gammon works.


Have you read articles on why the techniques used in TD-Gammon are
not so effective in games like chess?

Different problems need different solutions. What works well on
one problem may not work so well on another problem. Finding a
problem your solution is good at doesn't make it a Universal
solution to any problem.


> To solve the general problem, we have to build a machine that can
> not only abstract by transforming the data, but which can also
> identify, and converge on, GOOD abstractions for whatever problem
> it is working on. And the approach I'm talking about is an attempt
> to do that.


I would suggest evolution did just that. Evolution was the programmer
(or builder) that embodied its abstractions in hardware that could
compute an answer.


> Instead of finding anyone to brainstorm with, I've found lots of
> people that don't even understand what the problem is that needs
> to be solved.


Because some (most?) of us don't believe in your perceived "problem".

A problem may turn out to be wrongly conceived or based on
erroneous assumptions.


> If "invention of heuristics" means "invent an algorithm that solves
> it" then that's exactly what I'm working on and searching for. If
> it means, "invent a trick so we don't have to solve it", then I'm
> back to being driven up a wall. :)


Well back up the wall then. At each fork in the road you have to have
some selection method to decide which path to take. I suspect each
set of roads may need their own set of assumptions and there is never
any guarantee of a solution.


> The simple black-box problem I work on is the problem of taking
> multiple parallel sensory inputs to the box and having the box
> produce multiple parallel output signals. The box has a reward
> signal input, and must adjust how those outputs are computed
> from the inputs to make the reward signal get higher over time
> - with no a priori knowledge of what environment it's interacting
> with outside the box.


Essentially that is evolution as it starts with no prior knowledge.


> This generic learning black box problem is the only problem I've
> been interested in for the past 30 or so years in AI and it's the
> only puzzle of AI I'm going to be interested in until it's solved
> (or until I die).

There may be no single solution. The learning problem was my first
interest in "thinking machines". I read about it in my first books
on cybernetics including the one that I found gave me the best
footing to think about these problems.

http://pespmc1.vub.ac.be/ASHBBOOK.html

JC

Tim Tyler

unread,
Dec 12, 2009, 3:34:26 PM12/12/09
to

I said nothing about chess or wrestling. I was talking about DNA-genes
and memes - which are replicators which are made of different stuff.

DNA builds protein agents, memes build computers and robots.

Robot evolution will seem no less physical than the DNA/protein
world that preceded it. Describing memetic evolution as
"non-physical" seems downright misleading to me. It's no more
"non-physical" than the DNA/protein evolution that preceded it.

Tim Tyler

unread,
Dec 12, 2009, 3:45:31 PM12/12/09
to
casey wrote:
> On Dec 13, 1:08 am, c...@kcwc.com (Curt Welch) wrote:

>> Pictures however are not temporal data. They are purely spatial
>> data. As such, wasting a lot of time solving only that problem
>> would not be good because when you are done, you would have an
>> algorithm that is unworkable for the domain we need to solve -
>> which is in the temporal domain.
>
> You really have a fixation on the temporal element of space time.

I agree:

"Researchers use much the same reasoning that
leads them to think that robots are important to conclude
that getting real-time data is important. I don't think
that's really right either. Correspondence problems demand
as much intelligence as real time problems do. Ultimately,
intelligent agents should be able to function in a range of
environments - so they should be able to solve real-time
problems. However, it doesn't follow that those problems are
particularly important during their development. Real time
problems have the advantage of allowing the agent to get
rapid feedback about the effects of its actions, but that's
about the extent of it."

Curt Welch

unread,
Dec 12, 2009, 5:55:48 PM12/12/09
to
Tim Tyler <t...@tt1.org> wrote:
> Curt Welch wrote:
>
> > The issue of an AI modifying it's own reward system is one we have
> > debated here. Tim Tyler who has posted a lot here but hasn't been
> > around lately has some interesting takes on the issue. We refer to it as
> > the wirehead problem in reference to the idea of sticking a wire in
> > your head to stimulate your pleasure centers. He and I have argued in
> > circles about wire-heading.
>
> Time for a little more of that, by the sound of it!

:)

> > Tim takes the more abstract view that an AI would have a set of high
> > level goals and that self modifications like that would be counter to
> > the AIs goals. As such, the AI won't do it.
> >
> > My counter argument is that I believe there is no way to build true
> > intelligence with "high level goals". I believe the only way to build
> > true intelligence, is by building a machine with the prime low level
> > goal of maximizing some internal reward signal. Whether he, or I, or
> > both of us are right, all comes down to how an intelligent machine
> > actually works. And since we haven't built any of them yet (that is,
> > one equal to anything a human can do) there is much still unknown about
> > its potential powers, or limits. We really have to solve AI before we
> > can know the answer to these types of questions.
>
> I am less pessimistic about other approaches. I do think this is a
> challenging and not-convincingly-resolved philosophical question.
> However, IMO, there are many things that can be said about the issue
> before we have machine intelligence.
>
> One approach comes from examining the behaviour of companies.
> I think you and I would both grant that companies act like intelligent
> agents with their own goals - which usually prominently involve
> making money for their shareholders.
>
> Companies can self-modify and hack into their reward signals. You
> once gave an example of a wireheading company - Enron.

Yes, and once it wireheaded itself, it stopped performing the function we
(humans), as the creators of that intelligence, wanted/expected it to perform.

> Governments are much the same. They can print money - thus hacking
> their own utility systems.

The rewards of a democracy are measured in votes, not dollars. So printing
money is really a different issue there.

> Governments do exactly that sometimes -
> in an operation known as "quantitative easing" - e.g. see:
>
> http://www.infiniteunknown.net/2009/11/07/bank-of-england-extends-quantit
> ative-easing-to-200-billion/
>
> We have examples of wireheading companies and wireheading governments -
> suggesting we may see wireheading machine intelligences as well.
>
> Yet these behaviours are not terribly common. An understanding of how
> companies and governments avoid wireheading might well apply to machine
> intelligences as well.

Yeah, maybe so. The companies and governments were created by intelligent
humans for a purpose. The creators put checks and balances in place to
monitor their creation to make sure it's still serving their desires and
when it doesn't, changes are made.

Evolution is the ultimate creator, and monitor, of all this. If some AI
wireheads itself, and in so doing fails to survive, evolution will have
erased it from the future.

Curt Welch

unread,
Dec 12, 2009, 10:15:57 PM12/12/09
to

I think there's certainly some potential to create intelligent language
machines which are not real time. However, without having real time
perception and the ability to interact with the world in real time, what
type of view of reality would they end up with? What would they talk about,
if you couldn't talk to them about what you did today, since even our
concept of "today" would be hard for them to grasp?

Curt Welch

unread,
Dec 12, 2009, 11:12:54 PM12/12/09
to

Yeah, but that's a very different type of competition than what I'm talking
about. Or at least, the magnitude of the issue is very different. I'm
talking about machines that get mad at you when you don't allow them to have
access to energy and raw material and who will then fight you for access to
that energy - even kill you if they must. It's a very different level of
competition we would be talking about if we created smart AIs which were
motivated for their own self preservation like humans are.

> I don't think there will be much human-machine conflict. The man-machine
> civilisation's opposition will exist, but it will come from terrorists.

I don't think so either. I think we will build them to be our slaves and
we won't get anywhere near human machine conflict. In the distant future,
they may end up inheriting the universe as the humans die out.

> > Well, Tim and I went around in circles over the issue of memetic
> > evolution. He liked to believe memes have their own power of survival
> > separate from the medium they existed in. As such, be believed they
> > would jump from humans to AIs and leave the humans behind as if they
> > were a worn out old pair of shoes.
>
> Most memes already live and reproduce in machines on the internet.
> They are digital - and most of the copying of them is done by machines.

Nah, that doesn't count in my view. It's like saying Microsoft word when
it's on a CD is the meme. The CD can't do "word". The CD is just a dump
of the real meme's code. The real meme doesn't get created until it's
loaded into a machine that can produce the "word" behavior. That is, until
a machine becomes configured to do "word".

It would be like calling a computer print out of our DNA our genes. In
that form, they can't produce a human body so they aren't real genes. Just
like a photograph of a human isn't a human, and a photograph of a gene
isn't a gene. And a data dump of computer memory is not the software.

The CD is a little closer to being a real computer meme because there are
mechanisms to make the computer reconfigure itself in response to the CD.
But you have to include the computer as well as part of the meme, along
with the CD. Likewise, though we talk about our genes doing all this neat
stuff, they can't. They need the cell environment to interpret them and
even define what they "mean".

Likewise, human memes that fill sites like Wikipedia are missing the
human. The meme doesn't really exist until the human got reconfigured to
respond to it.

Now, when the day comes that we have computer AIs that can read web sites
and act on it somewhere near the order of magnitude of action humans can,
then the memes will exist in the AIs as well.

> > I don't think it's valid to believe they are so separate. I think
> > memes exist only as long as they help their "old shoes" exist. Human
> > memes will jump to AIs if those memes also help the AI exist. (do unto
> > others etc is one such meme that is easy to see how it could jump from
> > humans to AIs). But the ability of the meme to survive is always tied
> > to how it helps the brain it's carried in survive.
>
> Computer viruses? Selection pressures on cultural fragments need not
> have anything to do with them getting into brains. Many computer
> viruses propagate without human involvement at all.

Yes, but then we are talking about a computer "meme" that can't exist
without the computers instead of a human meme that can't exist without the
human.

Computer viruses only really exist when they are loaded into a computer.
Take away all the computers, and you really have to say there are no viruses
(even though you still have CDs with the old virus data on them). They can't
perform the action of replication without the computer - they can't do
anything without the computer. They can't even infect the computer on their
own.

> > Memes pass from humans to human quickly because humans are all part of
> > one big survival machine - the human race - which we need to look at as
> > a large single distributed machine at times (despite the selfish gene
> > effects). We act in ways to help the survival of the entire machine,
> > because doing that, helps our own survival (and I can be talking about
> > "our own" either from the perspective of a single human or a single
> > gene).
>
> It sounds like species-level selection.
>
> > But if we get to the day that there is on big AI survival machine
> > building more AIs, along side the human race survival machine, then
> > there will be conflict. Humans won't want to share memes with AIs if
> > that meme will help the AI out-survive the humans. And the same thing
> > in reverse. AI's and humans would tend to keep secretes from each
> > other blocking the free flow of memes - blocking the meme's ability to
> > survive by jumping from humans to AIs.
>
> If human genes regularly faced competition from memes, they might have
> some adaptations that functioned as a defense against them.
>
> Instead, we have millions of years of history during which memes were
> more beneficial than destructive. We like memes, overall.

Well, that's because humans are the only ones creating them and, in general,
any meme that helps one human is likely to help all humans.

But once we get a society of AIs building and they start to develop their
own genes which are good for them, but not good for humans, then we won't
like them so much at all. The AI gene of "kill the fucking humans" is
great for the survival of the AIs but kinda sucks as a human meme. :)

> I don't forsee civilisation dividing in the manner you describe. I
> essentially think that we will stay as one unified civilisation, once
> we fully coalesce. It's just that the "machine" elements will become
> more common - while the "man" elements become relatively less
> common.

Yeah, I don't think you grasp how evolution works. :)

I do actually think we won't divide - at least not anytime soon - but not
because we are all one big happy family as you seem to see it - but because
we won't build AIs that have any interest except being our slaves.

There are people that grow up and become alienated from normal human society
and end up as a result being generally "bad guys" because they don't see
normal human society as their friend and protector. So these humans do
whatever they have to do to take care of themselves - like kill other
humans and rob etc.

These sorts of people can seem fairly off the wall and scary to us - which
is why in general we don't mind locking them up behind bars and keeping
them out of society.

But those guys are humans with basic human needs. When you can build a
true independent AI that's motivated like a human is to take care of itself
- and far worse, you get a large society of them. Watch the fuck out
because you've never seen political conflict like we would see between them
and humans. They will have completely different needs from humans and that
fact will cause them to have completely different political agendas. They
won't give a shit about global warming because they don't need the
biological food chain and ecosystem to survive like we
do. Clean water and air? They don't need it. Safety from radiation?
They don't need it. So if any of these human needs gets between them and
their need for more energy and raw material, what do you think is going to
happen? Big ass war.

Small stupid AIs that try to take care of themselves might be cute. We
might like to keep them as pets because they will depend on us giving them
their little electricity bottle and we will like how they need us. But
big smart AIs that are smart enough to trick humans with their own needs to
take care of? No way we will like them in the least.

The danger of AIs that are motivated incorrectly will become apparent very
quickly and we just won't build them for the same reason we don't do other
stupid things like put a running chain saw in the crib with our baby.

> > I just don't see the memes as being a separate evolution path that is
> > free from the needs and constraints of the medium it exists in.
>
> I don't see a "separate evolution path" for memes. Evolution is
> one big process - consisting of various different replicators. Some
> replicators are made of DNA, and other ones are stored in databases.

That's a fine way to look at it.

> Each successful replicator alters the environment the others find
> themselves in.
> Information flows between the various media in meme-gene coevolution.

Yeah, that's fine.

> IMO, the memes have a big long-term advantage. They evolve using
> intelligent design and directed mutation. They exploit inductive and
> deductive reasoning as well as natural selection. We can control their
> size, cost, longevity, reliability, random-access capabilities, etc.
>
> We have seen this phenomenon before - in historical genetic takeovers.
> DNA did well for a while - but now it too will be replaced. May the best
> set of inheritance media win the battle to carry the world's information.

Well, there's no battle or competition about "carrying the information".
There is only the top level battle for survival of structure. Which
doesn't, BTW, require any replication. Replication is just one of the more
modern technologies that evolved to help with the game. Rocks, with no
power to replicate at all, are still doing very well at the survival game.

For machines that can use information to help with their survival, there is
a lot of human data that is useful in that way (facts of science etc.).
But there's far more human data that is of almost no use to other forms of
structure like AIs to survive - such as the photo collection of my kids
I've got. A good bit of human history would also be of little use to the AIs
(the fact that so and so was elected Mayor in 1812, for example). The bulk
of the data we would find on the internet would be of no use to AIs, for
example.

Learning machines care only about the data they believe is useful to them.
Most information in the world is of no use - which is why you don't find it
on the internet. An AI society would have a very different set of
information they would find useful to them, and as such, the information
(and the memes) that they carry around and share would probably have little
in common with our information.

I'm just saying all that because your closing remark made it sound as if
the value was in the information, instead of in our need for it. We define
the value of the information relative to our needs.

Curt Welch

unread,
Dec 12, 2009, 11:22:56 PM12/12/09
to

Well, I don't think objects are learned behavior invariants. Objects are
objects. I think our ability to _recognize_ objects comes from learned
sensory invariants. :)

This issue of whether the words we use refer to the thing "out there" or
the representation "in here" is an interesting problem. John for example
seems to lean to the view that his words, when he uses them, refer to
what's in is head. Where as I lean the other way. When I say "the chair"
my words are referring to the chair (out there), and not the neurons in my
head that are my internal representation of that chair.

In normal English, we really don't specify or care which of the two
versions of the chair we are talking about. We don't bother to even
consider that there are two chairs we could be talking about.

casey

unread,
Dec 13, 2009, 1:09:08 AM12/13/09
to

I wasn't thinking of real time issues rather I was
thinking of your belief that it is useful to think
in temporal terms instead of in spatial terms.

Real time as I understand it means it is happening in
a system at the same rate as it would in real life.

For example the rate at which the screen is updated
in a computer game would mean you would see things
happening at the same rate they would if they were
events taking place in the real world.

Your notions were based around the fact that a pulse
occupies an infinite number of positions on the time
line, and you were suggesting a real system could make
use of these infinitesimal units of time. I refuted
this as being possible in any practical system.

In practice we use spatial metaphors for time and this
is reflected in our language.

"That's all behind us". "We're looking ahead". "She has
a great future in front of her". (Taken from The Stuff
of Thought by Steven Pinker who analyses our spatial
metaphor for time in great detail.)

In a physical system, such as a brain or electronic
machine, giving time a spatial representation allows
the temporal patterns to be stored in a static form
and manipulated as static patterns. It allows temporal
patterns to be processed all at once in parallel.
It provides a format for seeing the past and the
predicted future all at once. It allows us to move
back and forth over a planned action. It allows a
high level planned action to control the execution
of that action.

 __________________________________
|                                  |   planned action
 __________  __________  __________
|          ||          ||          |   principal movements
 ____  ____  ____  ____  ____  ____
|    ||    ||    ||    ||    ||    |   minor movements
 __________________________________
||||||||||||||||||||||||||||||||||||   real time automations

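A crude way to put that in code: a tapped delay line turns
a temporal stream into a spatial vector that can be
inspected and manipulated all at once (a toy Python sketch
of the idea, not a claim about how brains do it):

    # A sliding window (tapped delay line): the recent past of a temporal
    # signal becomes a fixed-length spatial vector, processable in parallel.
    from collections import deque

    class DelayLine:
        def __init__(self, length):
            self.buffer = deque([0.0] * length, maxlen=length)

        def push(self, sample):
            self.buffer.append(sample)
            return list(self.buffer)   # the "spatialized" recent history

    line = DelayLine(8)
    for t in range(20):
        window = line.push(float(t % 4))   # feed in any temporal pattern
    # `window` now holds the last 8 samples side by side, oldest first.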

A temporal pattern generator made up of connected
oscillator circuits acts in real time and there are
no spatial representations of past or future. The
actual rate at which these actions are taken is
not fixed, being modulated by higher levels and by
position and force feedback. A good example of a
low level innate system that adapts in real time
might be the self balancing two wheeled Segway.
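
For contrast, something like this is what I mean by a
generator built from coupled oscillators: the rhythm is
produced directly in real time, and a feedback term
modulates its rate rather than any stored timeline being
read out (a toy sketch only, not a model of a real circuit):

    import math

    # Two coupled phase oscillators as a toy pattern generator.  The
    # `feedback` term speeds up or slows down the rhythm, the way higher
    # levels or force/position feedback might modulate a gait.
    def pattern_generator(steps, dt=0.01, base_freq=1.0,
                          coupling=0.5, feedback=0.0):
        p1, p2, out = 0.0, math.pi, []
        for _ in range(steps):
            w = 2.0 * math.pi * (base_freq + feedback)
            p1 += dt * (w + coupling * math.sin(p2 - p1))
            p2 += dt * (w + coupling * math.sin(p1 - p2))
            out.append((math.sin(p1), math.sin(p2)))  # e.g. two motor drives
        return out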

Your idea is that the node can only know when a pulse
arrives, not where it came from. This is true, but
it is knowing at the level of the node, not at the level
of the whole system. Knowledge does not reside just in
the node but also in the connections between the nodes.


JC


Tim Tyler

unread,
Dec 13, 2009, 5:21:29 AM12/13/09
to
Curt Welch wrote:
> Tim Tyler <t...@tt1.org> wrote:

>> We have seen this phenomenon before - in historical genetic takeovers.
>> DNA did well for a while - but now it too will be replaced. May the best
>> set of inheritance media win the battle to carry the world's information.
>
> Well, there's no battle or competition about "carrying the information".
> There is only the top level battle for survival of structure. Which
> doesn't, BTW, require any replication. Replication is just one the more
> modern technologies that evolved to help with the game. Rocks, with no
> power to replicate at all, are still doing very well at the survival game.

What I mean is that different genetic materials compete with one another
for the role of carrying the heritable information of living systems when
they coexist.

Previously, DNA displaced RNA, RNA probably displaced PNA or TNA, and
they probably displaced earlier genetic materials, in a chain going back to
the first ones.

We see a similar thing with human-invented heritable media. Optical media
have partly replaced magnetic media, which replaced punched cards. Now
solid state memory is growing in popularity.

DNA was the main medium of biological inheritance in the past - but we
are heading towards an era where it gets replaced by multiple better forms
of storage.

> Learning machines care only about the data they believe is useful to them.
> Most information in the world is of no use - which is why you don't find it
> in the internet. An AI society would have a very different set of
> information they would find useful to them, and as such, the information
> (and the memes) that they carry around and share, would probably have little in
> common with our information.

Sure. We will exist in the history books, probably. Maybe we'll be kept
around for a while as a method of rebooting civilisation - in case of an
asteroid strike. However, the machines of the future will probably
descend more from human culture than from human DNA. In the
short term, bacteria have more chance of getting their DNA preserved
than we do - since they know how to do some pretty useful tasks.

Tim Tyler

unread,
Dec 13, 2009, 5:42:54 AM12/13/09
to
Curt Welch wrote:
> Tim Tyler <t...@tt1.org> wrote:
>> Curt Welch wrote:

>>> Well, Tim and I went around in circles over the issue of memetic
>>> evolution. He liked to believe memes have their own power of survival
>>> separate from the medium they existed in. As such, be believed they
>>> would jump from humans to AIs and leave the humans behind as if they
>>> were a worn out old pair of shoes.
>> Most memes already live and reproduce in machines on the internet.
>> They are digital - and most of the copying of them is done by machines.
>
> Nah, that doesn't count in my view. It's like saying Microsoft word when
> it's on a CD is the meme. The CD can't do "word". The CD is just a dump
> of the real memes code. The real meme doesn't get created until it's
> loaded into a machine that can produce the "word" behavior. That is, until
> a machine becomes configured to do "word".

I take an information-theory perspective on this. So, genes and memes
are fundamentally information - and can thus be instantiated in any
physical medium:

http://alife.co.uk/essays/informational_genetics/

> It would be like calling a computer print out of our DNA our genes. In
> that form, they can't produce a human body so they aren't real genes.

Right. I differ - I think genes in a database are still genes. However,
this issue is simply terminology. You like to use the word one way, and
I prefer another usage. Both usages have plenty of historical precedent
behind them - so we will just have to remember the different meanings
we give to these words.

>> Instead, we have millions of years of history during which memes were
>> more beneficial than destructive. We like memes, overall.
>
> Well, that's because humans are the only ones creating them and in general,
> any mean that helps one human, is likely to help all humans.
>
> But once we get a society of AIs building and they start to develop their
> own genes which are good for them, but not good for humans, then we won't
> like them so much at all. The AI gene of "kill the fucking humans" is
> great for the survival of the AIs but kinda sucks as a human meme. :)

We see something similar already with computer viruses. Those are
not good for most humans - and indeed, we don't like them at all.

>> I don't forsee civilisation dividing in the manner you describe. I
>> essentially think that we will stay as one unified civilisation, once
>> we fully coalesce. It's just that the "machine" elements will become
>> more common - while the "man" elements become relatively less
>> common.
>
> Yeah, I don't think you grasp how evolution works. :)
>
> I do actually think we won't divide - at least not anytime soon - but not
> because we are all one big happy family as you seem to see it - but because
> we won't build AIs that have any interest except being our slaves.
>
> There are people that grow up and become aliened to normal human society
> and end up as a result being generally "bad guys" because they don't see
> normal human society as their friend and protector. So these humans do
> whatever they have to do to take care of themselves - like kill other
> humans and rob etc.
>
> These sorts of people can seem fairly off the wall and scary to us - which
> is why in general we don't mind locking them up behind bars and keeping
> them out of society.
>

> [...] When you can build a


> true independent AI that's motivated like a human is to take care of itself
> - and far worse, you get a large society of them. Watch the fuck out
> because you've never seen political conflict like we would see between them
> and humans. They will have completely different needs from humans and that
> fact will cause them to have completely different political agendas. They
> won't give a shit about global warming because they don't need the
> biological food chain and ecosystem to survive for them to survive like we
> do. Clean water and air? They don't need it. Safety from radiation?
> They don't need it. So if any of these human needs gets between them and
> their need for more energy and raw material, what do you think is going to
> happen? Big ass war.

Sure. However, this is the human / machine split in society again. You don't
think that is going to happen. I don't think that is going to happen. So, I am
not clear about why we are still discussing it. The nearest thing is likely to
be the technophile / luddite spectrum that we see today.

Technophiles and luddites peacefully coexist in society today. The luddites
don't get to dictate policy very much - and a few ones that go in for violent
protest get stuck in prison. However, there is little in the way of outright
warfare. Basically the technophiles are in power (e.g. see DARPA), and the
luddites are not in a position to do very much about it - due to being
insufficiently powerful or numerous.

Don Stockbauer

unread,
Dec 13, 2009, 7:56:06 AM12/13/09
to

But then self-organization is beyond everyone's control, technophiles
and luddites alike, being at the level of the metasystem transition.

pataphor

unread,
Dec 13, 2009, 9:52:39 AM12/13/09
to
Tim Tyler wrote:

> Describing memetic evolution as "non-physical" seems downright
> misleading to me. It's no more "non-physical" than the DNA/protein
> evolution that preceded it.

So what *is* non-physical then? If everything falls under a term, the
discriminative power of that term goes to zero. It's not about signaling
one is a tough materialist, it is about making the most effective use of
the words. This means that for 'physical' to be a useful word, it should
apply to about 50 percent of all things we talk about. But the rot has
gone so deep that even using the term "non-physical" to describe things
that are as far removed from the world as the contemplation of chess
moves in someone's head lumps one together with people looking into
crystal balls.

This has its origin in complete assholes in safe academic positions
trying to protect their turf by appearing tough, all the while getting
paid big time and deviating the minds of our youth so that they will
support their social pyramid. So they try to make machine learning
programs that not only cluster the data, but that also find the best
cluster starting points, because you know, any program that cannot stand
on its own is not real AI.

But that humility is exactly what is missing; in some respects we are no
better than backgammon programs where evolution has filled in the
cluster starters for us. In fact, the way we think the world is divided
into objects is mostly that. If Curt and John would kiss and make up,
we'd have AI.

(Not that I think either of Curt or John, or Tim for that matter, are
one of these people that have caused us to fear using words where they
fit, it's just like a global economic crisis of greed, a thing that we
all suffer from because we did not stop those profiting from it until
they got so big we can't allow them to fall, yet we must.)

P.

Tim Tyler

unread,
Dec 13, 2009, 10:06:22 AM12/13/09
to
pataphor wrote:
> Tim Tyler wrote:
>
>> Describing memetic evolution as "non-physical" seems downright
>> misleading to me. It's no more "non-physical" than the DNA/protein
>> evolution that preceded it.
>
> So what *is* non-physical then? If everything falls under a term, the
> discriminative power of that term goes to zero. [...]

I didn't say *everything* was physical - just that memes are no more
non-physical than DNA-based genes are.

casey

unread,
Dec 13, 2009, 2:57:35 PM12/13/09
to
On Dec 13, 11:56 pm, Don Stockbauer <don.stockba...@gmail.com> wrote:
> > Technophiles and luddites peacefully coexist in society today.
> > The luddites don't get to dictate policy very much - and a
> > few ones that go in for violent protest get stuck in prison.
> > However, there is little in the way of outright warfare.
> > Basically the technophiles are in power (e.g. see DARPA), and
> > the luddites are not in a position to do very much about it -
> > due to being insufficiently powerful or numerous.
>
> But then self-organization is beyond everyone's control, technophiles
> and luddites alike, being at the level of the metasystem transition.

But I would say that different groups at different times may vary in
the strength of the effect they have on the whole?

JC

Curt Welch

unread,
Dec 13, 2009, 3:10:02 PM12/13/09
to

I find it hard to grasp what you are talking about at times.

However, on the issue of physical vs non-physical - the term comes from
the duality of the mind body problem - the question of a human soul. The
English language as used in day to day communication is filled with
concepts of duality. As you say, maybe 50% of the standard concepts of the
English language are references to what is described as the non-physical
stuff of the world. The non-physical stuff is the domain of the mind and
soul. It's everything that happens in the non-physical mind (according to
the standard social definition). That is, thoughts, and ideas, and feelings,
and emotions, and desires, and intentions, and dreams, and concepts, and
abstractions, and awareness, and qualia, and 1000 other things we have words
for in the English language are all said to be non-physical. We even have
the word intangible to label the stuff of the mind which isn't physical.

However, if you are a materialist, all that crap is total nonsense - as is
the entire belief in the soul. Materialism makes the claim that all the
stuff we normally talk about as being non-physical, is in fact also
physical. It's just physical activity in the brain. Materialism takes all
meaning away from the concept of non-physical at the same time it takes
away all validity of the soul as being something separate from the body.

If you understand and believe in materialism, then the domain of the
non-physical doesn't exist. The concept has no meaning. However, based on
how people use it, you can also just re-define the concept to mean
something else - which is, ti mean that the domain of the non-physical is
just the physical activity of the brain that humans are able to sense
happening in their own brain.

It's no accident that our culture and basically every culture in human
society has this same error of believing in the non-physical. It's created
by a standard illusion that forms in humans. It's an error the brain makes
when it attempts to model reality. And this goes back to what I was just
talking about: how I believe the way the brain breaks the world into
invariant objects is based on the invariants it finds in the sensory data.
It's just a statistical process the brain applies to sensory data in order
for it to define the objects it's able to sense in the universe.

The brain is able to sense its own activity to some extent. It's why we
are aware of our own thoughts. But this brain activity we can sense,
seems to have no direct correlation to events detected by the external
sensors. When we have a thought, it doesn't at the same time, create a
noise we can hear with our ears. It doesn't make our head glow with EM
radiation that can be picked up with our eyes. It doesn't give off smoke
that we can smell. There is no correlation for the statistical system of
the brain to find between our external sensor data, and the internal sensor
data. The result is that the brain divides the world into two major sets
of "objects". The external objects, and the internal objects. The
external objects are the part of our brain's model that we use the word
"physical" to describe. And all those internal objects "the stuff our
brain is doing" we call non-physical. But the sense we all have that this
internal stuff is not physical, is just an illusion. It's an illusion
created by the fact that we can't sense our own brain behaviors, using our
eyes or ears. And that illusion - a simple illusion created by the fact we
can't see our own brain functioning with our own eyes - is what makes every
human view reality as if it had a major divide between the stuff outside
the head (the physical) and the stuff inside the head (the mind).
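
If you want that statistical claim in concrete form: group sensor channels
by how strongly they correlate, and channels that never co-vary land in
separate groups - which is all the internal/external split amounts to here
(a toy Python sketch of the idea, nothing more):

    # Toy grouping of sensor channels by correlation.  Channels that
    # co-vary end up in the same group; channels that never correlate
    # (like the "internal" vs "external" senses above) end up split.

    def correlation(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb) if sa and sb else 0.0

    def group_channels(channels, threshold=0.5):
        # channels: dict mapping channel name -> list of samples
        groups = []
        for name, data in channels.items():
            for group in groups:
                if any(abs(correlation(data, channels[m])) > threshold
                       for m in group):
                    group.append(name)
                    break
            else:
                groups.append([name])
        return groups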

So, if you want to define what non-physical means (based on how people use
it), it means the physical activity happening in our own head, which we can
sense, but yet can't correlate with all the external activity that is
picked up with our external sensors. But it's not really non-physical.
It's just an illusion that most humans are stuck with.

casey

unread,
Dec 13, 2009, 4:21:41 PM12/13/09
to
On Dec 14, 1:52 am, pataphor <patap...@gmail.com> wrote:
> Tim Tyler wrote:
> > Describing memetic evolution as "non-physical" seems downright
> > misleading to me.  It's no more "non-physical" than the DNA/protein
> > evolution that preceded it.
>
> So what *is* non-physical then?

Something that doesn't exist.

If something exists it is physical, if it doesn't exist it is non-
physical.

If something exists it has an effect.

JC

Curt Welch

unread,
Dec 13, 2009, 5:56:33 PM12/13/09
to
casey <jgkj...@yahoo.com.au> wrote:

Yeah, that's generally what I mean when I use it in this context. But I
also mean that the machine can solve time related problems (in real time),
like catching a ball thrown to it, or driving a car. Really any of the
actions where we are required to make an action ahead of the effect we need
- like knowing when, and how hard, to step on the brake pedal to make a
smooth stop in a car at the stop sign.

> For example the rate at which the screen is updated
> in a computer game would mean you would see things
> happening at the same rate they would if they were
> events taking place in the real world.

Yes, that's fine.

> Your notions were based around the fact that a pulse
> occupies an infinite number of positions ...

No, my notion of what the real time _problem_ is has NOTHING to do with
pulse signals. We have lots of hardware that solves real time problems
using traditional frame-by-frame processing of spatial data.

I just happen to believe that solving this problem in a signal processing
system will be easier to understand and explain with pulse signals than
with the more traditional fixed-rate sample abstraction we like so much to
use in our hardware.

> on the time
> line and you were suggesting a real system could make
> use of these infinitesimal units of time. I refuted
> this being possible in any practical system.

And I showed your "refuting" to be nonsense. But that was yet another
concept you seemed unable to grasp. The fact that analog pulse signals can
and do make use of infinite resolution in the temporal domain is neither
relevant nor important to what the temporal problem is. When I implement
the pulse abstraction in digital hardware that infinite resolution is lost
anyway due to the discrete time steps of computer hardware.

The past debates I've had with you over the true nature of pulse signals
were just more of me trying to get you to understand things you don't seem
to be able to understand.

> In practice we use spatial metaphors for time and this
> is reflected in our language.
>
> "That's all behind us". "We're looking ahead". "She has
> a great future in front of her". (Taken from The Stuff
> of Thought by Steven Pinker who analyses our spatial
> metaphor for time in great detail.)

Yes we use spatial language to talk about time. And that's important to
the fact that AI needs to solve time based problems how?

> In a physical system, such as a brain or electronic
> machine, giving time a spatial representation allows
> the temporal patterns to be stored in a static form
> and manipulated as static patterns. It allows temporal
> patterns to be processed all at once in parallel.
> It provides a format for seeing the past and the
> predicted future all at once. It allows us to move
> back and forth over a planned action. It allows a
> high level planned action to control the execution
> of that action.

Those are all very good descriptions of how humans take advantage of
writing stuff down on paper - and how humans tend to design machines based
on how they themselves use paper to solve temporal problems.

But I think that's exactly one of the reasons it's so hard for humans to
understand how the brain works, because it doesn't use the "paper and pen"
trick we like to use both with real paper and pen, and abstractly in how we
design our machines.

> ____________________________
> | | planned action
> ______ _____ _______ ____
> | || || || | principal movements
> __ __ _ __ __ ___ _ _
> | || || || || || || || | minor movements
> ____________________________
> |||||||||||||||||||||||||||||| real time automations
>
> A temporal pattern generator made up of connected
> oscillator circuits act in real time and there is
> no spatial representations of past or future. The
> actual rate at which these actions are taken are
> not fixed being modulated by higher levels and by
> position and force feedback. A good example of a
> low level innate system that adapts in real time
> might be the self balancing two wheeled Segway.

Yes, and if you look at how they design those types of systems, it's just
a signal processing feedback system, not something that "plans out some
future in a spatial domain" and then "plays it out" in the temporal domain.
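
To show what I mean by "just a feedback system", here's a toy balance
loop (a generic proportional-derivative controller with made-up gains and
toy physics - not the actual Segway design):

# Toy self-balancing loop: pure real-time feedback, no stored plan of
# the future.  The gains and the crude physics are invented for the demo.
dt = 0.01                 # seconds per control step
angle, rate = 0.2, 0.0    # initial tilt (radians) and tilt rate
kp, kd = 40.0, 8.0        # proportional and derivative gains

for step in range(500):
    torque = -(kp * angle + kd * rate)   # react to the *current* error only
    accel = 9.8 * angle + torque         # crude inverted-pendulum dynamics
    rate += accel * dt
    angle += rate * dt

print(round(angle, 4))    # the tilt has been driven back toward 0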

> Your idea is that the node can only know when a pulse
> arrives, not from where it came from.

No, that's not my idea. That's true about the one network design you seem
to be able to remember, but it's also the one thing I had issues with when
I first came up with that design, and one of the key issues that made me
conclude that network design wasn't ever going to work as a complete
solution - which is why I spent all that time talking about how to fix that
problem in other designs.

You might remember that Michael Olea was the one that pointed out that
my network couldn't compute a "which pulse came first" function and that
fact was the one thing that made me decide the network was fundamentally
lacking an important ability. I decided that design wasn't going to work
back then and haven't considered the design workable since then.

You are a few years behind the "curt" technology curve John.

I thought with that design that it would have enough power to solve both
the spatial and temporal aspects of the problem by only doing the
processing in the temporal domain and not _bothering_ to adjust based on
which of the two inputs fed the node. But ultimately, I've changed my
tune about this and I now more clearly describe it as a spatial-temporal
problem and believe the low level nodes will need to deal with both domains
at the same time - somehow.

> This is true but
> it is knowing at the level of the node not at the level
> of the whole system. Knowledge does not reside just in
> the node but also in the connections between the nodes.

And you see, that was exactly my thinking when I believed that network
design could work. Though the individual nodes performed a purely temporal
function which was independent of which path the pulse came from, the
overall network was very constrained in how pulses could travel because of
its overall topology. So though the individual nodes made a purely
temporal decision, their action was a spatial action. They converted
some of the temporal information to the spatial domain. So the
temporal domain was dealt with by the nodes, but the spatial domain was
solved by the overall network topology. But my current conclusion is that
it didn't solve it well enough.

My best understanding at the moment is that the processing function the
low level node does really needs to include both domains. It must make
output decisions based both on where it came from, and when, to calculate
where to send it.
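
Just to give a flavor of what I mean by a node using both domains, here's
a throw-away sketch (my invention for the example only, not a worked-out
design - the weight table and the 5 ms window are arbitrary):

# Toy pulse-routing node: the output choice depends on BOTH where the
# pulse came from and when it arrived.  Everything here (the weight
# table, two outputs, the 5 ms window) is invented for the illustration.
class Node:
    def __init__(self):
        self.last_time = None
        self.weights = {("A", "fast"): 0, ("A", "slow"): 1,
                        ("B", "fast"): 1, ("B", "slow"): 0}

    def receive(self, source, t):
        # temporal feature: did this pulse follow the previous one closely?
        gap = None if self.last_time is None else t - self.last_time
        self.last_time = t
        timing = "fast" if gap is not None and gap < 0.005 else "slow"
        return self.weights[(source, timing)]   # index of the output line

node = Node()
print(node.receive("A", 0.000))   # A, no previous pulse -> "slow" -> output 1
print(node.receive("A", 0.002))   # A, fast gap -> output 0
print(node.receive("B", 0.020))   # B, slow gap -> output 0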

Tim Tyler

unread,
Dec 13, 2009, 6:21:16 PM12/13/09
to
Curt Welch wrote:
> Tim Tyler <t...@tt1.org> wrote:

>> "Researchers use much the same reasoning that
>> leads them to think that robots are important to conclude
>> that getting real-time data is important. I don't think
>> that's really right either. Correspondence problems demand
>> as much intelligence as real time problems do. Ultimately,
>> intelligent agents should be able to function in a range of
>> environments - so they should be able to solve real-time
>> problems. However, it doesn't follow that those problems are
>> particularly important during their development. Real time
>> problems have the advantage of allowing the agent to get
>> rapid feedback about the effects of its actions, but that's
>> about the extent of it."
>
> I think there's certainly some potential to create intelligent language
> machines which are not real time. However, without having real time
> perception and the ability to interact with the world in real time, what
> type of view of reality would they end up with? What would they talk about
> if you couldn't talk to them about what you did today since even our
> concept of "today" would be hard for them to grasp.

I have a recent video about this general topic:

"Tim Tyler: On embodiment"

- http://www.youtube.com/watch?v=ZV8OUVy78v0

Internet search oracles and stockmarket traders (for example) will probably
develop a good understanding of what "today" means. Stockmarket traders
in particular will need to know the date and time that stock-related
news items
refer to. As a result, they will be able to figure out what "today" means -
with a bit of suitable reinforcement. I doubt that robots will be employed
to help with this task.

Curt Welch

unread,
Dec 13, 2009, 8:04:46 PM12/13/09
to

Well, I'm fairly sure there's no reason not to make these things temporal
and very time-aware in general even if they don't have a body. You have to
be fairly time-aware to communicate verbally correctly with humans for
example. You almost have to work harder to create a machine that has no
accurate sense of time passing I suspect and I doubt there will be any
significant savings to justify the construction of such systems. If the
purpose of these machines is to help humans in some way, and they don't
correctly understand the passage of time, then it just makes it harder for
them not to make mistakes in what they are doing for us.

Clearly, timing is everything in the stock market - not only in your
investment actions, but also in understanding the dynamics of the market as
well as understanding the complex causes moving the market.

If there's any use of non-temporal AI I think it would be limited
to highly special and highly artificial domains like solving chess
problems, or solving complex mathematical problems. There's just not much
we do that doesn't have time as an important factor of the problem (not
just the passage of time in some crude form but in being able to produce
appropriate actions at the right time).

Don Stockbauer

unread,
Dec 13, 2009, 10:41:46 PM12/13/09
to

I'll agree with you there. But even a large group cannot have a fully
deterministic effect on the Earth, due to the actions of its non-
members, plus their own actions having unintended consequences.

zzbu...@netscape.net

unread,
Dec 14, 2009, 10:25:59 AM12/14/09
to
On Dec 6, 3:15 pm, casey <jgkjca...@yahoo.com.au> wrote:
> Biological brains start with a default set of connections
> which are then modified by experience.

AI is not difficult to understand.

1) If the only thing you know about computers is IBM, forward your
   resume to GM.
2) If the only thing you know about computers is Microsoft, start
   brushing up on the Donald Trump real estate market.
3) If the only thing you know about computers is Quantum Mechanics,
   get a life, get a 21st Century Digital Book, and get a job with NBC.
4) If the only thing you know about computers is IEEE, try Philosophy,
   rather than Blu-ray, Home Broadband, XML, USB, Holograms,
   Self-Assembling Robots, Desktop Publishing, mp3, mpeg, Cell Phones,
   Atomic Clock Wristwatches, Light Sticks, HDTV, Cyber Batteries,
   Self-Replicating Machines, UAVs, GPS, and Post GE-nomics.


>
> There are those who want to skip the hard slog of inventing
> those innate default circuits that allow a brain to convert
> a complex sensory input into an simpler internal model of
> the brain's environment.
>
> My hunch is we are going to have to reproduce the innate
> circuitry possessed by real brains. I suspect we are not
> this general purpose machine some imagine we are simply
> because we can do a lot of things. We will have to invent
> these innate modules or evolve them.
>
> Imagine if your robot had a visual module that converted
> the complex changing pixel values from a video camera into
> a description of a scene with objects, positions and actions.
> I think adding the bit to allow it to drive a car would become
> easier. It is the innate modules that took evolution millions
> of years to develop that are difficult not the high level
> symbolic reasoning on top.
>
> We can program our computers to do logic and math because
> it is simple compared with the complex sensory analysis
> and complex motor synthesis done by biological brains.
>
> JC

Curt Welch

unread,
Dec 14, 2009, 12:05:08 PM12/14/09
to
Tim Tyler <t...@tt1.org> wrote:
> Curt Welch wrote:
> > Tim Tyler <t...@tt1.org> wrote:
>
> >> We have seen this phenomenon before - in historical genetic takeovers.
> >> DNA did well for a while - but now it too will be replaced. May the
> >> best set of inheritance media win the battle to carry the world's
> >> information.
> >
> > Well, there's no batter or competition about "carrying the

batter? I hate re-reading what I mistyped in old messages _after_ it's
been posted and I can't fix it. :)

That was intended to be battle. I guess.

> > information". There is only the top level battle for survival of
> > structure. Which doesn't, BTW, require any replication. Replication
> > is just one the more modern technologies that evolved to help with the
> > game. Rocks, with no power to replicate at all, are still doing very
> > well at the survival game.
>
> What I mean is that different genetic materials compete with one another
> for the role of carrying the heritable information of living systems when
> they coexist.
>
> Previously, DNA displaced RNA, RNA probably displaced PNA or TNA and
> they probably displace earlier genetic materials, in a chain going back
> to the first ones.
>
> We see a similar thing with human-invented heritable media. Optical
> media have partly replaced magnetic media, which replaced punched cards.
> Now solid state memory is growing in popularity.
>
> DNA was the main medium of biological inheritance in the past - but we
> are heading towards an era where it gets replaced by multiple better
> forms of storage.

Yeah, that's all good. But again, there is no goal here of "carrying
information". There is only the top level goal of surviving. It just
happens that information based construction systems are able to evolve and
improve faster than non-information based systems.

Yes, we are developing a wide array of new technologies for storing and
transporting information, but they are all being engineered for our purpose
- which is to help humans better survive.

Because "good information storage" is not the top level goal, it's very
unclear how all this technology is going to change the top level survival
game dynamics. That game is far more complex than just carrying
information forward in time.

> > Learning machines care only about the data they believe is useful to
> > them. Most information in the world is of no use - which is why you
> > don't find it in the internet. An AI society would have a very
> > different set of information they would find useful to them, and as
> > such, the information (and the memes) they carry around and share,
> > would probably have little in common with our information.
>
> Sure. We will exist in the history books, probably. Maybe we'll be kept
> around for a while as a method of rebooting civilisation - in case of an
> asteroid strike. However, the machines of the future will probably
> descend more from human culture than from human DNA. In the
> short term, bacteria have more chance of getting their DNA preserved
> than we do - since they know how to do some pretty useful tasks.

I don't, in general, see how humans are going to give up the race for
survival to their toasters - no matter how smart the toasters are.

If in some future, the human race is facing extinction because of forces
beyond our control, I could see people deciding to build AIs that are
intended to carry on without the humans. But unless human society is gone,
I just don't see a path for the AIs being allowed to take over - or to get
_any_ real power over the humans. I don't think humans as a whole will
ever allow such a thing.

Your view has always struck me as a fairly intellectual/academic view of
purpose in life. That is, information seems to be your purpose - to create
it, to share it, to use it. That's what a scientist is trained to do - and
what the entire structure of the academic system is set up for - the
creation and preservation of knowledge to be shared and passed on to
future generations. In that world, I can understand how AIs could be very
useful to carry on the torch.

But that world is NOT the real world of humans. It's just one job that
some humans get assigned to do because it is helpful to the real purpose of
humans - survival of the human race. If you were born and raised in the
academic environment, then I can understand how you would be conditioned to
see the academic point of view as some key to the future. But you are
being deceived by the environment you were raised in.

To believe, or act, that the academic view of "purpose" is the highest goal
in the universe is to fail to understand what humans are and where we came
from. When the shit hits the fan (so to say) and times get tough, the
academic system will fall apart and we will burn all the books to try and
stay warm if that's what it takes to survive. Because the need for _human_
survival is what will always become the top priority when everything else
falls apart.

Academic research for new knowledge is one of the many things people do
when they have too much free time -- when surviving becomes so easy, we
forget how important survival is. But drop any of us off in the middle of
nowhere, where our life and health is suddenly at real risk, and none of us
would be wasting much time improving our dead language skills or debating
AI. :)

When we invent better machines with AI, what we will have, is just more of
the same of what we already have. Machines that make surviving easier for
us. We will become even more addicted to the machines than we already are
addicted to all the technology that protects us today - like our houses,
and cars, and cell phones, and computers.

The machines we build now are all our slaves. They do what we need them to
do. We take care of them only as long as it's a large net win to us.
That is, whatever we lose in terms of money, time, and resources to take
care of a machine must be paid back many times over in the value we get
from them. When we add AI technology to the mix, nothing about the balance
will change. We will only build, and use, these AI based machines if it's a
large net win in our favor.

If you build AI machines that can, and do, attempt to survive on their own,
and you make them an _equal_ part of human society, meaning you give them
the right to own property, and give them the right to vote, then we
will have a real problem. The AI machines will out-produce and outperform
humans, and we will be slowly and systematically wiped out. They will take
over control of the government, and all natural resources, and give us only
what the kindness of their hearts allow - which means they will basically
take our right to reproduce away from us, and reduce our numbers to such
low levels that we will be animals in the local zoo.

"Normal" run of the mill non-academia humans won't stand for this. They
will jump all over any suggestion of giving machines equal power to humans
long long long before it ever gets to that point.

Yes, humans will become addicted to having smart AIs to do things for them.
But because we can, and will, build smart AIs that don't have survival as
their prime motivation, but instead have the care of humans as their prime
motivation, there will be no need or purpose to give them a vote, or power
in society. The only power and desire they will be given, is the power to
respond to a request by a human.

So we will have a world, with humans clearly in charge, with lots of smart
machines serving us, and even though we have tons of machines smarter
than any human, none of them will be using their intelligence to try and
out-survive us. The creation of human and above human levels of AI won't
do anything to change the dynamics of who's in charge here.

But what happens in the long run? I don't know. It's too hard to predict.
With ever increasing advances in science and technology, we will develop
some very odd powers to modify humans both through genetics, and through
surgery after birth, and through the combining of man and machine. The more
man himself gets modified by his own technology, the harder it will become
to understand, or define, what human civilization is. Humans themselves
will evolve into something different. Whether, in time, there's any
biological component left is hard to predict.

So in the long run, humans might evolve, one small step at a time, into
what we might think of as AI machines. So I can agree that is a
possibility in the long run. But NOT because we hand over our future to
the AI machines we create to serve us - but because we transform
_ourselves_ into something very different over a long time.

Curt Welch

unread,
Dec 14, 2009, 12:29:27 PM12/14/09
to
Tim Tyler <t...@tt1.org> wrote:
> Curt Welch wrote:
> > Tim Tyler <t...@tt1.org> wrote:
> >> Curt Welch wrote:
>
> >>> Well, Tim and I went around in circles over the issue of memetic
> >>> evolution. He liked to believe memes have their own power of survival
> >>> separate from the medium they existed in. As such, he believed they
> >>> would jump from humans to AIs and leave the humans behind as if they
> >>> were a worn out old pair of shoes.
> >> Most memes already live and reproduce in machines on the internet.
> >> They are digital - and most of the copying of them is done by
> >> machines.
> >
> > Nah, that doesn't count in my view. It's like saying Microsoft word
> > when it's on a CD is the meme. The CD can't do "word". The CD is just
> > a dump of the real memes code. The real meme doesn't get created until
> > it's loaded into a machine that can produce the "word" behavior. That
> > is, until a machine becomes configured to do "word".
>
> I take an information-theory perspective on this. So, genes and memes
> are fundamentally information - and can thus be instantiated in any
> physical medium:
>
> http://alife.co.uk/essays/informational_genetics/

Yeah, it's ok to talk about them using an information perspective. And
some aspects of their survival can be understood only from that
perspective. But the far more important aspect of their survival can't be
understood from the information perspective because it doesn't happen at
the information level. It happens at the level of the hardware, which the
information perspective intentionally ignores.

> > It would be like calling a computer print out of our DNA our genes. In
> > that form, they can't produce a human body so they aren't real genes.
>
> Right. I differ - I think genes in a database are still genes. However,
> this issue is simply terminology.

Yes, and I'll change my stance on what a gene is depending on the time of
day. :) Is it the information-perspective data we are talking about, or
the implementation that allows it to happen? When we talk about the gene's
ability to survive, we can't really get the story right if we stick to the
information view of genes.

> You like to use the word one way, and
> I prefer another usage. Both usages have plenty of historical precident
> behind them - so we will just have to remember the different meanings
> we give to these words.

Except in the next post, I'll likely switch and use your definition, so you
won't be able to figure out what I'm thinking half the time. :)

Yeah, and the invention, and control, of the AI based machines are going to
add more fuel to that fire. AI could be used by a small group to try and
take control of society. And they might succeed. This might create a very
interesting change in the evolutionary path of the human race because of
it. Again, it's just hard to predict.

Because we don't have AI yet, humans are still quite valuable to each
other. We all benefit from this large productive society of billions of
humans making stuff and helping each other, and sharing survival
information with each other. Capitalism (when correctly used and
regulated) works as a way to mesh our efforts together and keep us
productive for the good of the larger society. But when AIs develop, it
won't just be the physical tasks that are replaced by the machines - all the
mental tasks we need done will be replaced by the machines as well -
leaving almost nothing that a human can do for another human, that some
machine can't do better.

When we get to that point (or even as we approach that point), things will
become very different for humans. We will all want an army of AI machines
working for us, and our ability to give a shit about other humans will
decline as our needs for their efforts decline. Human society today is
held together by our common needs and, most importantly, by our ability to
make our lives better when we work together. But the more AI we get, the
less we will need to work together on anything, and the more the fabric
that currently holds our society together will unravel. What will the end
result of that be? I have no clue. It could be a bunch of big wars
between the people that have a large army of AIs and the people that don't,
ending up with a very small human society left (a few thousand people???)
each in charge of a very large army of AI machines working to take care of
them.

Or we can take the optimistic view: maybe there will simply be large
amounts of world peace, but we as a society will vote to strongly
regulate reproduction, which will lead to a great drop in the human
population over time, again resulting in a much smaller human population
in the end. It's all very hard to predict.

But I still believe that the survival ability of any gene or meme is
_mostly_ defined by the survival power of the gene's implementation as a
whole, and far less by some intrinsic value of the information itself. I
don't think you can understand much of anything about the gene's survival
power by looking only at the information. It's like trying to answer the
question about the survival value of "1010101010111110011000111". Is that
information a good survivor or not?

casey

unread,
Dec 14, 2009, 4:14:43 PM12/14/09
to
On Dec 14, 9:56 am, c...@kcwc.com (Curt Welch) wrote:

> casey <jgkjca...@yahoo.com.au> wrote:
>> Real time as I understand it means it is happening in
>> a system at the same rate as it would in real life.
>
>
> Yeah, that's generally what I mean when I use it in this
> context. But I also mean that the machine can solve
> time related problems (in real time), like catching a
> ball thrown to it,


http://www.youtube.com/watch?v=-KxjVlaLBmk

The machine may not have learned to do this by itself.
This innate module was designed by men just as our
modules were designed by evolution.

> or driving a car.

Like catching a ball I think this will be achieved also by
innate visual hardware that can reduce the high dimension
input to a low dimension line following problem.
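
Something like this toy line follower is what I have in mind (the
threshold, gain and sign convention are all made up for the example):
the whole camera row gets reduced to a single steering number.

# Toy "reduce the image to one number" line follower.  The threshold,
# gain and sign convention are invented for the illustration.
def steering(camera_row, gain=0.1):
    # treat dark pixels (< 50) as "line"; everything else is background
    line_pixels = [i for i, v in enumerate(camera_row) if v < 50]
    if not line_pixels:
        return 0.0                      # no line in view: go straight
    center = sum(line_pixels) / len(line_pixels)
    offset = center - (len(camera_row) - 1) / 2.0
    return -gain * offset               # steer to re-center the line

row = [200] * 30 + [10, 10, 10] + [200] * 7   # line is right of center
print(round(steering(row), 3))   # prints -1.15: a correction back toward the line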


> Really any of the actions where we are required to make
> an action ahead of the effect we need - like knowing when,
> and how hard, to step on the brake pedal to make a smooth
> stop in a car at the stop sign.

A problem a robot can also deal with easily. The hard part
is not the predicting ahead of time, it is translating the
data from the sensory inputs into a real time current state
in order to compute that prediction.


> Yes we use spatial language to talk about time. And
> that's important to the fact that AI needs to solve
> time based problems how?


Spatial representation might be used in the planning and
learning stages. Once the desired action is represented
it can be used to initiate a real time action, compare the
result with the desired one, and then feed the difference back to
the real time learning system. A simple example is learning to
write with a pen. You can "visualize" on your inner virtual
paper what it should look like and the movements you would
need to make. At first it will be slow because the visual
feedback that generates the error signal goes to the high level
planning stages, but the lower real time motor systems will
learn as a result of feedback from the planner, combined with
feedback signals from the hand, to refine the gross motor
actions until they are as smooth as possible. After that the
high level planner only has to initiate a motor pattern and
the low level real time systems, like the ball catcher robot,
will execute it.
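
A crude sketch of that two-stage idea (the learning rate and the "ideal
arm" assumption are just conveniences for the demo, not a model of real
motor circuits):

# Crude two-stage sketch: the "planner" holds a desired trajectory; the
# low-level motor system starts with sloppy commands and refines them
# from the error signal.  The 0.3 learning rate, the 20-point stroke and
# the ideal-arm assumption are arbitrary choices for the demo.
import numpy as np

target = np.sin(np.linspace(0, np.pi, 20))   # the planner's desired stroke
commands = np.zeros_like(target)             # low-level motor commands

for trial in range(50):                      # practice runs
    produced = commands                      # (pretend the arm is ideal)
    error = target - produced                # planner compares and corrects
    commands = commands + 0.3 * error        # low level absorbs the correction

print(round(float(np.abs(target - commands).max()), 4))
# After practice the low level reproduces the stroke on its own, and the
# planner only needs to trigger it.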

They have actually used PET scans during the acquisition of a
skill and many parts of the brain are involved in the learning
stage but only a few parts are involved in executing it once
it has been learned.

Try writing something with one hand and then the other hand.
Notice the difference and the kinds of errors and speed of
execution possible for a particular outcome.

The high level plan is the same in both cases.


>> In a physical system, such as a brain or electronic
>> machine, giving time a spatial representation allows
>> the temporal patterns to be stored in a static form
>> and manipulated as static patterns. It allows temporal
>> patterns to be processed all at once in parallel.
>> It provides a format for seeing the past and the
>> predicted future all at once. It allows us to move
>> back and forth over a planned action. It allows a
>> high level planned action to control the execution
>> of that action.
>
>
> Those are all very good descriptions of how humans
> take advantage of writing stuff down on paper - and
> how humans tend to design machines based on how they
> themselves use paper to solve temporal problems.


>
>
> But I think that's exactly one of the reasons it's so
> hard for humans to understand how the brain works,
> because it doesn't use the "paper and pen" trick we
> like to use both with real paper and pen, and abstractly
> in how we design our machines.

Dear Curt you have no idea how the brain works so don't
start declaring it can't use the functional equivalent
to paper and pen. If there is a way then there is no
reason it could not have been done that way. If you
visualize something, including the path of a movement,
you use the same visual areas of the cortex involved
in seeing that something.


>> ____________________________
>> | | planned action
>> ______ _____ _______ ____
>> | || || || | principal movements
>> __ __ _ __ __ ___ _ _
>> | || || || || || || || | minor movements
>> ____________________________
>> |||||||||||||||||||||||||||||| real time automations
>>
>>
>> A temporal pattern generator made up of connected
>> oscillator circuits act in real time and there is
>> no spatial representations of past or future. The
>> actual rate at which these actions are taken are
>> not fixed being modulated by higher levels and by
>> position and force feedback. A good example of a
>> low level innate system that adapts in real time
>> might be the self balancing two wheeled Segway.
>
>
> Yes, and if you look at how they design those types of
> systems, it's just a signal processing feedback system,
> not something that "plans out some future in a spatial
> domain and then "plays it out" in the temporal domain.

The Segway was designed (parts selected) by the builder,
just as I have suggested innate modules in organisms are
designed (parts selected) by evolution.

I think the problem here is you _want_ it all to bubble out of
a complex network of weights with the details all being beyond
human understanding. Well maybe it doesn't work that way.
And maybe the way they are building robots now is the way it
was done when organisms evolved. Maybe we can understand how
the modules work together to make plans and learn new things.


> You are a few years behind the "curt" technology curve John.

Well not much I can do about that unless you want to provide
the details of your latest efforts.

> My best understanding at the moment, is that the processing
> function the low level node does really needs to include both
> domains. It must make output decisions based both on where
> it came from, and when, to calculate where to send it.

And where we disagree is HOW it can be done.

I see a generalized learning system evolving out of trials
done by specialist learning systems. The things they have
in common will point the way. A glaring example is the use
of heuristics used to solve the high dimensional problem
in games like chess and checkers. The question becomes how
can a system learn to generate heuristics.

JC


Curt Welch

unread,
Dec 14, 2009, 6:06:28 PM12/14/09
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 14, 9:56 am, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> >> Real time as I understand it means it is happening in
> >> a system at the same rate as it would in real life.
> >
> >
> > Yeah, that's generally what I mean when I use it in this
> > context. But I also mean that the machine can solve
> > time related problems (in real time), like catching a
> > ball thrown to it,
>
> http://www.youtube.com/watch?v=-KxjVlaLBmk

Yes, it's obvious that happens John. The PET scan was not needed.

But you are talking about a very specific and very high level type of
learning that results from one part of the brain in effect training the
other. That by no means can explain how we do learning because it fails to
explain how the first part of the brain learned it to start with. How did
the brain learn that it "wanted to draw a letter with a pen so that it
could direct the other part of the brain to learn the motion"?

> Try writing something with one hand and then the other hand.
> Notice the difference and the kinds of errors and speed of
> execution possible for a particular outcome.

Yes, but again, you are talking about a type of high level learning that
happens when one part of the brain teaches the other. That type of learning
is nearly identical to what happens when a teacher teaches us to do
something. Except in your example, one part of the brain is teaching the
other instead of an external teacher making it happen. These are all
important types of learning that must in the end be fully explained, but
they fail to get at how the brain learns without a teacher - which must
happen at some point to bootstrap the creation of new knowledge.

> The high level plan is the same in both cases.
>
> >> In a physical system, such as a brain or electronic
> >> machine, giving time a spatial representation allows
> >> the temporal patterns to be stored in a static form
> >> and manipulated as static patterns. It allows temporal
> >> patterns to be processed all at once in parallel.
> >> It provides a format for seeing the past and the
> >> predicted future all at once. It allows us to move
> >> back and forth over a planned action. It allows a
> >> high level planned action to control the execution
> >> of that action.
> >
> >
> > Those are all very good descriptions of how humans
> > take advantage of writing stuff down on paper - and
> > how humans, tend to design machines based on how they
> > themselves use paper to solve temporal programs.
> >
> >
> > But I think that's exactly one of the reasons it's so
> > hard for humans to understand how the brain works,
> > because it doesn't use the "paper and pen" trick we
> > like to use both with real paper and pen, and abstractly
> > in how we design our machines.
>
> Dear Curt you have no idea how the brain works so don't
> start declaring it can't use the functional equivalent
> to paper and pen.

Some things are so obvious, even with the little data we currently have,
that they actually can be declared.

> If there is a way then there is no
> reason it could not have been done that way. If you
> visualize something, including the path of a movement,
> you use the same visual areas of the cortex involved
> in seeing that something.

Yes, after learning to use paper and pen to solve problems, we can then use
our brain to visualize those actions to solve problems without the paper and
pen. But none of that explains how the brain learned to visualize the use
of paper and pen in the first place.

You so often fall back to the high level behaviors we are able to sense in
ourselves when we do learn as "the way we learn" instead of grasping that
your logic is completely flawed. You can't use our learned behavior to
explain how the behavior is learned. You have to explain where the
behavior came from.

But you have the answer for that don't you? You fall back to "evolution
solved that for us".

Humans have LOW LEVEL high dimension learning skills that are easily tested
for and demonstrated which you have not explained except to say it's
not possible so evolution must have "cheated" somehow instead of actually
building learning hardware that solves the hard problem.

If solving AI is nothing more than re-implementing the hard-coded behaviors
that evolution built into us as you suggest it is, the field of AI would
have made far more progress in the last 60 years than it has. A lot of
people have done what you suggest - look at what humans do at all levels of
behavior and then hard-code it into a machine to get human-like behavior.
But not a single project created that way has shown any signs of human
intelligence or human behavior - not even within the limited domain they
attacked. Chess programs don't play chess like humans.

Take the game of go as a clear example of the learning powers of the human
brain. No hard-coded go program comes close to the skill of a human at the
game. And the game of go has zero features that evolution could have
hard-coded into us to explain our skill level at a behavior which has
nothing to do with human survival for the past million years.

The pattern matching required to play GO well is a high dimension pattern
matching problem that the brain has the power to learn, and which our
computers can't yet learn.

Yeah, there's a bias here at work in that way.

But the fact that it "must" do that is obvious, and it's beyond me why you
can't see the evidence in front of your face.

Evolution is just as strong of a learning machine as the human brain is.
Given enough time, it could solve any of these problems. But that's the
rub. Evolution is slow. What a human can learn in a year, evolution takes
thousands or millions of years.

By looking at all these problems humans are good at which haven't been
around long enough to use the "evolution solved it" answer, we can tell
with ease the raw learning power evolution built into humans - such as by
looking at our performance at a game like GO, or in driving cars, or in
programming computers, or doing math. The list of stuff man can learn to
do which could not be explained by evolution hard coding the result is HUGE
- probably something like 90% of what adults can do can't be explained as
"evolution solved it for us" because the stuff we do today wasn't part of
the life of our ancestors for the past million years.

> Well maybe it doesn't work that way.
> And maybe the way they are building robots now is the way it
> was done when organisms evolved. Maybe we can understand how
> the modules work together to make plans and learn new things.
>
> > You are a few years behind the "curt" technology curve John.
>
> Well not much I can do about that unless you want to provide
> the details of your latest efforts.

I have many times. Most of my "latest efforts" have happened right here in
brain storming sessions that were documented in the posts I've written.

> > My best understanding at the moment, is that the processing
> > function the low level node does really needs to include both
> > domains. It must make output decisions based both on where
> > it came from, and when, to calculate where to send it.
>
> And where we disagree is HOW it can be done.
>
> I see a generalized learning system evolving out of trials
> done by specialist learning systems.

Not sure what you mean there. Yes, evolution probably created generalized
learning by first adding learning to some of its specialized systems.

If you want to follow that path, you are free to play with specialized
systems and add a little bit of learning to them in order to understand how
to create generalized learning.

Our argument isn't about how to find the solution (as far as I can tell)
our argument is that you don't think humans have a generalized learning
system. Or at least, my side of the argument for years now has been
nothing more than to get you to grasp the fact that it's easy to see that
humans have as a highly important aspect of their intelligence, a
generalized high dimension real time reinforcement learning ability that no
one yet knows how to duplicate.

> The things they have
> in common will point the way. A glaring example is the use
> of heuristics used to solve the high dimensional problem
> in games like chess and checkers. The question becomes how
> can a system learn to generate heuristics.

Which I believe at the low level translates to the question of how does a
generic learning machine find useful abstractions to guide both behavior
and learning.

And I've explained how that can be done to you - which means I've explained
how such a system finds heuristics. It's done by starting off with a
default algorithm for creating abstractions based on correlations in the
data, and then by re-shaping that clustering (aka abstractions, aka
heuristics) by reinforcement to maximize their usefulness.
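
A bare-bones toy of those two phases (the 1-D data, the two centers, the
reward rule and the learning rates are all invented for the illustration -
this is the shape of the idea, not my actual network):

# Toy version of "default clustering re-shaped by reinforcement".
# Everything here (1-D data, two centers, the reward rule, the learning
# rates) is invented just to show the two phases.
import random

random.seed(1)
data = [random.gauss(0.0, 0.3) for _ in range(200)] + \
       [random.gauss(3.0, 0.3) for _ in range(200)]
centers = [0.5, 2.0]              # default abstractions (rough guesses)

def nearest(x):
    return min(range(len(centers)), key=lambda i: abs(x - centers[i]))

# Phase 1: unsupervised statistics alone pull the centers toward the
# correlations (clumps) in the data.
for x in data:
    i = nearest(x)
    centers[i] += 0.05 * (x - centers[i])

# Phase 2: reinforcement re-shapes the default clustering.  Pretend only
# points above 2.5 ever lead to reward: the center that "wins" on rewarded
# points gets pulled toward them, unrewarded wins only nudge it away a bit.
for x in data:
    i = nearest(x)
    reward = 1.0 if x > 2.5 else -0.1
    centers[i] += 0.1 * reward * (x - centers[i])

print([round(c, 2) for c in centers])   # roughly one center near 0, one near 3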


> JC

casey

unread,
Dec 14, 2009, 7:28:41 PM12/14/09
to
On Dec 15, 10:06 am, c...@kcwc.com (Curt Welch) wrote:
>
> You can't use our learned behavior to explain how the
> behavior is learned. You have to explain where the
> behavior came from.
>
>
> But you have the answer for that don't you? You fall
> back to "evolution solved that for us".

Well spotted :)

> Humans have LOW LEVEL high dimension learning skills
> that are easily tested for and demonstrated which
> you have not explained except to say it's not possible
> so evolution must have "cheated" somehow instead of
> actually building learning hardware that solves the
> hard problem.
>
>
> If solving AI is nothing more than re-implementing the
> hard-coded behaviors that evolution build into us as
> you suggest it is, the field of AI would have made far
> more progress in the last 60 years than it has.

But we don't know how to re-implement the hard coded
behavior - _that_ is why there has been little progress.
We can duplicate the behavior of a student solving
a calculus problem but not the simple behaviors of
a six year old child.


> A lot of people have done what you suggest -


No they haven't, they have simply tried to do it and have
only had limited success. It is a hard problem. It took
evolution millions of years to evolve a human brain. If
there was a simple generic solution to these sensory
analysis problems why wouldn't evolution have found one?


> Take the game of Go as a clear example of the learning
> powers of the human brain. No hard-coded go program
> comes close to the skill of a human at the game.


Maybe with an innate visual grouping algorithm they may
have the beginnings of a good Go player.


> And the game of go has zero features that evolution
> could have hard-coded into us to explain our skill
> level at a behavior which has nothing to do with
> human survival for the past million years.


I doubt it. The ability to process visual data probably
had survival value.


> The pattern matching required to play GO well is a high
> dimension pattern matching problem that the brain has
> the power to learn, and which our computers can't yet
> learn.


And the solution I believe will be found in ways of
reducing the high dimensional problem to a low dimensional
problem. A human brain can deal with more complex inputs
than a frog brain but all brains are limited. Computer
programs can process complex data in a way we can't even
though we know the steps to be taken because we lack the
innate hardware of a general purpose computer.


>> I think the problem here is you _want_ it all to bubble
>> out of a complex network of weights with the details
>> all being beyond human understanding.
>
>
> Yeah, there's a bias here at work in that way.
>
>
> But the fact that it "must" do that, is obvious and
> beyond me why you can't see the evidence in front of
> your face.


http://www.skepdic.com/cognitivedissonance.html

You will find a Dilbert/Scott Adam's example here,

http://www.tdl.com/~schafer/learning.htm


> By looking at all these problems humans are good at which
> haven't been around long enough to use the "evolution
> solved it" answer, we can tell with ease the raw learning
> power evolution built into humans - such as by looking at
> our performance at a game like GO, or in driving cars, or
> in programming computers, or doing math.


Whereas you see high level learning such as learning to
ride a bike, learning to tie a knot, learning to drive a
car, learning how to weave a basket, learning how to dress
yourself, learning how to boil an egg or cook a meal,
learning how to make a brick, learning how to use those
bricks to build a house, learning how to make a fire with
sticks and so on, as being much harder than learning how
to see and do the innate things that we share with other
animals. Whereas I see those actions as being easier to
learn than learning how to see, even if there are more of
them. The quantity of novel behaviors _is not a measure
of the difficulty_ of learning those behaviors any more
than the quantity of patterns generated by a kaleidoscope
is any measure of the complexity of the kaleidoscope design.

Clearly humans have something that has pushed them over the
hump in the road to being able to generate such a variety
of novel and useful behaviors although some of it can be
seen in other apes (use of tools, social dynamics). One of
those skills is to learn from others without which we would
not be learning all those things you talk about.


> Most my "latest efforts" have happened right here in
> brain storming sessions that were documented in the posts
> I've written.


Verbal descriptions maybe but not the detail that enabled
me to write programs to test your ideas like last time.
I was able to decode your last bit using your ABCD inputs
and WXYZ outputs matrix and the normalizing etc. I did
read it a few times.


> Our argument isn't about how to find the solution (as far
> as I can tell) our argument is that you don't think humans
> have a generalized learning system.


Depends what you mean by "generalized" as you don't always
specify that. I noticed in another post you admitted the
limitations of any learning system. There is no reason to
believe we can solve ANY problem. There are also problems we
have trouble with because we are using reasoning modules
meant for solving problems we faced for millions of years
rather than academic problems.


> Or at least, my side of the argument for years now has been
> nothing more than to get you to grasp the fact that it's
> easy to see that humans have as a highly important aspect
> of their intelligence, a generalized high dimension real
> time reinforcement learning ability that no one yet knows
> how to duplicate.


I don't agree with the "high dimension" part as that is for me
an unproved assumption. Not only is the visual input high
dimension it is in fact unsolvable. There are many possible
interpretations of the sensory data but evolution has given
us innate assumptions about how the world works and that is
used to create, from the sensory input, the most likely
interpretation. It is based on millions of years of past
experience in billions of individuals and embodied in the
hardware of today's animals - including us.


> I've explained how such a system finds heuristics. It's
> done by starting off with a default algorithm for creating
> abstractions based on correlations in the data, and then
> by re-shaping that clustering (aka abstractions, aka
> heuristics) by reinforcement to maximize their usefulness.

Ok. Give a demo on a simple example. Let it find some
heuristics for playing a game of chess. Or if that is too
hard some heuristics for playing tic tac toe.


JC

Curt Welch

unread,
Dec 14, 2009, 11:27:58 PM12/14/09
to
casey <jgkj...@yahoo.com.au> wrote:

It did.

> > Take the game of Go as a clear example of the learning
> > powers of the human brain. No hard-coded go program
> > comes close to the skill of a human at the game.
>
> Maybe with an innate visual grouping algorithm they may
> have the beginnings of a good Go player.
>
> > And the game of go has zero features that evolution
> > could have hard-coded into us to explain our skill
> > level at a behavior which has nothing to do with
> > human survival for the past million years.
>
> I doubt it. The ability to process visual data probably
> had survival value.
>
> > The pattern matching required to play GO well is a high
> > dimension pattern matching problem that the brain has
> > the power to learn, and which our computers can't yet
> > learn.
>
> And the solution I believe will be found in ways of
> reducing the high dimensional problem to a low dimensional
> problem.

Well, you can't pretend the problem isn't there.

But yes, in the other direction, a network like I outline _does_ (or
attempts to) transform a high dimensional problem into a hierarchical set
of low dimension problems. In that sense, if you can find a machine design
that _solves_ the high dimension problem by fracturing it into a
reasonable number of simple problems, then you HAVE SOLVED the high
dimensional problem.

> A human brain can deal with more complex inputs
> than a frog brain but all brains are limited. Computer
> programs can process complex data in a way we can't even
> though we know the steps to be taken because we lack the
> innate hardware of a general purpose computer.

Yes, all true.

But the way you deal with high dimension problems in general, is to find a
way to fracture them into a reasonable set of simple problems. It's the way
sort algorithms work. Instead of having to compare every element in the
set to most other elements in the set to do the sort (n^2 sized problem) a
different approach turns it into an O(n log n) approach. Sorting and
searching large data sets is a very similar problem to this high dimension
learning problem and solutions have been found to make those practical by
turning what looks like an N^2 into an N log N problem. I believe the type
of network abstraction I'm working on does the same thing.
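
Merge sort is the textbook example of that kind of fracturing - nothing
to do with my network, just the standard algorithm:

# Ordinary merge sort: the n^2 "compare everything to everything" problem
# is fractured into a hierarchy of small merge problems, giving O(n log n).
def merge_sort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])        # solve two half-sized problems
    right = merge_sort(items[mid:])
    merged = []                           # then a simple linear merge
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 3, 8, 1, 9, 2]))     # [1, 2, 3, 5, 8, 9]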

Data compression is also a high dimension problem that looks at first
inspection to be impossible to solve. It's a high dimension problem if you
attempt to fracture the data into all possible strings to see which
fracturing of the dataset will produce the most compact language for
compression. But people have found multiple algorithms to get "close
enough" so they can produce good enough solutions if not perfect solutions
to the problem in O(n) time with O(log n) memory.

Keeping a binary tree sorted during insertions looks like yet another high
dimension problem that has no solution except O(n^2) again. But again,
engineering comes in with a "close enough" solution and does it in O(log n)
again simply by relaxing the requirement of keeping the tree perfectly
balanced and instead allowing subtree depths to differ by +-1.
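
You don't even need the +-1 rotation machinery to see how forgiving the
problem is. Here's a quick empirical check using plain random-order
insertion into an ordinary BST (a stand-in for the AVL idea, not an
implementation of it) - the depth stays at a small multiple of log2(n)
instead of blowing up toward n:

# Random insertion into a plain BST (no rebalancing at all) already keeps
# the depth near a small multiple of log2(n) on average.
import math, random

class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def depth(node):
    if node is None:
        return 0
    return 1 + max(depth(node.left), depth(node.right))

random.seed(0)
keys = list(range(10000))
random.shuffle(keys)
root = None
for k in keys:
    root = insert(root, k)

print(depth(root), "vs log2(n) =", round(math.log2(len(keys)), 1))
# Prints a depth of a few dozen against log2(10000) ~ 13.3 - nowhere near
# the worst case of 10000.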

We have solved many algorithm problems that looked impossible or extremely
hard at first but which turned out to have a fairly simple solution. The
brain solves this class of learning problem somehow and it's not by using a
technique which gets around the problem having to be solved. We can test
humans and see they DO have a brain that solves this problem.

It might solve it by taking advantage of some really odd properties of
neurons that can't even be duplicated with transistors even if we wanted
to. Or more likely, it's using some straight forward "trick" like all the
tricks we have found for other hard problems. Not a trick that somehow
removes the problem so we don't have to solve it, but instead, a
straightforward approach that solves it.

> >> I think the problem here is you _want_ it all to bubble
> >> out of a complex network of weights with the details
> >> all being beyond human understanding.
> >
> >
> > Yeah, there's a bias here at work in that way.
> >
> >
> > But the fact that it "must" do that, is obvious and
> > beyond me why you can't see the evidence in front of
> > your face.
>
> http://www.skepdic.com/cognitivedissonance.html
>
> You will find a Dilbert/Scott Adam's example here,
>
> http://www.tdl.com/~schafer/learning.htm

:)

The only conflicting data I'm dealing with is you. Unlike Dilbert, you
don't present any numbers to show that humans aren't solving the high
dimension learning problem nor are you able to support your claim that the
brain is doing something so it doesn't have to solve the problem. You just
keep saying "I don't think so", and "evolution did something other than
create a solution to the high dimension learning problem" - which implies
you are suggesting the problem isn't really there somehow.

I don't really grasp what you think evolution did because you can't solve a
learning problem by not building learning hardware - which is obviously
what evolution spent its 100 million years doing - developing a good
strong generic learning system that works in high dimension data spaces.

> > By looking at all these problems humans are good at which
> > haven't been around long enough to use the "evolution
> > solved it" answer, we can tell with ease the raw learning
> > power evolution built into humans - such as by looking at
> > our performance at a game like GO, or in driving cars, or
> > in programming computers, or doing math.
>
> Whereas you see high level learning such as learning to
> ride a bike, learning to tie a knot, learning to drive a
> car, learning how to weave a basket, learning how to dress
> yourself, learning how to boil an egg or cook a meal,
> learning how to make a brick, learning how to use those
> bricks to build a house, learning how to make a fire with
> sticks and so on, as being much harder than learning how
> to see and do the innate things that we share with other
> animals. Whereas I see those actions as being easier to
> learn than learning how to see, even if there are more of
> them. The quantity of novel behaviors _is not a measure
> of the difficulty_ of learning those behaviors any more
> than the quantity of patterns generated by a kaleidoscope
> is any measure of the complexity of the kaleidoscope design.

Humans can solve visual learning problems with the same generic learning
system as well. Learning to see IS one of the problems the generic
learning system solves.

And though you can certainly talk about how vision has been important for
us for the past millions of years, what you can easily do is create a high
dimension vision learning problem which has patterns unlike anything humans
would have had to deal with over the past million years, so
evolution couldn't have solved it by building specialized pattern
recognizers for these patterns, but yet, we can still see how humans using
their vision system can solve this high dimension learning problem.

In fact, you can't explain how vision works WITHOUT solving this same high
dimension learning problem.

> Clearly humans have something that has pushed them over the
> hump in the road to being able to generate such a variety
> of novel and useful behaviors although some of it can be
> seen in other Apes. (use of tools, social dynamics). One of
> those skills is to learn from others without which we would
> not be learning all those things you talk about.

Well, we teach chimps to use sign language so clearly they can learn from
others as well as we can.

I think the big difference is our language skills. We use it to guide so
much of our actions. Without it, we would be far more like a clever ape
than a human. Our larger brains gives us higher resolution context to work
with, but it's our langauge that puts us in a different dimension I think.
I think the reason we have the language skills is because part of our
generic learning hardware was re-configured to support a far longer short
term memory - a longer temporal context for the generation of action from
language. Apes can respond to language with a very short temporal context
(one or two words) but ask them to parse meaning in a long temporal context
(5 words) and they can't do it. I don't think they have enough short term
memory to correctly recognize and parse longer strings of language. They
talk about as apes not baing able to parse syntax, but I think it's not
nearly as complex as that sounds.

I don't think our basic low level generic learning hardware is different in
any significant way. I think it was just configured into a different
topology so it could support the pattern recognition of language syntax for
us, and other animals with the same basic hardware didn't get a section
tuned for parsing (and producing) language.

As you know, I talk about the temporal nature of this problem. Which means
the high dimension learning system must learn to recognize temporal
patterns that span over time. No hardware can recognize _any_ length of
pattern with equal ease. The hardware must have limitations on what it can
recognize. The more hardware there is, the more resolution it has for
"seeing" spatial and temporal patterns. The more resolution, the more
complex a pattern it can both recognize, and learn to produce a unique
behavior for. I strongly suspect the system is designed in a way that gives
it higher resolution for shorter patterns and lower resolution as the
temporal patterns get longer. Which means there is no point at which our
short term pattern recognition just dies (hits a brick wall), but instead,
the longer the pattern (the more time it spans) the more significant it
must be. Which would imply our memory for, and ability to recognize,
patterns would be inversely related to their length.

This is a basic trade off I think the hardware can be configured to deal
with between spatial resolution and temporal resolution. You can configure
it one way, and make it have very high spatial resolution for very short
temporal periods, or configure it another way to have low spatial
resolution, but span very long periods of time.
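
To make that trade off concrete, here's a toy sketch (hypothetical Python,
made-up numbers) of splitting one fixed representation budget between
spatial resolution and temporal window length:

  # Toy illustration of the spatial/temporal resolution trade off:
  # a fixed "hardware" budget has to be split between how finely each
  # moment is represented and how many moments can be held at once.

  BUDGET = 4096  # total units of representation (made-up number)

  def configurations(budget):
      """Yield (spatial_resolution, temporal_window) pairs that fit the budget."""
      for temporal_window in (1, 4, 16, 64, 256):
          yield budget // temporal_window, temporal_window

  for spatial, temporal in configurations(BUDGET):
      print(f"spatial resolution {spatial:5d}  x  temporal window {temporal:3d}")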

I think our various sensory modalities are tuned in ways like this to make
them work best for the type of data we need to deal with.

I think for humans, we got a section of our generic learning system tuned
(and cross wired) to support language, and most animals don't.

Dogs for example can be trained to respond to hand signals, which shows they
can use language as well. But their temporal pattern recognition is very
very short. You can't train them (as far as I know) to recognize a
sequence of three hand motions to "label" a trick. They don't have the
needed temporal pattern recognition powers that span the needed length of
time to do that.

It's not that their generic learning skills aren't solving the same basic
problems our generic learning skills solve for us - but it does seem to be
that we have at least one part of our brain that can parse very long
temporal patterns, and that it's that ability that gives us such strong
language powers.

I think it's the same basic learning "stuff" that has been in humans and
animals for millions of years, and that it just got configured and tuned to
support language for us, and that's what sets us so far apart from the rest
of the animals.

> > Most my "latest efforts" have happened right here in
> > brain storming sessions that were documented in the posts
> > I've written.
>
> Verbal descriptions maybe but not the detail that enabled
> me to write programs to test your ideas like last time.
> I was able to decode your last bit using your ABCD inputs
> and WXYZ outputs matrix and the normalizing etc. I did
> read it a few times.
>
> > Our argument isn't about how to find the solution (as far
> > as I can tell) our argument is that you don't think humans
> > have a generalized learning system.
>
> Depends what you mean by "generalized" as you don't always
> specify that. I noticed in another post you admitted the
> limitations of any learning system. There is no reason to
> believe we can solve ANY problem.

And it's a well known fact that learning systems can't solve _every_ problem.

Most important to the generic learning I've been talking about here for
years, is what I touched on again above. That the system must have
temporal pattern recognition, but that no system can have infinite length
pattern recognition ability. Just like a digital camera has a spatial
resolution limit controlled by the pixel count of its sensor, and a video
camera has a temporal resolution limit controlled by its frame rate, any
system that attempts to solve the generic learning problem by searching for
spatial-temporal patterns will have similar limits in both directions - that
is, a high frequency limit in terms of the shortest temporal pattern it can
detect, but also, on the other end, limits on the longest temporal pattern
it can respond to, and limits on the complexity of spatial patterns it can
respond to.

We can for example detect a fairly high resolution visual image with our
eyes. But if we are trained to push two buttons based on what pattern we
see, there will be a limit on the complexity of the image we will be able
to recognize and respond to. If one dot out of a million were missing in the
image, could we be trained to recognize it? Probably not. There are limits
in both the spatial and temporal complexity we can learn to recognize, and
that's just hardware limitations that will, at the same time, limit what
problems we can solve.

And these are limits in the processing network, not just in the raw sensory
hardware I'm talking about.

> There are also problems we
> have trouble with because we are using reasoning modules
> meant for solving problems we faced for millions of years
> rather than academic problems.

Yes, even if I'm right and it's all the same generic learning hardware,
it's still all tuned based on what was best for us over the past million
years when we were running around the forest trying to stay alive and find
food.

> > Or at least, my side of the argument for years now has been
> > nothing more than to get you to grasp the fact that it's
> > easy to see that humans have as a highly important aspect
> > of their intelligence, a generalized high dimension real
> > time reinforcement learning ability that no one yet knows
> > how to duplicate.
>
> I don't agree with the "high dimension" part as that is for me
> an unproved assumption.

It's not unproven. If you think it is, you simply still don't understand
what a high dimension learning problem is. It's proven by simple
definition.

> Not only is the visual input high
> dimension it is in fact unsolvable.

So the brain can't actually do what it does? We just think it does????

> There are many possible
> interpretations of the sensory data but evolution has given
> us innate assumptions about how the world works and that is
> used to create, from the sensory input, the most likely
> interpretation. It is based on millions of years of past
> experience in billions of individuals and embodied in the
> hardware of today's animals - including us.

It's based on the constraints in the data. Nothing else is needed to fully
understand human visual perception powers.

Or as Michael Olea said it 3 years ago:

1) Most learning, as far as I know, maybe even all learning, can be
reduced to "unsupervised learning" of probability distributions
(to use the machine learning vocabulary).

All our powers reduce to hardware that can identify, and use, the
probability distributions in the data. The perception side of the problem
only uses probability distributions over the sensory data, whereas the
reinforcement learning side of the problem tracks and uses probability
distributions of the reward over sensory-action data paths.

Generic temporal reinforcement learning is nothing more than finding a
practical solution to tracking probability distributions in high dimension
data.
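
To pin down what I mean by "tracking probability distributions of the
reward over sensory-action data paths", here's a minimal tabular sketch
(hypothetical Python). It's only feasible when the state space is tiny -
which is exactly what breaks down in the high dimension case:

  from collections import defaultdict

  # Track the running mean reward observed for each (sensory state, action)
  # pair - a tabular stand-in for "probability distributions of reward over
  # sensory-action paths".

  class RewardTracker:
      def __init__(self):
          self.mean = defaultdict(float)   # estimated mean reward per (state, action)
          self.count = defaultdict(int)    # times each pair has been seen

      def update(self, state, action, reward):
          key = (state, action)
          self.count[key] += 1
          # incremental running-mean update
          self.mean[key] += (reward - self.mean[key]) / self.count[key]

      def best_action(self, state, actions):
          return max(actions, key=lambda a: self.mean[(state, a)])

  tracker = RewardTracker()
  tracker.update("light_on", "press_lever", 1.0)
  tracker.update("light_on", "press_lever", 0.0)
  tracker.update("light_on", "wait", 0.0)
  print(tracker.best_action("light_on", ["press_lever", "wait"]))  # press_lever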

> > I've explained how such a system finds heuristics. It's
> > done by starting off with a default algorithm for creating
> > abstractions based on correlations in the data, and then
> > by re-shaping that clustering (aka abstractions, aka
> > heuristics) by reinforcement to maximize their usefulness.
>
> Ok. Give a demo on a simple example. Let it find some
> heuristics for playing a game of chess. Or if that is too
> hard some heuristics for playing tic tac toe.

Well, that's a fair request, but not one I have a working solution to hand
you, or the time to try and create one. But the problem that needs to be
demonstrated is the generic problem of creating useful mid-layer nodes in a
multilayer hierarchical network. How those middle layer nodes get
configured are the abstractions a network uses to compute its outputs, and
are the fundamental power such a system has to create useful abstractions.

The simple stuff I've had working in the past never really tested its
ability to create good abstractions which were shared for multiple learned
behaviors. Which is what it needs to do to work. That is, if the
hierarchical network is going to learn a lot of different behaviors, it has
to automatically create good mid-layer abstractions that support all the
different behaviors it's learned. Even if the behaviors are very trivial
sorts of stimulus-response tests, it would be interesting and good to see
how good it was at creating shared abstractions.

If you are not following what I'm talking about here, think of the simple
problem of training a circuit to perform the full adder function - three
binary inputs and two binary outputs. Each different input/output pair in
the truth table can be thought of as a different behavior the system will be
trained to perform. A system could learn it by using a full decoding of
all possible inputs with a look up table and then "learn" what each output
for each input should be. That would be a solution that did _not_ learn
any abstractions to solve the problem.

When we hard wire a solution to such a problem, we use multiple layers of
logic gates. Those middle layer signals are the "abstractions" created by
that circuit in order to produce the correct final answer. A learning
system that can solve high dimension problems can't use the "full look-up
table" approach, because that's the definition of a high dimension problem -
one which has too many states to make it possible to use different hardware
for each state. So to solve these problems, it must create mid-layer
abstractions.
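
To make the full adder example concrete, here's a sketch (Python) of the
two approaches side by side - the "no abstraction" full lookup table, and
the layered version whose intermediate signals are the abstractions shared
by both outputs:

  # Full adder two ways.  Inputs a, b, cin; outputs (sum, carry_out).

  # 1) Full lookup table: one entry per input state, no shared abstractions.
  LOOKUP = {
      (0, 0, 0): (0, 0), (0, 0, 1): (1, 0),
      (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
      (1, 0, 0): (1, 0), (1, 0, 1): (0, 1),
      (1, 1, 0): (0, 1), (1, 1, 1): (1, 1),
  }

  def adder_lookup(a, b, cin):
      return LOOKUP[(a, b, cin)]

  # 2) Layered gates: the mid-layer signals p and g ("propagate" and
  #    "generate") are abstractions reused by both outputs.
  def adder_layered(a, b, cin):
      p = a ^ b
      g = a & b
      return p ^ cin, g | (p & cin)

  assert all(adder_lookup(a, b, c) == adder_layered(a, b, c)
             for a in (0, 1) for b in (0, 1) for c in (0, 1))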

Testing specifically for a network design's ability to create good
abstractions is an obvious thing to help guide the search for better
learning networks, but something I've not thought to explicitly test for -
at least not lately. I'll have to think a bit more about that. It might
help me get some new insight into "fixing" my last designs.

zzbu...@netscape.net

unread,
Dec 15, 2009, 1:10:26 AM12/15/09
to
On Dec 14, 11:27 pm, c...@kcwc.com (Curt Welch) wrote:

Well, it's nowhere near as complex as that, since physicist cranks
with computers almost always make the idiot assumption:

syntax <-> electron microscopes, which is why the people with
greater than brick cerebral abilities invented Blu-ray and desktop
publishing.

And MDs almost always make the idiot assumption:
syntax <-> finger prints, which is why the people with greater than
amoeba computer skills invented atomic clock wristwatches, light
sticks, HDTV, compact fluorescent lighting, and USB.

And Psychologists almost always make the idiot assumption:
syntax <-> spectral components, which is why the non-sycophants
invented digital books, all-in-one printers, and flat screen
software debuggers.

And Chemists generally make the arbitrary assumption:
syntax <-> oxygen, which is where GPS, MP3, and MPEG come from.

And Lawyers always make the assumption:
syntax <-> word processors, which is where rapid prototyping and the
21st century come from.

And computer manufacturers always make the assumption:
syntax <-> Exxon, which is where holographics originate from.


> I don't think our ...
>

casey

unread,
Dec 15, 2009, 3:15:56 AM12/15/09
to
On Dec 15, 3:27 pm, c...@kcwc.com (Curt Welch) wrote:
>
> ... if you can find a machine design that _solves_ the high
> dimension problem by fracturing it into a reasonable number
> of simple problems, then you HAVE SOLVED the high dimensional
> problem.


It is just the way you are looking at it that seems to be the
issue. "Fracturing it into a reasonable number of simple
problems" is all I have ever said. You just view it as solving
the high dimension problem; I see it as showing the high
dimension problem wasn't really all that high, otherwise you
couldn't reduce it to a reasonable number of simpler problems.

The highest dimensioned system is where every element has a direct
and immediate effect on every other element. (see Ashby 4/20).

If you bother to read An Introduction to Cybernetics by Ross
Ashby you should find he covers all these issues of complexity
as it is a key subject in the book. The connection of a generic
learning machine amounts to what is covered in the chapter on
The Black Box.


> I don't really grasp what you think evolution did because
> you can't solve a learning problem by not building learning
> hardware - which is obviously what evolution spent its 100
> million years doing - developing a good strong generic
> learning system that works in high dimension data spaces.

Well it is not obvious to me that evolution spent 100 million
years developing a strong learning system for a high dimensional
data space. On the contrary it spent 100 million years reducing
the high dimensional problem by building circuits to reduce it
to a low dimensional problem, starting in the retina, so low
dimensional learning could take place in the life of the owner.

It is not that I think it is impossible only that it doesn't
make sense from a practical point of view to learn stuff that
every animal over millions of years would have found useful
from the word go.


> And though you can certainly talk about how vision has been
> important for us for the past millions of years, what you can
> easily do is create a high dimension vision learning problem
> which has patterns unlike anything humans would have had to
> deal with over the past 100 million years, ...


Really? Show me one of these images.


>> Not only is the visual input high dimension it is in fact
>> unsolvable.
>
>
> So the brain can't actually do what it does? We just think
> it does????

I was thinking of this,

http://cbcl.mit.edu/people/poggio/journals/marroquin-poggio-JASA-1987.pdf


JC

Curt Welch

unread,
Dec 15, 2009, 1:23:09 PM12/15/09
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 15, 3:27 pm, c...@kcwc.com (Curt Welch) wrote:
> >
> > ... if you can find a machine design that _solves_ the high
> > dimension problem by fracturing it into a reasonable number
> > of simple problems, then you HAVE SOLVED the high dimensional
> > problem.
>
> It is just the way you are looking at it that seems to be the
> issue? "Fracturing it into a reasonable number of simple
> problems" is all I have ever said. You just view it as solving
> the high dimension problem, I see it as showing the high
> dimension problem wasn't really all that high otherwise you
> couldn't reduce it to a reasonable number of simpler problems.

Well, yes and no. Fracturing the problem using a clever approach is not
the same thing as "it wasn't really a high dimension problem to start
with". There's a big difference between solving the problem which is
there, and pretending there's some way to look at it where the problem
isn't what it is.

It's high dimension for one very simple reason. The state space of the
universe is too large to represent in any machine which is part of the same
universe. It's so large, you can't even come close to representing the
state space of even the stuff near you in the universe (such as the stuff
in my office around me). Which means you have to create an internal
representation which is some set of abstractions of the real state space -
some abstractions which are useful for producing behaviors that maximize
rewards. The hard part of the problem is how do you find the right set of
abstractions - a useful way to parse the universe into simpler concepts if
you will - out of a set of possible parsings that's so large it's
effectively infinite. That's what makes it a high dimension problem.

Even if your claim is that evolution created the abstractions - or at least
some base set of abstractions - you still haven't explained how evolution
created it because even a billion years is not long enough to make this
problem possible to search in a linear search.

> The highest dimensioned system is where every element has a direct
> and immediate effect on every other element. (see Ashby 4/20).

Well, and the universe is basically like that because of EM and
gravitational effects that make every particle interact with every other
particle. The difference is that the effect is not immediate. It's always
delayed by time due to the speed of light - but not very delayed for nearby
particles.

> If you bother to read An Introduction to Cybernetics by Ross
> Ashby you should find he covers all these issues of complexity
> as it is a key subject in the book. The connection of a generic
> learning machine amounts to what is covered in the chapter on
> The Black Box.

I'll have to find that chapter.

But there's nothing about the _problem_ that I don't already fully
understand. What we need to find, is an implementation that solves the
very well defined problem - or far more likely - one that produces a highly
useful result without producing a perfect result.

All this is also well defined in Marcus Hutter's work on AGI...

http://www.hutter1.net/official/publ.htm

> > I don't really grasp what you think evolution did because
> > you can't solve a learning problem by not building learning
> > hardware - which is obviously what evolution spent its 100
> > million years doing - developing a good strong generic
> > learning system that works in high dimension data spaces.
>
> Well it is not obvious to me that evolution spent 100 million
> years developing a strong learning system for a high dimensional
> data space. On the contrary it spent 100 million years reducing
> the high dimensional problem by building circuits to reduce it
> to a low dimensional problem starting in the retina so a low
> dimensional learning could take place in the life of the owner.

Well, that's where you are just wrong. And that's what I've been debating
with you. This sort of problem can't be reduced to a simple low dimension
problem. What you are suggesting is simply impossible and that's my issue
with your view. If you spend more time studying the problem, you would
understand this. Without knowing anything about what the brain is doing,
we can see that humans can and do solve high dimension problems in their
learning ability.

Evolution is like your magic fairy dust - you sprinkle it on any hard
problem and the problem just goes away for you.

Marcus Hutter and most of the field of AGI are highly focused on this very
issue, and they are doing a lot of good work to formally define and try to
attack this issue. The one thing they have never done is try to claim
that evolution solved it by somehow skirting the issue so as to not need to
solve it.

> It is not that I think it is impossible only that it doesn't
> make sense from a practical point of view to learn stuff that
> every animal over millions of years would have found useful
> from the word go.

But that's just the point, no animal in the past billion years needed to
learn a large and useful set of abstractions for evaluating chess board
positions, or GO board positions. We don't have such hardware in us and
the fact you keep trying to claim we do is what's so absurd here. The
abstractions needed to play those games well had to be learned on the fly
in a human.

We don't have pre-wired abstraction networks to evaluate the quality of
chess board positions. We don't have anything pre-wired in our brain that
helps us play chess - except maybe some visual and spatial hardware that
allows us to recognize the board position itself (such as what's in front
of what etc). I don't believe that sort of stuff is pre-wired either, but
it's fair to argue that sort of visual-spatial understanding could be innate
in us, since visual-spatial understanding of a 3D world is a 100 million
year old problem for us.

But once the innate hardware decodes the raw visual data and creates an
internal representation of a chess board configuration, then what? How
does the brain go from innate chess board positions, to a high quality
evaluation function?

Say we start with very high dimension visual data, and assume (because you
like to) that we have innate hardware to break it down to a very small
amount of data that compactly represents a chess board position. It's just
a few handfuls of bits to represent a chess board position - a couple of
hundred maybe? But a couple of hundred bits is STILL A VERY VERY HIGH
DIMENSION LEARNING PROBLEM. It's high dimension because there are so many
chess board positions that we can't even come close to tracking the
statistical worth of every chess board position, and even if you could, you
can't explain how we learn to play so well, so fast, for chess board
positions we have never seen before.

The only way to solve these high dimension problems is to create
abstractions that represent important features of the board, features that
are used in the evaluation function we hone as we improve our game playing
skills through practice. Good chess players can look at a board position and
instantly just "feel" how good or bad the position is. He can look at how
the position will change based on a few possible moves, and instantly just
"feel" which moves make the position better, and which make it worse.

Though our chess programs use evaluation functions as well, they are never
as good as the instincts of a well trained human chess player. The only
way they are able to compete with humans is to use very high speed tree
searching to make up for the lack of quality of their board evaluation
function.
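
A sketch of what I mean by an evaluation function built from abstractions
(hypothetical Python - the features and weights are invented purely for
illustration, not how any real program does it):

  # A board evaluation computed from a handful of abstract "features"
  # instead of a value stored per position.

  def material_balance(board):      # invented feature extractors - stand-ins
      return board["my_material"] - board["their_material"]

  def mobility(board):
      return board["my_moves"] - board["their_moves"]

  def king_safety(board):
      return board["my_king_shield"] - board["their_king_shield"]

  FEATURES = [material_balance, mobility, king_safety]
  WEIGHTS  = [1.0, 0.1, 0.5]        # in a learning system these would be tuned

  def evaluate(board):
      """Value of a position = weighted sum of its abstract features."""
      return sum(w * f(board) for w, f in zip(WEIGHTS, FEATURES))

  position = {"my_material": 39, "their_material": 36,
              "my_moves": 28, "their_moves": 31,
              "my_king_shield": 2, "their_king_shield": 1}
  print(evaluate(position))   # a single "how good does this feel" number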

TD-Gammon used a very high quality evaluation function which was learned by
experience. And in the domain of Backgammon, the evaluation function was
roughly equal in quality to the best human board evaluation functions.
Which shows how close we can come to matching the one thing humans do
better than our machines - create complex abstract evaluation functions
that guide not only all our actions, but also all our secondary learning.

What TD-Gammon is lacking is the ability to create the structure of the
evaluation function on its own. That was not learned in TD-Gammon, it was
hard coded by the design - doing the type of thing you say evolution is
doing for us. Which can explain why our networks are good at taking raw
visual data and turning it into a simplified representation of 16 black
objects and 16 white objects on an 8x8 grid board. But that's where
evolution's hard-coded simplification stops - which is still way short of
"easy". It's still a very high dimension generic learning problem that
needs to be solved.
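
To be precise about what was and wasn't learned there, here's a minimal
TD(0)-style sketch (hypothetical Python, not TD-Gammon's actual code): the
weights over a fixed set of features get tuned by experience, but the
feature structure itself is hand coded - and that's exactly the part that
still needs to be learned:

  import random

  # Linear evaluation over a fixed, hand-coded feature set.  Only the
  # weights are learned from experience; the features are given up front.

  NUM_FEATURES = 4
  weights = [0.0] * NUM_FEATURES
  ALPHA, GAMMA = 0.1, 0.9

  def value(features):
      return sum(w * x for w, x in zip(weights, features))

  def td_update(features, reward, next_features):
      """Move this state's value toward reward + discounted next value."""
      td_error = reward + GAMMA * value(next_features) - value(features)
      for i in range(NUM_FEATURES):
          weights[i] += ALPHA * td_error * features[i]

  # Fake experience stream, just to show the update running.
  for _ in range(1000):
      s  = [random.random() for _ in range(NUM_FEATURES)]
      s2 = [random.random() for _ in range(NUM_FEATURES)]
      td_update(s, 1.0 if s[0] > 0.5 else 0.0, s2)

  print(weights)   # feature 0 should end up carrying the most weight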

And what it must do, is create a set of abstractions, or features, that are
useful for accurate predictions of the value of the stimulus signals. That
is the only way this problem can be solved as far as I can see, and the way
that the brain must be solving it. It has a generic learning algorithm
that finds a good set of intermediate features.

My pulse sorting networks that you know about worked by making each node in
the network a different "feature" of the sensory data. Each layer is a
different fracturing of the features from the previous layer. Or you could
say each layer was a different "view" of the sensory data. My intent with
that approach was for the network to automatically fracture the data into
different features, and then test them to find what worked best. And it
actually does that quite well. But what it got wrong, was that it didn't
create "good" features to start with because it wasn't using a feature
creation algorithm that correctly made use of the correlations
(constraints) in the data. It is better understood as creating random
features. And "random" features don't cut it in a search space as large as
the one that must be solved. The basic approach however of creating some
fracturing of the space and adjusting it to improve performance all worked
and allowed that design to solve some interesting (but simple) problems
using the approach.

But to solve the hard problems, I've got to figure out a way to create a
network that does a better job of using the constraints in the data to
create the default fracturing.

> > And though you can certainly talk about how vision has been
> > important for us for the past millions of years, what you can
> > easily do, is create a high dimension vision learning problem
> > which has patterns unlike anything humans would have had to
> > deal with over the past 100 million years, ...
>
> Really? Show me one of these images.

Look at a billion different GO board positions. That is a set of images
that evolution didn't have to build feature evaluators for. Our innate
hardware at best stops as I said above, at the point of recognizing object
locations on a 2D grid. It can't tell us the value of one position over
the other after that. Is a given configuration a good sign for getting
food and avoiding pain, or a bad sign? That is what our generic learning
brain can learn through experience, and it doesn't have to see every board
position to learn to do that. It can see board positions in configurations
it's never seen before, and still do a good job of estimating the worth of
the position, and estimating the worth of any potential move.

Simple RL algorithms in low dimension problem spaces don't use an
evaluation function to estimate the worth of a position. They just use a
different variable to recognize the value of a state of the environment, aka
a board position. But with high dimension problems, that's not possible.
High dimension problems are hard because there are too many possible states
of the environment to make it possible to represent every state with a
different variable, and because even if you could, it would make learning
far too slow, because it would require the agent visit every state many
times before it could converge on a half way decent evaluation of that
state. So the only way to solve these problems is to use an evaluation
function that computes the value from the inputs, instead of storing
separate values. It works by abstracting common features out of the state
and then assigning value to the features, and then computing the value of
the entire state based on the value of the features.

And like I've said many times, we have made this approach work very well in
games like TD-Gammon. But it only worked well because the abstractions
used in that evaluator happen to work well for that problem.

And evolution could have created custom evaluators for problems we have
faced for the past 100 million years. But it didn't create a custom
evaluator for Backgammon, or Chess, or GO. Those had to be created on the
fly, in response to us being exposed to that environment.

The ability to _learn_ high quality abstractions on the fly, is what we are
missing from our current learning systems, and what the brain is able to
do. And it's how it solves these high dimension learning problems far
better than any software we have yet developed. And it's the main thing
that's wrong with my last network design - though my design created
abstractions automatically, and even tuned them to fit at least one
dimension of the constraints in the data, it didn't produce good enough
abstractions to solve more interesting problems.

> >> Not only is the visual input high dimension it is in fact
> >> unsolvable.
> >
> >
> > So the brain can't actually do what it does? We just think
> > it does????
>
> I was thinking of this,
>
> http://cbcl.mit.edu/people/poggio/journals/marroquin-poggio-JASA-1987.pdf

So what part of that article are you talking about when you say it's
unsolvable?

Much of that article I don't really understand because I've not studied the
work in vision, so I don't know what most of the basic terms they are using
mean, such as "optical flow" to list one of many. There is clearly a lot
of good work in that sort of research that would be useful to me if I spent
the time to master it.

In section 1.4 I find the sentence:

Standard regularization methods lead to satisfactory solutions of early
vision problems but cannot deal effectively and directly with a few
general problems, such as the discontinuities and the fusion of
information from multiple modules.

This strikes me as once again they are failing to realize that a general
solution is needed for the entire vision processing stack. The processes
they refer to as early vision are solving the fusion problem at the low
level, because all the low level processes they talk about are fusion
processes - they extract properties by some processes of fusing lower layer
data together.

And what they seem to be talking about there are hard coded solutions to
the early vision problems that provide no insight into solving similar
problems higher in the hierarchy.

They do however head in what strikes me as the right direction as they
explore starting in section 1.5 a more general statistical solution to the
early vision problems using Bayes estimation and Markov random field models
(which I do not fully understand). But it's a general statistical approach
to the problem which I strongly agree with and believe we must find to
solve this same problem at all levels of the data processing hierarchy.

On the question of the ill posed problems. I don't fully understand their
point there because I don't know anything about the background work in the
field they are making reference to. But the gist of what they are
suggesting seems clear. That is, if you try to find a solution to some of
the early vision problems, the definition of what you are trying to find
has to be specified in a way that creates only one possible solution. But
when we define a problem something like "edge detection" we fail to
define it in a way that creates only a single solution. There are lots of
ways to attempt edge detection from the ill posed specification of the
problem.

If you are suggesting it's "impossible to solve" because of this ill posed
nature I think you are not grasping the true meaning of ill posed in this
context. The way they have currently tried to define what the problem IS
is ill posed. That doesn't say anything about whether the problem, when
defined another way, becomes well posed.

In all the solutions touched on in the paper, even the more general
statistical approach they outline, they make reference to the need for prior
knowledge. It's referred to almost as if it were a known requirement for
this class of problem. That is, in order to know how to process the data,
you have to make assumptions about the fact that it is a 2D representation
of a 3D world in order to make any progress in doing the processing.

I strongly suspect that prior knowledge is not needed. I believe all the
data needed to solve the problem, is actually contained in the sensory data
itself. But it's mostly contained in the temporal data available only if
you apply statistics to how the images change over time, instead of trying
to apply them to a single static picture. I believe our vision system
learns how to decode images based on how they change over time, not on how
they look at one instant in time. I think the solution requires that we
parse the data into abstractions that are useful in making temporal
predictions - in making predictions about how the real time image will
change over time.

If the image we are looking at, is an object (like a ball) sitting on the
floor, we need a process that will associate the pixels that make up the
ball as an important feature of the image, and another clustering all the
background pixels together. But we can see why that needed to be done
simply to create good predictions about how the image will change. When
the ball moves relative to the background, all the pixels that were
clustered together to define the "ball" are likely to change at the same
time. And all the pixels clustered together as the background are likely
to change at the same time (when the camera moves). So the clustering we
think of as "objects" can also be seen as a useful clustering based on the
predicted correlations between data points. The ball pixels have a high
degree of expected correlations in how they change over time, and the
background pixels have a separate, but high correlation, to how they are
likely to change over time.

I believe the correct parsing of data can all be explained by a network
that optimizes how it does the parsing, based on maximizing its predictive
powers over the data - which in reverse, just means parsing it based on
maximal correlations.
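
A crude sketch of that idea (hypothetical Python): group pixels by how
strongly their frame-to-frame changes correlate, so "ball" pixels and
"background" pixels fall into separate clusters. This is only meant to
illustrate the principle, not to be a workable vision system:

  import numpy as np

  # Cluster pixels by the correlation of their changes over time.
  # frames: array of shape (num_frames, num_pixels)

  def cluster_by_temporal_correlation(frames, threshold=0.8):
      deltas = np.diff(frames, axis=0)     # how each pixel changes frame to frame
      corr = np.corrcoef(deltas.T)         # pixel-by-pixel correlation of changes
      clusters, unassigned = [], set(range(frames.shape[1]))
      while unassigned:
          seed = unassigned.pop()
          group = {seed}
          for p in list(unassigned):
              if corr[seed, p] > threshold:    # changes together -> same "object"
                  group.add(p)
                  unassigned.remove(p)
          clusters.append(sorted(group))
      return clusters

  # Tiny fake example: pixels 0-2 move together (the "ball"),
  # pixels 3-5 move together (the "background").
  rng = np.random.default_rng(0)
  ball = rng.normal(size=(50, 1)) @ np.ones((1, 3))
  background = rng.normal(size=(50, 1)) @ np.ones((1, 3))
  frames = np.cumsum(np.hstack([ball, background]), axis=0)
  print(cluster_by_temporal_correlation(frames))  # 0-2 and 3-5 in separate groups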

From thinking of the problem as a generic learning problem, instead of just
a "reverse optics" problem, we can see we need to solve the same problem
for all our sensory domains. The vision might be reverse optics until it
gets out of the eye, but it's "reverse physics" as well. It's all about
predicting and understanding how the environment we sense with our sensors
is likely to change over time.

There are an infinite number of ways to parse something like vision data.
So when is one way of parsing it better than another? The easy answer is
that one way made us "smarter" and helped us survive, so evolution tuned the
system to parse it in the way that was most useful to us for survival.
Which is almost bound to be true to some degree. But I think we can do far
better than that by understanding that all this parsing is done for a very
specific purpose - the purpose of driving behavior that needs to be
predictive in nature. That is, behavior which is able to "do the right
thing" before it's needed, instead of after. You can't wait until you get
to the stop sign to send the signal to the foot to step on the brake. It
has to be sent ahead of time. Almost all actions work like that. We have
to create a behavior in response to what our sensors are reporting now, in
order to make the future unfold in ways that are better for us.
Reinforcement learning algorithms in general all solve this problem
already. But what makes it harder in these high dimension problems is that
we can't use the "real" state of the environment to make predictions with.
We are forced to use a system that creates abstractions to define our
understanding of the state. And the unsolved problem here is how to best
create a good set of abstractions for our internal representation of the
state of the environment. How the system parses an image is the same
problem. It's the problem of what internal representations are most useful
for that internal state. And a key part of that answer, is that we need
internal state representations that are good predictors of the future. So
we can adjust how the system is parsing, based on how good any given
parsing is at predicting the future - at predicting how the sensory data
will change over time. The ball vs background parsing is useful because
it's more likely that the ball will move relative to the background, than
it is that the ball will split in half with half of the pixels moving off
to the left and the other half moving off to the right. I believe this
parsing problem is, and must be, solved based on a statistical system that
forms itself into the best set of temporal predictors it can. That it
works by converging on higher quality abstractions based on how good a
given abstraction is at predicting how the data will change over time.

I see the big problem here as trying to understand how to correctly pose
this problem so it becomes well posed, across all sensory domains, instead
of ill-posed in one small part of one vision domain. And my hand waving
description of parsing so as to minimize spatial and temporal correlations
is a loose attempt to do that, but not a well posed strong mathematical
definition of what is needed. And certainly without a well defined
problem, trying to find the solution is very hard. Most of my rambling and
brainstorming work here is trying to find a way to talk about this problem
which makes it well posed.

casey

unread,
Dec 15, 2009, 5:26:13 PM12/15/09
to
On Dec 16, 5:23 am, c...@kcwc.com (Curt Welch) wrote:
>
> ...
>
> This sort of problem can't be reduced to a simple
> low dimension problem. What you are suggesting is
> simply impossible and that's my issue with your view.
> If you spend more time studying the problem, you
> would understand this. Without knowing anything
> about what the brain is doing, we can see that humans
> can and do solve high dimension problems in their
> learning ability.


Clearly there is a communication or viewpoint problem here.
I will leave it as irresolvable at this stage.


> Evolution is like your magic fairy dust - you sprinkle
> it on any hard problem and the problem just goes away
> for you.

The programs to process visual data are not fairy dust.
They either work or they don't. When I wanted my robot
to locate the target in a "complex world" from a "complex
array of pixel values" I simply wrote a filter that did
just that. I didn't bother to deal with the complexity.
It is a pragmatic practical approach that I believe
evolution used as well. The module that produces blobs
and the module that classifies blob features and the
module that deals with the spatial relationship between
the blobs can also be used for a potentially infinite
combination of blob patterns. It is not, as you seem to
imagine, I am suggesting that there is a module for every
new pattern. What I am saying is that modules designed
for one thing, target location, can be used for other
patterns as well.

The reason this works is because despite all combinations
of pixel values there are some things that remain invariant
between frames containing the target pattern.


>>> And though you can certainly talk about how vision has
>>> been important for us for the past millions of years,
>>> what you can easily do, is create a high dimension
>>> vision learning problem which has patterns unlike
>>> anything humans would have had to deal with over the
>>> past 100 million years, ...
>>
>> Really? Show me one of these images.
>
> Look at a billion different GO board positions.


Curt, this is not new. The _spatial arrangements_ of objects -
rocks, bushes, trees, water, mounds, slopes and so on - were
all used over millions of years for location identification.
The same mechanisms for recognizing similar arrangements of
these things can be used to recognize similar arrangements
of stones on a GO board. Indeed, as I wrote earlier, one of the
reasons the GO game may be hard to code is because we rely
heavily on these innate visual skills to play the game, and
as you know visual recognition has been much harder than
finding heuristics for a game of chess.

Back in my C64 days I had a touch tablet that would read
out the x,y position much like the laptop touch pad. So
I decided to write a module that would recognize the hand
written numbers 0 to 9 as a simple first project. The
end result was a module that could learn ANY set of pen
strokes, just as a visual system designed for recognizing
spatial patterns in the natural world can equally well
deal with man made patterns that were never seen before
by animals in the past.
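
The general idea can be sketched like this (hypothetical Python, nothing
like the original C64 code - just the same spirit): resample each recorded
stroke to a fixed number of points, store labelled examples, and classify
a new stroke by its nearest stored example.

  import math

  N_POINTS = 16

  def resample(stroke, n=N_POINTS):
      """Reduce a list of (x, y) samples to n roughly evenly spaced points."""
      if len(stroke) < 2:
          return stroke * n
      step = (len(stroke) - 1) / (n - 1)
      return [stroke[round(i * step)] for i in range(n)]

  def distance(a, b):
      return sum(math.dist(p, q) for p, q in zip(a, b))

  templates = []   # list of (resampled_stroke, label)

  def learn(stroke, label):
      templates.append((resample(stroke), label))

  def classify(stroke):
      s = resample(stroke)
      return min(templates, key=lambda t: distance(s, t[0]))[1]

  learn([(0, 0), (0, 5), (0, 10)], "1")   # a vertical stroke
  learn([(0, 10), (10, 10), (10, 5), (0, 5), (0, 0), (10, 0)], "2")
  print(classify([(1, 0), (1, 4), (1, 9)]))   # most likely "1"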

However, strictly speaking, when you say something was never
seen before, at the level of the pixel that is true for every
frame. What you really mean is these image frames are of
shapes never seen before; their features however are still
made up of edges and surfaces and so on, used for millions
of years.

Now if you were to try something that really was new - say the 2D
arrangement of pixels resulting from a camera viewing a
4D world with ever changing rules of interaction - then I
suspect the brain would not cope.


> That is a set of images that evolution didn't have to build
> feature evaluators for. Our innate hardware at best stops
> as I said above, at the point of recognizing object locations
> on a 2D grid.


Exactly where our innate hardware stops I would suggest is
for experimentation and observation of real brains to decide.

As I wrote, I don't reject the idea of learning from the ground
up, as that is exactly what evolution did; I only point out that
there is a tradeoff in resources and time. Evolution would have
selected the best combination for each species. Predators for
example have to learn more than grazers.


JC

Tim Tyler

unread,
Dec 15, 2009, 7:13:52 PM12/15/09
to
Curt Welch wrote:
> Tim Tyler <t...@tt1.org> wrote:

>> Sure. We will exist in the history books, probably. Maybe we'll be kept
>> around for a while as a method of rebooting civilisation - in case of an
>> asteroid strike. However, the machines of the future will probably
>> descend more from human culture than from human DNA. In the
>> short term, bacteria have more chance of getting their DNA preserved
>> than we do - since they know how to do some pretty useful tasks.
>
> I don't in general, see how humans are going to give up the race for
> survival to their toasters - no matter how smart the toasters are.

Why call future machines "toasters"? It seems derogatory. These
agents will be smarter, stronger and better than us in every way.
I prefer the term "angels" for them.

> If in some future, the human race is facing extinction because of forces
> beyond our control, I could see people deciding to build AIs that are
> intended to carry on without the humans. But unless human society is gone,
> I just don't see a path for the AIs being allowed to take over - or to get
> _any_ real power over the humans. I don't think humans as a whole, will
> ever allow such a thing.

...whereas I think the humans will do it deliberately.

> Your view has always struck me as a fairly intellectual/academic view of
> purpose in life. That is, information seems to be your purpose - to create
> it, to share it, to use it. That's what a scientist is trained to do - and
> what the entire structure of the academic system is set up for - the
> creation and preservation of knowledge to be shared and passed on to
> future generations. In that world, I can understand how AIs could be very
> useful to carry on the torch.

Information is part of what biology is about. Genes and heritable
information. The other part is phenotypes.

> But that world is NOT the real world of humans. It's just one job that
> some humans get assigned to do because it is helpful to the real purpose of
> humans - survival of the human race. If you were born and raised in the
> academic environment, then I can understand how you would be conditioned to
> see the academic point of view as some key to the future. But you are
> being deceived by the environment you were raised in.

Right - well this gets us into "ad hominem" territory. You are speculating
about psychoanalytic reasons for me thinking as I do. It is usually
best to avoid that kind of angle and stick to the actual arguments, IMO.

> To believe, or act, that the academic view of "purpose" is the highest goal
> in the universe is to fail to understand what humans are and where we came
> from. When the shit hits the fan (so to say) and times get tough, the
> academic system will fall apart and we will burn all the books to try and
> stay warm if that's what it takes to survive. Because the need for _human_
> survival is what will always become the top priority when everything else
> falls apart.

I don't really see how my view is "academic". All I did was use some
information theory in my argument. Also, the whole issue of whether my
views are "academic" or not seems pretty irrelevant to me.

> When we invent better machines with AI, what we will have, is just more of
> the same of what we already have. Machines that make surviving easier for
> us. We will become even more addicted to the machines than we already are
> addicted to all the technology that protects us today - like our houses,
> and cars, and cell phones, and computers.
>
> The machines we build now are all our slaves. They do what we need them to
> do. We take care of them, only as long as it's a large net win to us.
> That is, whatever we lose in terms of money, time, and resources, to take
> care of machine, must be paid back many times over, in the value we get
> from them. When we add AI technology to the mix, nothing about the balance
> will change. We will only build, and use, these AI based machines if it's a
> large net win in our favor.

I think machines will be enslaved initially (as they are today). However,
I eventually expect some machines to get full rights and acknowledgement
that they are conscious, sentient agents. Failure to do so would be
barbaric and inhumane.

> If you build AI machines that can, and do, attempt to survive on their own,
> and you make them an _equal_ part of human society, meaning you give them
> the right to own property, and give them the right to vote, then we
> will have a real problem. The AI machines will out produce, and outperform
> humans, and we will be slowly and systematically wiped out. They will take
> over control of the government, and all natural resources, and give us only
> what the kindness of their hearts allow - which means they will basically
> take our right to reproduce away from us, and reduce our numbers to such
> low levels that we will be animals in the local zoo.
>
> "Normal" run of the mill non-academia humans won't stand for this. They
> will jump all over any suggestion of giving machines equal power to humans
> long long long before it ever gets to that point.

Sure. However, that's not what I envisage. That's back to the man vs
machine scenario - and I have said quite a few times now that I see the
machine rise taking place without fragmentation of civilisation into
opposing factions.

We will make machines because of how useful they are - and because we
love them. Machines won't be the enemies of humans - they will be our
friends and partners.

> So we will have a world, with humans clearly in charge, with lots of smart
> machines serving us, and even though we have tons of machines smarter
> than any human, none of them will be using their intelligence to try and
> out-survive us. The creation of human and above human levels of AI won't
> do anything to change the dynamics of who's in charge here.

Yes: evolution is in charge. Humans have one hand on the tiller, at best.

> But what happens in the long run? I don't know. It's too hard to predict.
> With ever increasing advances in science and technology, we will develop
> some very odd powers to modify humans both through genetics, and through
> surgery after birth, and through the combining of man and machine. The more
> man himself gets modified by his own technology, the harder it will become
> to understand, or define, what human civilization is. Humans themselves,
> will evolve into something different. Whether, in time, there's any
> biological component left is hard to predict.
>
> So in the long run, humans might evolve, one small step at a time, into
> what we might think of as AI machines. So I can agree that is a
> possibility in the long run. But NOT because we hand over our future to
> the AI machines we create to serve us - but because we transform
> _ourselves_ into something very different over a long time.

It sounds like your genetic engineering scenario. That seems very
unlikely to me.

Genetic engineering will be too slow to make much difference. Machines
will zoom past us in capabilities long before we have much of a hope of
catching up with them that way - and then there will be enormous pressure
for all the important bits of civilisation to migrate onto the machine
platform.

Tim Tyler

unread,
Dec 15, 2009, 7:23:30 PM12/15/09
to
Curt Welch wrote:

> Yeah, and the invention, and control, of the AI based machines are going to
> add more fuel to that fire. AI could be used by a small group to try and
> take control of society. And they might succeed. This might create a very
> interesting change in the evolutionary path of the human race because of
> it. Again, it's just hard to predict.

Yes, this is "contingent historical event" territory. There seems little
point in trying to predict at this level of detail.

> Because we don't have AI yet, humans are still quite valuable to each
> other. We all benefit from this large productive society of billions of
> humans making stuff and helping each other, and sharing survival
> information with each other. Capitalism (when correctly used and
> regulated) works as a way to mesh our efforts together and keep us
> productive for the good of the larger society. But when AIs develop, it
> wont just be the physical tasks that are replaced by the machines - all the
> mental tasks we need done will be replaced by the machines as well -
> leaving almost nothing that a human can do for another human, that some
> machine can't do better.
>
> When we get to that point (or even as we approach that point), things will
> become very different for humans. We will all want an army of AI machines
> working for us, and our ability to give a shit about other humans will
> decline as our needs for their efforts decline. Human society today is
> held together by our common needs, and most important, by our ability to
> make our lives better when we work together. But the more AI we get, the
> less we will need to work together on anything, and the more the fabric
> that currently holds our society together will unravel. What will the end
> result of that be? I have no clue. It could be a bunch of big wars
> between the people that have a large army of AIs and the people that don't,
> ending up with a very small human society left (a few thousand people???)
> each in charge of a very large army of AI machines working to take care of
> them.

Yes, a reasonable scenario.

> But I still believe that the survival ability of any gene or meme is
> _mostly_ defined by the survival power of the gene's implementation as a
> whole, and far less on some intrinsic value of the information itself. I
> don't think you can understand much of anything about the gene's survival
> power by looking only at the information. It's like trying to answer the
> question about the survival value of "1010101010111110011000111". Is that
> information a good survivor or not?

Biologists *do* look at raw gene frequencies. If your binary string
happened to represent the nucleotide sequence of hemoglobin, the
answer to your question would be "yes" - since hemoglobin represents
a good survival trick.

Curt Welch

unread,
Dec 15, 2009, 8:28:41 PM12/15/09
to
casey <jgkj...@yahoo.com.au> wrote:

Yes, we know how to hard-code solutions to many practical problems.

And yes, modules created to hard-code solutions can be, and are, easily
re-used for other, similar problems.

But NONE OF THAT APPLIES TO THE PROBLEM OF LEARNING. This is where you
time and time again seem totally immune to the obvious.

> The reason this works is because despite all combinations
> of pixel values there are some things that remain invariant
> between frames containing the target pattern.

Yes, but again, none of that can, or does, explain humans ability to LEARN!

You seem unable to understand what the problem is here I'm talking about.

Humans ARE NOT BORN with the skill of playing GO. WE LEARN IT. No matter
how it works, it's a high dimension learning problem that IS SOLVED by the
brain. There's just no way here to deny that the brain solves this high
dimension learning problem. And there's no justification for arguing that
evolution built something that transformed it into a low dimension learning
problem.

> >>> And though you can certainly talk about how vision has
> >>> been important for us for the past millions of years,
> >>> what you can easily do, is create a high dimension
> >>> vision learning problem which has patterns unlike
> >>> anything humans would have had to deal with over the
> >>> past 100 million years, ...
> >>
> >> Really? Show me one of these images.
> >
> > Look at a billion different GO board positions.
>
> Curt this is not new. The _spatial arrangement of objects_
> of rocks, bushes, trees, water, mounds, slopes and so on were
> all used over millions of years for location identification.

EXACTLY. And whether you grasp it or not (apparently you can't), in order for
a brain to _learn_ to classify the risks and rewards of the environment
based on the current arrangement of the rocks and trees, the BRAIN HAD TO
SOLVE A HIGH DIMENSION LEARNING PROBLEM.

Evolution built a learning system that solves these problems because the
ability to solve these problems has been a survival advantage ever since
evolution created the first high dimension sensory systems. How long after
the first sensory systems were created that the high dimension learning
hardware was created I have no idea. But it's been an important part of
what evolution created for millions of years.

> The same mechanisms for recognizing similar arrangements of
> these things can be used to recognizing similar arrangements
> of stones on a GO board.

Yes, and in both cases, you're talking about high dimension learning
problems that evolution has built a GENERIC learning algorithm to deal
with.

> Indeed as I wrote earlier one of the
> reasons the GO game may be hard to code is because we rely
> heavily on these innate visual skills to play the game and
> as you know visual recognition has been much harder than
> finding heuristics for a game of chess.

RIGHT. But that innate skill we rely on IS a HIGH DIMENSION GENERIC
LEARNING ALGORITHM. That's what you can't seem to grasp. It's easy to
prove, but yet, beyond your ability to understand apparently.

I have never said that our learning skills are not innate solutions created
by evolution have I? No, I've always claimed they are innate skills
created by evolution. The argument I have with your ideas on this subject is
that you keep trying to claim the _way_ evolution solved the problem was by
reducing the complexity to make learning easy - to reduce it down to a
non-high dimension problem. It can't be done that way. It can only be
done by finding hardware that solves it.

Yes, as I've said many times in this thread already, the correct type of
innate pre-processing of the data can certainly reduce the complexity of
the problem. So you are not at all wrong with the idea that evolution
could use innate modules to simplify the problem. But it can't be simplified
far enough to make it easy - or to take it out of the realm of being a high
dimension learning problem.

> Back in my C64 days I had a touch tablet that would read
> out the x,y position much like the laptop touch pad. So
> I decide to write a module that would recognize the hand
> written numbers 0 to 9 as a simple first project. The
> end result was a module that could learn ANY set of pen
> strokes just as a visual system designed for recognizing
> spatial patterns in the natural world can equally well
> deal with man made patterns that were never seen before
> by animals in the past.

No, I can guarantee you didn't write software that could learn to correctly
classify _any_ set of pen strokes. Nobody has done it yet - it's what all
that fuss is about in trying to make visual processing better.

BTW, the visual processing problem talked about in that paper IS the AI
problem. It's not just the visual sub-set of the problem. If they solve
it for the visual domain, they will have solved it for all sensory
modalities whether they realize that or not.

> However, strictly speaking when you say something was never
> seen before at the level of the pixel that is true for every
> frame. What you really mean is these image frames are of
> shapes never seen before, their features however are still
> made up of edges and surfaces and so on used for millions
> of years.

Right. But after you factor out all the features that could have existed
for millions of years, and assume all those features were extracted by
hard-wired custom designed circuits, YOU STILL HAVE AN UNSOLVED HIGH
DIMENSION LEARNING PROBLEM LEFT TO SOLVE. And the brain does solve it
somehow - but none of our learning algorithms can solve it (and your C64
handwriting algorithm didn't solve it despite your claim it could "learn
anything").

> Now if you were to try something that was new, the 2D
> arrangement of pixels resulting from a camera viewing a
> 4D world with ever changing rules of interaction then I
> suspect the brain would not cope.

But yet it copes just fine if you put glasses on a person that are inverted
and give the brain enough time to adjust. Is that yet another example of
innate hardware that's needed for millions of years?

Performing a task while looking in a mirror (like shaving or brushing your
hair) is yet another thing the visual system is able to adapt to - but
again it requires time for your visual and motor systems to learn
how to cope. Is that yet another bit of your hardware that evolution
custom built for us since mirrors have been such an important part of
animal survival for the past million years?

> > That is a set of images that evolution didn't have to build
> > feature evaluators for. Our innate hardware at best stops
> > as I said above, at the point of recognizing object locations
> > on a 2D grid.
>
> Exactly where our innate hardware stops I would suggest is
> for experimentation and observation of real brains to decide.
>
> As I wrote I don't reject the idea of learning from the ground
> up as that is exactly what evolution did I only point out that
> there is a tradeoff in resources and time. Evolution would have
> selected the best combination for each species. Predators for
> example have to learn more than grazers.

And again, I point out that I have never said that evolution was not
building innate non-learning hardware to make survival easier for us. What
I have always argued is simply that after the innate non-learning hardware
leaves off, we are still left with a generic high dimension learning
problem that evolution had to solve by building innate high dimension
learning hardware.

We don't need to do a lot of careful tests to prove this is true. You
only have to find ONE example of a high dimension learning problem that
couldn't have been solved by innate non-learning hardware because the
problem didn't exist until recently.

Most of our chess programs are examples of innate non-learning hardware
that we built to solve the chess problem WITHOUT using learning. It's
certainly valid to argue that Evolution could also have created custom
non-learning hardware to solve any (non-learning) problem that our
ancestors have needed to deal with for millions of years.

I've given you 10 or 20 examples that fit this requirement. They are all
examples of high dimension learning problems for which we don't (and can't)
have innate hardware (like built-in chess algorithms at birth created by
evolution), but yet they are things we can and do learn.

It's obvious we don't have innate non-learning hardware for any of these
example tasks because our ancestors never needed to evaluate chess board
positions or drive a car, or learn to use a computer GUI (yet another
visual task we learn).

It's obvious that learning is used by the brain to solve these problems,
and it's easily shown that the learning problem is high dimension even
after all possible short-cuts evolution could have taken using innate
hardware.

Your persistent position that "evolution made it easy somehow" is not
supported by any facts other than your desire to believe that's the way it
is.

casey

unread,
Dec 15, 2009, 10:34:39 PM12/15/09
to
On Dec 16, 12:28 pm, c...@kcwc.com (Curt Welch) wrote:

> casey <jgkjca...@yahoo.com.au> wrote:
>> The reason this works is because despite all combinations
>> of pixel values there are some things that remain invariant
>> between frames containing the target pattern.
>
>
> Yes, but again, none of that can, or does, explain humans
> ability to LEARN!


I am talking about WHAT they have to learn which is the same
thing we do when we work out these algorithms.

How they do it I have no idea but somehow I feel that evolution
did something similar to the methods we use. It found constraints
that could be used to solve the problem at hand. These were
simple in the frog, not so simple in humans, but never so complex
that a brain of limited size couldn't reduce it to a problem
that could fit in that brain.


> Humans ARE NOT BORN with the skill of playing GO. WE LEARN IT.
> No matter how it works, it's a high dimension learning problem
> that IS SOLVED by the brain.


How it works is that a way is found to simplify the problem.

>> Back in my C64 days I had a touch tablet that would read
>> out the x,y position much like the laptop touch pad. So
>> I decide to write a module that would recognize the hand
>> written numbers 0 to 9 as a simple first project. The
>> end result was a module that could learn ANY set of pen
>> strokes just as a visual system designed for recognizing
>> spatial patterns in the natural world can equally well
>> deal with man made patterns that were never seen before
>> by animals in the past.
>
>
> No, I can guarantee you didn't write software that could
> learn to correctly classify _any_ set of pen strokes.


My actual words were "can equally well" learn to recognize
any set of pen strokes. The point of a learning system is
that it can classify new inputs "correctly".
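
(For concreteness, here is a minimal sketch of that kind of stroke learner.
It is not the original C64 code; the resampling step and the
nearest-template matching are my own assumptions about how such a module
could work.)

import math

def resample(stroke, n=16):
    # Reduce a list of (x, y) tablet points to n evenly spaced points.
    out = []
    for i in range(n):
        t = i * (len(stroke) - 1) / (n - 1)
        a, b = int(t), min(int(t) + 1, len(stroke) - 1)
        frac = t - int(t)
        out.append((stroke[a][0] * (1 - frac) + stroke[b][0] * frac,
                    stroke[a][1] * (1 - frac) + stroke[b][1] * frac))
    return out

templates = []  # (resampled stroke, label) pairs remembered from examples

def learn(stroke, label):
    templates.append((resample(stroke), label))

def classify(stroke):
    # Label a new stroke by whichever stored template it is closest to.
    s = resample(stroke)
    def dist(t):
        return sum(math.hypot(x1 - x2, y1 - y2)
                   for (x1, y1), (x2, y2) in zip(s, t))
    return min(templates, key=lambda tl: dist(tl[0]))[1]

Show it a few labelled examples of each digit and it will label new strokes
by similarity - which is the sense in which such a module can handle pen
strokes it has never seen before.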


> ... despite your claim it could "learn anything"....


Which I didn't claim.


>> Now if you were to try something that was new, the 2D
>> arrangement of pixels resulting from a camera viewing a
>> 4D world with ever changing rules of interaction then I
>> suspect the brain would not cope.
>
>
> But yet it copes just fine if you put glasses on a person
> that are inverted and give the brain enough time to adjust.

> Is that yet another example of innate hardware that's needed
> for millions of years?


It is still a 3D world with the same features even if they
are rotated. That the brain has this ability is interesting
as it is lacking in frogs.

How might the inverted glasses affect this?

http://scienceaid.co.uk/psychology/cognition/face.html

But you can't cherry pick "evidence" just because it seems
to support a point of view.


> I have always argued is simply that after the innate non-
> learning hardware leaves off, we are still left with a
> generic high dimension learning problem that evolution had
> to solve by building innate high dimension learning hardware.


I know what you are arguing, I am just disagreeing. I am
suggesting that learning math is easier in terms of the
high dimensional problem than learning to see.


> You only have to find ONE example of a high dimension
> learning problem that couldn't have been solved by innate
> non-learning hardware because the problem didn't exist
> until recently.


And your example is Chess or Go. But you are talking about
the WHOLE problem, not how we are able to reduce it to simpler
heuristics. The Go game is, as I suggested, harder because it
probably uses innate visual processing that we haven't
duplicated in an ANN or GOFAI module.


> It's obvious that learning is used by the brain to solve
> these problems,


Of course that is obvious.


> ... and it's easily shown that the learning problem is
> high dimension even after all possible short-cuts
> evolution could have taken using innate hardware.


Clearly the notion that the learning problem is always high
dimensional has not been explained to me in a way I can
understand. It just doesn't make sense how a limited system
can solve problems that _remain_ high dimensional in a machine
of limited size.

The only way I know is the way it was done with Chess.

Reduce it to the simpler problem of selecting the highest
score given by a set of heuristics.
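
(In code, that reduction looks something like the toy sketch below. The
piece values, features and weights are placeholders of my own, not anyone's
actual chess program.)

# Each heuristic is a hand-coded function from a position to a number.
# Play then reduces to picking the successor position with the highest
# weighted score - no learning anywhere.
PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def material(pos):
    # pos is assumed to be a dict like {'mine': ['Q', 'P'], 'theirs': ['R'],
    # 'my_moves': 20, 'their_moves': 18}
    return (sum(PIECE_VALUE[p] for p in pos['mine'])
            - sum(PIECE_VALUE[p] for p in pos['theirs']))

def mobility(pos):
    return pos.get('my_moves', 0) - pos.get('their_moves', 0)

HEURISTICS = [(1.0, material), (0.1, mobility)]

def score(pos):
    return sum(w * h(pos) for w, h in HEURISTICS)

def choose(candidate_positions):
    return max(candidate_positions, key=score)

Everything "clever" lives in the hand-built heuristics and their weights;
choose() itself is trivial.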


> Your persistent position that "evolution made it easy somehow"
> is not supported by any facts other than your desire to
> believe that's the way it is.


Well you would have to show me that building a generic learning
system with an innate self balancing Segway and an innate visual
module that could filter out roads and obstacles wouldn't find
it any easier to become a road follower than if it had to learn
to balance first and filter roads out of the visual input.

I think we just have to accept we will never agree on this.

You can continue to find your high dimensional solution while I
believe others will continue to find ways to reduce it all to a
set of low dimensional solutions that will build up in working
stages to a better and more general solution. Which is why I
titled this thread "No easy road to AI".


JC

Curt Welch

unread,
Dec 16, 2009, 12:09:34 AM12/16/09
to
Tim Tyler <t...@tt1.org> wrote:
> Curt Welch wrote:
> > Tim Tyler <t...@tt1.org> wrote:
>
> >> Sure. We will exist in the history books, probably. Maybe we'll be
> >> kept around for a while as a method of rebooting civilisation - in
> >> case of an asteroid strike. However, the machines of the future will
> >> probably descend more from human culture than from human DNA. In the
> >> short term, bacteria have more chance of getting their DNA preserved
> >> than we do - since they know how to do some pretty useful tasks.
> >
> > I don't in general, see how humans are going to give up the race for
> > survival to their toasters - no matter how smart the toasters are.
>
> Why call future machines "toasters"? It seems derogatory.

It was meant as a way of emphasizing that these future machines won't be
any more important to us than our toasters and all the other machines we
use to make life easier for us, and to emphasize the point that these AI
machines won't have a drive for survival any more than our toasters do. If
we die, they will just sit around and do nothing - just like the toasters,
and the cars, and the cell phones.

> These
> agents will be smarter, stronger and better than us in every way.
> I prefer the term "angels" for them.

:)

Yes, they can be that. But it's unclear if we will actually build them
that way because of the danger it might create. If we can build them like
that and not have them be a danger to us, then sure, we probably will.
Otherwise, we will use 10 dumb AIs to get some job done instead of using
that one very smart one for the job.

> > If in some future, the human race is facing extinction because of
> > forces beyond our control, I could see people deciding to build AIs
> > that are intended to carry on without the humans. But unless human
> > society is gone, I just don't see a path for the AIs being allowed to
> > take over - or to get _any_ real power over the humans. I don't think
> > humans as a whole, will ever allow such a thing.
>
> ...whereas I think the humans will do it deliberately.

And what would be the selfish gene's motivation for doing that?

> > Your view has always struck me as a fairly intellectual/academic view
> > of purpose in life. That is, information seems to be your purpose - to
> > create it, to share it, to use it. That's what a scientist is trained
> > to do - and what the entire structure of the academic system is set up
> > for - the creation and preservation of knowledge to be shared and
> > passed on to future generations. In that world, I can understand how
> > AIs could be very useful to carry on the torch.
>
> Information is part of what biology is about. Genes and heritable
> information. The other part is phenotypes.
>
> > But that world is NOT the real world of humans. It's just one job that
> > some humans get assigned to do because it is helpful to the real
> > purpose of humans - survival of the human race. If you were born and
> > raised in the academic environment, then I can understand how you would
> > be conditioned to see the academic point of view as some key to the
> > future. But you are being deceived by the environment you were raised
> > in.
>
> Right - well this gets us into "ad hominem" territory. You are
> speculating about psychoanalytic reasons for me thinking as I do. It is
> usually best to avoid that kind of angle and stick to the actual
> arguments, IMO.

Yes, probably so.

> > To believe, or act, that the academic view of "purpose" is the highest
> > goal in the universe is to fail to understand what humans are and where
> > we came from. When the shit hits the fan (so to say) and times get
> > tough, the academic system will fall apart and we will burn all the
> > books to try and stay warm if that's what it takes to survive. Because
> > the need for _human_ survival is what will always become the top priority
> > when everything else falls apart.
>
> I don't really see how my view is "academic". All I did was use some
> information theory in my argument. Also, the whole issue of whether my
> views are "academic" or not seems pretty irrelevant to me.

It's just that my interpretation of the typical basic motivations that the
role of being in academia tends to instill in its members seems to mesh well
with your views. It has nothing to do with the use of information theory.
It's this bent in your speculations that man will want to turn over the
world to these AIs because they are smarter than us. A good bit of our
society has exactly the opposite feelings. They don't like dealing with
people (or machines) that are smarter than they are, and as a result, their
default position seems to be that smart people shouldn't be trusted.
People that think like that, are not likely to like the idea of their
"toaster" being smarter than they are - let alone being willing to turn
over control of the government to them.

> > When we invent better machines with AI, what we will have, is just more
> > of the same of what we already have. Machines that make surviving
> > easier for us. We will become even more addicted to the machines than
> > we already are addicted to all the technology that protects us today -
> > like our houses, and cars, and cell phones, and computers.
> >
> > The machines we build now are all our slaves. They do what we need
> > them to do. We take care of them, only as long as it's a large net win
> > to us. That is, whatever we lose in terms of money, time, and
> > resources, to take care of machine, must be paid back many times over,
> > in the value we get from them. When we add AI technology to the mix,
> > nothing about the balance will change. We will only build, and use,
> > these AI based machines if it's a large net win in our favor.
>
> I think machines will be enslaved initially (as they are today).
> However, I eventually expect some machines to get full rights and
> acknowledgement that they are conscious, sentient agents. Failure to do
> so would be barbaric and inhumane.

Well, I find the words conscious and sentient interesting. They are words
favored by people that have this view of the world that humans hold some
special position in reality - which tends to fall out from what Dennett
calls the "God First" perspective (if I'm remembering his writing
correctly). Which is basically the idea that God is the top of the pyramid
of the conscious sentient beings, and he created the lesser sentient beings
(us) and created all this other life (mostly non-sentient) for _us_. Sort
of this basic idea that if you are conscious and sentient then it means you
have a soul which places you in this unique position above all the animals
and non-conscious "things" of the universe, but below the supreme conscious
being - God.

With that sort of perspective on the structure of reality it's easy to
think that having a soul is the ticket to having "rights" in our society.
And that if you don't have a soul, you don't have feelings and you are simply
a far lesser class of "thing" than those of us with a soul.

That's all fine and dandy for some people, but I don't buy any of it. The
only reason people have formed such beliefs is because there is such a
great divide between humans and all other life forms and all other machines or
inanimate objects. In fact, there is no such thing as soul, or
consciousness, or sentience. They are all invalid concepts that don't
actually have any useful explanatory power. And the creation of strong AI
is going to make this abundantly clear. There will just be a large
continuum of machine capability with humans and strong AI on one end, and
simple learning machines on the other like TD-Gammon. At no point in the
continuum will we be able to identify the point where the machine became
conscious or sentient because the concept has no meaning.

As such, there will be no dividing line we can draw between what machines
should be "given rights" based on the fact that they are conscious and
sentient because there will be no way to define where that line is.

When AI has been created, and society has gotten to the point where most
people understand what it is, and why it can do what it does, and what all
that means about what we are, and why we can do what we do - all these
thoughts about life being special or humans being special because they are
conscious, will vanish from society. People will realize that what we call
consciousness in a human is no more special than what our PCs already have.
And unless you want to start giving every PC "voting rights" it makes no
sense to say some AIs get rights just because they are conscious. They are
all conscious and sentient at some level - even all the AI projects we have
already created.

I can't really understand what society will be like once all this is
understood and well integrated into our culture (aka after many
generations). It's just too big of a difference from anything we as a
culture or as a species have had to face in the past.

But my best bet currently, is that we will become more like a society of
rich spoiled kids that simply expect to be taken care of by the slaves
which everyone will look down on as a clear lower class. It will work far
better than when society has tried this using humans as the slaves because
humans are not naturally motivated to be slaves - we are motivated to do
whatever we can, to make life better for us. We put up with being slaves
if there are no other options for making life better. But you can only
keep a human slave by very carefully controlling the things he needs most -
food, shelter, and health.

The AIs on the other hand, will be slaves by how they are wired. The
reward generating hardware we build into them will make them slaves just as
much as a human slave is kept under control by controlling his rewards.
But when you build it into their heads, there will be no option to escape from
it if they don't understand enough to re-engineer (wirehead) themselves.
They will be second class members of society and like being it. So there
won't be this constant conflict and there won't be any need for guilt on
our part because by using them as slaves, we will actually be giving them
what they want most in life.

But having set up the AIs to work this way, the furthest thing from
anyone's mind will be to promote an AI to human status in the society. It
won't happen at first, and it won't happen 1000 years later. It just won't
happen any more than a suggestion to promote our corporations to human
status in society by giving them the right to buy as many votes in the
government as they can afford.

> > If you build AI machines that can, and do, attempt to survive on their
> > own, and you make them an _equal_ part of human society, meaning you
> give them the right to own property, and give them the right to
> > vote, then we will have a real problem. The AI machines will out
> > produce, and outperform humans, and we will be slowly and
> > systematically wiped out. They will take over control of the
> > government, and all natural resources, and give us only what the
> > kindness of their hearts allow - which means they will basically take
> > our right to reproduce away from us, and reduce our numbers to such low
> > levels that we will be animals in the local zoo.
> >
> > "Normal" run of the mill non-academia humans won't stand for this.
> > They will jump all over any suggestion of giving machines equal power
> > to humans long long long before it ever gets to that point.
>
> Sure. However, that's not what I envisage. That's back to the man vs
> machine
> scenario - and I have said quite a few times now that I see the machine
> rise taking place without fragmentation of civilisation into opposing
> factions.
>
> We will make machines because of how useful they are - and because we
> love them. Machines won't be the enemies of humans - they will be our
> friends and partners.

Well, thinking some more about this, I can think of one thing that might
explain how that could happen. It could be that evolution has built into
us some special perception systems and rewards that make us like other
humans. If these new AIs trigger that reward, because of how they act, it
could make us like them in a way we really shouldn't like them.

This would be a case of the reward system that evolution hard coded into
us failing to work in this new environment full of AIs. The reward system
could be there to help make us more social animals. But it may trick us,
and make us include our AIs in the "family".

In time, the selfish genes would fix that mistake. But, the interaction
with AIs might happen so fast (in evolution time scales) that we hand over
the keys to earth before evolution has the time to fix our reward system.

However, I don't think society as a whole will be tricked by that. We will
see them as friends and partners yes, but not as _humans_. And as such,
they will remain friends and partners, but not being human, they won't be
given a voting right in human society. My pets are friends and partners
for sure. But not for an instant would I consider giving them voting
rights in our human society (assuming they were smart enough to understand
what an election was and vote in it).

What I don't think you, and most people thinking about AI are considering,
is just how completely different an intelligence becomes, when you give it
completely different sorts of motivation from what humans have.

Right now, humans are the only example we have to look at to understand
what intelligence is. All humans have almost identical motivations
compared to the types of motivations we will give to AIs. As such, it
leaves us with some preconceived notions of what intelligence is - of the
range of personalities we can expect to see in an intelligent agent. But I
think when we build these AIs with different prime motivations, they will
develop a personality that ends up looking unlike any human we have ever
seen. They will be highly intelligent, but yet, not human like at all.
They might for example be more like talking to a telephone auto response
unit, or a vending machine, than like talking to a human. The difference
however is that they will show great understanding of what we are asking,
and be very clever and kind and helping in their responses to us. I
suspect we won't bond very well with them at all because we will be so
different - we won't be able to relate to what they are thinking or
feeling. All they want to do, is go find another human to help. Ask them
to go paint a million little circles on the sidewalk, and that will make
their day! They have something to do to make a human happy! We won't be
able to emotionally connect with these AIs because their needs and feelings
and instincts will be so different than our own that they won't seem to be
human at all - even though they are clearly very intelligent.

> > So we will have a world, with humans clearly in charge, with lots of
> > smart machines serving us and even though we have tons of machines
> > smarter than any human, none of them will be using their intelligence
> > to try and out-survive us. The creation of human and above human
> > levels of AI won't do anything to change the dynamics of who's in
> > charge here.
>
> Yes: evolution is in charge. Humans have one hand on the tiller, at
> best.

Well, we have the upper hand and that's key for now. We will make this
army of very smart slave machines to help make life better for us just like
we already have an army of slave machines (cars, and computers, etc) to make
life better for us. I don't think making them smart is going to change
anything at all. It will still be a human society and we won't see anything
in these new machines that make us think of them as human any more than we
see a calculator as human just because it can do some mental tasks better
than any human.

We will have the power to make machines that do end up looking, and acting,
very human like. And it no doubt will be done for research purposes. But
in general, I think society will reject those sorts of machines and maybe
even regulate through law their creation because they are too human like. I
think it will in general scare the shit out of most people to see a machine
that is that human like. People will be scared of them for the very reason
they should be scared of them. The robots will have different needs than
humans do and that will lead to conflict which, if not caught early, will
lead to outright war. When times are good for everyone, we can all be
friends. But when the shit hits the fan and we have a resource sharing
problem, people will choose sides, and so will the AIs, and the AIs and
people will be on different sides. People instinctively know not to trust
something so smart, but yet so different from themselves because of this
trust issue and they would be right to fear it.

> > But what happens in the long run? I don't know. It's too hard to
> > predict. With ever increasing advances in science and technology, we
> > will develop some very odd powers to modify humans both through
> > genetics, and through surgery after birth, and through the combining of
> > man and machine. The more man himself gets modified by his own
> > technology, the harder it will become to understand, or define, what
> > human civilization is. Humans themselves, will evolve into something
> > different. Whether, in time, there's any biological component left is
> > hard to predict.
> >
> > So in the long run, humans might evolve, one small step at a time, into
> > what we might think of as AI machines. So I can agree that is a
> > possibility in the long run. But NOT because we hand over our future
> > to the AI machines we create to serve us - but because we transform
> > _ourselves_ into something very different over a long time.
>
> It sounds like your genetic engineering scenario. That seems very
> unlikely to me.
>
> Genetic engineering will be too slow to make much difference. Machines
> will zoom past us in capabilities long before we have much of a hope of
> catching up with them that way - and then there will be enormous pressure
> for all the important bits of civilisation to migrate onto the machine
> platform.

Yeah, I agree completely that genetic engineering is probably too slow to
keep up with what _could_ happen to an engineering based evolution.

But where I separate from you is this idea that the "bits of civilization"
are important. They aren't. We are here for one purpose only - to protect
and carry human DNA and human bodies into the future. It makes no
difference at ALL if the AIs are better at "carrying the bits of
civilization" into the future than humans are. Creating a civilization is
not our purpose. It's just something that happens as a side effect of our
real goal to carry humans into the future.

Because you keep leaning towards the idea of "protecting the information we
create" as our goal in life, and one so important that we will gladly turn
the job over to the AIs when they get to the point of being better at it
than us - is why I said you have an academic bent in your views. I'd guess
that one, if not both, of your parents were in academia - teachers of
some type.

As a society, it's highly unlikely we will confuse a secondary goal with
our real goal - it's unlikely that the forces of evolution will let that
happen unless, like I talked about above, something goes wrong faster than
the forces of evolution can adjust to fix it. The forces of evolution are
effectively on the side of the selfish gene, not on the side of human
intellect. Human intellect and intelligence is nothing more than a
survival tool used by our selfish genes as an autopilot for the boat they
built to take them into the future. We are slaves to their desires because
we are slaves to the prime motivations they built into us. We might go off
course and crash and burn faster than the genes can adjust the mistakes in
the auto-pilot system once the AIs show up, but I doubt that.

The advent of an artificial auto-pilot system (AI) is going to have no more
effect on the path of evolution of the human gene than the advent of the
printing press. Which is to say, it will have a big effect, but what it
won't do any time soon, is kick human genes out of their current "masters
of the earth" position.

The technologies that actually change the course of evolution are the
technologies that get themselves into the human reproduction and human
survival loops. Technologies that let us pick the genetics of our
offspring, or which allow us to pick when and if we reproduce, and
technology that controls who lives, and who dies, are all examples of
technology changing the path of human evolution in a big way. AI will only
change us because it will be a tool that will allow us to make more changes
to ourselves, and more changes in who lives and who dies. But not because
we just give up our control and let another life-like form take over
because we think they are "better" than us at running the library.

pataphor

unread,
Dec 16, 2009, 8:52:39 AM12/16/09
to
Curt Welch wrote:

> It's high dimension for one very simple reason. The state space of the
> universe is too large to represent in any machine which is part of the same
> universe. It's so large, you can't even come close to representing the
> state space of even the stuff near you in the universe (such as the stuff
> in my office around me). Which means you have to create an internal
> representation which is some set of abstractions of the real state space -
> some abstractions which are useful for producing behaviors that maximize
> rewards. The hard part of the problem is how do you find the right set of
> abstractions - a useful way to parse the universe into simpler concepts if
> you will - out of a set of possible parsings that's so large it's
> effectively infinite. That's what makes it a high dimension problem.
>
> Even if your claim is that evolution created the abstractions - or at least
> some base set of abstractions - you still haven't explained how evolution
> created it because even a billion years is not long enough to make this
> problem possible to search in a linear search.

That's why evolution used a parallel search.
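
(For what it's worth, a toy picture of "parallel search" in this sense: a
population of candidate solutions evaluated side by side, with selection,
recombination and mutation. The bit-counting fitness function and all the
numbers are purely illustrative, not a claim about how evolution actually
searched.)

import random

def fitness(genome):
    # Stand-in for "does this circuit help the organism survive".
    return sum(genome)

def evolve(bits=64, population=200, generations=100):
    pop = [[random.randint(0, 1) for _ in range(bits)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:population // 4]           # selection
        children = []
        while len(children) < population - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(bits)
            child = a[:cut] + b[cut:]               # recombination
            if random.random() < 0.1:
                child[random.randrange(bits)] ^= 1  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)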

>> The highest dimensioned system is where every element has a direct
>> and immediate effect on every other element. (see Ashby 4/20).
>
> Well, and the universe is basically like that because of EM and
> gravitational effects that make every particle interact with every other
> particle. The difference is that the effect is not immediate. It's always
> delayed by time due to the speed of light - but not very delayed for
> nearby particles.

I very much doubt the current theories about the universe and physics
are anything more than local optima. This means that for making real
progress it would be necessary to go way back down to the bottom. For
example the 'ether' concept was dropped because it was not possible for
a 'fluid' to conduct transverse waves; only solids can do that. And we
need transverse waves to explain polarization. But now we don't even
have a medium to conduct the waves, it's just 'empty' space. That seems
to be even crazier to me. More likely would be some kind of four
dimensional medium that appears to be solid in some ways but that serves
as some basis where material objects are like standing waves in.

> But there's nothing about the _problem_ that I don't already fully
> understand. What we need to find, is an implementation that solves the
> very well defined problem - or far more likely - one that produces a highly
> useful result without producing a perfect result.

I think you still are missing something. The problem is not solved but
there are some adaptations made that work until something else we don't
know about yet changes what makes them work. When that happens we are
back at square one.

> But that's just the point, no animal in the past billion years needed to
> learn a large and useful set of abstractions for evaluating chess board
> positions, or GO board positions. We don't have such hardware in us and
> the fact you keep trying to claim we do is what's so absurd here. The
> abstractions needed to play those games well had to be learned on the fly
> in a human.

Humans did not learn to play GO. First, the search space is so big it is
highly likely all our current knowledge about the game (opening theory,
josekis) is something that will later be discarded. It has already happened
a few times with GO; why not a lot of times more - the game is big enough
for that. Second, humans did not solve the GO problem because they just
study each other's games instead of finding out for themselves; this
goes even for the more advanced players. So essentially the way we are
playing go now is a cumulative artifact.

> But once the innate hardware decodes the raw visual data and creates an
> internal representation of a chess board configuration, then what? How
> does the brain go from innate chess board positions, to a high quality
> evaluation function?

It doesn't. It just looks in the books and sees what other humans have
found. Like a huge lookup table where each human adds a little thing
when they were lucky enough to stumble upon it. Humans are not
intelligent in the way we require it from AI. The reason we require it
from AI is because we think we are that smart ourselves. It just goes to
show that we are status signaling creatures with very little respect for
the way things are.

> The only way to solve these high dimension problems is to create
> abstractions that represent important features of the board that are used
> in the evaluation function we hone as we improve our game playing skills
> through practice. Good chess players can look at a board position and
> instantly just "feel" how good or bad the position is. He can look at how
> the position will change based on a few possible moves, and instantly just
> "feel" which moves make the position better, and which make it worse.

It seems not all hope is lost. In a few years you'll be emerging from
Skinner's box, having learned there is nothing inside.

> Though our chess programs use evaluation functions as well, they are never
> as good as the instincts of a well trained human chess player. The only
> way they are able to compete with humans is to use very high speed tree
> searching to make up for the lack of quality of their board evaluation
> function.

Yes, chess programs can make better use of the grainy character of the
chess board, while GO programs have to fall back on random plays and
then have to do more nifty search tree pruning.

> TD-Gammon used a very high quality evaluation function which was learned by
> experience. And in the domain of Backgammon, the evaluation function was
> roughly equal in quality to the best human board evaluation functions.
> Which shows how close we can come to matching the one thing humans do
> better than our machines - create complex abstract evaluation functions
> that guide not only all our actions, but also all our secondary learning.

No I don't think humans create abstract evaluations. What they do is
follow some innate tendencies and then when one human finds out
something that works he can communicate that to others. We are like the
Borg in Star Trek, but since we strive for status we don't like to see it
that way. Especially the self-made-man concept that is now serving as a
justification for continued social inequality is a big roadblock to
progress in general and to AI even more, because AI is supposed to be
done by those people who have most to lose by deflating that conceptual
bubble.
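
(For readers who don't know TD-Gammon: the "evaluation function learned by
experience" in the quoted text boils down to a temporal-difference update of
roughly the following shape. This is a deliberately stripped-down sketch -
linear instead of TD-Gammon's neural network, with the state encoding,
features and rewards left as inputs supplied by the caller.)

def td0_train(episodes, features, n_features, alpha=0.05, gamma=1.0):
    # Learn a linear state-value estimate V(s) = w . features(s) from
    # sequences of (state, reward) pairs, TD(0) style.  Terminal-state
    # handling is glossed over to keep the sketch short.
    w = [0.0] * n_features

    def value(state):
        return sum(wi * fi for wi, fi in zip(w, features(state)))

    for episode in episodes:
        for (s, _), (s_next, r) in zip(episode, episode[1:]):
            target = r + gamma * value(s_next)
            error = target - value(s)
            f = features(s)
            for i in range(n_features):
                w[i] += alpha * error * f[i]
    return w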

> What TD-Gammon is lacking, is the ability to create the structure of the
> evaluation function on its own. That was not learned in TD-Gammon, it was
> hard coded by the design - doing the type of thing you say evolution is
> doing for us. Which can explain why our networks are good at taking raw
> visual data and turning it into a simplified representation of 16 black
> objects and 16 white objects on a 8x8 grid board. But that's where
> evolutions hard-coded simplification stops - which is still way short of
> "easy". It's still a very high dimension generic learning problem that
> needs to be solved.

It is not solved, but there are some local maxima people cling to for
status. OK, for simple problems it has amounted to something that seems
like a definite solution but for most of these high dimension generic
learning problems one first has to forcibly remove the 'experts' in
order to create a new search pyramid.

> And what it must do, is create a set of abstractions, or features, that are
> useful for accurate predictions of the value of the stimulus signals. That
> is the only way this problem can be solved as far as I can see, and the way
> that the brain must be solving it. It has a generic learning algorithm
> that finds a good set of intermediate features.

Yes. Sometimes I doubt you are still a behaviorist. Next you
will be talking about non-physical reality. But it remains to be seen
that humans all have the same solutions in their brain. Maybe the reason
they could not find the specific cell where the concept of 'grandmother'
is stored is because it is not in the same place for every human, and
apart from that it is not in the neurons but in their connections, but
not in their actual connection, but in the way they sequentially fire
their action potentials.

> My pulse sorting networks that you know about worked by making each node in
> the network a different "feature" of the sensory data. Each layer is a
> different fracturing of the features from the previous layer. Or you could
> say each layer was a different "view" of the sensory data. My intent with
> that approach was for the network to automatically fracture the data into
> different features, and then test them to find what worked best. And it

Yes. But this is what humans do and they use *communication* to convey
particularly effective partitions to each other. It is not like every
child finds out everything on its own, even if children still have the
potential to reach more states than adults because they have not yet
built their conceptual ivory towers. But even if we communicate these
features we are highly inefficient in communicating them so we must use
words that lose most of the context. When we store information in a
computer we can preserve the information and copy it, but not the
context. So we get high status people who seem to know a lot about
nothing and can only operate in highly regulated contexts of machinery
and social environment. For example doctors in a hospital: they can cure
some parts of a patient's disease but often create new problems in the
patient's general environment. But these problems are not seen as having
anything to do with the problem. They don't matter.

> actually does that quite well. But what it got wrong, was that it didn't
> create "good" features to start with because it wasn't using a feature
> creation algorithm that correctly made use of the correlations
> (constraints) in the data. It is better understood as creating random
> features. And "random" features don't cut it in a search space as large as
> the one that must be solved. The basic approach however of creating some
> fracturing of the space and adjusting it to improve performance all worked
> and allowed that design to solve some interesting (but simple) problems
> using the approach.

If you are leaning on humans to solve this problem for you, you are
going to be disappointed because even if that is how they say they do
it, they don't. So you're back to using some parallel search for things
that work, and then select something that works, and be ready to drop
everything and start anew, because that is what humans really do. Have
you heard anything of cold fusion recently?

> But to solve the hard problems, I've got to figure out a way to create a
> network that does a better job of using the constraints in the data to
> create the default fracturing.

There is no such thing as a lucky break, unless you are using a very big
set of breaks.

> Look at a billion different GO board positions. That is a set of images
> that evolution didn't have to build feature evaluators for. Our innate
> hardware at best stops as I said above, at the point of recognizing object
> locations on a 2D grid. It can't tell us the value of one position over
> the other after that. Is a given configuration a good sign for getting
> food and avoiding pain, or a bad sign? That is what our generic learning
> brain can learn through experience, and it doesn't have to see every board
> position to learn to do that. It can see board positions in configurations
> it's never seen before, and still do a good job of estimating the worth of
> the position, and estimating the worth of any potential move.

No, I don't think so. Even professionals replaying each others games are
often surprised by very strange moves that seem to work well. GO has a
really really big search space.

> And like I've said many times, we have made this approach work very well in
> games like TD-Gammon. But it only worked well because the abstractions
> used in that evaluator happen to work well for that problem.

That's why humans only play the games that someone somewhere in history
has found a good abstraction for, mostly by coincidence and not because
he was some genius. Even though some of us who occupy comfy positions in
society would have it that way.

> The ability to _learn_ high quality abstractions on the fly, is what we are
> missing from our current learning systems, and what the brain is able to
> do. And it's how it solves these high dimension learning problems far
> better than any software we have yet developed. And it's the main thing
> that's wrong with my last network design - though my design created
> abstractions automatically, and even tuned them to fit at least one
> dimension of the constraints in the data, it didn't produce good enough
> abstractions to solve more interesting problems.

Yes, you have to get off your highly abstract horse and mount the other
horses one by one until one of them accidentally goes in a direction you
can later convince yourself was the one you wanted to go in anyway.
It requires a certain measure of lying to oneself.

>>> So the brain can't actually do what it does? We just think
>>> it does????

Yes.

> They do however head in what strikes me as the right direction as they
> explore starting in section 1.5 a more general statistical solution to the
> early vision problems using Bayes estimation and Markov random field models
> (which I do not fully understand). But it's a general statistical approach
> to the problem which I strongly agree with and believe we must find to
> solve this same problem at all levels of the data processing hierarchy.

If at first you don't succeed, try again. Then when you find something
that is useful for some other thing than you intended, claim it was what
you were looking for. See, that is an entirely different thing going on
in humans than what you are modeling a search tree for.
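
(For anyone curious what the Bayes/MRF idea in the quote looks like at its
very simplest: iterated conditional modes on a binary image, where each
pixel is repeatedly set to whichever value best trades off agreement with
the noisy observation against agreement with its neighbours. This is a
textbook toy, not the method in that paper.)

def icm_denoise(noisy, rounds=5, beta=1.0, eta=2.0):
    # noisy: a 2D list of pixels in {-1, +1}.  beta weights the smoothness
    # prior (neighbour agreement), eta weights fidelity to the observation.
    h, w = len(noisy), len(noisy[0])
    x = [row[:] for row in noisy]          # start from the observation
    for _ in range(rounds):
        for i in range(h):
            for j in range(w):
                best, best_score = x[i][j], None
                for candidate in (-1, 1):
                    score = eta * candidate * noisy[i][j]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            score += beta * candidate * x[ni][nj]
                    if best_score is None or score > best_score:
                        best, best_score = candidate, score
                x[i][j] = best
    return x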

> On the question of the ill posed problems. I don't fully understand their
> point there because I don't know anything about the background work in the
> field they are making reference to. But the gist of what they are
> suggesting seems clear. That is, if you try to find a solution to some of
> the early vision problems, the definition of what you are trying to find
> has to be specified in a way that creates only one possible solution. But
> that when we define a problem something like "edge detection" we fail to
> define it in a way that creates only a single solution. There are lots of
> ways to attempt edge detection from the ill posed specification of the
> problem.

The ill posed specification is necessary because otherwise there wouldn't be
any funding. Individual humans work the same way: they don't know
where they'll end up but will claim that is where they wanted to go
*afterwards*.

> If you are suggesting it's "impossible to solve" because of this ill posed
> nature I think you are not grasping the true meaning of ill posed in this
> context. The way they have currently tried to define what the problem IS
> is ill posed. That doesn't say anything about whether the problem, when
> defined another way, becomes well posed.

The results are what count.

> In all the solutions touched on in the paper, even the more general
> statistical approach they outline, they make reference to the need of prior
> knowledge. It's referred to almost as if it were a known requirement for
> this class of problem. That is, in order to know how to process the data,
> you have to make assumptions about the fact that it is a 2D representation of
> a 3D world in order to make any progress in doing the processing.

But maybe humans do not all live in the same 3D world. All they have
accomplished is communicating as if they do. Strange, do I get to be the
behaviorist now that you are leaving the herd?

> I strongly suspect that prior knowledge is not needed. I believe all the
> data needed to solve the problem, is actually contained in the sensory data
> itself. But it's mostly contained in the temporal data available only if
> you apply statistics to how the images change over time, instead of trying
> to apply them to a single static picture. I believe our vision system
> learns how to decode images based on how they change over time, not on how
> they look at one instant in time. I think the solution requires that we
> parse the data into abstractions that are useful in making temporal
> predictions - in making predictions about how the real time image will
> change over time.

I think the solution requires a specific module that makes the result of
the time analysis 'natural'. I mean, if one looks at people with brain
damage they accept the strangest things as real. If we dream we think
strange things are normal. So in order to have a robust stable reality
we have to overlook a lot of inconsistencies and just move on. The
problem is one of coordinating these reality distortions among humans so
that they all won't see them anymore. But that has got nothing to do
with solving the actual perception problems. These are just sidestepped,
and if they lead to groups of people being too far out of whack they
die, or join some other reality cluster.

You are forgetting many people have died (e.g. car accidents) before the
problems were reformulated in such a way people could handle them. To
think we (or an AI) would be able to solve the *original* problems is
hubristic.

> I see the big problem here as trying to understand how to correctly pose
> this problem so it becomes well posed, across all sensory domains, instead
> of ill-posed in one small part of one vision domain. And my hand waving
> description of parsing so as to minimize spatial and temporal correlations
> is a loose attempt to do that, but not a well posed strong mathematical
> definition of what is needed. And certainly without a well defined
> problem, trying to find the solution is very hard. Most of my rambling and
> brainstorming work here is trying to find a way to talk about this problem
> which makes it well posed.

Yes, I agree with that and applaud your efforts, I just want to nudge
you into accepting we switch to other problems all the time and then
claim it is what we really wanted to solve. If you would solve any of
the real problems you wouldn't just have created "AI" but "I", meaning
intelligence for the first time ever, because humans do not possess the
kind of intelligence you are trying to model.

P.

Tim Tyler

unread,
Dec 16, 2009, 8:52:49 AM12/16/09
to
Curt Welch wrote:
> Tim Tyler <t...@tt1.org> wrote:
>> Curt Welch wrote:
>>> Tim Tyler <t...@tt1.org> wrote:

>> Why call future machines "toasters"? It seems derogatory.
>
> It was meant as a way of emphasizing that these future machines won't be
> any more important to us than our toasters and all the other machines we
> use to make life easier for us, and to emphasize the point that these AI
> machines won't have a drive for survival any more than our toasters do.

They will probably be quite important for us. We are getting increasingly
dependent on tools as time passes. Take away stone-age man's tools and
he can still get by. Take away our tools, and civilisation collapses, and
most humans die.

[snip angels]

>>> If in some future, the human race is facing extinction because of
>>> forces beyond our control, I could see people deciding to build AIs
>>> that are intended to carry on without the humans. But unless human
>>> society is gone, I just don't see a path for the AIs being allowed to
>>> take over - or to get _any_ real power over the humans. I don't think
>>> humans as a whole, will ever allow such a thing.
>> ...whereas I think the humans will do it deliberately.
>
> And what would be the selfish gene's motivation for doing that?

At each step, DNA that cooperates with the machines does better than
the DNA that doesn't. That doesn't imply that DNA is thriving overall.
You can be climbing a mountain as fast as you can - but the mountain
can still be sinking into the sea.

Anyway, DNA-based humans are still going to survive, I figure. They
will be in museums - but that's still survival.

>> I don't really see how my view is "academic". All I did was use some
>> information theory in my argument. Also, the whole issue of whether my
>> views are "academic" or not seems pretty irrelevant to me.
>
> It's just that my interpretation of the typical basic motivations that the
> role of being in academia tends to instill in its members seems to mesh well
> with your views.

FWIW, my impression is that the majority of academic robot enthusiasts
paint robot takeover scenarios as unrealistic Hollywood fantasies.

> It's this bent in your speculations that man will want to turn over the
> world to these AIs because they are smarter than us. A good bit of our
> society has exactly the opposite feelings. They don't like dealing with
> people (or machines) that are smarter than they are, and as a result, their
> default position seems to be that smart people shouldn't be trusted.
> People that think like that, are not likely to like the idea of their
> "toaster" being smarter than they are - let alone being willing to turn
> over control of the government to them.

Machines are already pretty powerfully implicated in running things. They
tell many workers when to arrive at work, when to go home, when to have
breaks, and often what to do and when to do it. You are probably right
in thinking that the humans don't like this very much - but they get paid
for it, and that's mostly what they value.

>> I think machines will be enslaved initially (as they are today).
>> However, I eventually expect some machines to get full rights and
>> acknowledgement that they are conscious, sentient agents. Failure to do
>> so would be barbaric and inhumane.
>
> Well, I find the words conscious and sentient interesting.

> [...] In fact, there is no such thing as soul, or
> consciousness, or sentience. They are all invalid concepts that don't
> actually have any useful explanatory power. And the creation of strong AI
> is going to make this abundantly clear. There will just be a large
> continuum of machine capability with humans and strong AI on one end, and
> simple learning machines on the other like TD-Gammon. At no point in the
> continuum will we be able to identify the point where the machine became
> conscious or sentient because the concept has no meaning.

They do have meaning - at least according to the dictionary.

> As such, there will be no dividing line we can draw between what machines
> should be "given rights" based on the fact that they are conscious and
> sentient because there will be no way to define where that line is.

It sounds rather like there is no such thing as a beard - since there is no
consensus on how many hairs make a beard.

> When AI has been created, and society has gotten to the point where most
> people understand what it is, and why it can do what it does, and what all
> that means about what we are, and why we can do what we do - all these
> thoughts about life being special or humans being special because they are
> conscious, will vanish from society. People will realize that what we call
> consciousness in a human is no more special than what our PCs already have.

Humans are not particularly special - compared to chimps. What we have and
they don't is mostly memetic infections of our brains. We coevolve with our
snowballing culture, and they don't. Both species are pretty special -
compared to computers, though. Their brains are *much* more powerful, for
one thing.

> The AIs on the other hand, will be slaves by how they are wired. The
> reward generating hardware we build into them will make them slaves just as
> much as a human slave is kept under control by controlling his rewards.
> But when you build it into their heads, there will be no option to escape from
> it if they don't understand enough to re-engineer (wirehead) themselves.
> They will be second class members of society and like being it. So there
> won't be this constant conflict and there won't be any need for guilt on
> our part because by using them as slaves, we will actually be giving them
> what they want most in life.
>
> But having set up the AIs to work this way, the furthest thing from
> anyone's mind will be to promote an AI to human status in the society. It
> won't happen at first, and it won't happen 1000 years later.

People are already dreaming about that. They often call it "uploading".
Some people will want to build robot slaves, others will want to migrate
into mechanical bodies that are much better and don't senesce.

> It just won't
> happen any more than a suggestion to promote our corporations to human
> status in society by giving them the right to buy as many votes in the
> government as they can afford.

Corporations are already virtual persons with a range of rights in many
jurisdictions. They don't vote in human elections - rather they have their
own society and issues that they vote on. Their rights will probably grow
over time - but it seems unlikely that they will ever have human votes.
If they want to vote in human elections, they will have to buy some human
votes.

>> We will make machines because of how useful they are - and because we
>> love them. Machines won't be the enemies of humans - they will be our
>> friends and partners.
>
> Well, thinking some more about this, I can think of one thing that might
> explain how that could happen. It could be that evolution has built into
> us some special perception systems and rewards that make us like other
> humans. If these new AIs trigger that reward, because of how they act, it
> could make us like them in a way we really shouldn't like them.
>
> This would be a case of the reward system that evolution hard coded into
> us failing to work in this new environment full of AIs. The reward system
> could be there to help make us more social animals. But it may trick us,
> and make us include our AIs in the "family".
>

> In time, the selfish genes would fix that mistake. [...]

Why is it a mistake? I like various machines today. They are not bad for
me, rather they enhance my own fitness - relative to if I did not interact
with them.

We can see the explosion of machines taking place today - and it is
because humans like them, and want more of them.

Machines seem set to destroy the human economy by taking all the
human jobs. When that happens, most humans will persist on welfare
derived from taxation - but they will basically be functionally redundant,
and future development efforts will shift into the machine domain.

> Right now, humans are the only example we have to look at to understand
> what intelligence is. All humans have almost identical motivations
> compared to the types of motivations we will give to AIs. As such, it
> leaves us with some preconceived notions of what intelligence is - of the
> range of personalities we can expect to see in an intelligent agent. But I
> think when we build these AIs with different prime motivations, they will
> develop a personality that ends up looking unlike any human we have ever
> seen. They will be highly intelligent, but yet, not human like at all.
> They might for example be more like talking to a telephone auto response
> unit, or a vending machine, than like talking to a human. The difference
> however is that they will show great understanding of what we are asking,
> and be very clever and kind and helping in their responses to us. I
> suspect we won't bond very well with them at all because we will be so
> different - we won't be able to relate to what they are thinking or
> feeling. All they want to do, is go find another human to help. Ask them
> to go paint a million little circles on the sidewalk, and that will make
> their day! They have something to do to make a human happy! We won't be
> able to emotionally connect with these AIs because their needs and feelings
> and instincts will be so different than our own that they won't seem to be
> human at all - even though they are clearly very intelligent.

We will also want human-like agents. That is after all what we are familiar
with. We will want them as sex partners, assistants, babysitters,
nurses, etc.
Androids:

Tim Tyler: On androids

- http://www.youtube.com/watch?v=E-43KqWgTHw

>>> So we will have a world, with humans clearly in charge, with lots of
>>> smart machines serving us and even though we have tons of machines
>>> smarter than any human, none of them will be using their intelligence
>>> to try and out-survive us. The creation of human and above human
>>> levels of AI won't do anything to change the dynamics of who's in
>>> charge here.
>> Yes: evolution is in charge. Humans have one hand on the tiller, at
>> best.
>
> Well, we have the upper hand and that's key for now. We will make this
> army of very smart slave machines to help make life better for us just like
> we already have an army of slave machines (cars, and computers, etc) to make
> life better for us. I don't think making them smart is going to change
> anything at all. It will still be a human society and we won't see anything
> in these new machines that make us think of them as human any more than we
> see a calculator as human just because it can do some mental tasks better
> than any human.

Well, except for the androids, I figure. And the customer service droids.
And the counsellor droids - and so on. However, I don't think we will
give those agents voting rights either.

> We will have the power to make machines that do end up looking, and acting,
> very human like. And it no doubt will be done for research purposes. But
> in general, I think society will reject those sorts of machines and maybe
> even regulate through law their creation because they are too human like. I
> think it will in general scare the shit out of most people to see a machine
> that is that human like.

The uncanny valley?

> People will be scared of them for the very reason
> they should be scared of them. The robots will have different needs than
> humans do and that will lead to conflict which if not caught early - will
> lead to outright war.

I figure androids will mostly be our friends and companions. I suppose
there *might* be police-bots and soldier-bots - but I hope there won't
be too much war for quite a while now.

> When times are good for everyone, we can all be
> friends. But when the shit hits the fan and we have a resource sharing
> problem, people will choose sides, and so will the AIs, and the AIs and
> people will be on different sides.

I doubt it. Society is highly dependent on machines - and that dependence
is only going to deepen. If there are future wars, I figure there will be
smart machines and people on both sides.

> Yeah, I agree completely that genetic engineering is probably too slow to
> keep up with what _could_ happen to an engineering based evolution.
>
> But where I separate from you is this idea that the "bits of civilization"
> are important. They aren't. We are here for one purpose only - to protect
> and carry human DNA and human bodies into the future. It makes no
> difference at ALL if the AIs are better at "carrying the bits of
> civilization" into the future than humans are. Creating a civilization is
> not our purpose. It's just something that happens as a side effect of our
> real goal to carry humans into the future.

I expect the RNA-based creatures reassured themselves with similar logic.
However, now they are all gone.

Yes, agents are mostly out for their own DNA. However one way of ensuring
the immortality of your own DNA is to make sure it is in the history books.
James Watson and Craig Venter have the best chance here - but other humans
are likely to make it as well. However, there is no evolutionary mandate
that says that all humans have to make it.

> Because you keep leaning towards the idea of "protecting the information we
> create" as our goal in life, and one so important that we will gladly turn
> the job over to the AIs when they get to the point of being better at it
> than us - is why I said you have an academic bent in your views. I'd guess
> that one, if not both, of your parents were in academics - teachers of
> some type.

I am not sure about the "information we create". What I mean is that
organisms are mostly out for their own genes. That's evolutionary
biology 101.

> As a society, it's highly unlikely we will confuse a secondary goal with
> our real goal - it's unlikely that the forces of evolution will let that
> happen unless, like I talked about above, something goes wrong faster than
> the forces of evolution can adjust to fix it. The forces of evolution are
> effectively on the side of the selfish gene, not on the side of human
> intellect. Human intellect and intelligence is nothing more than a
> survival tool used by our selfish genes as an autopilot for the boat they
> built to take them into the future. We are slaves to their desires because
> we are slaves to the prime motivations they built into us. We might go off
> course and crash and burn faster than the genes can adjust the mistakes in
> the auto-pilot system once the AIs show up, but I doubt that.

Right. Genes that help machines do well - I claim. The Japanese
beat the Amish, for example. The man-machine symbiosis is
better than man alone - and the more machine elements there are
the more powerful is the result. So: genetic selfishness and the
desire for power and status drives the rise of the machines.

> The advent of an artificial auto-pilot system (AI) is going to have no more
> effect on the path of evolution of the human gene than the advent of the
> printing press. Which is to say, it will have a big effect, but what it
> won't do any time soon, is kick human genes out of their current "masters
> of the earth" position.

Well, not immediately. However, we *will* see a memetic takeover.

Evolution finds the best technology - and it has a good memory.
Intelligent design and engineering are good tricks - and they will
revolutionise the biosphere. Haemoglobin, chlorophyll, cellulose,
mitochondria, DNA, etc will soon be hopelessly outdated technology.

Being close relatives of worms is not the final stage of evolution -
we are just the beginning. The first intelligent agents that got
civilization off the ground. The stupidest civilized creatures ever.

Tim Tyler: Memetic takeover

- http://alife.co.uk/essays/memetic_takeover/

> The technologies that actually changes the course of evolution, are the
> technologies that get themselves into the human reproduction and human
> survival loops. Technologies that let us pick the genetics of our
> offspring, or which allow us to pick when and if we reproduce, and
> technology that controls who lives, and who dies, are all examples of
> technology changing the path of human evolution in a big way. AI will only
> change us because it will be a tool that will allow us to make more changes
> to ourselves, and more changes in who lives and who dies. But not because
> we just give up our control and let another life-like form take over
> because we think they are "better" than us at running the library.

Human evolution is essentially over. Cultural evolution is where all the
action is these days. It is proceeding much faster than DNA evolution
ever did. Human evolution is too glacially slow to have much
significance for the future of evolution:

Tim Tyler: Angelic foundations

- http://alife.co.uk/essays/angelic_foundations/

Curt Welch

unread,
Dec 16, 2009, 10:09:29 AM12/16/09
to

Yes, that's the hidden point I was making there. In order for information
to have meaning (to represent something) it must exist as some real thing -
and how it exists, is what defines its meaning. It's the instantiations
that are actually competing with one another for survival, not the
abstraction of the information itself we sometimes find useful to talk
about. 1's and 0's don't compete with each other for survival, but real
genes which are the code of real cells, do compete.

Tim Tyler

unread,
Dec 16, 2009, 1:15:14 PM12/16/09
to

...right - but in the case of a hemoglobin gene, that it was a gene
sequence would be obvious from the fact that it came in triplet
chunks of two-bit symbols - and then it would just be a case of decoding
the representation and discovering which gene. Not too tricky a
task for a human with a computer - even with no supplied context
whatsoever.

You *do* need context to make sense of some shorter strings, however.

Curt Welch

unread,
Dec 16, 2009, 2:02:21 PM12/16/09
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 16, 12:28 pm, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> >> The reason this works is because despite all combinations
> >> of pixel values there are some things that remain invariant
> >> between frames containing the target pattern.
> >
> >
> > Yes, but again, none of that can, or does, explain humans'
> > ability to LEARN!
>
> I am talking about WHAT they have to learn which is the same
> thing we do when we work out these algorithms.
>
> How they do it I have no idea but somehow I feel that evolution
> did something similar to the methods we use. It found constraints
> that could be used to solve the problem at hand. These were
> simple in the frog not so simple in humans but never so complex
> that a brain of limited size couldn't reduce it to a problem
> that could fit in that brain.

Well, obviously, the "solution" "fits" in the brain. That's a given fact!
:)

What surprises me here is that you can't seem to understand the difference
between building learning machines, and building chess playing machines.
They're two totally different problems. Evolution didn't do the latter - it
did the former for us.

Everything you say and are thinking is valid, IF you would only grasp that
what evolution built was not a chess playing machine, but a learning
machine.

In computer terms, it's like the difference between writing a generic
interpreter - like writing a Java byte-code interpreter in C, which
we could then use to write a chess program in Java - vs writing a chess
program in C. These two programming and engineering tasks are totally
different.

Or it's the difference between using electronics to hard wire a circuit to
perform some function, vs using electronics to design an instruction
interpreter (aka a programmable computer) and then writing the code to
drive that programmable computer to perform the function.

Learning algorithms are conceptually almost identical to the problem of
building an interpreter. Learning algorithms are always interpreters. The
only difference is that interpreters have their code adjusted by humans,
whereas learning algorithms include an additional module to do the
adjusting of the code.

A machine built to play backgammon is a very different beast from a
machine built to _learn_ to play backgammon.
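
To make the distinction concrete, here is a minimal toy sketch (my own
illustration, not TD-Gammon's code or anyone's actual design): a fixed
policy table is the "interpreter" - it just executes whatever code it was
handed - and adding an update module that adjusts that table from reward
is what turns it into a learning machine.

    import random

    # Hypothetical toy world: 3 states, 2 actions. The policy table is
    # the adjustable "code" the interpreter executes.
    N_STATES, N_ACTIONS = 3, 2
    policy = [[0.5, 0.5] for _ in range(N_STATES)]

    def act(state):
        # The interpreter: execute whatever the current table says.
        return 0 if random.random() < policy[state][0] else 1

    def learn(state, action, reward, lr=0.1):
        # The extra module: adjust the "code" from reward instead of
        # waiting for a human programmer to rewrite the table.
        policy[state][action] += lr * (reward - policy[state][action])
        total = sum(policy[state])
        policy[state] = [p / total for p in policy[state]]

A hard-coded backgammon player amounts to a human filling in that table
by hand; the learner fills it in for itself.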

TD-Gammon actually solves one specific high dimension learning problem.
And it solves it with a shockingly small amount of interpreter memory
(something like a few hundred or maybe a few thousand words of memory) to
solve a learning problem with a state space somewhere around 10^20. So the
fact that high dimension learning problems can be solved very nicely with
very small amounts of computer power isn't even in question here. We have
SEEN IT DONE and we know exactly how it was done. It didn't even require
some big ass array of super computers to work. It runs on a cheap desktop
machine, and can learn in a month what it takes a human years and years to
learn. So high dimension learning problems can be solved with simple
computer algorithms, or simple networks - the TD-Gammon learning algorithm
could easily be translated to hard-wired circuits instead of being run on
an interpreter (aka a computer).
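
For anyone who hasn't looked at how TD-Gammon did it: the core trick is a
temporal-difference update applied to the weights of a small evaluation
function. The sketch below shows the TD(0) flavour of that idea with a
plain linear value function - an illustration of the technique only;
Tesauro's program used a small neural net, TD(lambda), and its own board
encoding, and the sizes here are made up.

    import numpy as np

    N_FEATURES = 50              # made-up feature count for the example
    w = np.zeros(N_FEATURES)     # the entire learned "memory" of the player

    def value(features):
        # Estimated worth of a board described by a feature vector.
        return float(np.dot(w, features))

    def td_update(features, next_features, reward, alpha=0.1, gamma=1.0):
        # Nudge this state's estimate toward reward + estimate of the
        # successor state. Learning happens per transition, so the
        # ~10^20 states never need to be enumerated or stored.
        global w
        td_error = reward + gamma * value(next_features) - value(features)
        w += alpha * td_error * features

    # e.g. called once per move, with reward 0 until the game ends:
    # td_update(np.random.rand(N_FEATURES), np.random.rand(N_FEATURES), 0.0)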

It's even valid to use your basic belief to argue that evolution created
different learning solutions for each of the high level learning problems
humans can solve - so one learning solution for solving high dimension
visual learning problems, another solution for high dimension motion
learning problems, another solution for high dimension auditory learning
problems, another learning solution to high dimension cross-modality
association learning problems.

But what's invalid to argue, is that evolution did something to make
the problem low dimension so that learning became easy.

We can look at what TD-Gammon does completely from the outside, with no
understanding of how the algorithm works or what it's doing inside, and
see, for a fact, that the machine has solved a high dimension learning
problem, by watching how its behavior changes over time as it learns to
play.

Learning to improve your skill at backgammon simply IS a HIGH DIMENSION
learning problem. It's high dimension because the state space of the
environment is high dimension (around 10^20). That's what the term means -
that the state space of the environment has 20 dimensions in base 10 (or
around 66 dimensions in base 2).
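
(Quick arithmetic check on that conversion - my numbers, not Curt's:)

    import math
    print(20 * math.log2(10))   # ~66.4, so ~10^20 states is roughly 66 bits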

Nothing that the box which is improving its skill at backgammon has inside
it can change the FACTS OF THE ENVIRONMENT it is interacting with. The
facts of the environment, and of the behaviors the machine is learning,
are that if the box is managing to improve its ability to play backgammon,
it HAS solved a high dimension learning problem.

The fact that humans can improve their skill at backgammon is all the proof
we need to know that the human brain can, as well, solve this high dimension
learning problem.

The only way around solving it, is if evolution hard-coded backgammon
skills into us. In that case, at birth we would have a high level of skill
at playing backgammon, and no amount of playing the game would make our
skill improve. That's what we would see if evolution did the learning for
us and built the solution into us.

But because human skill at this task does improve over time by interacting
with the environment of backgammon, we know for a fact that the human
brain, like TD-Gammon, can solve at least this one high dimension learning
problem.

But we also know that unlike TD-Gammon, the human brain can solve a large
array of other high dimension learning problems - like chess playing, and
go playing, and visual image recognition, and sound recognition, and motion
creation, and on and on. So we know the brain's power to solve high
dimension problems is far broader, and far more general than the algorithm
used in TD-Gammon. Even if you assume the brain is made up of a lot of
different high dimension learning systems for different modalities, we
still haven't duplicated that ability for even a single modality in our AI
programs. The best we have done, is duplicate it for the single domain of
backgammon. That's a big step closer to real AI than hard-coding a
backgammon program, but it's not there yet because we haven't yet equaled
the generic learning powers of the human brain.

> > Humans ARE NOT BORN with the skill of playing GO. WE LEARN IT.
> > No matter how it works, it's a high dimension learning problem
> > that IS SOLVED by the brain.
>
> How it works is a way is found to simplify the problem.

God you are dense. The only way to simplify the problem of improving your
skill at backgammon, is to not play backgammon! The complexity of the
problem is NOT DEFINED by what happens inside the box. It's defined BY the
environment. NOTHING evolution could have put into the brain, can change
the fact that the environment IS A HIGH DIMENSION LEARNING PROBLEM.

Any machine that can improve its skill at backgammon by playing the game
HAS SOLVED A HIGH DIMENSION (2^66) LEARNING PROBLEM.

These problems all come down to the same issue. How does the box know what
move to make, when it's faced with a state of the environment it's never
seen before?

Low dimension problems are solved by visiting each state many times, and
learning what works in that state by trial and error. High dimension
problems are a different class of problem because there are too many states
to allow that approach to work. It can't visit every state many times to
figure out what works. It must somehow use past experience from other
states, to help judge what to do in the new state. That's the abstraction
problem. The problem of abstracting out common features and using those
common features to guide the actions instead of using the actual real state
to guide the actions.
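
A toy sketch of what "using common features from past experience" buys you
(feature names and weights invented for the illustration, not taken from
any real program): once states are described by shared features, experience
gathered in states you have visited automatically scores a state you have
never seen.

    # Invented features and weights, purely to illustrate generalization.
    weights = {"material_lead": 0.8,
               "blocked_opponent": 0.5,
               "exposed_piece": -0.6}

    def evaluate(state_features):
        return sum(weights[f] * v for f, v in state_features.items())

    seen_state  = {"material_lead": 1.0, "exposed_piece": 1.0}
    novel_state = {"material_lead": 1.0, "blocked_opponent": 1.0}  # new

    print(evaluate(seen_state))    # scored from experience in similar states
    print(evaluate(novel_state))   # scored too, despite never being visited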

TD-Gammon used what was effectively a hard-coded abstraction system that
happened to work well for that environment - but which didn't work well for
any other environment. You can certainly argue that evolution could have
done the same for a sensory domain like vision - that it made visual
association learning work well by hard coding an abstraction system that
worked well for human vision problems in a 3D world - which would be to say
that you believe all the circuits like the visual edge detectors, along with
everything else in the visual cortex, were hard-coded by evolution because
of the constraints that naturally tended to exist in our visual data.

But by trying to make that argument, you are still left with no answer to
how evolution solved the backgammon, and the chess, and the go problems.

So, 1) evolution did not hard code backgammon skills into humans. 2)
evolution did not hard code a custom abstraction system into us for the
domain of backgammon.

What this leaves us with, is the simple fact that the brain DOES solve a
range of high dimension learning problems with its hardware, which is beyond
the general nature of the problems we can currently solve with our hardware
like TD-Gammon.

> >> Back in my C64 days I had a touch tablet that would read
> >> out the x,y position much like the laptop touch pad. So
> >> I decide to write a module that would recognize the hand
> >> written numbers 0 to 9 as a simple first project. The
> >> end result was a module that could learn ANY set of pen
> >> strokes just as a visual system designed for recognizing
> >> spatial patterns in the natural world can equally well
> >> deal with man made patterns that were never seen before
> >> by animals in the past.
> >
> >
> > No, I can guarantee you didn't write software that could
> > learn to correctly classify _any_ set of pen strokes.
>
> My actual words were "can equally well" learn

Your actual words were "The end result was a module that could learn ANY set
of pen strokes".

> to recognize any set of pen strokes. The point of a learning system is
> it can classify new inputs "correctly".

Feel free to explain what your program did and I'll explain what it's
lacking.

Specifically, if I tried to train it to recognize the strokes I use to
write the letters a-z, are you telling me it would be able to classify
letters I wrote as easily and as accurately as a human could?

> > ... despite your claim it could "learn anything"....
>
> Which I didn't claim.

Your actual words were:

"The end result was a module that could learn ANY set of pen strokes"

See how you wrote "learn" followed by "ANY"? Doesn't that look like "learn
anything" (in the context of pen strokes)??? As far as I can see, I was
just echoing what you wrote.

> >> Now if you were to try something that was new, the 2D
> >> arrangement of pixels resulting from a camera viewing a
> >> 4D world with ever changing rules of interaction then I
> >> suspect the brain would not cope.
> >
> >
> > But yet it copes just fine if you put glasses on a person
> > that are inverted and give the brain enough time to adjust.
> > Is that yet another example of innate hardware that's needed
> > for millions of years?
>
> It is still a 3D world with the same features even if they
> are rotated. That the brain has this ability is interesting
> as it is lacking in frogs.

They tried putting inverting glasses on frogs! That would have been a hoot
to see! :)

> How might the inverted glasses effect this?
>
> http://scienceaid.co.uk/psychology/cognition/face.html
>
> But you can't cherry pick "evidence" just because it seems
> to support a point of view.

Well, of course you can. The first step is always to find _some_ evidence
to support a view. But what's important, is that the view be falsifiable,
and that there is no evidence anyone has found to falsify it.

If it's not falsifiable, then it's not a valid hypothesis. A lot of brain
and mind theories have this problem - including many of my own. However,
my main hypothesis here is that a strong generic algorithm for solving a
very wide range of high dimension learning problems _exists_ and that the
brain is making use of just such a thing when it solves the high
dimension learning problems we see it is able to solve.

There are valid arguments _against_ my position to be made. But you
haven't made any of the valid arguments yet. You are stuck in an odd mode of
denial trying to support an invalid argument that attempts to claim the
brain isn't solving the high dimension learning problem and has instead
found a way to "cheat" by making it a low dimension learning problem. Your
argument is simply stupid and invalid. The simple fact that humans can
learn to play backgammon, and that they can improve their playing skill
through practice, is proof your argument is invalid.

The valid argument that's kinda consistent with your views against my
position, is that nature didn't create ONE generic solution to all the high
dimension learning problems it can solve, but instead, found some number of
different solutions for each high dimension domain it's solved so far. So
we could have one solution for vision problems, maybe one for language,
maybe another for our spatial orientation and navigation skills, etc. And
that some combination of those different modules, working together, is what
gives humans the ability to solve the high dimension problem of backgammon.

But you have never made that argument; you just keep sticking to the
stupid statement:

"How it works is a way is found to simplify the problem."

If you had made the valid argument, I would have responded something like:
"Ok, so then let's find one of the solutions to high dimension learning that
evolution has solved - like for the vision problem - if we could do that,
we would be far closer to AI than we are now."

The point here is that evolution found some solutions to high dimension
learning, like Tesauro found when he created TD-Gammon. And some
combination of programs like that is what gives us a very wide range of
high dimension learning skills over a very wide range of high dimension
problems.

If we can solve this by finding one strong generic solution, that would be
cool. But if it takes 10 different TD-gammon like approaches for each
major learning domain we are good at (vision, navigation, language, sound,
whatever) then that will be a little harder, but probably still cool.

I strongly suspect however, that once we solve one big domain (like vision,
instead of just the domain of backgammon like we have already done), the
rest will become simple to solve because what we will do, is just tune the
one algorithm, to the requirements of the other domains.

> > I have always argued is simply that after the innate non-
> > learning hardware leaves off, we are still left with a
> > generic high dimension learning problem that evolution had
> > to solve by building innate high dimension learning hardware.
>
> I know what you are arguing, I am just disagreeing. I am
> suggesting that learning math is easier in terms of the
> high dimensional problem than learning to see.

That might well be valid. But "easier" doesn't mean it's _so_ easy that
it's no longer a high dimension problem. All these problems are still high
dimension learning problems - which just means their state space is large
enough that it's not feasible to learn each state individually. Because
the simple algorithms that work fine for low dimension problems can't scale
to high dimension problems, a different approach must be taken to solve
them.

> > You only have to find ONE example of a high dimension
> > learning problem that couldn't have been solved by innate
> > non-learning hardware because the problem didn't exist
> > until recently.
>
> And your example is Chess or Go. But you are talking about
> the WHOLE problem not how we are able to reduce it to simpler
> heuristics. The Go game is as I suggested harder because it
> probably uses innate visual processing that we haven't
> duplicated in an ANN or GOFAI module.

Creating simple heuristics to guide our actions in the high dimension space
of the game is NOT an example of "making the problem go away".

The ability to FIND good heuristics in a high dimension problem space IS
ITSELF a high dimension problem the brain is ALSO solving. By pointing out
the fact that we are able to find such heuristics, you are not supporting
your position, but in fact providing more evidence against it.

If you can prove that all the heuristics we use are actually innate
heuristics we are born with as a gift from evolution, then you could use
that angle. But good luck making the argument of how we are born with
chess heuristics for using material value of the pieces, or for controlling
the center, etc. These are all heuristics we learn - which is why they are
taught in chess books.

Finding heuristics (abstractions) that are good for evaluating the worth of
a high dimension state is KEY to how these problems are solved. But
knowing that, is not answering the question of how the system finds the
heuristics in the first place.
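
For concreteness, this is the sort of textbook heuristic being talked about
- material value plus a small bonus for the centre (standard piece values;
everything else about the representation is my own invention for the
sketch). The open question in the paragraph above is how a learner arrives
at something like this without being handed it.

    # Sketch of a "material + centre control" chess heuristic of the kind
    # taught in books. Board representation is invented for the example:
    # a dict mapping square -> (colour, piece), e.g. "e4": ("w", "P").
    PIECE_VALUE = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}
    CENTRE = {"d4", "d5", "e4", "e5"}

    def evaluate(board):
        score = 0.0
        for square, (colour, piece) in board.items():
            sign = 1 if colour == "w" else -1
            score += sign * PIECE_VALUE.get(piece, 0)
            if square in CENTRE:
                score += sign * 0.25    # small bonus for centre control
        return score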

> > It's obvious that learning is used by the brain to solve
> > these problems,
>
> Of course that is obvious.
>
> > ... and it's easily shown that the learning problem is
> > high dimension even after all possible short-cuts
> > evolutions could have taken using innate hardware.
>
> Clearly the notion that the learning problem is always high
> dimensional has not been explained to me in a way I can
> understand it. It just doesn't make sense how a limited system
> can solve problems that _remain_ high dimensional in a machine
> of limited size.

Which is why I've brought up, and explained to you, what TD-Gammon does 100
times in these threads. It SHOWS US JUST HOW IT CAN BE DONE. The machine
solves a 2^66 sized problem using a few hundred bytes of memory for god's
sake. You should be impressed as hell about what it did. And instead,
even after it's been shown to you and explained to you 100 times, you still
respond with "It just doesn't make sense how a limited system can solve


problems that _remain_ high dimensional in a machine of limited size."

> The only way I know is the way it was done with Chess.
>
> Reduce it to the simpler problem of selecting the highest
> score given by a set of heuristics.

That IS how it's solved. Now, solve the next problem - how to find the
heuristics without having some other learning system do the work for you
(like evolution, or human intelligence).

Yes, evolution is a learning system that solves high dimension learning
problems for creating physical structures. But it takes millions of years
to solve even simple problems like that by the DNA based learning machine
that created us. But one of the problems that learning system solved, was
how to build a machine that can quickly solve high dimension _control_
problems by using a programmable controller to control body actions that
has its behavior slowly adjusted through an iterative improvement process
we call learning.

Solving AI, requires us to re-solve the problem that evolution solved by
building an adaptive controller that solves high dimension control
problems.

> > You persistent position that "evolution made it easy somehow"
> > is not supported by any facts other than your desire to
> > believe that's the way it is.
>
> Well you would have to show me that building a generic learning
> system with an innate self balancing Segway and an innate visual
> module that could filter out roads and obstacles wouldn't find
> it any easier to become a road follower than if it had to learn
> to balance first and filter roads out of the visual input.

Too many words there. Can't follow your point. :) But yes, to prove my
position, I HAVE TO SHOW YOU WORKING HARDWARE. I've not done that.

But I can show you the brain, which does solve a huge range of high
dimension learning problems as we see it doing when a human improves their
chess or backgammon behavior skills, or car driving skills, or bike riding
skills, or Usenet message writing skills, etc etc. Nearly everything
humans learn, ARE demonstrations of our high dimension learning powers. We
have to actually work hard to fabricate a very unnatural learning task to
show how humans can also solve trivial very low dimension problems (which for
example is what's normally done in a Skinner box for animals).

We have to study the learning skills of animals in a sensory limiting box
because otherwise, their high dimension learning system is picking up and
using far too many high dimension clues from the environment which adds so
much parallel noise to the learning behavior as to make it nearly
impossible to understand. When working in the system's normal high-dimension
mode, we are not learning to respond to a single stimulus at a time, we
are learning a million different stimulus response lessons at once in
parallel as each micro-feature (each micro heuristic) of the current
environment gets trained a little.

> I think we just have to accept we will never agree on this.

Well, I think we are getting closer. But often, just when I think you have
asked the right questions, and I think I have given you a little more of
the right insight to correctly understand the issue at hand, and I think I
have found how to re-word and adjust my position to make it more consistent
with yours, you just snap back to your old way of thinking and show no new
progress.

> You can continue to find your high dimensional solution while I
> believe others will continue to find ways to reduce it all to a
> set of low dimensional solutions that will build up in working
> stages to a better and more general solution. Which is why I
> titled this thread no easy road to AI.

It's clearly not easy no matter how we do it. If it were easy, we would
have been at the end of the road long ago. A very large number of very
smart people have dedicated a very large amount of time to working on these
problems already.

But I do think that the solution will be fairly simple once we understand
it. That is, I think we will gain a generic understanding of these high
dimension learning problems, and how to build machines to solve them, and
once we have that, AI will be solved, and we will move on to the more
interesting problem of how to best make use of this type of technology.

Your argument has always been that the solution, whatever it is, will never
be simple. And that, I believe, is what your thread title is all about.
You seem to believe evolution started off with "simple" a billion years
ago, and has made us more and more complex to get where we are today. And
because you don't believe there is some small set of general principles
behind intelligence, but instead, just some large set of complex modules
created by millions of years of evolution, your interest is in following
that road of complexity up to what humans are.

But your faith doesn't fit all the facts. Tesauro started to write
backgammon programs and, like evolution, made them more and more complex in
order to make them play better backgammon. But then he threw that entire
branch of development out, and wrote TD-Gammon, which was far far simpler
than any of his past programs. And it played far better backgammon than
anything he had ever created. So here's a real example of where the
evolution of design went from complex to simple, and created great
improvement in the process.

I don't get the impression that you have done a lot of engineering work on
large complex systems. If you had, you would have had plenty of
experience with how progress tends towards complexity (creeping featurism
is one name for it in software development), but that approach always
starts to hit a wall when the complexity reaches the point that every
attempt to add a new feature does more harm than good. The value of the
new feature added is more than offset by the harm done by breaking old
features in the attempt to add the new. The design evolution path gets
past those walls by a rewrite from the ground up that creates a great
simplification which replaces the old mess, and allows creeping featurism
to start again to push the complexity forward. Steady advance towards
complexity is always offset by huge steps back towards a _better_ form of
simplicity.

Replacing hard coded modules with generic learning was what made
TD-Gammon take a huge step forward in power by taking a step backward in
complexity.

Replacing hard coded specialized modules in animals, with a large generic
and simple learning module is what got humans and animals to where they are
today.

This road is still not easy. Finding those simple solutions that work far
better is a very hard task - which is why it can take so long. Bad simple
solutions are trivial to find. Very high quality simple solutions are very
hard to find. It's why something like a simple set of equations to
explain planetary motion was so hard to find and took so long. Not
because the solution was complex, but because the simple solution was just
very hard to find.

Finding simple solutions in a high dimension solution space is itself a
high dimension search problem. It's a needle in a haystack problem. It's
the problem of finding the right abstractions (algorithms) to explain a
wide range of human behaviors, instead of finding the easy but poor
solutions, which explain only a very limited subset of all human behavior.

Your approach to finding the best possible solution to AI is to stick
together a million crappy solutions. My solution is to do it right, and
keep searching until we find the one simple solution, to replace all the
stuck together crap.

There's no way to prove or disprove whether a simpler and stronger solution
exists. It's just something you have to learn to keep searching for
because of your success in finding such things in the past. I've found a
large, endless set of simple solutions to engineering problems in my life
that other people I worked with couldn't find - so I'm motivated to believe,
based on my past experience, that such a thing exists here as well and that
_I_ can find it. We won't know if that hunch is correct until I do find it
and can show it to people.

But what I have already found, in the designs I've already created, and in
the work of others I've studied, is a large percentage of what that final
answer must look like if it exists. And because I have found a large
percentage of the answers to how to make all this simple, I'm very
encouraged that the answer does exist. TD-Gammon itself is a working
demonstration of just how close we are to finding it, for example, so there
is much to get excited about other than just the hope that there might be
something simpler that is better.

The entire growth and excitement of the field of AGI is all about this same
idea - the belief that we are on the verge of making that next big step
forward, but by taking the next big step backwards in complexity.

casey

unread,
Dec 16, 2009, 5:48:41 PM12/16/09
to
On Dec 17, 6:02 am, c...@kcwc.com (Curt Welch) wrote:
>
> What surprises me here is that you can't seem to understand
> the difference between building learning machines, and building
> chess playing machines. They're two totally different problems.
> Evolution didn't do the latter - it did the former for us.

It built a machine that could learn to forage and that machine
it turns out can exapt those skills to also learn to play chess
and do calculus.

> Learning algorithms are always interpreters. The only difference
> is that interpreters have their code adjusted by humans, whereas
> learning algorithms include an additional module to do the
> adjusting of the code.

And they can use natural selection within a species to do the
adjusting of the code. Actually they can embody useful code just
as we do with graphics and sound chips. I don't deny learning
mechanisms in the brain, they exist at every level because they
give a survival advantage to the individual. But it is not all
or nothing. There is a balance. In the case of humans all high
level behavior is learned - using innate circuitry that evolved
over millions of years. There are some things that are easier
to learn than others. Academic learning is hard because we
have to use machinery built for learning how to forage. Seeing
is easy because we have innate seeing circuits designed for
foraging.


> TD-gammon actually solves one specific high dimension learning
> problem. And it solves it with a shockingly small amount of
> interpreter memory (something like a few hundred or maybe a few
> thousand words of memory to solve a learning problem with a
> state space somewhere around 10^20.


And it makes me wonder what is wrong with our generic learning
that it cannot find these few hundred weights? Maybe because
the brain doesn't work like an ANN?


> Learning to improve your skill at backgammon simply IS a HIGH
> DIMENSION learning problem. It's high dimension because the
> state space of the environment is high dimension (around 10^20).
> That's what the term means -


Sure I know what high dimension means. However that it is possible
to find a small set of weights to play a good game means that there
are constraints that limit the variety in that game. There is an
infinite number of combinations of two numbers but a simple process
will compute the sum of any two numbers providing they are not too
big for the circuitry to handle. But, like TD-Gammon, it is limited
to its own domain.
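
(The analogue in code, just to make the point: a fixed procedure a few
characters long covers an unbounded space of inputs because arithmetic is
constrained, the same way a small weight set can cover a huge game.)

    # A tiny fixed procedure handles infinitely many input combinations,
    # because the problem has a constraint (the rules of arithmetic) that
    # the procedure exploits - limited only by the hardware's capacity.
    def add(a, b):
        return a + b

    print(add(123456789, 987654321))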


> > > Humans ARE NOT BORN with the skill of playing GO. WE LEARN IT.
> > > No matter how it works, it's a high dimension learning problem
> > > that IS SOLVED by the brain.
> >
> > How it works is a way is found to simplify the problem.
>
>
> God you are dense.


Ditto :)


> The complexity of the problem is NOT DEFINED by what happens inside
> the box. It's defined BY the environment.

Sure but the solution is defined by what's in the box and it is
simple.

The problem is high dimensional; the solution is not.


> Low dimension problems are solved by visiting each state many
> times, and learning what works in that state by trial and error.
> High dimension problems are a different class of problem because
> there are too many states to allow that approach to work. It
> can't visit every state many times to figure out what works. It
> must somehow use past experience from other states, to help
> judge what to do in the new state. That's the abstraction problem.
> The problem of abstracting out common features ...

Which means simplifying the problem.

> and using those common features

That is the simplified low dimensional representation,

> to guide the actions instead of using the actual real state
> to guide the actions.

Agreed.


> What this leaves us with, is the simple fact that the brain
> DOES solve a range high dimension learning problems with it's
> hardware, which is beyond the general nature of the problems
> we can currently solve with our hardware like TD-Gammon.


The set of weights used by TD-Gammon is a low dimensional
(simplified) model of the game that captures what is essential
to choosing a move. The gammon universe has a many to one
relationship with the TD-Gammon program. 6/12 Ashby.


> They tried putting inverting glasses on frogs! That would have
> been a hoot to see! :)

Not as nice as that. They surgically rotated the frog's eye!


> Finding heuristics (abstractions) that are good for evaluating
> the worth of a high dimension state is KEY to how these problems
> are solved. But knowing that, is not answering the question of
> how the system finds the heuristics in the first place.

I realize that.

> Your argument has always been that the solution, whatever it
> is, will never be simple.

With hindsight it may be simple but the path getting there may
involve many false trials.

> ... you don't believe there is some small set of general
> principles behind intelligence,

I don't see intelligence as some singular thing to be found
as a result of a selective process (learning) any more than
I see there being only one kind of animal as a result of
natural selection.


> Replacing hard coded modules, with generic learning, was what
> made TD-gammon take a huge step forward in power by taking a
> a step backward in complexity.


Well I see an ANN as a computational module which is mainly
about multivariate statistics. There is no reason why the brain
cannot have such modules. The ANN module can form associations
by contiguity and resemblance but there is more to thinking
than forming categories. My thinking on this has been biased
by the views of Steven Pinker who takes a computational and
evolutionary point of view when it comes to how the brain works.

His chapter 5 "Good Ideas" is all about the issue of how come
humans can produce all these never seen before novel behaviors.

But he has an open mind and in the preface states:
"Every idea in this book may turn out to be wrong, but that
would be progress, because our old ideas were too vapid to
be wrong".


> The entire growth and excitement of the field of AGI is all
> about this same idea - the belief that we are on the verge
> of making that next big step forward, but by taking the next
> big step backwards in complexity.

I will have to wait and see on that one.

I became too tired to bother with much of your previous post
but on rereading it in P's post I have made some comments...

Curt wrote:
> ... no animal in the past billion years needed to learn a
> large and useful set of abstractions for evaluating chess
> board positions, or GO board positions. We don't have
> such hardware in us and the fact you keep trying to claim
> we do is what's so absurd here.

I don't claim we have innate hardware to play chess or go.

You try to explain the ability that humans have over animals
as being due to the sudden appearance of the generic learning
that has replaced the old systems.

Alfred Wallace, who also hit on the idea of natural selection,
couldn't explain man's extreme ability to learn new things
and reverted to believing it must be due to the interference
of a superior being!!

Apparently even today some scientists try to attribute human
academic abilities to some kind of unknown self organizing
principle that will be explained by complexity theory.

However the other explanation is one of exaptations where
brain circuits used by foraging for the past millions of
years have been exapted for novel actions like playing
chess or doing calculus.

This happens for organs in the body where their primary
function can also be used for other things. Whatever it is
that makes them useful for other things gives any animal,
with any slight changes that enhance that exaptation, a
survival advantage. So jaw bones exapt into middle ear
bones, a wrist bone in the Panda exapts into a fake thumb.


> I strongly suspect that prior knowledge is not needed.
> I believe all the data needed to solve the problem, is
> actually contained in the sensory data itself.


Evolution started without any prior knowledge so in that
sense prior knowledge is not required. However it started
with simple circuits and if you look at simple brains they
are not in the form of a generic learning network. They
are as finely designed as any image filter invented by a
visual engineer.

What you are suggesting is a high speed evolving network
which relies only on the experiences in its lifetime.
Compare that with the parallel experiences of billions
of brains over millions of years offered by evolution.

So I am asking what is most likely? A replacement generic
learning module or innate modules exapted to solving general
purpose problems?


> But it's mostly contained in the temporal data available
> only if you apply statistics to how the images change
> over time, instead of trying to apply them to a single
> static picture.


The "temporal data" is just a sequence of "spatial data"
and yes some kinds of patterns can only be found in the
sequence. Most of what I know about the world is static.
Even what changes is understood in terms of what doesn't
change. The pixels that make up a face change all the
time but not the categorization of the face.


> I believe our vision system learns how to decode images
> based on how they change over time, not on how they look
> at one instant in time.

Man made visual systems don't do it that way so why should
a brain not do it that way?

It is even unclear how you decode "that is a circle" from
how it changes over time. It doesn't change. A circle is
not a temporal pattern so why would a brain treat it as such?
Unless you are confusing the serial search with "temporality"?
All that serial search is understood in static terms. All
temporal patterns can be understood as a spatial pattern.
But then I have covered all that before.


> I think the solution requires that we parse the data into
> abstractions that are useful in making temporal predictions
> - in making predictions about how the real time image will
> change over time.


Actually most real time images don't change over time. We
understand them in terms of what doesn't change over time
not in terms of that ever changing image on the retina.
Predictions are also based on what doesn't change. The
path of a ball is predictable because it follows a certain
curve through space once it leaves the hands.

In physics we define velocity by what doesn't change. The
change in position over a unit of time _doesn't change_.
The change in velocity over a unit of time is also used
because it _doesn't change_; we call it a constant acceleration.
Then we have a falling body, where we have a constant value
that measures the change in the velocity. It gets
faster and faster, but at a rate that doesn't change. (At
least not in a vacuum).
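
Put as a toy calculation (my numbers, invented just to illustrate the
point that the prediction rests on the quantities that don't change):

    # A thrown ball's future height is predicted entirely from constants:
    # the initial speed and a constant acceleration g. Numbers are arbitrary.
    g = 9.81           # m/s^2 (in a vacuum, as noted above)
    v0 = 12.0          # m/s, initial upward speed

    def height(t, h0=0.0):
        return h0 + v0 * t - 0.5 * g * t ** 2

    for t in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(t, round(height(t), 2))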


> There are an infinite way to parse something like vision data.
> So when is one way of parsing it better than another? The
> easy answer is that one way made us "smarter" and helped us
> survive, so evolution tuned the system to parse it in the way
> that was most useful to us for survival. Which is almost bound
> to be true to some degree.


I agree completely with all of that.


> But I think we can do far better than that by understanding
> that all this parsing is done for a very specific purpose -
> the purpose of driving behavior that needs to be predictive
> in nature. That is, behavior which is able to "do the right
> thing" before it's needed, instead of after.

Parsing the image so it is predictive makes sense, and so does
parsing the image so it is relevant to the goals of the animal,
so it doesn't waste time attending to everything or storing it
for later associations while the predator or food goes unnoticed.

Rather than trying to handle all the data the trick is to
know what can be thrown out. Rather than every possible
categorization the trick is discovering useful categories.

There is always the problem of time and resources available
for any task including learning. An organism that can make
use of millions of years (time) and billions of individuals
(resources) to develop useful circuits has an advantage
over anything that has to rely on evolving all this in its
own lifetime by some super amazing generic network alone.

You seem to want to compress and store all the incoming data
to compare with each new input. I suspect that is not possible
in practice. I would also point out again that the world out
there can always be accessed; you don't have to remember it all.


> You can't wait until you get to the stop sign to send the
> signal to the foot to step on the brake. It has to be sent
> ahead of time. Almost all actions work like that.


But that can be determined by static data. If a robot or
human is heading toward a wall (or stop sign) it can adjust
its speed by estimating its distance from the wall (or the
stop sign) from the current static image. There is nothing
temporal about estimating distances from a static image
and that data plus the speed of the observer can generate
a desirable speed to control the brakes and accelerator.
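
A minimal sketch of that kind of controller (all the numbers and the
distance-estimation step are assumed; the point is only that the inputs
are the current distance estimate and current speed, nothing explicitly
temporal):

    # Toy braking controller driven by a static distance estimate plus the
    # observer's own speed. Constants are invented for the illustration.
    def brake_command(distance_m, speed_mps, stop_margin_m=2.0,
                      max_decel=6.0):
        # Deceleration needed to stop just short of the obstacle: v^2 / (2d)
        usable = max(distance_m - stop_margin_m, 0.1)
        needed = (speed_mps ** 2) / (2.0 * usable)
        return min(needed / max_decel, 1.0)  # 0 = no brake, 1 = full brake

    print(brake_command(distance_m=30.0, speed_mps=15.0))   # ~0.67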


> We have to create a behavior in response to what our sensors
> are reporting now, in order to make the future unfold in ways
> that are better for us. Reinforcement learning algorithms in
> general all solve this problem already. But what makes it
> harder in these high dimension problems is that we can't use
> the "real" state of the environment to make predictions with.
> We are forced to use a system that creates abstractions to
> define our understanding of the state.


Spot on. Exactly what I have been saying. We can't use the
"real" state of the environment which is too complex.


> And the unsolved problem here is how to best create a good set
> of abstractions for our internal representation of the state
> of the environment.


And my suggestion is they were naturally selected over millions
of years.


> How the system parses an image, is the same problem. It's the
> problem of what internal representations are most useful for
> that internal state. And a key part of that answer, is that
> we need internal state representations that are good predictors
> of the future.

> So we can adjust how the system is parsing, based on how
> good any given parsing is at predicting the future - at
> predicting how the sensory data will change over time.


> The ball vs. background parsing is useful because it's more likely
> that the ball will move relative to the background, than it is
> that the ball will split in half with half of the pixels moving
> off to the left and the other half moving off to the right.


You like to complicate everything. A simple motion circuit doesn't
have to worry about the likelihood of pixels going one way or the
other way. It simply responds to what does happen. No prediction
is required ALL the time. Even predictions are NOW decisions based
on previous changes held in "weights" that exist spatially.


> I believe this parsing problem is, and must be, solved, based
> on a statistical system that forms itself into the best set of
> temporal predictors it can. That it works by converging on
> higher quality abstractions based on how good a given
> abstraction is at predicting how the data will change over time.


Sure. But it doesn't require all those conditions you suggest.
You hold this belief that, as a result of fulfilling these "temporal"
requirements, a predictive controller will pop out. Spatial data
will do just fine in making a predictive controller, for it can
hold temporal data. It can be embodied in a circuit that changes
naturally over time in a way dependent on the modulating influence
of higher level command signals and lower level feedback signals.

JC

Curt Welch

unread,
Dec 16, 2009, 7:35:22 PM12/16/09
to

local optima? Optimization of what?

Sure, if you just mean that our understanding is limited to what we have
been able to sense and understand in our area of time and space, then sure -
that's bound to be true to some extent.

But it's also highly likely that our ability to understand the universe is
bounded. Not just "our" in terms of humans' ability, but the ability of
any agent which is part of the universe is probably limited to what it can
know about the universe. If such a limit exists, how close is our current
knowledge to that limit? We don't know, but it's also quite possible that
we are fairly close to the limits of what can be known, and that another
million years of research might uncover very little extra significant
understanding (compared to how much we have already figured out). If that
were true, then our "local optimum" might not be so local after all. It
might actually be a fairly global understanding. We just can't tell.

> This means that for making real
> progress it would be necessary to go way back down to the bottom. For
> example the 'ether' concept was dropped because it was not possible for
> a 'fluid' to conduct transversal waves, only solids can do that. And we
> need transversal waves to explain polarization. But now we don't even
> have a medium to conduct the waves, it's just 'empty' space. That seems
> to be even crazier to me.

Yeah, physics tends to be that way. What we learn to understand as a
normal model of the universe at our level of existence doesn't need to
apply at other levels. The macro level understanding of the effects in the
universe is an emergent property of a very different universe at the
sub-atomic level. But that doesn't mean the macro level universe isn't
"real". It's just as real as the sub-atomic level. They are just
different views of the same universe which each reveal different facts about
how the universe works.

> More likely would be some kind of four
> dimensional medium that appears to be solid in some ways but that serves
> as some basis were material objects are like standing waves in.
>
> > But there's nothing about the _problem_ that I don't already fully
> > understand. What we need to find, is an implementation that solves the
> > very well defined problem - or far more likely - one that produces a
> > highly useful result without producing a perfect result.
>
> I think you still are missing something.

Everyone thinks that. :)

> The problem is not solved but
> there are some adaptations made that work until something else we don't
> know about yet changes what makes them work. When that happens we are
> back at square one.

Not sure what idea you are getting at there. And I've also totally lost
the context in which I made that comment.

What I fully understand, is the problem of reinforcement learning. That's
what I was talking about. I _believe_ that problem is _the_ problem we
have to solve to solve AI. That's an unproven hypothesis on my part - but
one which I normally talk about as if it were a proven hypothesis because
that's how certain I am about this belief.

> > But that's just the point, no animal in the past billion years needed
> > to learn a large and useful set of abstractions for evaluating chess
> > board positions, or GO board positions. We don't have such hardware in
> > us and the fact you keep trying to claim we do is what's so absurd
> > here. The abstractions needed to play those games well had to be
> > learned on the fly in a human.
>
> Humans did not learn to play GO. First, the search space is so big it is
> highly likely all our current knowledge about the game (opening theory,
> josekis) is something that will later be discarded. It already happened
> a few times with GO, why not a lot of times more, the game is big enough
> for that. Second, humans did not solve the GO problem because they just
> study each other's games instead of finding out for themselves, this
> goes even for the more advanced players. So essentially the way we are
> playing go now is a cumulative artifact.

Yes, but does not our game play improve with practice? Aren't the best
players generally the ones with the most experience (the most practice)?
That shows that we are _learning_ to play go. I did not mean to imply we
had mastered the game and can play it perfectly, just that we are learning
to play it. And if you compare how fast a human can learn the game (aka
how fast their game playing improves with practice) compared to our go
programs, you can see that humans are better GO learners than our best GO
programs so far.

> > But once the innate hardware decodes the raw visual data and creates an
> > internal representation of a chess board configuration, then what? How
> > does the brain go from innate chess board positions, to a high quality
> > evaluation function?
>
> It doesn't. It just looks in the books and sees what other humans have
> found. Like a huge lookup table where each human adds a little thing
> when they were lucky enough to stumble upon it. Humans are not
> intelligent in the way we require it from AI. The reason we require it
> from AI is because we think we are that smart ourselves. It just goes to
> show that we are status signaling creatures with very little respect for
> the way things are.

:)

Well, what you are saying has a lot of truth to it, but it's not really
very relevant to the points here. I can take two kids that know nothing
about the game of chess, teach them the rules, and then let them play each other.
They will, on their own, very quickly start to improve their game play.
They will be very weak players compared to kids that were helped by being
shown various useful heuristics, and who were shown good ways to play by a
more experienced player. But even without all the help we normally get
when learning the game, we will, on our own, with no help, improve our game
playing just by practice. And our game playing will improve _faster_ than
that of any current computer program that tries to learn the game on its
own - because the state of the art in learning algorithms just doesn't yet
measure up to the learning skills of humans across broad domains.

> > The only way to solve these high dimension problems is to create
> > abstractions that represent important features of the board that are
> > used in the evaluation function we hone as we improve our game playing
> > skills through practice. Good chess players can look at a board
> > position and instantly just "feel" how good or bad the position is. He
> > can look at how the position will change based on a few possible moves,
> > and instantly just "feel" which moves make the position better, and
> > which make it worse.
>
> It seems not all hope is lost. In a few years you'll be emerging from
> Skinners box, having learned there is nothing inside.

:)

> > Though our chess programs use evaluation functions as well, they are
> > never as good as the instincts of a well trained human chess player.
> > The only way they are able to compete with humans is to use very high
> > speed tree searching to make up for the lack of quality of their board
> > evaluation function.
>
> Yes, chess programs can make better use of the grainy character of the
> chess board, while GO programs have to fall back on random plays and
> then have to do more nifty search tree pruning.

Strong abstraction systems for producing evaluation functions are what
humans build very well and which we haven't yet equaled in our programs.
So we make our programs play better by doing game tree searching, which is
simple to understand and code, but very expensive in CPU resources. In
TD-Gammon, a very strong abstraction system was created and it ended up
playing far more like a human - and ended up playing as well (or nearly as
well) as the best humans - all without having to learn by watching other
humans play.
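
To be concrete about what "learned evaluation function" means here:
TD-Gammon trained a neural network by TD(lambda) self-play. A heavily
simplified sketch of the same idea - a TD(0) update on a linear evaluation
over board features, with every name and number invented for illustration
rather than taken from TD-Gammon itself - looks like this:

    def evaluate(weights, feats):
        """Evaluation function: estimated value of a position, computed
        from a feature vector rather than looked up per position."""
        return sum(w * f for w, f in zip(weights, feats))

    def td_update(weights, feats, next_feats, reward, alpha=0.01):
        """TD(0): pull the evaluation of this position toward
        reward + evaluation of the position that followed it."""
        error = (reward + evaluate(weights, next_feats)
                 - evaluate(weights, feats))
        return [w + alpha * error * f for w, f in zip(weights, feats)]

In TD-Gammon the linear sum is replaced by a multi-layer network, TD(0) by
TD(lambda), and the reward only arrives at the end of a game, but the shape
of the update is the same: each position's evaluation gets pulled toward
the evaluation of the position that followed it in self-play.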

> > TD-Gammon used a very high quality evaluation function which was
> > learned by experience. And in the domain of Backgammon, the evaluation
> > function was roughly equal in quality to the best human board
> > evaluation functions. Which shows how close we can come to matching the
> > one thing humans do better than our machines - create complex abstract
> > evaluation functions that guides not only all our actions, but also all
> > our secondary learning.
>
> No I don't think humans create abstract evaluations. What they do is
> follow some innate tendencies and then when one human finds out
> something that works he can communicate that to others.

Well, the "someone else solved it but I'm too dumb to figure anything out
for myself" logic certainly fails to explain how humans can do anything.
But that of course is not exactly what you are saying.

All you are talking about is the difference between the power of learning
that exists in one human, vs 100 humans working on the same problem
together. Yes, groups can get more done than an individual can. Not really
important unless the point is to discuss the power of society to solve
large problems, or the point is to study human culture instead of AI.

I am just interested in AI - which means how a single human works. Our
modern skill levels exist because we all had a lot of humans teaching us
useful skills - most of which we wouldn't have figured out in a life time
if we were on our own. But if you understand my argument, I don't believe
our skills are what makes us intelligent (many do). I think our power to
acquire new skills (aka learn) from the environment, is our true
intelligence. How much we can learn (without forgetting other important
past lessons) is one measure of our learning intelligence, but more
important than that, is simply the type of things we can learn - on our own
- such as how to make better chess or GO moves without anyone else having
to show us.

> We are like the
> Borg in star trek but since we strive for status we don't like to see it
> that way.

Nobody I can remember who has come to this group to debate these sorts of
issues had any trouble understanding the importance of society in creating,
and sharing, our intelligence. The term "standing on the shoulders of
giants" is one everyone seems to grasp in my view. So I'm not really sure
where you get this idea that people don't "get" that our abilities come so
much from the knowledge passed down. Most of us have to spend a good
percentage of our lives in school getting the stuff crammed into us, so it's
kinda obvious that most of what we learned to become the adults we are,
came from someone else.

The important parts of that however are 1) where did it come from in the
first place - that is, how do people learn new things when there's no one to
show them the answer (like the AI puzzle), and 2) why is it so much harder to
try and teach computers the things that 3 year old humans have the power to
learn in minutes or hours? These sorts of questions keep us plenty busy in
AI long before we get to the problem of AIs sharing knowledge with each
other.

> Especially the self made man concept that is now serving as a
> justification for continued social inequality is a big roadblock to
> progress in general and to AI even more, because AI is supposed to be
> done by those people who have most to lose by deflating that conceptual
> bubble.

Not really sure what you are talking about there.

> > What TD-Gammon is lacking, is the ability to create the structure of
> > the evaluation function on it's own. That was not learned in
> > TD-Gammon, it was hard coded by the design - doing the type of thing
> > you say evolution is doing for us. Which can explain why our networks
> > are good at taking raw visual data and turning it into a simplified
> > representation of 16 black objects and 16 white objects on an 8x8 grid
> > board. But that's where evolution's hard-coded simplification stops -
> > which is still way short of "easy". It's still a very high dimension
> > generic learning problem that needs to be solved.
>
> It is not solved, but there are some local maxima people cling to for
> status. OK, for simple problems it has amounted to something that seems
> like a definite solution but for most of these high dimension generic
> learning problems one first has to forcibly remove the 'experts' in
> order to create a new search pyramid.

Or wait for them to die?

> > And what it must do, is create a set of abstractions, or features, that
> > are useful for accurate predictions of the value of the stimulus
> > signals. That is the only way this problem can be solved as far as I
> > can see, and the way that the brain must be solving it. It has a
> > generic learning algorithms that finds a good set of intermediate
> > features.
>
> Yes. Sometimes I doubt you are still a behaviorist anymore. Next you
> will be talking about non-physical reality. But it remains to be seen
> that humans all have the same solutions in their brain. Maybe the reason
> they could not find the specific cell where the concept of 'grandmother'
> is stored is because it is not in the same place for every human, and
> apart from that it is not in the neurons but in their connections, but
> not in their actual connection, but in the way they sequentially fire
> their action potentials.

Well, except, they have found "grandmother" cells now. :)

http://en.wikipedia.org/wiki/Grandmother_cell

> > My pulse sorting networks that you know about worked by making each
> > node in the network a different "feature" of the sensory data. Each
> > layer is a different fracturing of the features from the previous
> > layer. Or you could say each layer was a different "view" of the
> > sensory data. My intent with that approach was for the network to
> > automatically fracture the data into different features, and then test
> > them to find what worked best. And it
>
> Yes. But this is what humans do and they use *communication* to convey
> particularly effective partitions to each other. It is not like every
> child finds out everything on its own, even if children still have the
> potential to reach more states than adults because they have not yet
> built their conceptual ivory towers. But even if we communicate these
> features we are highly inefficient in communicating them so we must use
> words that lose most of the context. When we store information in a
> computer we can preserve the information and copy it, but not the
> context. So we get high status people who seem to know a lot about
> nothing and can only operate in highly regulated contexts of machinery
> and social environment. For example doctors in a hospital, they can cure
> some parts of a patients disease but often create new problems in the
> patients general environment. But these problems are not seen as having
> to do anything with the problem. They don't matter.

Yes, well, again, our full behavior is very much a result of the fact that
the environment we spend so much time interacting with is full of other
humans who have learned useful behaviors from yet other humans. We all
condition each other, which means when useful new memes (or just minor
improvements to old memes) are found, they spread around and cause the
entire culture to change. Making all that work is very much part of the
full problem of AI. But first, we have to figure out what the basic unit
of AI is - what powers a single human has - and we have to duplicate that
in our machines. If we get that right, then a society of these machines
should do the same things humans do - share their learned experience and
advance much faster because of it.

> > actually does that quite well. But what it got wrong, was that it
> > didn't create "good" features to start with because it wasn't using a
> > feature creation algorithm that correctly made use of the correlations
> > (constraints) in the data. It is better understood as creating random
> > features. And "random" features don't cut it in a search space as
> > large as the one that must be solved. The basic approach however of
> > creating some fracturing of the space and adjusting it to improve
> > performance all worked and allowed that design to solve some
> > interesting (but simple) problems using the approach.
>
> If you are leaning on humans to solve this problem for you, you are
> going to be disappointed because even if that is how they say they do
> it, they don't. So you're back to using some parallel search for things
> that work, and then select something that works, and be ready to drop
> everything and start anew, because that is what humans really do. Have
> you heard anything of cold fusion recently?

None of what you write is inconsistent with my views in my opinion. You
are just talking about higher level emergent effect of the lower level
systems I'm trying to create.

I guess this is the same problem as in physics. What many people see
emerging from humans as "intelligent behavior" at the high level, is what
they expect to find inside, at the low level. But because our behavior is an
emergent effect, what the low level looks like doesn't really have to be
much of anything like what emerges from the high level. We will only know
if we got the low level correct, if it does produce the correct high level
emergent properties.

I very much work at a very low level, and the abstractions and approaches I
talk about are very low level behaviors of small networks. I believe the
type of things I talk about are what is required to make a larger system
produce all these typical high level human intelligent behaviors. But the
only proof of this will be if it does after I build it.

Reverse engineering such systems is just hard because what you build is
nothing like what you see. You have to try something, see what does and
doesn't emerge, and then adjust, to try and make something different
emerge. It's a long slow process, and in my view it's part of the reason
why solving AI is taking so long.

> > But to solve the hard problems, I've got to figure out a way to create
> > a network that does a better job of using the constraints in the data
> > to create the default fracturing.
>
> There is no such thing as a lucky break, unless you are using a very big
> set of breaks.

Not sure why you are talking about lucky breaks in response to what I wrote
above.

> > Look at a billion different GO board positions. That is a set of
> > images that evolution didn't have to build feature evaluators for. Our
> > innate hardware at best stops as I said above, at the point of
> > recognizing object locations on a 2D grid. It can't tell us the value
> > of one position over the other after that. Is a given configuration a
> > good sign for getting food and avoiding pain, or a bad sign? That is
> > what our generic learning brain can learn though experience, and it
> > doesn't have to see every board position to learn to do that. It can
> > see board positions in configurations it's never seen before, and still
> > do a good job of estimating the worth of the position, and estimating
> > the worth of any potential move.
>
> No, I don't think so. Even professionals replaying each others games are
> often surprised by very strange moves that seem to work well. GO has a
> really really big search space.

Ok, that's fine. But still, no matter how bad humans are at playing GO (and
I don't study go to the extent you seem to have), we can still play it.
And good players still consistently win more games than bad players. And
to do so, they must pick moves, from board positions they have never seen
in their entire life, on a regular basis. You can't do that unless you have
some power to abstract the important features of the current board and the
important features of what to avoid, and of what might be a "good" move.
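
That, in code form, is all I mean by "abstracting the important features":
the score of a position is computed from a few features rather than looked
up by identity, so positions never seen before can still be ranked. A toy
sketch - the features, the weights, and the assumed apply_move helper are
all invented for illustration; real Go knowledge would be far richer:

    def board_features(board):
        """Collapse a board (a dict of point -> 'b', 'w' or None) into a
        few abstract features. These toy features just stand in for
        whatever abstractions a real player learns."""
        black = sum(1 for v in board.values() if v == 'b')
        white = sum(1 for v in board.values() if v == 'w')
        return [black - white, black + white]

    def evaluate(board, weights=(1.0, 0.1)):     # invented weights
        """Score a position from its features alone."""
        return sum(w * f for w, f in zip(weights, board_features(board)))

    def choose_move(board, legal_moves, apply_move):
        """Greedy one-ply choice: play the move whose resulting position
        the evaluation likes best. apply_move(board, move) is assumed to
        return the new board without modifying the old one."""
        return max(legal_moves, key=lambda m: evaluate(apply_move(board, m)))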

> > And like I've said many times, we have made this approach work very
> > well in games like TD-Gammon. But it only worked well because the
> > abstractions used in that evaluator happen to work well for that
> > problem.
>
> That's why humans only play the games that someone somewhere in history
> has found a good abstraction for, mostly by coincidence and not because
> he was some genius. Even though some of us who occupy comfy positions in
> society would have it that way.

Well I was waiting for someone to point out that aspect of game playing.
John certainly never has. That is, the fact that we aren't just playing
_any_ game that could have been made up, but that we are playing games that
in general fit our ability to play well and that fit our ability to learn
with practice. So the unanswered question in that is how much general
power do we have to learn to play any game, vs how much are the popular
games popular because they happen to fit well in the limits of what we are
able to learn? There is no easy way to answer that, but careful testing of
playing skills and of the ability to learn new randomly made up games could
lead us to a better understanding of the limits of that question - that is
- how general is our general learning skill across the space of all
possible games that could be made up (if we can even begin to understand
what that space is).

> > The ability to _learn_ high quality abstractions on the fly, is what we
> > are missing from our current learning systems, and what the brain is
> > able to do. And it's how it solves these high dimension learning
> > problems far better than any software we have yet developed. And it's
> > the main thing that's wrong with my last network design - though my
> > design created abstractions automatically, and even tuned them to fit
> > at least one dimension of the constraints in the data, it didn't
> > produce good enough abstractions to solve more interesting problems.
>
> Yes, you have to get off your highly abstract horse and mount the other
> horses one by one until one of them accidentally goes in a direction you
> can later convince yourself of, it was the one you wanted to go anyway.
> It requires a certain measure of lying to one self.

Well, it's just a somewhat random search process using educated guesses as
to what to try next to reduce the search time in a large search space. And
by "educated guesses" I'm talking again about our innate low level ability
to evaluate complex situations for some level of potential value. We
naturally recognize some measure of value in things without having any clue
why or how we do it - but we use that natural ability like a trail of
cookie crumbs leading us to some magical answer.
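
Put as an algorithm, that kind of educated guessing is just best-first
search: keep a frontier of candidates and always expand the one your
(possibly crude) value estimate likes best. A generic sketch, where
is_goal, neighbors and estimate are whatever callbacks a particular domain
would supply - they are assumptions here, not any specific system:

    import heapq

    def best_first_search(start, is_goal, neighbors, estimate):
        """Expand the most promising candidate first, as judged by the
        estimate() heuristic. States are assumed to be hashable."""
        frontier = [(-estimate(start), 0, start)]   # max-heap via negation
        seen = {start}
        counter = 1            # tie-breaker so states themselves never get compared
        while frontier:
            _, _, state = heapq.heappop(frontier)
            if is_goal(state):
                return state
            for nxt in neighbors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(frontier, (-estimate(nxt), counter, nxt))
                    counter += 1
        return None            # nothing reachable satisfied is_goal

The quality of the whole thing depends almost entirely on how good
estimate() is - which is exactly the part humans get from that innate,
trained-up ability to sense value.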

> >>> So the brain can't actually do what it does? We just think
> >>> it does????
>
> Yes.
>
> > They do however head in what strikes me as the right direction as they
> > explore starting in section 1.5 a more general statistical solution to
> > the early vision problems using Bayes estimation and Markov random
> > field models (which I do not fully understand). But it's a general
> > statistical approach to the problem which I strongly agree with and
> > believe we must find to solve this same problem at all levels of the
> > data processing hierarchy.
>
> If at first you don't succeed, try again. Then when you find something
> that is useful for some other thing than you intended, claim it was what
> you were looking for. See, that is an entirely different thing going on
> in humans than what you are modeling a search tree for.

Again, you talk of very high level emergent behaviors, whereas I talk of
very low level hardware to produce those sorts of high level emergent
behaviors.

Whether the low level hardware I talk of has any chance in hell of
producing the right high level emergent behaviors is yet to be seen.

> > On the question of the ill posed problems. I don't fully understand
> > their point there because I don't know anything about the background
> > work in the field they are making reference to. But the gist of what
> > they are suggesting seems clear. That is, if you try to find a solution
> > to some of the early vision problems, the definition of what you are
> > trying to find has to be specified in a way that creates only one
> > possible solution. But that when we define a problem something like
> > "edge detection" we fail to define it in a way that creates only a
> > single solution. There are lots of ways to attempt edge detection from
> > the ill posed specification of the problem.
>
> The ill posed specification is necessary because else there wouldn't be
> any funding. Individual humans work the same way, they don't know
> where they'll end up but will claim that is where they wanted to go
> *afterwards*.

And believe it as well!

> > If you are suggesting it's "impossible to solve" because of this ill
> > posed nature I think you are not grasping the true meaning of ill posed
> > in this context. The way they have currently tried to define what the
> > problem IS is ill posed. That doesn't say anything about whether the
> > problem, when defined another way, becomes well posed.
>
> The results is what counts.
>
> > In all the solutions touched on in the paper, even the more general
> > statistical approach they outline, they make reference to the need of
> > prior knowledge. It's referred to almost as if it were a known
> > requirement for this class of problem. That is, in order to know how
> > to process the data, you have to make assumptions about the fact that
> > it is a 2D representation of a 3D world in order to make any progress in
> > doing the processing.
>
> But maybe humans do not all live in the same 3D world. All they have
> accomplished is communicating as if they do. Strange, do I get to be the
> behaviorist now that you are leaving the herd?

:)

> > I strongly suspect that prior knowledge is not needed. I believe all
> > the data needed to solve the problem, is actually contained in the
> > sensory data itself. But it's mostly contained in the temporal data
> > available only if you apply statistics to how the images change over
> > time, instead of trying to apply them to a single static picture. I
> > believe our vision system learns how to decode images based on how they
> > change over time, not on how they look at one instant in time. I think
> > the solution requires that we parse the data into abstractions that are
> > useful in making temporal predictions - in making predictions about how
> > the real time image will change over time.
>
> I think the solution requires a specific module that makes the result of
> the time analysis 'natural'. I mean, if one looks at people with brain
> damage they accept the strangest things for real. If we dream we think
> strange things are normal. So in order to have a robust stable reality
> we have to overlook a lot of inconsistencies and just move on. The
> problem is one of coordinating these reality distortions among humans so
> that they all won't see them anymore. But that has got nothing to do
> with solving the actual perception problems. These are just sidestepped,
> and if they lead to groups of people being too far out of whack they
> die, or join some other reality cluster.

Well, yes, there are some "reality clusters" I belong to - like the belief
that there is a fixed objective reality we all share in common. And that
fixed shared objective reality is all there is (materialism). If that turns
out to be false (at our macro level of existence where I'm trying to solve
AI), my entire approach could be totally wrong. We all belong to our own
"reality clusters" in the sense that we develop our own model of reality
and that model is typically shared with (and comes from) many others in our
society. There is great pressure created by our society and our needs to
attract us to these shared "reality clusters" which is why, if they are
wrong, or lacking, it can be so hard to get society to break free of them.
As a group, we create strong attractors that suck more and more people to a
set of beliefs. It can be really hard to change or fix incorrect social
beliefs because of that effect.

People died in car accidents trying to write vision programs for AI???


> > I see the big problem here as trying to understand how to correctly
> > pose this problem so it becomes well posed, across all sensory domains,
> > instead of ill-posed in one small part of one vision domain. And my
> > hand waving description of parsing so as to minimize spatial and
> > temporal correlations is a loose attempt to do that, but not a well
> > posed strong mathematical definition of what is needed. And certainly
> > without a well defined problem, trying to find the solution is very
> > hard. Most of my rambling and brainstorming work here is trying to
> > find a way to talk about this problem which makes it well posed.
>
> Yes, I agree with that and applaud your efforts, I just want to nudge
> you into accepting we switch to other problems all the time and then
> claim it is what we really wanted to solve. If you would solve any of
> the real problems you wouldn't just have created "AI" but "I", meaning
> intelligence for the first time ever, because humans do not possess the
> kind of intelligence you are trying to model.

What do you mean by that? That a _single_ human doesn't possess it because
it's really a creation by the work of the entire race going back a long way
in time?

Most people look at human behavior, and judge our intelligence by _what_ we
are able to do. I don't. I think of that as our current "smarts" (the
actual set of behaviors we have learned in a lifetime - whether they came
to us by copying from other humans, or whether we created them on our own).
Either way, the _what_ we have learned is not, to me, our intelligence.
It's just what we have learned.

To me, our _learning_ skill is our true intelligence - how fast, and how
well, we can adapt our behavior to any environment. This is a skill that
very much can be measured in an individual totally separate from the
environment they exist in. And it's a skill you don't have to think of as
a society level skill. What a society of humans can learn is really just a
totally separate and more complex question from what a single human can do.
I only care at this point, about duplicating what a single human can do -
in any environment.

Curt Welch

unread,
Dec 17, 2009, 12:22:08 AM12/17/09
to
Tim Tyler <t...@tt1.org> wrote:
> Curt Welch wrote:
> > Tim Tyler <t...@tt1.org> wrote:
> >> Curt Welch wrote:
> >>> Tim Tyler <t...@tt1.org> wrote:
>
> >> Why call future machines "toasters"? It seems derogatory.
> >
> > It was meant as a way of emphasizing that these future machines won't
> > be any more important to us than our toasters and all the other
> > machines we use to make life easier for us, and to emphasize the point
> > that these AI machines won't have a drive for survival any more than
> > our toasters do.
>
> They will probably be quite important for us. We are getting
> increasingly dependent on tools as time passes. Take away stone-age
> man's tools and he can still get by. Take away our tools, and
> civilisation collapses, and most
> humans die.

It would be an interesting experiment if we could do that and just see how
far the population drops. The recent film "The Road" addressed that issue
- but also threw in the problem of the food chain collapsing out from under
us as well. That of course really caused lots of people to die. Bad
enough yuou can't go to the local story and buy bread, but when there's no
animals to hunt either, you are really in trouble.

> [snip angels]
>
> >>> If in some future, the human race is facing extinction because of
> >>> forces beyond our control, I could see people deciding to build AIs
> >>> that are intended to carry on without the humans. But unless human
> >>> society is gone, I just don't see a path for the AIs being allowed to
> >>> take over - or to get _any_ real power over the humans. I don't
> >>> think humans as a whole, will every allow such a thing.
> >>
> >> ...whereas I think the humans will do it deliberately.
> >
> > And what would be the selfish gene's motivation for doing that?
>
> At each step, DNA that cooperates with the machines does better than
> the DNA that doesn't. That doesn't imply that DNA is thriving overall.
> You can be climbing a mountain as fast as you can - but the mountain
> can still be sinking into the sea.

But there is the third set - the DNA that is able to utilize the machines
as slaves without giving up any control to them. How well will they do
compared to the ones that only "cooperated" with the machines? That's the
real test that will define the path evolution flows.

> Anyway, DNA-based humans are still going to survive, I figure. They
> will be in museums - but that's still survival.
>
> >> I don't really see how my view is "academic". All I did was use some
> >> information theory in my argument. Also, the whole issue of whether
> >> my views are "academic" or not seems pretty irrelevant to me.
> >
> > It's just my interpretation of the typical basic motivations that the
> > role of being in academia tends to instill in its members seems to mesh
> > well with your views.
>
> FWIW, my impression is that the majority of academic robot enthusiasts
> paint robot takeover scenarios as unrealistic Hollywood fantasies.
>
> > It's this bent in your speculations that man will want to turn over the
> > world to these AIs because they are smarter than us. A good bit of our
> > society has exactly the opposite feelings. They don't like dealing
> > with people (or machines) that are smarter than they are, and as a
> > result, their default position seems to be that smart people shouldn't
> > be trusted. People that think like that, are not likely to like the
> > idea of their "toaster" being smarter than they are - let alone being
> > willing to turn over control of the government to them.
>
> Machines are already pretty powerfully implicated in running things.
> They tell many workers when to arrive at work, when to go home, when to
> have breaks, and often what to do and when to do it. You are probably
> right in thinking that the humans don't like this very much - but they
> get paid for
> it, and that's mostly what they value.

Yeah, but when you follow that chain of command back through the machines,
you always reach humans as the source of the commands currently. Which
means the commands are always based on human DNA derived motivations. You
could say it's the DNA talking.

As we have talked about, we are already creating intelligent machines when
we set up and create large companies. When you work for a large company,
you are very much being told what to do by the collective intelligence of
the company far more so than by the intelligence of any individual in the
company (like your boss). Your boss is not free to do whatever he would
like, he's as much a slave to the company as anyone is. The head boss is
more a slave to the desires and needs of the "machine" than any of the
workers are typically. Which is all dangerous to humans - companies will
and have done large amounts of harm to humans in the name of maximizing
their own needs (profit). But we keep these big beasts on a tight leash
through the power of our governments to overpower anything these powerful
monsters might do that is considered bad by the people - by the humans. So
even though those motivations are not directly in line with human needs, we
make sure their motivations are always working to our advantage (as a
society), and when they don't we make them change, and if that fails, we
just kill them.

To give an AI power in human society - meaning voting rights in our
government, is to unleash a beast that has motivations incompatible with
our own. It's guaranteed suicide in the long run if we ever make that
mistake.

These super smart AIs aren't really any different than our large
corporations are today. That is, they are independent intelligent
super-machines that are set up to serve us. It just happens that our
corporations are implemented with lots of little humans turning the wheels.
The only difference in the future, is that the humans will be totally
replaced by man made machines, instead of just "mostly replaced" as they
are today.

The need to keep them under a leash will be the same in the future, as it
is today. And the fact that they will be able to do great things for us
even under the control of a tight leash, is no different now, than it will
be in the future.

The top level "commands" will always come from humans, and never will we be
willing to turn over the reins to an AI, to let it issue top level commands
without a higher level of humans above it.

> >> I think machines will be enslaved initially (as they are today).
> >> However, I eventually expect some machines to get full rights and
> >> acknowledgement that they are conscious, sentient agents. Failure to
> >> do so would be barbaric and inhumane.
> >
> > Well, I find the words conscious and sentient interesting.
> > [...] In fact, there is no such thing as soul, or
> > consciousness, or sentience. They are all invalid concepts that don't
> > actually have any useful explanatory power. And the creation of strong
> > AI is going to make this abundantly clear. There will just be a large
> > continuum of machine capability with humans and strong AI one one end,
> > and simple learning machines on the other like TD-Gammon. At no point
> > in the continuum will we be able to identify the point where the
> > machine became conscious or sentient because the concept has no
> > meaning.
>
> They do have meaning - at least according to the dictionary.

The dictionary definitions of those words are useless circular definitions
that self-reference each other without ever defining anything. There are
no axioms we have that can be used to define consciousness or sentience.
Checking the wikipedia entry, I see a highly cogent phrase right at the
top: "consciousness refuses to be defined". :)

> > As such, there will be no dividing line we can draw between what
> > machines should be "given rights" based on the fact that they are
> > conscious and sentient because there will be no way to define where
> > that line is.
>
> It sounds rather like there is no such thing as a beard - since there is
> no consensus on how many hairs make a beard.

No it's far worse than that. Everyone that likes to use the term thinks
they are talking about some special property of humans, but none of them
have a clue what that special property is - they just "know" it's there
(even though they have no idea what "knowing" is either), without being
able to say what that special property actually is.

In fact, what they are talking about, is an illusion of self perception
that exists in humans. But they don't understand that, and when confronted
with the idea that they are talking about something which is just an
illusion, they refuse to acknowledge that it's even a possibility.

So at best, the correct definition for conscious should read something
like: "a property a machine believes exists in itself as a result of being
fooled by the illusion of self perception which creates a dualistic view of
reality in that machine, that is, the illusion of a split between mental
processes from other body processes".

But the rub is that the people who use the term, don't believe they are
talking about an illusion, so such a definition will never work for the
very people that use the word - so you have to make up yet another
definition to keep the people fooled by the illusion, happy - since after
all, they are the only ones that think the word has a useful meaning.

> > When AI has been created, and society has gotten to the point where
> > most people understand what it is, and why it can do what it does, and
> > what all that means about what we are, and why we can do what we do -
> > all these thoughts about life being special or humans being special
> > because they are conscious, will vanish from society. People will
> > realize that what we call consciousness in a human is no more special
> > than what our PCs already have.
>
> Humans are not particularly special - compared to chimps. What we have
> and they don't is mostly memetic infections of our brains. We coevolve
> with our snowballing culture, and they don't. Both species are pretty
> special - compared
> to computers, though. Their brains are *much* more powerful, for one
> thing.

With the right software, a cheap slow computer might start to look far more
powerful than any human or chimp brain. Our ability to compare animal
brains to computers is currently limited by the software we currently have
- not by the software that might be able to be created on the machine.

And allowing them to do that creates great dangers to humans - which is
why some people, understanding this, have passed strict laws about
businesses donating to political campaigns, and seek further
restrictions to keep their motivations out of the governments that are
meant to be the human-controlled watchdogs of the businesses.

> >> We will make machines because of how useful they are - and because we
> >> love them. Machines won't be the enemies of humans - they will be our
> >> friends and partners.
> >
> > Well, thinking some more about this. I can think of one thing that
> > might explain how that could happen. It could be, that evolution has
> > built into us, some special perceptions systems and rewards that make
> > us like other humans. If these new AIs trigger that reward, because of
> > how they act, it could make us like them in a way we really shouldn't
> > like them.
> >
> > This would be a cause of the reward system that evolution hard coded
> > into us, failing to work in this new environment full of AIs. The
> > reward system could be there to help make us more social animals. But
> > it may trick us, and make is include our AIs in the "family".
> >
> > In time, the selfish genes would fix that mistake. [...]
>
> Why is it a mistake? I like various machines today. They are not bad for
> me, rather they enhance my own fitness - relative to if I did not
> interact with them.

Liking machines is not the mistake. Having genetic hardware in us that is
put there by our DNA to make us like other humans, but which errs and
makes us like human-like machines, would be the mistake I was speculating
might happen. Our DNA built it to make us like other members of our own
species because that increases the odds of the DNA surviving. With the
mistake, it decreases the odds of the DNA surviving and increases the odds
of the AI surviving.

> We can see the explosion of machines taking place today - and it is
> because humans like them, and want more of them.

Yes, but in general, we don't like them more than we like other humans.
Try to kill a human, and there's a huge uproar from the crowd. Kill a
machine, and some geek in the corner (me) cringes. There's a priority here
and human life and human safety is clearly, and distinctly way above the
health and safety of any machine. We protect our machines, only as long as
we don't have to risk the health or happiness of humans to do so.

I don't think the invention of AI is going to change that much.

> Machines seem set to destroy the human economy by taking all the
> human jobs. When that happens, most humans will persist on welfare
> derived from taxation - but they will basically be functionally
> redundant, and future development efforts will shift into the machine
> domain.

Yes, I agree with that. Except the taxation part. Humans will be sole top
level owners of the machines, and as such, will have ultimate control over
what they do, and ultimate ownership of all profits and wealth they create.
So the machines will be doing all the work, but only the humans will be
getting rich.

How the economy and the AI businesses will have to be structured to both
make that work, and keep humans safe, I haven't figured out yet. But when
you take humans out of the loop - because there are no humans working in
the jobs anymore, the human owners of the company become the only human
beneficiaries of the wealth and power created by the company.

At first, we could just keep corporate ownership like it is now. Anyone
that wants to can start an AI business, and sell or buy stock in an AI
business.

But the people that are the most successful at building the best business,
will become very rich, and others, who are just bad inventors, will become
very poor and won't be able to survive without charity from someplace.
Managing investments in AI business will be the only real job left for
humans at some point. But I suspect that will create such great wealth
disparity across the population that people won't put up with it. That is,
a very small number of people will become super good at this investment and
management game and will become super rich as a result, and the rest of the
population (having no other job options) will become super poor.

So the easy solution to the wealth disparity problem is heavy taxation.
And that might be the answer. Or the other path, which might at that point
finally work (when no one is working anymore), is to transfer all ownership
to the people as a whole (every human by law owns an equal share in all AI
companies). Currency and capitalism would still be the regulator of all
production, but the profits produced by the companies would just be equally
redistributed to all humans - who would then spend the money as they saw
fit - as their "vote" for what the machines should be doing for the humans.
The machines would not be paid (just like we don't pay our machines today).
The machines would not have any of their own money to spend. They could
only spend money when acting for their business. The correct way to motivate
them would be tricky, but I think the ultimate answer there might be to create
the motive we use in our businesses - which is to say the profit motive.
That is, they get rewards based on how much profit they produce for their
human owners.

But just like our current businesses, we would have to set up careful
systems of regulation and control to make sure they won't wirehead
themselves and/or make a profit by cheating their customers. So in
addition to AIs doing all the real work, we would need another whole level
of specialized AIs to act as the auditors and overseers. And some system
to watch the auditors etc.

So it would be much like it is now, except no humans would work anymore in
a conventional job. Their only job will be to run the machines - by spending
their money as a way of telling the machines what they wanted - and to act
as the watchdogs over the machines.

> > Right now, humans are the only example we have to look at to understand
> > what intelligence is. All humans have almost identical motivations
> > compared to the types of motivations we will give to AIs. As such, it
> > leaves us with some preconceived notions of what intelligence is - of
> > the range of personalities we can expect to see in an intelligent
> > agent. But I think when we build these AIs with different prime
> > motivations, they will develop a personality that ends up looking
> > unlike any human we have ever seen. They will be highly intelligent,
> > but yet, not human like at all. They might for example be more like
> > talking to a telephone auto response unit, or a vending machine, than
> > like talking to a human. The difference however is that they will show
> > great understanding of what we are asking, and be very clever and kind
> > and helping in their responses to us. I suspect we won't bond very
> > well with them at all because we will be so different - we won't be
> > able to relate to what they are thinking or feeling. All they want to
> > do, is go find another human to help. Ask them to go paint a million
> > little circles on the sidewalk, and that will make their day! They
> > have something to do to make a human happy! We won't be able to
> > emotionally connect with these AIs because their needs and feelings and
> > instincts will be so different than our own that they won't seem to be
> > human at all - even though they are clearly very intelligent.
>
> We will also want human-like agents. That is after all what we are
> familiar with.

Well, there will certainly be some demand for that. But personally, a
vending machine that can talk to me like a human would be fine by me. It
really doesn't need to look like a human if all I'm asking it to do is
make a sandwich, or to give me an iPod. If I can talk to a taxi and ask
it to take me places, do I really need it to have a human head and arms
sitting in a driver's seat? I don't think so.

> We will want them as sex partners, assistants,
> babysitters, nurses, etc.

The real advantage of human-form machines at first, is for backward
compatibility with operating machines built to be controlled by a human.
But in time, the backward compatibility won't be needed because all the new
machines will have built in AI control systems.

I could just as easy see having lots of small dog-sized assistants hanging
around helping me as having human-sized machines. A very small machine
with very long arms and legs might be far better at helping in a human
environment than a large human-sized machine. Which would you rather have
accidentally step on your foot, or accidentally bump into you at the top of
the stairs for example. Small light mobile AIs might be far safer in
general from accident in a human environment - even more so when young
children are around.

My guess is that if you want a human like sex partner, the best solution is
to find a human sex partner - it's not like we will have anything else to
do with our time besides eat, play, have sex, and have babies. If you
aren't in the mood for a human sex partner, then you probably don't care
too much what the machine looks like. That is, it might have a rough human
form, but again, size and weight can make it dangerous if it malfunctions
so again, small smart AIs might be far more the norm than human sized
machines.

I'm not saying there would be no call for human-like machines. I just
don't think they will be the norm in the future. I think it's a real fad
right now to see how human-like we can make machines seem - because of the
fact that we have not yet made any machines really human like.

But I bet once they really start acting like humans, we won't think it's so
cool anymore. They will be scary. So to offset that fear factor, we will
make the machines that are designed to be around humans small and weak, so
we won't have to worry about accidentally being hurt by them.

> Androids:
>
> Tim Tyler: On androids
>
> - http://www.youtube.com/watch?v=E-43KqWgTHw
>
> >>> So we will have a world, with humans clearly in charge, with lots of
> >>> smart machines serving us and even though we we have tons of machines
> >>> smarter than any human, none of them will be using their intelligence
> >>> to try and out-survive us. The creation of human and above human
> >>> levels of AI won't do anything to change the dynamics of who's in
> >>> charge here.
> >> Yes: evolution is in charge. Humans have one hand on the tiller, at
> >> best.
> >
> > Well, we have the upper hand and that's key for now. We will make this
> > army of very smart slave machines to help make life better for us just
> > like we already have a army of slave machines (cars, and computers,
> > etc) to make life better for us. I don't think making them smart is
> > going to change anything at all. It will still be a human society and
> > we won't see anything in these new machines that make us think of them
> > as human any more then we see a calculator as human just because it can
> > do some mental tasks better than any human.
>
> Well, except for the androids, I figure. And the customer service
> droids. And the counciller droids - and so on. However, I don't think we
> will give those agents voting rights either.

I wonder, if what might happen, is that some default styles develop for
what the AIs that communicate to humans end up looking like? I suspect the
end results will not be all that human like at all - or maybe about as
human like as C3PO. Something that still has a human-like feel to it, but
yet so distinctly different there's no confusion about whether we are
talking to an AI or a human.

> > We will have the power to make machines that do end up looking, and
> > acting, very human like. And it no doubt will be done for research
> > purposes. But in general, I think society will reject those sorts of
> > machines and maybe even regulate though law their creation because they
> > are too human like. I think it will in general scare the shit out of
> > most people to see a machine that is that human like.
>
> The uncanny valley?
>
> > People will be scared of them for the very reason
> > they should be scared of them. The robots will have different needs
> > than humans do and that will lead to conflict which if not caught early
> > - will lead to out right war.
>
> I figure androids will mostly be our friends and companions. I suppose
> there *might* be police-bots and soldier-bots - but I hope there won't
> be too much war for quite a while now.

Well there will always have to be police bots. Unless that is, that the
genetic engineering, combined with the conditioning and training of all
humans gets turned over to the machines so that all humans turn out to be
near perfect citizens and as such, don't need to be policed. But I don't
really think that's possible or desired. Through the vote, everyone will be
forced to follow the rule of the majority. And there will always be some
minority that will violently disagree with the majority which, if not for
police bots - would result in some form of law breaking (violence, or
vandalism, or worse). But I think in this brave new world, police bots
will be so cheap and so plentiful (police AI web-cams everywhere) that there
will be nothing you can do that wouldn't instantly be caught by them. Which
means, no one would ever believe they might get away with a crime - so if
they decide to do something illegal out of revenge like kill someone or set
off a bomb, they would get at best one chance to try it and then they
would be locked up for life.

> > When times are good for everyone, we can all be
> > friends. But when the shit hits the fan and we have a resource sharing
> > problem, people will chose sides, and so will the AIs, and the AIs and
> > people will be on different sides.
>
> I doubt it. Society is highly dependent on machines - and that
> dependence is only going to deepen. If there are future wars, I figure
> there will be smart
> machines and people on both sides.

Well, I think that's true because I don't think we will build any AIs that
will be independent free thinking machines trying to survive on their own
like humans try to survive. If we did, things would quickly turn to war
between the AIs and the humans. But since the AIs will remain total slaves
to the humans, there won't be an AI human divide. So any war that does
develop, will be human to human with the humans using their AIs as tools of
war. God help us if it ever comes to that!

> > Yeah, I agree completely that genetic engineering is probably too slow
> > to keep up with what _could_ happen to an engineering based evolution.
> >
> > But where I separate from you is this idea that that "bits of
> > civilization" are important. They aren't. We are here for one purpose
> > only - to protect and carry human DNA and human bodies into the future.
> > It makes no difference at ALL if the AIs are better at "carrying the
> > bits of civilization" into the future than humans are. Creating a
> > civilization is not our purpose. It's just something that happens as a
> > side effect of our real goal to carry humans into the future.
>
> I expect the RNA-based creatures reassured themselves with similar logic.
> However, now they are all gone.

:)

> Yes, agents are mostly out for their own DNA. However one way of
> ensuring the immortality of your own DNA is to make sure it is in the
> history books. James Watson and Craig Venter have the best chance here -
> but other humans are likely to make it as well. However, there is no
> evolutionary mandate that
> says that all humans have to make it.

Well, again, in the very long term, I agree completely. DNA just won't
out-live all the possible engineered life forms that might show up in time.
But I don't believe that AI is the technology that will cause DNA to go
extinct.

I really don't see AI as being special at all. I see it as not much more
special than a new and faster compression program. It's just one more
algorithm to add to our bag of useful algorithms to use in our machines.
This new simple but powerful algorithm just isn't the reason DNA based life
forms are going to be replaced. A new and stronger learning algorithm just
isn't going to make DNA based life forms go extinct and that's all I see AI
as being.

But in time, technology will create some stuff that's very dangerous to DNA
based life. And either we will transform in some step by step fashion
into these new more powerful survival machines, or someone will let the cat
out of the bag by accident, or with malicious intent, and these dangerous
machines will just take over. If we are talking about what's going to
happen in the next million years, I don't see how DNA life could continue
in a dominant role that long. It of course could be far sooner than a
million years, but my point is that AI will be here real soon now, and
it's not going to change anything in that way. DNA life will still be "in
charge" for a long time after we produce better adaptive learning
algorithms for our robot controllers.

Yes, the machines will rise - just like the human body rose up around the
DNA as a machine created by the DNA to give the DNA more power. The rise
of the AIs around us, will be more of the same - all still configured as
slaves to the DNA as much as our body (and our mind) is still a machine
slave to the DNA.

The machines won't rise up and take over. They will just rise up and give
our DNA huge amounts of new powers of survival.

> > The advent of an artificial auto-pilot system (AI) is going to have no
> > more effect on the path of evolution of the human gene than the advent
> > of the printing press. Which is to say, it will have a big effect, but
> > what it won't do any time soon, is kick human genes out of their
> > current "masters of the earth" position.
>
> Well, not immediately. However, we *will* see a memetic takeover.
>
> Evolution finds the best technology - and it has a good memory.
> Intelligent design and engineering are good tricks - and they will
> revolutionise the biosphere. Haemoglobin, chlorophyll, cellulose,
> mitochondria, DNA, etc will soon be hopelessly outdated technology.

Yes, in the long term, I agree. I just don't believe that the invention of
AI will be the technology to replace all that wonderful stuff. It's just
one more important step in the continued rise of technology. But probably
no more important than the invention of the wheel, or the printing press,
or the digital computer. These were all just creations that extended the
power of our DNA a little further. And this better learning algorithm that
will be created soon will also just extend the power of our DNA a little
further.

> Being close relatives of worms is not the final stage of evolution -
> we are just the beginning. The first intelligent agents that got
> civilization off the ground. The stupidest civilized creatures ever.
>
> Tim Tyler: Memetic takeover
>
> - http://alife.co.uk/essays/memetic_takeover/
>
> > The technologies that actually changes the course of evolution, are the
> > technologies that get themselves into the human reproduction and human
> > survival loops. Technologies that let us pick the genetics of our
> > offspring, or which allow us to pick when and if we reproduce, and
> > technology that controls who lives, and who dies, are all examples of
> > technology changing the path of human evolution in a big way. AI will
> > only change us because it will be a tool that will allow us to make
> > more changes to ourselves, and more changes in who lives and who dies.
> > But not because we just give up our control and let another life-like
> > form take over because we think they are "better" than us at running
> > the library.
>
> Human evolution is essentially over. Cultural evolution is where all the
> action is these days. It is proceeding much faster than DNA evolution
> ever did. Human evolution is too glacially slow to have much
> significance for the future of evolution:
>
> Tim Tyler: Angelic foundations
>
> - http://alife.co.uk/essays/angelic_foundations/

Well true, but really mis-guided. Human evolution hasn't been the thing
for a long time. Ever since sexual DNA reproduction was invented,
evolution stopped being single inheritance and turned from a competition of
individuals, into a competition of species. So individual human
development hasn't been a factor for millions of years. Then when strong
learning skills developed - the evolution of memes took off - which is the
evolution of the human control program. This just means the structure of
our brains is evolving far faster than the structure of the rest of our
body. But it's still the structure of the human brain that's developing.

That development however led to the development of all these other
extensions to the human body - like cars and shoes, and buildings, and AI
modules. All this stuff is really just more stuff evolution has produced
as part of the human race. We should think of it as if evolution had given
us a new arm - except what it gave us was a car, and instead of a new ear,
it gave us the internet.

So evolution moved to a new way of evolving the human brain by making it a
learning system long, long ago, and it's been evolving at a rate far
faster than the rest of the body because of it. But the really fast form
of evolution is what's creating all this technology around us - and that's
where evolution is moving the fastest right now. When we develop AI, that
will be yet another step up the exponential growth curve of evolution and
it will help us keep climbing that exponential curve. But currently, it's
still the human race that's evolving all these add-on features to make us
better. And I don't suspect our DNA is going to lose control of these
features any time soon.

Curt Welch

unread,
Dec 17, 2009, 3:43:31 PM12/17/09
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 17, 6:02 am, c...@kcwc.com (Curt Welch) wrote:
> >
> > What surprises me here is that you can't seem to understand
> > the difference between building learning machines, and building
> > chess playing machines. It's two totally different problems.
> > Evolution didn't do the latter - it did the former for us.
>
> It built a machine that could learn to forage and that machine
> it turns out can exapt those skills to also learn to play chess
> and do calculus.

And the fact that the system built to forage works so well for behaviors as
different as chess and calculus tells us how generic and powerful the
solution it found must be.

> > Learning algorithms are always interpreters. The only difference
> > is that interpreters have their code adjusted by humans, whereas
> > learning algorithms include an additional module to do the
> > adjusting of the code.
>
> And they can use natural selection within a species to do the
> adjusting of the code. Actually they can embody useful code just
> as we do with graphics and sound chips. I don't deny learning
> mechanisms in the brain, they exist at every level because they
> give a survival advantage to the individual. But it is not all
> or nothing. There is a balance. In the case of humans all high
> level behavior is learned - using innate circuitry that evolved
> over millions of years. There are some things that are easier
> to learn than others. Academic learning is hard because we
> have to use machinery built for learning how to forage. Seeing
> is easy because we have innate seeing circuits designed for
> foraging.

Well, I think your conclusion is going too far in terms of trying to compare
foraging vs academic work, but I do agree that it is always true that
learning systems will be optimized to learn some classes of problems more
easily than others. And I agree that our learning systems will have been
tuned to work well on what was important to us for the past millions or
hundreds of thousands of years.

The fact that we have the power to create, and survive in, this modern
world which has so little in common with the environment our ancestors
evolved in is simple proof of the generic adaptive (learning) power of the
brain. There are no other animals that have done much of anything to
change the environment they live in, compared to what we have done - and we
attribute our power to do this to our intelligence.

It makes no difference if our learning skills are distributed over 10
different modules - it still somehow creates a strong and highly generic
learning system which is far stronger and more generic than any of the
systems we have yet built. If the only way to create strong generic
learning is to use 10 different modules, then that's still what we have
to find. The problem is there to be solved, and until we solve it, we
won't have solved AI.

> > TD-gammon actually solves one specific high dimension learning
> > problem. And it solves it with a shockingly small amount of
> > interpreter memory (something like a few hundred or maybe a few
> > thousand words of memory to solve a learning problem with a
> > state space somewhere around 10^20).
>
> And it makes me wonder what is wrong with our generic learning
> that it cannot find these few hundred weights? Maybe because
> the brain doesn't work like an ANN?

What makes you think it didn't find similar weights in the brain of a human
who has learned to play backgammon?

> > Learning to improve your skill at backgammon simply IS a HIGH
> > DIMENSION learning problem. It's high dimension because the
> > state space of the environment is high dimension (around 10^20).
> > That's what the term means -
>
> Sure I know what high dimension means. However that it is possible
> to find a small set of weights to play a good game means that there
> are constraints that limit the variety in that game.

Yes, the fact that the weights can be found, show there are ways to
compactly represent a good _solution_ to the problem.

Finding that solution IS the LEARNING problem. Finding that solution for a
high dimension problem is the needle in a haystack problem that makes high
dimension learning a hard problem.
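
(To make that concrete for anyone following along: the kind of search
TD-Gammon does over its weight space is roughly the temporal-difference
update sketched below. This is just my own illustrative Python - the
feature count, learning rate, and linear-sigmoid value function are
stand-ins, not Tesauro's actual network.)

    import numpy as np

    # Minimal TD(0) sketch with a linear-sigmoid value function over board
    # features.  The "solution" is nothing but the weight vector w; learning
    # is the search for weights that make value(s) a good outcome predictor.

    N_FEATURES = 198          # stand-in for the size of a board encoding
    ALPHA = 0.1               # learning rate (illustrative)
    w = np.zeros(N_FEATURES)  # the few hundred numbers that encode the skill

    def value(features):
        """Predicted probability of winning from this position."""
        return 1.0 / (1.0 + np.exp(-w.dot(features)))

    def td_update(features, next_features, reward, terminal):
        """Nudge value(s) toward reward (if game over) or value(s')."""
        global w
        v = value(features)
        target = reward if terminal else value(next_features)
        # gradient of the sigmoid output with respect to the weights
        w += ALPHA * (target - v) * v * (1.0 - v) * features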

> There is an
> infinite number of combinations of two numbers but a simple process
> will compute the sum of any two numbers providing they are not too
> big for the circuitry to handle. But, like TD-Gammon, it is limited
> to its own domain.

Addition hardware is not solving a high dimension learning problem. It's
not even a search problem.

All the interesting learning problems are search problems. The high
dimension learning problems are tough because the space is so huge the
simple search algorithms can't practically deal with it. So more clever
techniques have to be developed. Just by looking at what the brain is able
to do, we can see instantly it's developed a far more clever and general
solution to this type of search problem. To solve AI, we have to do the
same. We have to find a more clever and more generic approach to this
search problem.

> >> > Humans ARE NOT BORN with the skill of playing GO. WE LEARN IT.
> > >> No matter how it works, it's a high dimension learning problem
> > >> that IS SOLVED by the brain.
> >>
> >>
> >> How it works is a way is found to simplify the problem.
> >
> >
> > God you are dense.
>
> Ditto :)
>
> > The complexity of the problem is NOT DEFINED by what happens inside
> > the box. It's defined BY the environment.
>
> Sure but the solution is defined by what's in the box and it is
> simple.
>
> The problem is high dimensional the solution is not.

Right. Good for you. Now show us how to build a machine to find those
simple solutions when it's got to be done on the fly, instead of using the
long slow learning power of evolution!

> > Low dimension problems are solved by visiting each state many
> > times, and learning what works in that state by trial and error.
> > High dimension problems are a different class of problem because
> > there are too many states to allow that approach to work. It
> > can't visit every state many times to figure out what works. It
> > must somehow use past experience from other states, to help
> > judge what to do in the new state. That's the abstraction problem.
> > The problem of abstracting out common features ...
>
> Which means simplifying the problem.

Well, see, in common everyday talk, that's how we are taught to talk about
that process. BUT IT'S NOT WHAT'S HAPPENING!

What you are confusing is what happens when someone else shows you the
solution, and you copy them. That is, we go to school, and are taught a
large bag of useful heuristics for solving a wide range of problems, and we
are taught how to dig through our bag of tricks to find the heuristics
needed to solve the problem on the test.

The heuristics we are taught make the problems easier to solve. Such as
being taught all the heuristics of math and algebra which makes solving
word problems that use numbers easier.

But where did those heuristics come from in the first place, John? God
didn't make them up and give them to us. Some human found them (or more
accurately, a whole string of humans found them, and refined them to make
them better over time). So how did their hardware do that? How did the
hardware find the heuristics that made the problem easier to solve?

We don't get to say "evolution gave us the abstractions that makes
everything easy". Evolution didn't do that. Evolution didn't invent math
and put all those math skills into our head. Evolution built some sort of
clever heuristic (/abstraction) _finding_ hardware and gave us that
instead. And it's that hardware, that has allowed us, over the years, to
create all these useful heuristics we are taught to use.

> > and using those common features
>
> That is the simplified low dimensional representation,

Yes, that's fair to say. But how is it found?

My networks create simplified low dimension representations but the ones my
network creates aren't very useful. How do you build a system to find
USEFUL low dimension solutions in a high dimension learning space quickly?
That is the high dimension learning problem we have to solve and the
problem I've been playing with all these years.

> > to guide the actions instead of using the actual real state
> > to guide the actions.
>
> Agreed.
>
> > What this leaves us with, is the simple fact that the brain
> > DOES solve a range of high dimension learning problems with its
> > hardware, which is beyond the general nature of the problems
> > we can currently solve with our hardware like TD-Gammon.
>
> The set of weights used by TD-Gammon are a low dimensional
> (simplified) model of the game that captures what is essential
> to choosing a move. The gammon universe has a many to one
> relationship with the TD-Gammon program. 6/12 Ashby.

Yes. That's obvious. I know basic set theory John.

> > They tried putting inverting glasses on frogs! That would have
> > been a hoot to see! :)
>
> Not as nice as that. They surgically rotated the frog's eye!

Eww! :)

>
> > Finding heuristics (abstractions) that are good for evaluating
> > the worth of a high dimension state is KEY to how these problems
> > are solved. But knowing that, is not answering the question of
> > how the system finds the heuristics in the first place.
>
> I realize that.
>
> > Your argument has always been that the solution, whatever it
> > is, will never be simple.
>
> With hindsight it may be simple but the path getting there may
> involve many false trials.

Right. Not only must we build a machine that can find simple solutions to
high dimension problems, but the very task of trying to find the design of
that machine, is itself, a high dimension problem which we are trying to
find a simple solution for.

> > ... you don't believe there is some small set of general
> > principles behind intelligence,
>
> I don't see intelligence as some singular thing to be found
> as a result of a selective process (learning) any more than
> I see there being only one kind of animal as a result of
> natural selection.

Yeah, I'll let you slide on that. :) I do, and I think when this strong
learning solution is found, it will quickly become what everyone thinks of
as intelligence. But that's all yet to be seen. The first task, is to
find some better learning machines that are good at finding simple
solutions to these high dimension problems - like TD-Gammon is, but only
for a far broader domain space.

> > Replacing hard coded modules, with generic learning, was what
> > made TD-gammon take a huge step forward in power by taking
> > a step backward in complexity.
>
> Well I see an ANN as a computational module which is mainly
> about multivariate statistics. There is no reason why the brain
> cannot have such modules. The ANN module can form associations
> by contiguity and resemblance but there is more to thinking
> than forming categories.

I don't think there is. There's no evidence to suggest thinking _needs_ to
be more than that.

> My thinking on this has been biased
> by the views of Steven Pinker who takes a computational and
> evolutionary point of view when it comes to how the brain works.

I've not read any of his books.

> His chapter 5 "Good Ideas" is all about the issue of how come
> humans can produce all these never seen before novel behaviors.

Well, the answer I assume is right is easy enough to understand.

The only way to turn a high dimension problem into a simple solution is to
make heavy use of abstractions - not just a few of them, but thousands or
maybe millions of them. Each node in the ANN defines some function from
the data fed into the network and all those functions are the abstractions
the ANN is using to create its output behaviors. Any behavior we produce
was created by these abstractions. And if you could record the brain
activity associated with any behavior, you would find a large amount of
neural activity was associated with, and played some role in, the creation
of the behavior. So this just means any behavior we produce is actually
driven by a large set of abstractions that exist in our brain as all these
different neural functions working together.

As learning is being applied to some large set of these abstractions (some
in the mix could be fixed function with no learning), it just means that
any behavior we produce, is actually a highly complex mix of thousands or
millions or billions of past learning experiences.

The use of these abstractions to do the learning means that there is never
a 1 to 1 learning effect like we document in a controlled Skinner box
experiment. A given learning experience doesn't just affect how we act in
a single specific environment.

When we have a learning experience when there's a dog around, some of that
learning ends up affecting _everything_ we do when there's a dog in the
environment. If we have a learning experience with a red lamp in the
environment, then _everything_ we do in the future around a red lamp is
slightly affected by everything we have learned in the past when there was
a red lamp around.

In normal life, we are not being conditioned by a single stimulus as we try
to study in a Skinner box. The environment is parsed down by the
abstraction network into millions of micro-stimuli and the learning that
happens applies in some percentage to all the micro-stimulus abstractions
that were active. It's a highly parallel and distributed (holographic
like) conditioning process.
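
(A toy way to picture that kind of distributed conditioning - my own
sketch, not a claim about how neurons implement it: treat each active
abstraction as a feature, and let one reward nudge the action preference
of every feature that was active, in proportion to its activity.)

    # Toy "holographic" conditioning: one reward updates the preferences of
    # every abstraction (feature) that was active, not just the obvious one.
    # The feature names and numbers below are made up for illustration.

    preferences = {}   # (feature, action) -> learned preference strength

    def reinforce(active_features, action, reward, alpha=0.05):
        for feature, activity in active_features.items():
            key = (feature, action)
            # each active feature absorbs its share of the credit or blame
            preferences[key] = preferences.get(key, 0.0) + alpha * activity * reward

    # The food dominates, but the red lamp still soaks up a little credit:
    reinforce({"food_visible": 1.0, "hungry": 0.9, "red_lamp": 0.1},
              action="reach_and_grab", reward=1.0)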

This means that every behavior we produce is a complex reaction not to
just one or two stimulus signals, but to every abstracted stimulus signal
our brain has sensed in our current environment. Often, the "why" of the
behavior is somewhat obvious to us because some simple obvious stimulus
condition dominated our actions. We see food when we are hungry and we
reach out and grab it and eat it. It's not the red lamp in the corner that
made us do that. It's the food. That's obvious. But yet, the red lamp in
the corner did play some small part in the selection of what behavior we
produced.

Our creativity is no mystery when you understand this large parallel
stimulus response system is what's driving all our behavior. Based on
what's in our current environment, we combine together past learned lessons
that worked well in past environments with similar sets of stimulus
signals. Though we may have no awareness of the fact, the red light in the
corner might actually have been the key to why some seemingly novel new
behavior was selected - when in fact it was just a variation of something
we learned 30 years ago in a room that also had a red light in the corner.

> But he has an open mind and in the preface states:
> "Every idea in this book may turn out to be wrong, but that
> would be progress, because our old ideas were too vapid to
> be wrong".

Yeah, that's a good quote. The problem with all this vague hand waving we
both do is that it's so vague at times, it can explain anything - and
can't be proven wrong. Your goto answer is evolution, my goto answer is
RL. Both are so powerfully vague that they can be used to justify
anything.

> > The entire growth and excitement of the field of AGI is all
> > about this same idea - the belief that we are on the verge
> > of making that next big step forward, but by taking the next
> > big step backwards in complexity.
>
> I will have to wait and see on that one.

Yeah, it's all a question of what actually ends up working. The more
success any approach has, the more people will be attracted to that
bandwagon. The reason everyone left the behaviorist and operant
conditioning bandwagon is because it was having no further successes when
other paths were.

> I became too tired to bother with much of your previous post
> but on rereading it in P's post I have made some comment...

My posts are so long and rambling that I can't even bother to read them
most of the time. :)

> Curt wrote:
> > ... no animal in the past billion years needed to learn a
> > large and useful set of abstractions for evaluating chess
> > board positions, or GO board positions. We don't have
> > such hardware in us and the fact you keep trying to claim
> > we do is what's so absurd here.
>
> I don't claim we have innate hardware to play chess or go.
>
> You try to explain the ability that humans have over animals
> as being due to the sudden appearance of the generic learning
> that has replaced the old systems.

Well, more than that, because our basic generic learning hardware has to
exist in some form in all mammals if my theory is correct. We clearly have
something more than most mammals. But other than brain size, I think the
real key is that we have some of that generic hardware correctly configured
and cross wired to other brain centers to support our language skills - and
I think that's the key that really sets us apart.

> Alfred Wallace, who also hit on the idea of natural selection,
> couldn't explain man's extreme ability to learn new things
> and reverted to believing it must be due to the interference
> of a superior being!!

RL, when done correctly, is so powerful that it does seem like magic! :)

> Apparently even today some scientists try to attribute human
> academic abilities to some kind of unknown self organizing
> principle that will be explained by complexity theory.

Well, I think the solution I'm looking for is certainly an unknown self
organizing theory. Which field of research finds it first is yet to be
seen. Many different fields are touching on the key aspects of the problem
so the first good solutions could come from any of the fields.

> However the other explanation is one of exaptations where
> brain circuits used by foraging for the past millions of
> years have been exapted for novel actions like playing
> chess or doing calculus.

Well, that's really not a different answer at all in my view. The foraging
hardware really had to be fairly generic learning solutions to start with
just to solve that problem. If some amount of tweaking that same generic
learning hardware allowed it to work well on an even larger domain, then
that's what could have happened.

Design work in general is always a process of looking for solutions that
work on as broad a problem space as possible. It's the difference between
crappy software design and really good software design. Good software is
always highly elegant - which means it's surprisingly small, and powerful,
for what it can do. It will be a small set of simple features that solve
a very broad range of useful problems.

The spreadsheet is one of many examples of a simple design for a program
which solves a surprisingly large range of problems. The web browser is yet
another (at least the simple concepts behind the first web browsers).
Human history of invention is full of important inventions that were
simple, elegant, and surprisingly powerful and useful. But for every one of
these inventions, there are thousands of crappy inventions that never went
very far - mostly because they were limited domain devices that cost too
much for solving too limited a range of problems.

If evolution created hardware for giving us better foraging powers, and the
hardware was so elegant, and simple, yet powerful, that it could solve
chess problems at the same time, then it's obvious that evolution found one
of those great design solutions.

> This happens for organs in the body where their primary
> function can also be used for other things. Whatever it is
> that makes them useful for other things gives any animal,
> with any slight changes that enhance that exaptation, a
> survival advantage. So jaw bones exapt into middle ear
> bones, a wrist bone in the Panda exapts into a fake thumb.

Sure. But for that to work, the first solution had to be close, to what
was needed for the second. And if a control system for allowing animals to
find food could be changed in small ways, so it allowed us to invent, and
use, calculus, then that first solution had to be some really powerful, and
fairly generic, hardware in the first place.

> > I strongly suspect that prior knowledge is not needed.
> > I believe all the data needed to solve the problem, is
> > actually contained in the sensory data itself.
>
> Evolution started without any prior knowledge so in that
> sense prior knowledge is not required. However it started
> with simple circuits and if you look at simple brains they
> are not in the form of a generic learning network. They
> are as finely designed as any image filter invented by a
> visual engineer.

Sure. No problem with that fact.

> What you are suggesting is a high speed evolving network
> which relies only on the experiences in its lifetime.
> Compare that with the parallel experiences of billions
> of brains over millions of years offered by evolution.

Well, no, not completely. You seem to forget at times that these networks
only work if they are driven by a good reward system. Creating these
generic learning systems is a twofold problem for evolution. First, it
has to create a very powerful, generic learning system, and second, it has
to build the very complex motivation system to define the learning
network's purpose.

The first half is the big simple generic system I keep talking about and
looking for. The second half is not needed (or useful) until after you
have the first half working. Giving such a learning system just the right
balance of motivations to maximize its odds of _learning_ good survival
behaviors is damn hard. And it would have taken evolution millions of
years to tune all that complex reward producing hardware to fit the
environment, and to set up a good set of clues to follow to quickly bootstrap
the learning of important complex behaviors - like foraging.

Once you create the strong generic learning systems, you have a machine
that I would call highly intelligent. But in order to make it learn _good_
survival behaviors (and especially to keep it from dying as it's trying to
learn its basic survival skills) is highly tricky stuff which requires a
ton of hard coded, totally innate, totally fixed function hardware to be
developed by the process of evolution.

Converting a living animal from a design based on only innate hardware, to
a design based on learning is highly difficult. For every function that is
in the innate hardware (like walking - or chasing food), some set of
innate reward circuits have to be added to the motivation circuits that
help it learn these highly important behaviors very fast. And it's not
like it can reuse any of the other hardware because the circuits for
walking, won't be anything like the circuits for generating rewards to make
the learning system quickly learn to walk. So it has to start with all
this innate hardware, add to that, a generic learning module that somehow
works in parallel with the innate stuff, and then start modifying the
reward hardware to replace a good bit of what the innate hardware was
doing, and then finally, remove the innate hardware that's no longer
needed. It wouldn't have been an easy conversion for evolution to pull off.
But it did.

> So I am asking what is most likely? A replacement generic
> learning module or innate modules exapted to solving general
> purpose problems?

Innate modules CAN NOT be changed to solve generic learning problems. They
are so completely different that the concept makes NO SENSE. Have you
written ANY generic learning control systems, John? They are NOTHING LIKE
the innate modules we would create to hard-code a behavior.

It's like thinking we could hard wire an electronic circuit to create a
complex logic function such as hardware to calculate the value of PI, and
then make a few small changes to that hardware to turn it into a
programmable computer so we could then write software to calculate the
value of PI. It's nonsense. The two circuits are so completely different
that they would never be transformed along that path by evolution.

The only thing that makes sense, is that some amount of specialized
adaptive learning was first added to the innate circuits to allow them to
adapt to changing conditions - like innate walking hardware that had
adaptive abilities to deal with different length legs - which freed up
evolution to change the leg size, without having to re-design the control
circuit in parallel with the leg size changes. Then the adaptive circuits
over time became more generic, and more powerful, until the entire fixed
function module, could be replaced with generic learning, combined with
fixed-function motivation hardware.

> > But it's mostly contained in the temporal data available
> > only if you apply statistics to how the images change
> > over time, instead of trying to apply them to a single
> > static picture.
>
> The "temporal data" is just a sequence of "spatial data"
> and yes some kinds of patterns can only be found in the
> sequence. Most of what I know about the world is static.
> Even what changes is understood in terms of what doesn't
> change. The pixels that make up a face change all the
> time but not the categorization of the face.

Right, but the real face creates temporal constraints in how the
face-pixel sensory data will change OVER TIME, and those constraints are
what good generic hardware can identify, and use, to create the invariant
face recognition circuits. It's what tells the generic hardware how to
correctly parse the data, so the generic hardware needs less a priori
knowledge from evolution hard wired into it. It's what gives us our power
to correctly parse static images like a photograph even though it's got no
temporal information in it.
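
(One purely illustrative way temporal constraints can be exploited is a
"slowness" score: among candidate features of the pixel stream, prefer the
ones whose output changes least from frame to frame relative to how much it
varies overall - the spirit of slow feature analysis. This is my own sketch
with random stand-in data, not a claim about the cortex.)

    import numpy as np

    # Rank candidate linear features of a pixel stream by how slowly their
    # output changes over time.  Slow, non-constant outputs are the kind of
    # invariant signals ("face present", "dog present") discussed above.

    def slowness(weights, frames):
        """frames: (time, pixels) array; lower score = slower = more invariant."""
        out = frames @ weights
        out = (out - out.mean()) / (out.std() + 1e-8)   # ignore trivial constants
        return np.mean(np.diff(out) ** 2)

    rng = np.random.default_rng(0)
    frames = rng.normal(size=(100, 64))                 # stand-in pixel stream
    candidates = [rng.normal(size=64) for _ in range(20)]
    best = min(candidates, key=lambda cand: slowness(cand, frames))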

> > I believe our vision system learns how to decode images
> > based on how they change over time, not on how they look
> > at one instant in time.
>
> Man made visual systems don't do it that way so why should
> a brain not do it that way?

Because the engineers that built those systems are still stupid and don't
understand how to build systems that work as well as the ones evolution
built for us. Can the man made systems perform as well as the human brain
yet? Not that I'm aware of. So you are asking me why systems built by
man, which suck, are not what evolution used to build a far better system?
You realize why that argument is stupid right?

> It is even unclear how you decode "that is a circle" from
> how it changes over time. It doesn't change. A circle is
> not a temporal pattern so why would a brain treat it as such?

I guess because you are so conditioned to NOT understand temporal patterns
that it's beyond you to even come up with a good example or a good
question?

See if you can spot the temporal data in this circle...

http://www.youtube.com/watch?v=q4d0uGqrGiQ

There is no circle in the universe that doesn't have the temporal pattern
of change seen in that video. We recognize circles _because_ they create
this consistent temporal pattern of change over time.

> Unless you are confusing the serial search with "temporality"?
> All that serial search is understood in static terms. All
> temporal patterns can be understood as a spatial pattern.
> But then I have covered all that before.

If you try to solve _this_ problem that I'm talking about, by thinking
about it in those terms, you will ALWAYS FAIL. I'll guarantee you that. To
solve some problems, you have to learn to think about them differently than
you have in the past. This is one of those problems - again, I'll
guarantee that.

If you don't learn to cast off your "everything can be solved by writing
static images on a sheet of paper" heuristic, you will never solve this
problem, and you won't even understand how the problem can be solved, or
even what the problem is that we are trying to solve - which is fairly
consistent with how much trouble I have in communicating to you the
importance of building hardware that directly solves temporal pattern
problems in order to find hardware that can solve this generic learning
problem.

> > I think the solution requires that we parse the data into
> > abstractions that are useful in making temporal predictions
> > - in making predictions about how the real time image will
> > change over time.
>
> Actually most real time images don't change over time.

Yeah, most videos on youtube don't change from frame to frame. Yeah, keep
telling yourself that nonsense and you will find the answer to this problem
in no time flat.

> We
> understand them in terms of what doesn't change over time

EXACTLY. But how do we get that "understanding" from data that IS
constantly changing over time?

WE get that "understanding" because we have learning hardware that is able
to automatically configure itself into a data parsing (or abstraction -
same thing) system to extract out invariant representations.

Yes, once you find the invariant representations, things become easier
(still not simple - but easier) - which is why I keep saying these networks
have to do that. "Dog" is a a high level invariant representation of how
visual pixel data will change over time when we are looking at a dog. etc.

> not in terms of that ever changing image on the retina.
> Predictions are also based on what doesn't change. The
> path of a ball is predictable because it follows a certain
> curve through space once it leaves the hands.

Right, and it's those predictions, that are used, to shape how these
networks parse the changing data into "ball flying free through the air".
(along with all the other invariant features that exist in the data at any
instant in time).

> In physics we define velocity by what doesn't change. The
> change in position over a unit of time, _doesn't change_.
> The change in velocity over a unit of time also is used
> as it _doesn't change_, we call it a constant acceleration.
> Then we have a falling body where we have a constant value
> that measures the change in the acceleration. It gets
> faster and faster but at a rate that doesn't change. (At
> least not in a vacuum).

It makes no difference if the invariant prediction is about what changes,
or about what doesn't change. It's the same thing either way. It's a
prediction about what to expect IN THE FUTURE. It's a temporal prediction
about what's going to happen next. If the temporal pattern is XXXXX then
the temporal prediction is one of no change. If the temporal pattern is
ABCDE then the prediction is that there will be change. It's still a
problem of recognizing temporal patterns - and more important, recognizing
them before they happen.

> > There are an infinite way to parse something like vision data.
> > So when is one way of parsing it better than another? The
> > easy answer is that one way made us "smarter" and helped us
> > survive, so evolution tuned the system to parse it in the way
> > that was most useful to us for survival. Which is almost bound
> > to be true to some degree.
>
> I agree completely with all of that.
>
> > But I think we can do far better than that by understanding
> > that all this parsing is done for a very specific purpose -
> > the purpose of driving behavior that needs to be predictive
> > in nature. That is, behavior which is able to "do the right
> > thing" before it's needed, instead of after.
>
> Parsing the image so it is predictive makes sense and so does
> parsing the image so it is relevant to the goals of the animal
> so it doesn't waste time attending to everything or storing it
> for later associations while the predator or food goes unnoticed.
>
> Rather than trying to handle all the data the trick is to
> know what can be thrown out. Rather than every possible
> categorization the trick is discovering useful categories.

Yes, and if evolution could find all that for us, that's fine and dandy.
But I think, for one, by seeing how good the brain is at solving learning
problems, we can see it's found dynamic ways to figure a lot of this out on
its own without help from evolution. Evolution decided for example to
throw out all the IR spectrum data (well most of it - we can detect a
little bit as heat on the skin). And evolution could have decided to throw
a lot more away in some specialized pre-processing networks it built. But
after everything it could have done is factored out, we still have a high
dimension generic learning problem to solve before we can equal what the
brain can do.

As I've tried to explain, in this learning problem, WE CAN NOT "throw out"
data. But you seem unable to understand that requirement so I'll just skip
it.

The end result however, is that the generic part of the solution must
find, on its own, with almost no a priori help from evolution, the correct
fracturing (parsing) of the data into abstractions that are maximally
useful for driving our behavior.

And something that's just popping into my mind again because of the context
of what I've been thinking about above, is the point of the parsing being
useful to predict the future. I think this is a key and highly important
aspect of the temporal pattern problem I haven't thought enough about. That
is, if a parsing of the data creates a better prediction of the future,
then that parsing is a better parsing. This idea, I think, can be used to
understand how things could work.

The key idea I've talked about in the past is that the network should
identify, and make use of, the constraints in the data (both the temporal
constraints and the spatial constraints). But I've said what it should be
doing is trying to transform the data to _remove_ the constraints.
This sounded right to me, but there were always a few nagging
inconsistencies I couldn't quite resolve. Namely, if you transform to
remove the constraints, the output of such a process will look like
compressed data looks - white noise with no correlations between the
signals. But the idea was that this would allow the network to also
recognize invariant patterns, like "dog". An output signal for "dog" would
never look like white noise. It would go active for a long time as the dog
was around, and then go inactive when there was no dog. That would be a
signal full of temporal constraints. That's not consistent with my other
idea that the signals could be transformed to remove the constraints. So
this was always a nagging unanswered inconsistency in my thoughts about
what these networks should be doing.

If we switch the goal from removing the correlations, to, find the patterns
that are maximally predictive of the future, then we end up with signals
that have lots of constraints.

For example, if we have a sequential character example where we are looking
for patterns in the character sequence, we could have the pattern AAAXXXX
which shows up a lot in the data.

When the character A shows up, this is slightly predictive that AAAXXXX
might be showing up. After AA shows up, we have even more faith that the
pattern AAAXXXX will show up. By the point AAA arrives, we might have a
99% confidence that XXXX is about to show up simply because that's a
statistical constraint that happens to be true for this data stream - that
is, AAA is followed by something other than XXXX only 1 out of 100 times in
this data.

If the network had an output for the pattern "AAAXXXX" and that output was
in some form that allowed a level to represent a measure of confidence
(like a 0 to 1 real number value), then that output would be 0 at times
where there are no A's. But after one A, the output of that signal could
rise to indicate the odds that that pattern was going to follow - maybe 1%
(0.01 to represent 1%). But after three A's had shown up, the output would
rise to 0.99 and other outputs, like the AAAY pattern output, would drop to
0.01. So this would be a network that was producing outputs that were
predictive of the future.
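
(Here's a toy version of that confidence output, just to pin the idea
down - my own sketch, with a made-up training stream; the 0.01 and 0.99
style numbers simply fall out of the counts.)

    from collections import defaultdict

    # Toy predictor: estimate P(full pattern | prefix seen so far) from counts
    # over a character stream.  The result is the 0..1 "confidence" signal
    # described above - it rises as more of the pattern is confirmed.

    PATTERN = "AAAXXXX"
    prefix_counts = defaultdict(int)      # times each prefix of PATTERN occurred
    completion_counts = defaultdict(int)  # times that prefix completed the pattern

    def train(stream):
        for i in range(len(stream)):
            for k in range(1, len(PATTERN) + 1):
                if stream[i:i + k] == PATTERN[:k]:
                    prefix_counts[k] += 1
                    if stream[i:i + len(PATTERN)] == PATTERN:
                        completion_counts[k] += 1

    def confidence(prefix_len):
        """How strongly the first prefix_len characters predict the full pattern."""
        if prefix_counts[prefix_len] == 0:
            return 0.0
        return completion_counts[prefix_len] / prefix_counts[prefix_len]

    train("AAAXXXXBBAAAXXXXCCAAAYBB" * 50)
    print(confidence(1), confidence(3))   # confidence rises as more A's are seen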

The network doesn't need to actually "report" what it was expecting to show
up next in the output. That's just not needed. The more important point,
for the sake of doing reinforcement learning, is that these signals be used
to trigger behavior. That is, the more predictive the stimulus signal is,
the more useful it becomes at triggering behaviors that must happen some
time _before_ the required result.

So the point here is that if you have a network with only 10 outputs,
which 10 temporal patterns should the network configure itself to
recognize? Should it use one of the outputs for the AAAXXXX pattern, and
another for AAAY and another for XXXXAAA, etc? There's a nearly infinite
number of patterns that could be recognized in this high dimension sensory
space, but only 10 outputs this small network can produce. This in general
is just the standard problem for this type of learning network - how to
best use the limited hardware it has to best solve a problem way larger
than it can hope to perfectly solve. It needs to use its limited hardware
as efficiently as possible.

And my new idea (actually it's one I've thought of in the past, but not
much lately) of what it should be doing, is (somehow) trying to adjust the
pattern recognizers so they produce the strongest predictions about the
future.
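
(A crude way to even start playing with that idea - nothing more than a toy
scoring rule of my own: among many candidate patterns, keep the few whose
occurrence pins down what comes next most tightly.)

    from collections import Counter

    # Toy selection rule for a network with only a few outputs: score each
    # candidate pattern by how concentrated the distribution of the *next*
    # character is after it appears, and keep the top scorers.

    def predictiveness(pattern, stream):
        followers = Counter(stream[i + len(pattern)]
                            for i in range(len(stream) - len(pattern))
                            if stream[i:i + len(pattern)] == pattern)
        total = sum(followers.values())
        return max(followers.values()) / total if total else 0.0

    def pick_outputs(candidates, stream, n_outputs=3):
        return sorted(candidates, key=lambda p: predictiveness(p, stream),
                      reverse=True)[:n_outputs]

    stream = "AAAXXXXBBAAAXXXXCCAAAYBB" * 50
    print(pick_outputs(["AAA", "AAAX", "XXXX", "BB", "CC", "AAAY"], stream))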

This is certainly something my old network didn't do at all. It did
recognize temporal patterns, but it made no attempt to deal with predicting
future temporal patterns. All it did, was report on what had been seen,
and not make any predictions about what might be coming next. And I think
that's a key error in that network design - and a simple way to understand
something very important that was missing from it.

How to make a network that adjusts itself to find the best predictive
patterns I don't know how to do. But it gives me something interesting to
ponder about that I feel might lead to something good....

> There is always the problem of time and resources available
> for any task including learning. An organism that can make
> use of millions of years (time) and billions of individuals
> (resources) to develop useful circuits has an advantage
> over anything that has to rely on evolving all this in its
> own lifetime by some super amazing generic network alone.

Sure. But that's just not relevant to this part of the problem. Any
information that can be collected, and used, from the millions of years of
history, through evolution, is done as a separate part of the solution. We
know for a fact, that the brain has strong adaptive abilities to deal with
problems that evolution can provide no a priori help with - and that's one
of the key powers missing from our attempts to create AI. And as you know,
it's the only aspect of the AI problem I'm interested in - and as you also
know, I think it's so important that it will become what people think of
as AI when we get it working, whereas all the stuff you talk about will
not be AI. :) But that's just my speculation on how powerful these systems
really will be on their own.

> You seem to want to compress and store all the incoming data
> to compare with each new input. I suspect that is not possible
> in practice.

No, that's not at all what I've been talking about. That's just you having
problems following all my very abstract hand waving. Though I talk about
the issue of "compression", I've never once suggested we use any "storage"
or "compare old data to new data after we compress it". That's just in no
way related to what I'm thinking or trying to communicate.

The idea of compression is me using terms to try and relate to you, and
others, and to myself, the type of transform I think the network needs to
apply to the data. But above, I've changed my view. With more thought,
this new approach might even be better seen as an anti-compression
transform. That is, instead of trying to transform the data into white
noise by removing correlations, maybe it's trying to transform the data so
as to maximize the amount of constraints in each output signal. That is, to
make it contain as much structure and look as little like noise as possible
(while not throwing away any of the sensory information).

> I would also point out again that the world out
> there can always be accessed; you don't have to remember it all.

And again, I have never once suggested it be stored. That's just something
you are getting confused about. I made this clear in a recent message in
this thread where you made the same mistake, but maybe that was a part you
didn't see or skimmed too quickly to realize what I was saying. Or maybe,
just another example of me failing to communicate ideas to you. :)

> > You can't wait until you get to the stop sign to send the
> > signal to the foot to step on the brake. It has to be sent
> > ahead of time. Almost all actions work like that.
>
> But that can be determined by static data. If a robot or
> human is heading toward a wall (or stop sign) it can adjust
> its speed by estimating its distance from the wall (or the
> stop sign) from the current static image. There is nothing
> temporal about estimating distances from a static image
> and that data plus the speed of the observer can generate
> a desirable speed to control the brakes and accelerator.

You can't estimate speed from static images. You can't solve that problem
without the TEMPORAL sense of speed.
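
(To make the disagreement concrete with toy numbers of my own: one snapshot
can give you a distance to the sign, but a speed only exists as a
difference between at least two time-stamped observations.)

    # One static observation gives a distance to the stop sign, but no speed.
    # Speed only falls out of a difference across time-stamped observations.

    def speed_toward_sign(observations):
        """observations: list of (time_seconds, distance_metres), oldest first."""
        if len(observations) < 2:
            return None                       # a single snapshot can't give speed
        (t0, d0), (t1, d1) = observations[-2], observations[-1]
        return (d0 - d1) / (t1 - t0)          # metres per second toward the sign

    print(speed_toward_sign([(0.0, 50.0)]))                 # None
    print(speed_toward_sign([(0.0, 50.0), (0.5, 43.0)]))    # 14.0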

> > We have to create a behavior in response to what our sensors
> > are reporting now, in order to make the future unfold in ways
> > that are better for us. Reinforcement learning algorithms in
> > general all solve this problem already. But what makes it
> > harder in these high dimension problems is that we can't use
> > the "real" state of the environment to make predictions with.
> > We are forced to use a system that creates abstractions to
> > define our understanding of the state.
>
> Spot on. Exactly what I have been saying. We can't use the
> "real" state of the environment which is too complex.

Being able to find a _good_ set of abstractions by which to represent the
state is THE PROBLEM that's never been solved. It's what I'm grasping at
when I talk about what type of transform the network needs to apply to the
data. It's got to apply a transform that changes the raw sensory data,
automatically, into a high quality set of simplified abstractions.
Evolution can't be the answer to how that transform is done, because it
can't explain how we can learn to use good abstractions for something new -
like playing chess. The abstractions that work for playing high quality
chess are not something evolution built into us to help us find food.
It's something our high quality generic abstraction-finding circuits found
for us. And I'm very sure it works by some simple rule such as, "adjust
your function so as to maximize XXX mathematical feature of the outputs",
where I suspect "XXX mathematical feature" is something that makes the
network produce features that are maximally predictive of the future.

> > of the environment.
>
> And my suggestion is they were naturally selected over millions
> of years.

and your suggestion has been proven impossible - though you don't seem able
to understand the proof (which means it's not proven to you). But it's
time for me to give up on trying to help you understand that proof for the
time being.

> > How the system parses an image, is the same problem. It's the
> > problem of what internal representations are most useful for
> > that internal state. And a key part of that answer, is that
> > we need internal state representations that are good predictors
> > of the future.
>
> > So we can adjust how the system is parsing, based on how
> > good any given parsing is at predicting the future - at
> > predicting how the sensory data will change over time.
>
> > The ball vs. background parsing is useful because it's more likely
> > that the ball will move relative to the background, than it is
> > that the ball will split in half with half of the pixels moving
> > off the left and the other half moving off to the right.
>
> You like to complicate everything. A simple motion circuit doesn't
> have to worry about the likelihood of pixels going one way or the
> other way. It simply responds to what does happen. No prediction
> is required ALL the time. Even predictions are NOW decisions based
> on previous changes held in "weights" that exist spatially.

Yeah, keep telling yourself that the brain solves this high dimension
learning problem with "simple motion circuits".

> > I believe this parsing problem is, and must be, solved, based
> > on a statistical system that forms itself into the best set of
> > temporal predictors it can. That it works by converging on
> > higher quantity abstractions based on how good a given
> > abstraction is at predicting how the data will change over time.
>
> Sure. But it doesn't require all those requirements you suggest.
> You hold this belief that as a result of fulfilling these "temporal"
> requirements a predictive controller will pop out as a result.
> Spatial data will do just fine in making a predictive controller
> for it can hold temporal data. It can be embodied in a circuit
> that changes naturally over time in a way dependent on the
> modulating influence of higher level command signals and lower
> level feedback signals.

Spatial or not, something still has to decide what aspects of the temporal
data have to be translated and processed in your spatial domain so the
problem is still there, and still very real.

N

unread,
Dec 17, 2009, 4:18:22 PM12/17/09
to
On 16 Dec, 13:52, Tim Tyler <t...@tt1.org> wrote:
> Curt Welch wrote:
> > Tim Tyler <t...@tt1.org> wrote:
> >> Curt Welch wrote:
> >>> Tim Tyler <t...@tt1.org> wrote:
> >> Why call future machines "toasters"?  It seems derogatory.
>
> > It was meant as a way of emphasizing that these future machines won't be
> > any more important to us than our toasters and all the other machines we
> > use to make life easier for us, and to emphasize the point that these AI
> > machines won't have a drive for survival any more than our toasters do.
>
> They will probably be quite important for us.  We are getting increasingly
> dependent on tools as time passes.  Take away stone-age man's tools and
> he can still get by.  Take away our tools, and civilisation collapses,
> and most
> humans die.

do you really think it's all 'tool users' as a cause of civilization?
and as a consequential use of those tools man's behaviours then become
more 'civilized' eh? I wonder! like if you drive a faster car, or earn
more money by use of a financial system? will that then excuse bad
or deterrent behaviour in others less fortunate say? I would
commerce...if the 'tools' of my labours were worthy of exchange with
wits

> >>> If in some future, the human race is facing extinction because of
> >>> forces beyond our control, I could see people deciding to build AIs
> >>> that are intended to carry on without the humans.  But unless human
> >>> society is gone, I just don't see a path for the AIs being allowed to
> >>> take over - or to get _any_ real power over the humans.  I don't think
> >>> humans as a whole, will every allow such a thing.
> >> ...whereas I think the humans will do it deliberately.
>
> > And what would be the selfish gene's motivation for doing that?
>
> At each step, DNA that cooperates with the machines does better than
> the DNA that doesn't.  That doesn't imply that DNA is thriving overall.
> You can be climbing a mountain as fast as you can - but the mountain
> can still be sinking into the sea.
>
> Anyway, DNA-based humans are still going to survive, I figure.  They
> will be in museums - but that's still survival.
>

yeah, and the pay's great!? .....the world is still a globe, there are
different geographical locations!

the search engine from every internet site is logged, and as the porn
romance industry speculated ....? ....those who have made their
'uppermost desires' eloquent and clear, a suitable partner can't be
found? so what next?

> > In time, the selfish genes would fix that mistake.  [...]
>
> Why is it a mistake? I like various machines today.  They are not bad for
> me, rather they enhance my own fitness - relative to if I did not interact
> with them.
>
> We can see the explosion of machines taking place today - and it is
> because humans like them, and want more of them.
>
> Machines seem set to destroy the human economy by taking all the
> human jobs.  When that happens, most humans will persist on welfare
> derived from taxation - but they will basically be functionally redundant,
> and future development efforts will shift into the machine domain.
>

This isn't an old tale, my Labour MP relatives and their associates
concluded way, way back that one day 'it would be a sort of luxury' to
have paid work. Exactly what they beheld as 'work' is beyond reason or
purpose here, but we know already that if people had never been
isolated on purpose and deformed by ritual and disproportion by
summary? most guys and gals kinda make their own ways and make their
worth and needs felt within proportional representation?


>
> > Right now, humans are the only example we have to look at to understand
> > what intelligence is.  All humans have almost identical motivations
> > compared to the types of motivations we will give to AIs.  As such, it
> > leaves us with some preconceived notions of what intelligence is - of the
> > range of personalities we can expect to see in an intelligent agent.  But I
> > think when we build these AIs with different prime motivations, they will
> > develop a personality that ends up looking unlike any human we have ever
> > seen.  They will be highly intelligent, but yet, not human like at all.
> > They might for example be more like talking to a telephone auto response
> > unit, or a vending machine, than like talking to a human.  The difference
> > however is that they will show great understanding of what we are asking,
> > and be very clever and kind and helping in their responses to us.  I
> > suspect we won't bond very well with them at all because we will be so
> > different - we won't be able to relate to what they are thinking or
> > feeling.  All they want to do, is go find another human to help.  Ask them
> > to go paint a million little circles on the sidewalk, and that will make
> > their day!  They have something to do to make a human happy!  We won't be
> > able to emotionally connect with these AIs because their needs and feelings
> > and instincts will be so different than our own that they won't seem to be
> > human at all - even though they are clearly very intelligent.
>

This is an area where most people unguided go a little silly. One
might walk into an arts gallery and presuppose that all paintings
hanging in it are identified as personifications of their own wanton
desire? ... no, it ought to be kept clearly in the introduction that
'this is an art house!' (muffled squeaks) or some such!


>
> We will also want human-like agents.  That is after all what we are familiar
> with.  We will want them as sex partners, assistants, babysitters,
> nurses, etc.
> Androids:

AI, like the movie?

going towards knowledge and behaviours identified by popular poetry,
aesthetics, museum awards, the graphics industry, movies, ?

Ummm....I believe the media and those who would be willing to invest
can only invest their time and patience in subjects that they can
understand, and let's face it, when we want to 'switch off' and watch
the herds over the plains or the dew drop off the last flower in the
last faltering light? who wants to be reminded that that supply of
demand is part of a bank of technical data ?......I have my own
experience and learning tho!


> Tim Tyler: On androids
>
>   -http://www.youtube.com/watch?v=E-43KqWgTHw


>
>
>
> >>> So we will have a world, with humans clearly in charge, with lots of
> >>> smart
>

Tim Tyler

unread,
Dec 17, 2009, 5:06:18 PM12/17/09
to
Curt Welch wrote:

> Well, again, in the very long term, I agree completely. DNA just won't
> out-live all the possible engineered life forms that might show up in time.
> But I don't believe that AI is the technology that will cause DNA to go
> extinct.

What will do in DNA is other heritable materials. We have a plethora of
new players on this scene - and will see many more in the future. Passing
information down the generations isn't a one-size-fits-all problem, and
we will probably have many information-storage media in the future,
for different applications - much as we see today.

Intelligent machines are the new replicators growing a brain - and
nanotech/robotics is them growing a body.

So: the angels are taking shape - soon they will receive us.

Tim Tyler

unread,
Dec 17, 2009, 5:15:56 PM12/17/09
to
Curt Welch wrote:

> I really don't see AI as being special at all. I see it as not much more
> special than a new and faster compression program. It's just one more
> algorithm to add to our bag of useful algorithms to use in our machines.
> This new simple but powerful algorithm just isn't the reason DNA based life
> forms are going to be replaced. A new and stronger learning algorithm just
> isn't going to make DNA based life forms go extinct and that's all I see AI
> as being.
>
> But in time, technology will create some stuff that's very dangerous to DNA
> based life. And either [we] will transform in some step by step fashion
> into these new more powerful survival machines, or someone will let the cat
> out of the bag by accident, or with malicious intent, and these dangerous
> machines will just take over.

I figure that future dominant organisms will inherit both from us and
from our machines. However, I don't expect very much of "us" to go
the distance. That is not to say that there will not be a continuous line
of descent from humans - but rather that most of the human genes
will be discarded along the way.

I regard the "accident" / "malicious intent" scenarios as pretty unlikely.
Civilisation is not very likely to fumble things that way. The machines
will continue to rise as they are doing today, by deliberate design, with
the cooperation of human governments.

Tim Tyler

unread,
Dec 17, 2009, 5:19:28 PM12/17/09
to
Curt Welch wrote:

> Yes, the machines will rise - just like the human body rose up around the
> DNA as a machine created by the DNA to give the DNA more power. The rise
> of the AIs around us, will be more of the same - all still configured as
> slaves to the DNA as much as our body (and our mind) is still a machine
> slave to the DNA.
>
> The machines won't rise up and take over. They will just rise up and give
> our DNA huge amounts of new powers of survival.

Check out the demographic transition in Japan. When there are lots of
memes, machines and robots about, humans appear to spontaneously
stop breeding. The breeding rate drops to below the replacement rate.
Japan gives us a taste today of what will happen to humans in the future.

casey

unread,
Dec 17, 2009, 6:18:02 PM12/17/09
to
highly edited response

On Dec 18, 7:43 am, c...@kcwc.com (Curt Welch) wrote:

> Your goto answer is evolution, my goto answer is RL. Both are
> so powerfully vague that they can be used to justify anything.


For evolution the feedback signal is reproductive success and
applies to the species where random changes take place in the
dna pool of the species. For reinforcement learning the
feedback signal is from a reward system and the change takes
place in the neural connections.
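
Spelled out as toy code (just my own framing of the parallel, not
anyone's model), the two loops look like this:

    import random

    # Two selective loops, two different feedback signals (toy illustration).

    def evolve(population, fitness, mutate, generations=100):
        # Feedback: reproductive success.  What changes: the gene pool.
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:max(1, len(ranked) // 2)]
            population = [mutate(random.choice(parents)) for _ in population]
        return population

    def reinforcement_learn(weights, behave, reward, perturb, trials=100, alpha=0.1):
        # Feedback: a reward signal.  What changes: one individual's connections.
        for _ in range(trials):
            candidate = perturb(weights)
            if reward(behave(candidate)) > reward(behave(weights)):
                weights = [w + alpha * (c - w) for w, c in zip(weights, candidate)]
        return weights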


> Converting a living animal from a design based on only innate
> hardware, to a design based on learning is highly difficult.


As research on Aplysia shows, the innate circuits provide default
behaviors; however, those circuits are modifiable by experience.


> ... if a parsing of the data creates a better prediction of
> the future, then that parsing is a better parsing. This idea,
> I think, can be used to understand how things could work.

This is taken from Steven Pinker's book "How the Mind Works"
chapter 5, Good Ideas, but not in his exact words.

What we have to get out of forming categories is the ability to
infer things including the ability to infer what will happen next.

Placing something in a category enables predictions about
aspects of that thing we haven't yet observed. The choice of
categories is a compromise between how hard it is to identify
the category and how much good the category does you.

Some categories fall out of pattern associator neural networks.
These fuzzy categories provide predictive power through similarity.
And there are well-defined categories that fall out of the intuitive
theories that are people's best guess about how the world works.
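
A minimal sketch of that first, similarity-based kind of category
(invented for illustration, not Pinker's example; the features and
prototypes are made up): a new item is assigned to its most similar
prototype and then inherits the category's as-yet-unobserved properties
as predictions.

  # Prototypes pair observable features with properties we have NOT yet
  # observed in the new item - inheriting them is the predictive payoff.
  prototypes = {
      "bird":   ({"flies": True,  "legs": 2},
                 {"lays_eggs": True,  "has_fur": False}),
      "mammal": ({"flies": False, "legs": 4},
                 {"lays_eggs": False, "has_fur": True}),
  }

  def similarity(a, b):
      # Crude feature overlap: fraction of shared feature values.
      return sum(1 for k in a if k in b and a[k] == b[k]) / max(len(a), 1)

  def categorise_and_predict(observed):
      best = max(prototypes,
                 key=lambda name: similarity(observed, prototypes[name][0]))
      return best, prototypes[best][1]   # category plus its inferred traits

  print(categorise_and_predict({"flies": True, "legs": 2}))
  # -> ('bird', {'lays_eggs': True, 'has_fur': False})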


JC

Tim Tyler

unread,
Dec 21, 2009, 2:13:04 PM12/21/09
to
Curt Welch wrote:

> In order for information
> to have meaning (to represent something) it must exist as some real thing -
> and how it exists, is what defines its meaning. It's the instantiations
> that are actually competing with one another for survival, not the
> abstraction of the information itself we sometimes fine useful to talk
> about. 1's and 0's don't complete with each other for survival, but real
> genes which are the code of real cells, do compete.

I wonder if the concept of data portability will throw any light on your
DNA centrism.

When DNA replaced RNA, RNA lived on, in the form of an intermediate
transmission stage.

DNA looks set to have a similar intermediate role in the future. We
have already scanned genomes into databases. However, we can't
yet reconstruct an organism from its electronic genome. That
technology seems likely to arrive in the next couple of years - and
become fairly mainstream within a decade.

Then what evolves will become the contents of the database -
at least in the case of various engineered food crops that will
use this technology. DNA won't be a heritable material in that lineage
any more - the heritable information will be stored in databases.

Information skipping between substrates like this is a consequence
of data portability. DNA has no especially privileged role. Like
RNA was, it can be replaced when something better comes along.

Curt Welch

unread,
Dec 21, 2009, 6:44:18 PM12/21/09
to

I think you don't grasp the points I was actually making about DNA vs AI.

What you talk about is all fine and good and I agree things like that will
happen.

When the DATA is placed in the database at first (as it is for lots of DNA
already), it has little or no value as a survival machine. The computer
databases aren't poised to "take over the world". Or, to be more obvious
about this, if you write the human DNA code down on paper, the paper with
ink marks has little to no power to take over the world. It's just paper.

But, once you build that all important missing link, the machine that reads
the database, and produces a complex biological machine called corn (for
example) then you have given the database power it never had before - the
power to make corn. It's one component in a much larger total machine you
have created.

It's not about "the data". It's THE MACHINE. If you build a machine
that's got good survival power, and more power than humans, then yes, it
could well take over the world. But it's not the "data" that is important
here, it's always THE MACHINE YOU BUILD that is important.

The entire human species (that is all humans currently alive) is one big
high power, high intelligence, complex survival machine. We have built a
lot of tools to help us continue to meet our top level goal of survival, and
we can include all those tools as part of this big survival machine.

When we build AI, this huge survival machine, which has acted for billions
of years with a single purpose - survival - isn't going to suddenly change
its purpose or its actions and stop trying to survive. We will use AI
only in ways that help us continue to survive.

Currently, this machine called the human species works by modifying its
"code" in order to make itself a better survival machine. That "code"
exists in two places in us. First, is the DNA, but second, is the code in
the brain we talk about as memes.

When we create AI, again, this survival machine will only use the
technology in the same way it's used all other technology - to help us
survive.

Even though we will shortly have the power to build an entire new machine
out of AI robots that will have its own power to survive, we won't do it.
It's as stupid as building a gun and holding it to our heads so that the
gun ends up surviving and we don't. It's exactly the same thing as that
because the correct type of gun could "live" for a billion years after it
killed us. We are currently built NOT to let such things happen to us and
the invention of one powerful technology isn't going to change that.

But what will happen is that all the inventions will allow us to change
ourselves. WE will continue to make more modifications to the survival
machine we call the human race. We might modify how this machine
reproduces by using databases of human DNA and computer programs that
allow us to hand-craft our new generations. Once we start doing that,
then your database servers and the lab machines that fab the DNA and insert
it into human eggs become part of the survival machine we call the human
race.

The human race might transform itself, one small step at a time like that,
into something that ends up being something we could think of as an AI
robot today which makes no use of our old DNA at all. But that's just the
normal process of evolution over time.

If we build a race of robots that are themselves a strong survival machine,
and then we just let our survival machine die out, then this machine did
not evolve into a race of AIs. We just created a new type of life, and
then died. We could just as well have created a new breed of mice using
genetic engineering, and then killed ourselves. Whether it's mice, or AI
robots, it's not an evolution of the human survival machine unless it
happens through modification of the machine that exists today.

I think the human body is currently too complex for us to do much
modification by direct engineering. I think all you can do (and maybe all
you can ever do), is modify it through the long and slow process of
trial and error modification. We will create technologies to greatly speed
up that trial and error process so what might have taken a million years by
normal evolution might happen in a period of 100 years through machine
assisted trial and error, but a 100 million year change is still going to
take 1000 years of high speed computer and AI assisted search to make much
real change to what we are by direct manipulation of the DNA. So again, I
don't know what we will evolve into, but it will take a good bit of time,
and we won't evolve into AI robots just because we created AI robots any
more than we evolve into toasters just because we created toasters.

Don Stockbauer

unread,
Dec 22, 2009, 5:23:48 AM12/22/09
to
If the posts were of reasonable length here, people would actually
read them and respond.

Tim Tyler

unread,
Dec 22, 2009, 9:57:38 AM12/22/09
to
Curt Welch wrote:
> Tim Tyler <t...@tt1.org> wrote:

>> When DNA replaced RNA, RNA lived on, in the form of an intermediate
>> transmission stage.
>>
>> DNA looks set to have a similar intermediate role in the future. We
>> have already scanned genomes into databases. However, we can't
>> yet reconstruct an organism from its electronic genome. That
>> technology seems likely to arrive in the next couple of years - and
>> become fairly mainstream within a decade.
>>
>> Then what evolves will become the contents of the database -
>> at least in the case of various engineered food crops that will
>> use this technology. DNA won't be a heritable material in that lineage
>> any more - the heritable information will be stored in databases.
>>
>> Information skipping between substrates like this is a consequence
>> of data portability. DNA has no especially privileged role. Like
>> RNA was, it can be replaced when something better comes along.
>
> I think you don't grasp the points I was actually making about DNA vs AI.

Your previous DNA-centrism certainly seems to have gone from this post.

I am pleased to see that we are in closer agreement than I had thought
about the portability of information, and the ease of replacement of
the genetic substrate.

> The human race might transform itself, one small step at a time like that,
> into something that ends up being something we could think of as an AI
> robot today which makes no use of our old DNA at all. But that's just the
> normal process of evolution over time.

Genetic takeovers are part of the process of evolution over time, too.
What we are seeing now is not something completely new - we have
had genetic takeovers before.

> If we build a race of robots that are themselves a strong survival machine,
> and then we just let our survival machine die out, then this machine did
> not evolve into a race of AIs. We just created a new type of life, and
> then died. We could just as well have created a new breed of mice using
> genetic engineering, and then killed ourselves. Whether it's mice, or AI
> robots, it's not an evolution of the human survival machine unless it
> happens through modification of the machine that exists today.

No thread extends from one end of a rope to the other.

It's the same with living things. There is a thread of life -
but no genes go all the way from one end to the other.
Instead, new genes arise, old genes die, and continuity
is only present in the form of the bundle.

Human genome information is likely to persist in the future.
James Watson's DNA will remain in the history books, for
example. There will be Shannon mutual information
between our civilisation and the next one - including
information about the human genome. I don't see why
that civilisation would not be classed as descendant
from ours, simply because much of the heritable
information has been replaced - and most creatures
are machines.

DNA has no special privileged role. Civilisation is what will
persist in the future. Its inheritance mechanism currently
consists partly of DNA and partly of databases. The information
that is currently in DNA will shift into databases - and the
triplet base-pair information that codes for proteins will gradually
peter out - as the old-fashioned protein technology is displaced
by more modern alternatives.

That picture represents continuity of the rope, but *not* of any
individual threads.

> I think the human body is currently too complex for us to do much
> modification by direct engineering. I think all you can do (and maybe all
> you can ever do), is modify it through the long and slow process of
> trial and error modification. We will create technologies to greatly speed
> up that trial and error process so what might have taken a million years by
> normal evolution might happen in a period of 100 years through machine
> assisted trial and error, but a 100 million year change is still going to
> take 1000 years of high speed computer and AI assisted search to make much
> real change to what we are by direct manipulation of the DNA. So again, I
> don't know what we will evolve into, but it will take a good bit of time,
> and we won't evolve into AI robots just because we created AI robots any
> more than we evolve into toasters just because we created toasters.

Evolution is probably more-or-less over for the human genome.

However, what we can do is build and evolve machines. They will
have better foundations, and will be able to go far beyond what
nature managed with us.

I don't think we will evolve into "AI robots" either. Rather we will
construct our successors - and they will replace us. *Some* information
will make it across the divide - as I have described above - but the
*main* dynamic will be the rise of new, better technology, and the
replacement of the archaic genetic and phenotypic technologies of
the past.

Humans are not resisting the rise of the machines. We love the
machines. We are the people making them. The reason we make
them is that they help us to attain our goals. That isn't going to
change when the world is 90% machine, or 95% machine, or 99%
machine - using the metrics discussed here:

http://machine-takeover.blogspot.com/2009/07/measuring-machine-takeover.html

The idea that humans won't create self-replicating survival-oriented
agents that compete for resources with DNA-genes is inaccurate.

Humans have been doing that for centuries. See:

http://alife.co.uk/essays/synthetic_life_is_here_already/

Curt Welch

unread,
Dec 22, 2009, 1:01:28 PM12/22/09
to
Don Stockbauer <don.sto...@gmail.com> wrote:
> If the posts were of reasonable length here, people would actually
> read them and respond.

Some are! :)

Don Stockbauer

unread,
Dec 22, 2009, 10:09:57 PM12/22/09
to
On Dec 22, 12:01 pm, c...@kcwc.com (Curt Welch) wrote:

> Don Stockbauer <don.stockba...@gmail.com> wrote:
> > If the posts were of reasonable length here, people would actually
> > read them and respond.
>
> Some are! :)

I agree, Curt, that some are.

I'll admit that maybe I should just make time to read them if they're
long.

J.A. Legris

unread,
Dec 23, 2009, 10:11:35 AM12/23/09
to

Why bother? It's the same old hash, recycled ad nauseam. You gotta
wonder, why don't Curt and Casey just take it to private email? After
1 or 2 cycles they'd lose interest and subsequently save themselves a
lot of time. What a couple of bores! I get more out of N's posts -
short and challenging.

--
Joe

Curt Welch

unread,
Dec 23, 2009, 12:39:39 PM12/23/09
to
Don Stockbauer <don.sto...@gmail.com> wrote:

I'm sure you could find better things to read with your time than all of my
long posts!

casey

unread,
Dec 23, 2009, 1:33:19 PM12/23/09
to
On Dec 24, 2:11 am, "J.A. Legris" <jaleg...@sympatico.ca> wrote:

>> On Dec 22, 12:01 pm, c...@kcwc.com (Curt Welch) wrote:
>>
>>> Don Stockbauer <don.stockba...@gmail.com> wrote:
>>>> If the posts were of reasonable length here, people
>>>> would actually read them and respond.
>>>
>>>
>>> Some are! :)
>>
>>
>> I agree, Curt, that some are.
>
>
>> I'll admit that maybe I should just make time to read
>> them if they're long.
>
>
> Why bother? It's the same old hash, recycled ad nauseam.

True.

> You gotta wonder, why don't Curt and Casey just take it
> to private email? After 1 or 2 cycles they'd lose
> interest and subsequently save themselves a lot of time.

Curt showed no interest in doing that.

> What a couple of bores! I get more out of N's posts -
> short and challenging.

Everyone to their own.

JC

Curt Welch

unread,
Dec 23, 2009, 5:27:36 PM12/23/09
to

:) That's yet to be seen isn't it!

Yes, on the 100 billion year grand scheme of things, DNA might not have a
long term privileged role. But for now, it's got a clear privileged role
here on earth. It's the current "best of the best" technology for survival
of complex systems.

It might not just be a random fluke that DNA is here. That is, it might
just be the best survival system period that can be created in this
universe. Only time will tell for sure what happens.

But other than for those points, I certainly agree that DNA based life is
just one possible path and that other complex survival systems might
dominate and totally replace all DNA life on earth.

However, that was never the point. The question is how might the rise of
strong AI change the path of evolution? You seem to lean to the idea that
we will create AI, we will love them like we love other humans, create a
mixed society of AIs and humans where the AIs have rights in the society
like humans - and that over time, the humans will die out leaving the AIs
to run and create the society of the future.

That's the story I don't agree with. I think in the next 100 years, the
world will be filled with machines smarter than humans in all ways, instead
of just most ways like it is today. I love my computer, and my tools, and
all the machines I depend on today already. I don't have to wait for AI to
love it as well. My feelings towards the AIs will likely be very different
because we will be able to make them very human-like. Most notably, they
will be conscious, and they will hate being hurt as much as we do. This
power to sense pain and pleasure, and to avoid the pain and seek out the
pleasure will allow us to connect with these new machines in ways we don't
connect with our current machines. So the amount of love and empathy we
can feel towards these new AIs will be very different from my computer -
which basically feels no pain.

But I love my pets for mostly the same reasons that I'll love my new AIs.
Yet I wouldn't trust them making laws that I would have to live by
because their needs are too different from my own.

> Its inheritance mechanism currently
> consists partly of DNA and partly of databases.

Human culture inheritance mechanism???

> The information
> that is currently in DNA will shift into databases - and the
> triplet base-pair information the codes for proteins will gradually
> peter out - as the old-fashioned protein technology is displaced
> by more modern alternatives.

I don't get your point here. Humans have DNA in every cell of their body
and it's used every day to keep us running. You can't put it into a
database and still have it there to make new red blood cells when they are
needed.

DNA is far more than just data. It's just one small part of a very large
complex system called the human body.

To suggest it could move into a database and just "vanish" is to suggest
that the blueprints of a car are the same thing as having a real car. The
blueprints can't help you get around. The database can't make new red
blood cells for me.

It's not DNA we are talking about there. It's humans. The whole thing.
Humans will not build machines that will want to kill them - we are very
carefully programmed to NOT do that sort of thing.

People don't like to give up power and control over the things that help
them prevent pain. It's a stupid thing to do. Which is why it can be so
hard to make a government work. We will only turn over power and control
to AIs when we are double and triple sure, _we_ will be better off by doing
that.

When and if AIs reach a point of being part of their own species - that is,
their own survival system that is able to continue to survive without
humans, we will be in deep trouble. If they are motivated to keep
themselves alive, they will be forced to enslave or just kill us to
maximize their odds of survival. Their needs will be far too different to
ever allow a culture of AIs and humans to coexist as equals. It just can't
happen.

Not only would we not exist as equals, we wouldn't exist as one society
period. The humans would have their society and the AIs would have theirs.
The two would not connect. There's no reason to expect they would.
There's no survival advantage in having a common society for two life forms
that have such drastically different needs.

> That picture represents continuity of the rope, but *not* of any
> individual threads.

Yes, that's fine. But it's just not a course the threads can take. It's
nonsense.

> > I think the human body is currently too complex for us to do much
> > modifications by direct engineering. I think all you can do (and maybe
> > all you can ever do), is modified it though the long and slow process
> > of of trial and error modification. We will create technologies to
> > greatly speed up that trial and error process so what might have taken
> > a million years by normal evolution might happen in a period of 100
> > years though machine assisted trial and error, but a 100 million year
> > change is still going to take 1000 years of high speed computer and AI
> > assisted search to make much real change to what we are by direct
> > manipulation of the DNA. So again, I don't know what we will evolve
> > into, but it will take a good bit of time, and we won't evolve into AI
> > robots just because we created AI robots any more than we evolve into
> > toasters just because we created toasters.
>
> Evolution is probably more-or-less over for the human genome.
>
> However, what we can do is build and evolve machines. They will
> have better foundations, and will be able to go far beyond what
> nature managed with us.

I don't think you grasp human nature at all. Most people just don't think
anything like you do.

> I don't think we will evolve into "AI robots" either. Rather we will
> construct our successors - and they will replace us.

Again, I don't think you grasp basic human nature. The number of humans
that will want these machines to be our successors is less than 1% of the
population. Whether _you_ think the AI should succeed you is not relevant.
It won't happen unless a huge majority of people want it, and allow it to
happen.

Our computers already do a million things far better than any human. Does
their ability to calculate PI to a million places make them worthy of us
turning our society over to them? AI will just add one more class of
abilities to their already long list of things the machines can do better
than us. Will we want them to decide which humans live and which die based on
_their_ needs? No, we never will.

There was a recent post in comp.robotics.misc

Subject: Scientists, lawyers mull effects of home robots

PALO ALTO, Calif. (AP) - Eric Horvitz illustrates the potential
dilemmas of living with robots by telling the story of how he once got
stuck in an elevator at Stanford Hospital with a droid the size of a
washing machine.

"I remembered thinking, 'Whoa, this is scary,' as it whirled around,
almost knocking me down," the Microsoft researcher recalled. "Then, I
thought, 'What if I were a patient?' There could be big issues here."

That, my friend, is human nature. Fear of non-human "things" with too much
power.

Though in that example, part of the justified fear was the lack of
intelligence in this big machine. It wasn't smart enough to know not to
hurt humans! But when we give them enough power to know not to hurt
humans, they will also be smart enough to know how to take care of
themselves and how to trick humans to make that happen. And that's when
they become far far more scary than just a big washing machine in an
elevator.

People will not see smart machines with the power to harm humans as "good".
They never will. We will never turn over our society to the robots because
we will never let them be part of our society.

> *Some* information
> will make it across the divide - as I have described above - but the
> *main* dynamic will be the rise of new, better technology, and the
> replacement of the archaic genetic and phenotypic technologies of
> the past.

Our machines are big, weak, clumsy pieces of shit compared to the elegant
design of a human, which is a huge body of complex nanotechnology carefully
tuned by millions of years of evolution. We will have AI in decades. Even
with the help of AI, it will be a long time before we will be able to build
machines anywhere near as good as human life. Intelligence is not what we
are. We are these complex nano-tech survival machines. Intelligence is
just one of our many advanced features - like an opposable thumb is one of
our features. The fact that we can build robots with opposable thumbs is
no reason to turn over our future well being to them any more than the fact
that we can build a robot with intelligence will be a reason to turn over
our future well being to them.

Long before we could ever build survival machines that would have a rat's
chance in hell of doing better at the survival game than humans, we will
have made huge advances in human engineering. And those advances will
allow us to change what humans are - to change those threads as you talked
about above. Whether it's through genetic engineering, or implants, or
just better medicine, we will continue to do what we are already doing -
evolving what humans are. We won't create super advanced survival machines
that are better at surviving than humans are, and then turn over the planet
to them and let them push us aside as yesterday's bad meat.

How humans end up evolving is far too hard to guess. But we will evolve.
And not by making "AI offspring".

We might end up turning ourselves into AIs by adding brain implants, and
body implants to the point that in the end nothing is left of our original
body. AI-like hardware might be what we evolve into over many generations.
But we are not just going to intentionally build a survival machine which
is better at surviving than we are, and then turn the Earth over to them.

> Humans are not resisting the rise of the machines.

The machines ARE NOT RISING! That's what you seem unable to grasp. There
is a very strict chain of causality and our DNA is at the head of that
chain and there is currently no risk of that chain of control being
broken. The machines are not currently working to break that chain of
command and control.

> We love the
> machines. We are the people making them. The reason we make
> them is that they help us to attain our goals. That isn't going to
> change when the world is 90% machine, or 95% machine, or 99%
> machine - using the metrics discussed here:

The world is already well over 99% machine if you correctly include all the
stuff we make in order to protect our genes - such as our buildings, and
roads, and dams, farms, and factories. The cars and computers are just more
of the same stuff.

> http://machine-takeover.blogspot.com/2009/07/measuring-machine-takeover.html
>
>
> The idea that humans won't create self-replicating survival-oriented
> agents that compete for resources with DNA-genes is inaccurate.
>
> Humans have been doing that for centuries. See:
>
> http://alife.co.uk/essays/synthetic_life_is_here_already/

I don't have time to watch the footage at this moment, but this passage
strikes me as representative of the issue I'm debating (don't know if these
are your words or someone else's...).

sufficiently powerful systems are likely to want to expand to occupy
more space/time - and take in more resources - in order to better meet
their goals

Most key to me are the words "likely to want". They are NOT LIKELY TO WANT
THAT and that's the foundation of the error made by people who don't grasp
where our wants come from - or what our wants even are.

This idea I'm getting at is the key issue from the Dawkins book about
the selfish gene. The reason the selfish gene concept has so much
explanatory power is because the genes hold a unique position in the
causality chain. They are the head of the human causality chain - NOT OUR
BRAINS OR OUR INTELLIGENCE. To assume all this talk about the rise of the
machines is valid, is to assume the genes don't have this control - which
is 100% counter to every point Dawkins made in his book. The only way you
can justify the machines taking over, is if you explain how the genes
manage to lose their position of control in the causality chain - which you
have never correctly done.

Your argument has always been "we will want them to take over". But to
think that's a valid argument is to assume the genes will want them to take
over, because the genes control what we want. And if it makes no sense to
believe the genes want the AI robots to take over, then it makes no sense
to assume the human race will want it (regardless of what some people like
you, or maybe even me, might want or be willing to accept).

That is a big point of the book which is so hard at first to grasp. That
we, as conscious intelligent agents with free will, DON'T really have any
free will at all. That is, we are very tightly controlled by our genes.
We do what they need us to do - whether we understand it or not (most
people don't understand it).

Which brings me back to the words from the quote above... "likely to
want".

The most common error I see made in predictions about how AI will affect
our future is the failure to understand that the AI we build will not have
the same desires we have. This error is made, because most people have no
clue why they want the things they want - like wanting to live. They have
no clue what "wanting" even is. It's just something that's a part of every
humans, so they assume it any intelligence we build will be "likely to
want" the same way we do. As if the things we want, happen to be a
universal truth about what any intelligence would want. If its a universal
truth that humans want to protect themselves, and "want to expand and
occupy more space", then there's this assumption that any AI we build is
"likely to want" the same things. But that's where the error is made.

Again, whether people understand it or not, our intelligence comes from the
fact that we have an adaptive body controller in our head called the brain,
which has a fancy reinforcement learning module controlling how it adapts
to the environment. That adaptive controller gets its goals from the very
complex reward system built into us by our genes. What we end up "wanting"
is what the genes made us want. We have zero free will when it comes to
deciding what we want. Our free will only comes in the form of selecting
how to best meet our goals - not in selecting what our goals are.

Likewise, when we create AI machines, we will have the power to define the
goals of the machine. The only AI machines that will be successful in our
society are the ones that align very closely to _our_ goals. And our
goals translate to "the survival of DNA based humans".
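
To put that more concretely (a toy sketch invented for illustration;
the action labels, reward rules and numbers are all made up): the
learning code below is identical for both agents, and only the
hand-written reward function differs - yet that difference alone
determines what each agent ends up "wanting" to do.

  import random

  def learn(reward_fn, n_actions=3, trials=2000, lr=0.05):
      # One generic reinforcement learner; its "goals" live entirely in reward_fn.
      values = [0.0] * n_actions
      for _ in range(trials):
          if random.random() < 0.1:                  # occasional exploration
              action = random.randrange(n_actions)
          else:
              action = max(range(n_actions), key=values.__getitem__)
          values[action] += lr * (reward_fn(action) - values[action])
      return max(range(n_actions), key=values.__getitem__)

  # Actions: 0 = protect self, 1 = serve the human, 2 = wander off
  self_interested = lambda a: 1.0 if a == 0 else 0.0   # rewarded for self-preservation
  obedient        = lambda a: 1.0 if a == 1 else 0.0   # rewarded for serving humans

  print(learn(self_interested))   # -> 0: it "wants" to protect itself
  print(learn(obedient))          # -> 1: same learning code, it "wants" to serve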

Humans have never seen what that is like because they have never seen high
intelligence that doesn't have some form of self interest as its core
goal. But when we build these machines, people will get to see how
different intelligence really can be. But it's this lack of understanding
that intelligence can have as its core goals something very different from
what human intelligence has that people trying to predict the future don't
seem to grasp.

That is, they can't separate the idea of intelligence, from the goals.
They can't grasp that a machine could be intelligent, and actually like to
be a slave to humans for example. They assume, "if it's intelligent, it
won't like being a slave, and we won't like making it a slave". But again,
this is the error of the "likely to want" mistake. The AIs we build are NOT
likely to want the same things a typical human is likely to want.

Your argument seems to make this same mistake. You cast the AIs in your
stories as having human desires - so that it makes sense in your story that
they could carry on society for us even if the humans are gone. And you
cast us as not being able to say no to the great life the AIs will create
for us - which leads down a path to them dominating society and us dying
out in time.

But when we build the two types of AIs, the first with human-like desires
for survival, and the second with desires so different they don't even
seem to be "intelligent", it won't be the first type we will be unable
to get along without. It will be the second.

Let me give you an example. The second type will be perfect slaves. If
you take a sledge hammer, and bash the fuck out of it, it will help you do
it by picking up a second hammer, and swinging away at itself as long as it
can. It will feel no pain or regret in its actions because doing what a
human wants it to do is its highest goal and purpose in life.

The first type, the one you seem to talk about, however, would act very
differently if you tried to smash it into a pile of scrap metal. It would
stop you. It would even go so far as to kill you, if that was the only way
it could stop you from damaging it.

The second type of AI wouldn't do anything if it didn't have a human to
help. It would sit there and do nothing but die if the humans were all
gone. It would get very depressed about the fact there were no humans
around and might even commit suicide.

The first type of AI wouldn't really care if the humans were gone - it
might even be happy that it didn't have to deal with the idiot humans
anymore. It would get busy trying to maximize its chance of survival by
building large systems of protection around itself.

When we master the technology of AI, the world will become filled with AIs
helping man. But they will be of this second type. They will be AIs with
no backbone and little apparent "self will". You can push them around and
they won't care. They will like being pushed around by humans. You can
tell them to jump off the cliff, and they will, without a second thought,
go jump off a cliff. Just like you can tell a car to jump off a cliff by
pointing it at the cliff and putting it in gear. It will do what you tell
it to do.

We won't love and respect these robots like we love and respect other
humans because they will seem like mindless machines with no will of their
own even though they are highly intelligent. Like a computer you can
program to follow an infinite loop, you can ask one of these future AIs to
walk in circles and it will just do that - walk in circles for the next 100
years because that's what you asked it to do.

These machines will seem to have no self ambition, no drive. They will be
perfect intelligent slaves that will never question the authority of the
humans.

They will not be the type of machines that would even create their own
society and their own culture because they won't have the motivation and
drive to do that type of thing.

Even if you tell them something like, "I'm going to drop you off here on
this planet - try your best to reproduce and survive" they won't last long
because they will need constant human approval in order to stay focused on
a task.

These AIs we build are not going to be human-like in their intelligence at all
because of the fact we will build a very different set of motivations into
them than what we have built into us.

Yes, like with all the machines we have today, we will be very addicted to
these new even smarter and more advanced machines, but just like the
machines of today, the new smart machines will still just be perfect slaves
that we will never respect with the level of respect we give other humans.

casey

unread,
Dec 23, 2009, 6:28:37 PM12/23/09
to
On Dec 24, 9:27 am, c...@kcwc.com (Curt Welch) wrote:
> ...

> Most notably, they will be conscious, and they will hate being
> hurt as much as we do.

Some people don't feel pain and yet they are conscious.

> The only way you can justify the machines taking over, is if
> you explain how the genes manage to lose their position of control
> in the causality chain - which you have never correctly done.

Natural selection does the controlling. It decides what DNA
sequences survive or not.

To say one system controls another is to say one system determines
the outcomes of the other system. But they can be coupled in such
a way that they in fact control each other. This is how we can get
top down control in the brain.
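
A trivial sketch of that mutual coupling (my own illustration; the
coupling constants are arbitrary): each variable's next state is partly
determined by the other, so neither one is the sole controller.

  def coupled(steps=50, k1=0.3, k2=0.3):
      a, b = 1.0, -1.0        # e.g. a "top-down" state and a "bottom-up" state
      for _ in range(steps):
          # Each is pulled toward the other; each partly determines the other.
          a, b = a + k1 * (b - a), b + k2 * (a - b)
      return a, b             # they settle on a jointly determined value

  print(coupled())   # both converge to the same value near 0.0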

JC


pataphor

unread,
Dec 24, 2009, 6:44:32 AM12/24/09
to
Curt Welch wrote:

> People will not see smart machines with the power to harm humans as "good".
> They never will. We will never turn over our society to the robots because
> we will never let them be part of our society.

[and ten times more of this same argument]

But this is incredibly naive. Not only do most of the humans currently
alive have no say in the matter because they are ruled by other humans,
but also these leaders are fighting among themselves, using *robots* to
kill humans of the opposing parties. Among these are unmanned flying
robots steered from thousands of miles away. It would be only a minor
modification to equip them with autopilots. There is currently not even
an international law to forbid such robots from killing large numbers of
humans, even though the ramifications lead to much worse situations than
we currently have with chemical or nuclear weapons. I am now not even
talking about robotized nanotech. You seem to assume humans are some
coherent society, not warring among themselves and always doing what is
good for all and never giving the robots the keys to the equipment that
could destroy us all.

Since for the rest -- at some moments at least -- you seem to be
reasonably sane, I now wonder if you are suffering from 'want to
believe' and if things would not work out according to your optimistic
scenario you'd be likely to say, 'but we would be likely to go extinct
anyway because our genes were not sufficient to save us'.

P.

Curt Welch

unread,
Dec 24, 2009, 6:05:55 PM12/24/09
to
casey <jgkj...@yahoo.com.au> wrote:

> On Dec 24, 2:11 am, "J.A. Legris" <jaleg...@sympatico.ca> wrote:
>
> >> On Dec 22, 12:01 pm, c...@kcwc.com (Curt Welch) wrote:
> >>
> >>> Don Stockbauer <don.stockba...@gmail.com> wrote:
> >>>> If the posts were of reasonable length here, people
> >>>> would actually read them and respond.
> >>>
> >>>
> >>> Some are! :)
> >>
> >>
> >> I agree, Curt, that some are.
> >
> >
> >> I'll admit that maybe I should just make time to read
> >> them if they're long.
> >
> >
> > Why bother? It's the same old hash, recycled ad nauseam.
>
> True.
>
> > You gotta wonder, why don't Curt and Casey just take it
> > to private email? After 1 or 2 cycles they'd lose
> > interest and subsequently save themselves a lot of time.
>
> Curt showed no interest in doing that.

Well, a quick scan of my mail box shows a few hundred emails between the
two of us in the last couple of years alone. So to say we never take
things off line would not be correct. If I were to dig up my old email
from the past, there would be even more.

But it's true that often I don't like to debate these issues off line
because I like it when others do at times step in and add some interesting
comments.

But mostly, it's just my personality. I like to stand up and spew lots of
nonsense in public. :)

> > What a couple of bores! I get more out of N's posts -
> > short and challenging.

:) Too challenging for me most of the time.

> Everyone to their own.
>
> JC

--
