
50 years later, Marvin Minsky still doesn't get it


Traveler

Jul 18, 2006, 6:24:17 PM
Technology Review interviews Minsky on the 50th anniversary of AI:
http://www.techreview.com/read_article.aspx?id=17164&ch=infotech

Minsky says:

"Our hope is to make the Open Mind system use natural language --
which is of course full of ambiguities, but ambiguities are both good
and bad."

So natural language is the way to achieve common sense in a machine?
What a crock! Even babies who have not yet mastered language, acquire
a lot of common sense just by observing and learning. You can see it
by the way they move their eyes to follow an object or to seek out the
source of a sound. My pet dog has plenty of common sense, for crying
out loud! Methinks Minsky has been out to lunch for 50 years.

Having said that, I agree with the following:

"Technology Review interrupted Minsky on July 11, as he was proofing
the galleys for his forthcoming book, The Emotion Machine, which
reinterprets the human mind as a "cloud of resources," or
mini-machines that turn on and off depending on the situation and give
rise to our various emotional and mental states."

The ability to focus on appropriate subjects/contexts is essential to
any complex intelligence.

Louis Savain


TwistyCreek

Jul 18, 2006, 8:17:49 PM
On Tue, 18 Jul 2006 18:24:17 -0400, Traveler wrote:

> Technology Review interviews Minsky on the 50th anniversary of AI:
> http://www.techreview.com/read_article.aspx?id=17164&ch=infotech
>
> Minsky says:
>
> "Our hope is to make the Open Mind system use natural language -- which
> is of course full of ambiguities, but ambiguities are both good and
> bad."
>
> So natural language is the way to achieve common sense in a machine?
> What a crock! Even babies who have not yet mastered language acquire a
> lot of common sense just by observing and learning. You can see it by
> the way they move their eyes to follow an object or to seek out the
> source of a sound.

Why do people compare a machine with a living creature? A human being, or
a dog, has a nervous system of trillions of cells, an enormously rich
basis for being able to make sense of the world.

What does the most sophisticated computer have in comparison?

Curt Welch

Jul 18, 2006, 9:29:36 PM
TwistyCreek <an...@comments.header> wrote:

> Why do people compare a machine with a living creature? A human being, or
> a dog, has a nervous system of trillions of cells, an enormously rich
> basis for being able to make sense of the world.
>
> What does the most sophisticated computer have in comparison?

A lot more.

The low-cost computer I'm using to write this message has 1 GB of memory.
That's 8.6 billion transistors in the memory alone. The processor plus all
the other circuitry probably gets the number up to 10 billion. Transistors
switch about 1000 times faster than neurons, so you can do a lot more with
a lot less.

A human brain has on the order of only 100 billion neurons.

Neurons range in size from 4 to 100 microns. Transistors are down in size
to about .1 microns now (40 times smaller than the smallest neurons).

100 desk top PCs have more transistors than a human brain has neurons and
each transistor can process data at least 1000 times faster than a single
slow large biological neuron can.
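
The arithmetic is easy to check. A back-of-the-envelope sketch in Python -
the one-transistor-per-DRAM-bit figure and the 1000x speed ratio are
assumptions carried over from the estimates above, not measurements:

    # Rough capacity comparison, assuming ~1 transistor per DRAM bit
    # and a ~1000x switching-speed advantage for transistors.
    mem_transistors = 1 * 2**30 * 8     # 1 GB of memory ~= 8.6e9 transistors
    pc_transistors = 10e9               # memory + CPU + support logic (rough)
    brain_neurons = 100e9               # commonly cited estimate
    speed_ratio = 1000                  # transistor vs. neuron switching rate

    print(f"1 GB of DRAM = {mem_transistors / 1e9:.1f} billion transistors")
    print(f"PCs to match neuron count: {brain_neurons / pc_transistors:.0f}")
    print(f"Each transistor switches ~{speed_ratio}x faster than a neuron")

By this count the break-even is nearer 10 PCs, so the 100 cited here leaves
an order-of-magnitude margin.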

The world's fastest computer (currently number one on the Top 500
list) is IBM's BlueGene/L, which has 131,072 processors. Its raw
processing power is way beyond what a human brain can do.

The Internet as a whole is technically one large machine as well. Its
total information storage and computation abilities are way beyond what
the world's largest computer alone can do and much larger than what any
single human brain can do.

If you just take one small part of the Internet, say Google's array of
servers for doing web searches, you will find that it is way beyond what
any human brain can do.

You can also look at the bandwidth processing power of the brain and
compare it to our current digital technology just by looking at our
consumer products. An iPod or any CD player can process audio data in
digital form at the same basic rate the brain does. We know this because
there is no point in processing it any faster - the brain can't "hear"
the difference (it can't keep up).

Same basic thing for video. The standard video frame rates and screen
resolutions are set where they are because the human brain couldn't receive
it any faster. In fact, the rate used for DVDs is much higher than what
the brain can deal with already because the brain cheats by looking only at
one small part of the screen at a time. We move our eyes from point to
point on the screen to try and see all the detail. The TV has to display
data at our max resolution over the entire screen because it can't tell
which part you are currently looking at. TV and PC screens transmit data
at rates way above what the human brain can deal with already. And these
are now cheap consumer devices - not billion dollar research machines.
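
Those consumer-device rates can be put in rough numbers. A sketch using
the standard CD audio and DVD-era video parameters (treating these rates
as ceilings on the brain's sensory input is the argument above, not an
established fact):

    # Raw data rates of consumer media formats.
    cd_audio = 44_100 * 16 * 2          # samples/s * bits/sample * channels
    dvd_video = 720 * 480 * 24 * 30     # pixels * bits/pixel * frames/s

    print(f"CD audio : {cd_audio / 1e6:.2f} Mbit/s")
    print(f"DVD video: {dvd_video / 1e6:.0f} Mbit/s uncompressed")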

The issue has long stopped being a question of raw machine power. The
issue about how the brain is different from a computer is just a question
of the right algorithms/structures/design. If anyone actually understood
how to build a brain, we could probably do it in high volume for less than
$10K per AI brain today.

We compare computers to animals like dogs because a desktop PC today has
enough power that it should be able to do what a dog does. We have no
problem building robots with similar bandwidth sensory and motor systems.
We just don't know what to put between the ears yet. :)

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

Xiaoding

Jul 18, 2006, 10:52:43 PM
A transistor is nothing more than a fancy light switch. You can't
compare it to a neuron, not at all.

It don't matter how many light switches you got, and how much you make
them switch on and off, at no point do they get "intelligent". They
are just light switches.

Another problem with the digital approach is the "program". If we
could only make the right kind of program, then the light switches would
be intelligent, is what a lot of folks seem to believe. But the
program is part of the problem, not part of the solution. How
intelligent would a person seem, if they were programmed, and could
only act in ways the program tells them to act, and only think thoughts
the program told them to think? Such a person would fail the Turing
test! And yet, still be an intelligent being. The program is the
problem...a better program would not solve the problem, the problem is
that the program exists in the first place! You have got to get rid of
the program. Only an analog device can do this, digital cannot, due to
its inherent design.

Minsky is right about one thing, at least: computer AI has been brain
dead for a while now. :) The first true AI will come from the
biologists, I predict.

Curt Welch

Jul 18, 2006, 11:32:36 PM
"Xiaoding" <xiao...@jelly.toast.net> wrote:
> A transistor is nothing more than a fancy light switch. You can't
> compare it to a neuron, not at all.

Neurons do less than transistors. They can't even regulate an analog
signal like a transistor can. All they can do is fire! (OK, and emit a few
extra chemicals at the same time). And they can't fire more than about
1000 times per second. A transistor can fire billions of times per second.

Sure, neurons are very different from transistors, but to say you can't
compare them is just wrong. They are simply very hard to compare because
no one yet knows which features of a neuron are important to the creation
of human behavior and which are simply side effects of the limitations of
biology. No one I know for example believes we have to add DNA to a
transistor before it could be used to build an intelligent machine. But
it's unknown what else can be left out, or done differently, with
transistors.

But, no matter how you try to compare them, it works out the same way - we
have the technology today to equal the signal processing complexity created
by the brain. We just don't know if it would cost 10 dollars or 10 billion
dollars to build because we don't know exactly what we need to build.

> How intelligent would a person seem, if they were programmed, and could
> only act in ways the program tells them to act, and only think thoughts
> the program told them to think?

What makes you think that you are free from the control of physics? That
you are not programmed and forced to do exactly what the physics of your
body makes you do? How are the computer's limitations given to it by its
physical form any different than the limitations given to you by your
physical form? You are as programmed to act the way you act as much as
any computer is programmed to act the way it acts. Yet, our programming
allows us to pass a Turing test.

It's a common belief that humans have magical mental powers that free us
from the programmed constraints of our physical bodies. I, however, see
anyone who still believes these 5000-year-old myths as just too lost
to be worth talking to about AI concepts. It is as unproductive as trying
to explain the physics of our space program to someone who still thinks the
earth is flat.

wfa...@gis.net

Jul 19, 2006, 4:05:19 AM
Curt Welch wrote:
> "Xiaoding" <xiao...@jelly.toast.net> wrote:
> > A transistor is nothing more than a fancy light switch. You can't
> > compare it to a neuron, not at all.
>
> Neurons do less than transistors. They can't even regulate an analog
> signal like a transistor can. All they can do is fire! (OK, and emit a few
> extra chemicals at the same time). And they can't fire more than about
> 1000 times per second. A transistor can fire billions of times per second.
>
> Sure, neurons are very different from transistors, but to say you can't
> compare them is just wrong. They are simply very hard to compare because
> no one yet knows which features of a neuron are important to the creation
> of human behavior and which are simply side effects of the limitations of
> biology. No one I know for example believes we have to add DNA to a
> transistor before it could be used to build an intelligent machine. But
> it's unknown what else can be left out, or done differently, with
> transistors.
>
> But, no matter how you try to compare them, it works out the same way - we
> have the technology today to equal the signal processing complexity created
> by the brain. We just don't know if it would cost 10 dollars or 10 billion
> dollars to build because we don't know exactly what we need to build.


The key to neurons is not the number of outputs but the number of
inputs; on the order of ten thousand in human brains. Neurons are
complicated machines in themselves. We have completely mapped the
gross neuronal structure of some simple animals like flatworms but have
yet to produce programs that can duplicate their behavior.

Neurons may not be the only feature of the brain that contribute to its
mind-generating operation (as opposed to just its biological
requirements). Read up on glial cells. They're about ten times more
abundant than neurons.

We agree about our current ignorance. And we agree that the brain/mind
will eventually be simulated by computer. I think you're mistaken in
your certainty that we have enough power today and that all we need is
the right program.


>
> > How intelligent would a person seem, if they were programmed, and could
> > only act in ways the program tells them to act, and only think thoughts
> > the program told them to think?
>
> What makes you think that you are free from the control of physics? That
> you are not programmed and forced to do exactly what the physics of your
> body makes you do? How are the computer's limitations given to it by its
> physical form any different than the limitations given to you by your
> physical form? You are as programmed to act the way you act as much as
> any computer is programmed to act the way it acts. Yet, our programming
> allows us to pass a Turing test.
>
> It's a common belief that humans have magical mental powers that free us
> from the programmed constraints of our physical bodies. I, however, see
> anyone who still believes these 5000-year-old myths as just too lost
> to be worth talking to about AI concepts. It is as unproductive as trying
> to explain the physics of our space program to someone who still thinks the
> earth is flat.


I think Xiaoding is mistaken about the rigidity of programs (he may
have done some computer programming). But his position does not
necessarily imply adherence to mysticism. Though the brain is a
machine, it is a machine of a different type than a heart or a kidney.
You have to know a lot about a machine in order to duplicate its
function; much more than we now know about brains, for example.
Perhaps that is why he suggests that the breakthrough in AI will come
from the biologists who are actually studying the machine we all want
to duplicate.

-Walter

Xiaoding

Jul 19, 2006, 10:23:58 AM

>
>
> I think Xiaoding is mistaken about the rigidity of programs (he may
> have done some computer programming). But his position does not
> necessarily imply adherence to mysticism. Though the brain is a
> machine, it is a machine of a different type than a heart or a kidney.
> You have to know a lot about a machine in order to duplicate its
> function; much more than we now know about brains, for example.
> Perhaps that is why he suggests that the breakthrough in AI will come
> from the biologists who are actually studying the machine we all want
> to duplicate.
>
> -Walter
>

You got it right, Walt. :) It's a huge leap from obeying the laws of
physics to mysticism!!

Recently, someone grafted rat brain cells onto a substrate, and used
it to do something. That's the wave of the future. It's the computer
guys who are indulging in mysticism, worshiping the desktop God! :)
I will search for the "program" in my head, though, just to be sure,
but digital guys just don't understand the world of analog, not at all.
Analog is not a simulation, it's the real thing.

J.A. Legris

Jul 19, 2006, 11:13:09 AM

Curt Welch wrote:
> "Xiaoding" <xiao...@jelly.toast.net> wrote:
> > A transistor is nothing more than a fancy light switch. You can't
> > compare it to a neuron, not at all.
>
> Neurons do less than transistors. They can't even regulate an analog
> signal like a transistor can. All they can do is fire! (OK, and emit a few
> extra chemicals at the same time).

Are you sure about that?

From http://en.wikipedia.org/wiki/Transmembrane_potential_difference

" A graded membrane potential is a gradient of transmembrane potential
difference along a length of cell membrane. Graded potentials are
particularly important in neurons that lack action potentials, such as
some types of retinal neurons."


> And they can't fire more than about
> 1000 times per second. A transistor can fire billions of times per second.
>
> Sure, neurons are very different from transistors, but to say you can't
> compare them is just wrong. They are simply very hard to compare because
> no one yet knows which features of a neuron are important to the creation
> of human behavior and which are simply side effects of the limitations of
> biology. No one I know for example believes we have to add DNA to a
> transistor before it could be used to build an intelligent machine. But
> it's unknown what else can be left out, or done differently, with
> transistors.

Are you sure about that?

From http://en.wikipedia.org/wiki/Long-term_potentiation

" LTP requires gene transcription[17][18] and protein synthesis[19],
making it an attractive candidate for the molecular analog of long-term
memory. The synthesis of gene products is driven by kinases which in
turn activate transcription factors that mediate gene expression."


>
> But, no matter how you try to compare them, it works out the same way - we
> have the technology today to equal the signal processing complexity created
> by the brain. We just don't know if it would cost 10 dollars or 10 billion
> dollars to build because we don't know exactly what we need to build.

In other words, none of us, particularly you, has a clue. Any more
vacuous observations?


>
> > How intelligent would a person seem, if they were programmed, and could
> > only act in ways the program tells them to act, and only think thoughts
> > the program told them to think?
>
> What makes you think that you are free from the control of physics? That
> you are not programmed and forced to do exactly what the physics of your
> body makes you do? How are the computer's limitations given to it by its
> physical form any different than the limitations given to you by your
> physical form? You are as programmed to act the way you act as much as
> any computer is programmed to act the way it acts. Yet, our programming
> allows us to pass a Turing test.

You are equating a machine that automates one specific aspect of human
behaviour with all of human behaviour. You are not licensed to make
such an equation. Go to jail for one epoch.

>
> It's a common belief that humans have magical mental powers that free us
> from the programmed constraints of our physical bodies. I, however, see
> anyone who still believes these 5000-year-old myths as just too lost
> to be worth talking to about AI concepts. It is as unproductive as trying
> to explain the physics of our space program to someone who still thinks the
> earth is flat.
>

Denying that computers can be equivalent to brains is no more
mystical than denying that computers can be equivalent to sunflowers.
You're the one who has swallowed a myth.

--
Joe Legris

Curt Welch

Jul 19, 2006, 2:16:49 PM
"J.A. Legris" <jale...@sympatico.ca> wrote:
> Curt Welch wrote:
> > "Xiaoding" <xiao...@jelly.toast.net> wrote:
> > > A transistor is nothing more than a fancy light switch. You can't
> > > compare it to a neuron, not at all.
> >
> > Neurons do less than transistors. They can't even regulate an analog
> > signal like a transistor can. All they can do is fire! (OK, and emit a
> > few extra chemicals at the same time).
>
> Are you sure about that?

I don't even believe it. I was just saying it as a dramatic
counter-argument. :)

> From http://en.wikipedia.org/wiki/Transmembrane_potential_difference
>
> " A graded membrane potential is a gradient of transmembrane potential
> difference along a length of cell membrane. Graded potentials are
> particularly important in neurons that lack action potentials, such as
> some types of retinal neurons."
>
> > And they can't fire more than about
> > 1000 times per second. A transistor can fire billions of times per
> > second.
> >
> > Sure, neurons are very different from transistors, but to say you can't
> > compare them is just wrong. They are simply very hard to compare
> > because no one yet knows which features of a neuron are important to
> > the creation of human behavior and which are simply side effects of the
> > limitations of biology. No one I know for example believes we have to
> > add DNA to a transistor before it could be used to build an intelligent
> > machine. But it's unknown what else can be left out, or done
> > differently, with transistors.
>
> Are you sure about that?

Yeah, that's a good point. A big difference about neurons is that they
dynamically grow, and that capacity for growth is part of how a network
of neurons learns. And they use their DNA to make that work.

But, you wouldn't add DNA to transistors to duplicate the effect. You find
implementations that work better with transistors - like you build a
computer and run programs that create neural networks with weights for
inputs instead of having the computer grow new connections on the chip.
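
A minimal sketch of that idea - connection strength stored as numbers a
program adjusts, rather than as physically grown wires. The task (learning
logical AND) and the perceptron-style update rule are illustrative choices
only, not a claim about real neurons:

    import random

    # One artificial "neuron": its connections are weights in memory.
    def train(samples, lr=0.1, epochs=100):
        w = [random.uniform(-1, 1) for _ in range(3)]   # 2 inputs + bias
        for _ in range(epochs):
            for x1, x2, target in samples:
                out = 1 if w[0]*x1 + w[1]*x2 + w[2] > 0 else 0
                err = target - out
                w[0] += lr * err * x1   # weight updates stand in for
                w[1] += lr * err * x2   # synaptic growth and pruning
                w[2] += lr * err
        return w

    # Learn logical AND from four examples.
    w = train([(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)])
    print("learned weights:", [round(v, 2) for v in w])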

> From http://en.wikipedia.org/wiki/Long-term_potentiation
>
> " LTP requires gene transcription[17][18] and protein synthesis[19],
> making it an attractive candidate for the molecular analog of long-term
> memory. The synthesis of gene products is driven by kinases which in
> turn activate transcription factors that mediate gene expression."
>
> >
> > But, no matter how you try to compare them, it works out the same way -
> > we have the technology today to equal the signal processing complexity
> > created by the brain. We just don't know if it would cost 10 dollars
> > or 10 billion dollars to build because we don't know exactly what we
> > need to build.
>
> In other words, none of us, particularly you, has a clue. Any more
> vacuous observations?

No, the point is just the opposite. We have a very big clue - we know the
brain is a signal processing device and we know how much data it is
processing. That allows us to make good estimates of how much work it's
doing - even if we have few clues as to the true nature of how the brain is
doing that work. And that allows us to know that the amount of work it's
doing is within our ability to duplicate.

> > > How intelligent would a person seem, if they were programmed, and
> > > could only act in ways the program tells them to act, and only think
> > > thoughts the program told them to think?
> >
> > What makes you think that you are free from the control of physics?
> > That you are not programmed and forced to do exactly what the physics
> > of your body makes you do? How are the computer's limitations given to
> > it by its physical form any different than the limitations given to you
> > by your physical form? You are as programmed to act the way you act
> > as much as any computer is programmed to act the way it acts. Yet, our
> > programming allows us to pass a Turing test.
>
> You are equating a machine that automates one specific aspect of human
> behaviour

Huh? Which "specific aspect of human behavior" do you think I was talking
about?

> with all of human behaviour. You are not licensed to make
> such an equation. Go to jail for one epoch..
>
> >
> > It's a common belief that humans have magical mental powers that free
> > us from the programmed constraints of our physical bodies. I, however,
> > see anyone who still believes these 5000-year-old myths as just
> > too lost to be worth talking to about AI concepts. It is as
> > unproductive as trying to explain the physics of our space program to
> > someone who still thinks the earth is flat.
>
> Denying that computers can be equivalent to brains is no more
> mystical than denying that computers can be equivalent to sunflowers.
> You're the one who has swallowed a myth.

I've never once said a computer could be equivalent to a brain. I've only
claimed that a computer is likely to be able to make a machine act in the
same way a human acts. That's the only level of "equivalence" that I care
about.

And I've never claimed that our current design of computers was at all
optimal for making an intelligent machine. I've always argued it might
take a very different form of machine to practically duplicate human
intelligence. I've never believed it would require DNA based cells
however.

What I was arguing against above however was the false belief that since
computers can only do what they are "programmed" to do, that they could
never do what humans do, since we have "free will". Believing that is just
ignorance - either in the understanding of what a human is or in the
understanding of what a computer is.

J.A. Legris

Jul 19, 2006, 2:45:15 PM


I don't think Xiaoding was talking about free will - I think he was
referring to computationalism - the thesis that intelligence can be
reduced to computation. As far as I can tell, the only things that can
be reduced to computation are other computations. This is the specific
aspect of human behaviour I mentioned: the formal manipulation of
symbols. Humans invented it and computers were invented to speed it up,
but what's that got to do with the rest of human behaviour?

--
Joe Legris

test

Jul 19, 2006, 5:11:46 PM

Even when dogs and babies don't have language, they can still learn and
use concepts in the same way natural language does. The concept "table"
is a thing to put your drink on AND a construction with four legs. I
think he is just trying to say that it makes no sense to split "table"
into two predicate symbols, table_construction and table_eat; you should
make structures that somehow link the different meanings.

Babies probably learn concepts in this way, and only learn to use natural
language on them at a later age. If our concepts for table_construction
and table_eat were not linked concepts, we would probably use separate
words for them.
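
One way to picture those linking structures: a single word node that
points at several senses, so the "construction" and "use" meanings stay
connected. A toy Python sketch - every field and sense name here is
invented for illustration:

    # One word, several linked senses, instead of two unrelated
    # predicate symbols table_construction and table_eat.
    table = {
        "word": "table",
        "senses": {
            "construction": {"is_a": ["furniture"], "parts": ["top", "legs"]},
            "use": {"affords": ["put drink on", "eat at"]},
        },
    }

    def linked_facts(concept):
        """Collect every fact reachable from the word, across all senses."""
        facts = []
        for sense_name, sense in concept["senses"].items():
            for relation, values in sense.items():
                facts += [(sense_name, relation, v) for v in values]
        return facts

    print(linked_facts(table))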

Humans learn common sense things by (A) observing and learning like you
said, and also (B) by communicating with other people (with natural
language). Saying "but babies use A" doesn't mean B can't work, so your
argument falls apart. If you acquire enough mechanisms and knowledge,
you don't need the (A) part.


"Traveler" <trav...@nospam.net> wrote in message
news:90mqb29lnmqhrko3m...@4ax.com...

mimo...@hotmail.com

Jul 19, 2006, 6:24:22 PM

Effing G, this thread's going to turn into another donkey derby, isn't it?

Tomasso

Jul 19, 2006, 7:17:02 PM

"Curt Welch" <cu...@kcwc.com> wrote in message news:20060718233747.048$o...@newsreader.com...
> ...

> Neurons do less than transistors. They can't even regulate an analog
> signal like a transistor can. All they can do is fire!
> ...

But they filter to be able to fire. F words: filter, fire, fan in, and let's not forget
the A word: adapt...

T.

Yeoman

Jul 19, 2006, 7:42:41 PM
Curt Welch wrote:

> What I was arguing against above however was the false belief that since
> computers can only do what they are "programmed" to do, that they could
> never do what humans do, since we have "free will". Believing that is just
> ignorance - either in the understanding of what a human is or in the
> understanding of what a computer is.

Well said Curt!

It seems to me that thinking is a computational problem and is not
reliant on some sort of supernatural effect of biological cells.

In fact I would go further and say that thinking by machine is not best
tackled by running a copy of a brain on a supercomputer. Even if it
worked we would be very little wiser about how it was doing it.

I feel sure that the real advances will come from studying the nature of
knowledge itself, somewhere in a cross-over discipline like where
Semiotics meets fuzzy-logic or similar. Plus narrative plus
intentionality plus abductive reasoning etc., standing on the shoulders
of the giants of science over the centuries (as one does).

Anyhow, that's where I am researching. I hope it doesn't take another 50
years to get there - I would be over a hundred by then!

The way I like to look at it, Philosophy uses a standard of proof
somewhere around 'best guess', whereas Science uses a standard of
'beyond all reasonable doubt'.

I know this is a philosophy newsgroup, and people are entitled to their
opinions, but one sure hears some wacky stuff here sometimes. So it is
good to read your (mostly) sensible comments.

Anyway, best of luck with your projects,
Yeoman.

Curt Welch

Jul 19, 2006, 8:06:04 PM
"J.A. Legris" <jale...@sympatico.ca> wrote:

> I don't think Xiaoding was talking about free will - I think he was
> referring to computationalism - the thesis that intelligence can be
> reduced to computation. As far as I can tell, the only things that can
> be reduced to computation are other computations. This is the specific
> aspect of human behaviour I mentioned: the formal manipulation of
> symbols. Humans invented it and computers were invented to speed it up,
> but what's that got to do with the rest of human behaviour?

Symbols work fine for describing all the laws of physics. It's the basis
of all our understanding about your "real" brain. Why wouldn't it work to
create human behavior in a machine? For a computer-like device to not be
able to duplicate the control functions performed by a human brain would
imply that humans were not able to understand the operation of the brain
(since our understanding is based on the manipulation of discrete symbols
- both words, which are at our normal high level of existence, and
pulses, which are the discrete symbols manipulated by the brain).

We have built an endless number of non-symbolic machines, such as the
wheel, and lever, and clocks, and steam engines, and internal combustion
engines, and radios, and TVs, yet we have had no problem replacing all that
analog control hardware in all these systems with low power control systems
built out of the conceptual manipulation of discrete symbol manipulators.
We have no problem building robots that respond to stimulus and move their
arms and legs in complex and very analog ways using nothing but control
systems based on the concept of discrete symbol manipulation. We have no
problems implementing complex and extremely high frequency (many orders of
magnitude higher than the brain works with) analog wave functions using
machines that internally do everything with discrete symbols (DSP chips).
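
That DSP point is easy to make concrete: an "analog" waveform produced by
nothing but arithmetic on discrete numbers. A sketch - the 48 kHz sample
rate and 1 kHz tone are arbitrary choices:

    import math

    # Synthesize 1 ms of a 1 kHz sine tone as discrete samples, the way
    # a DSP chip or CD player does: every step manipulates discrete
    # symbols, yet the reconstructed output is a smooth analog signal.
    sample_rate = 48_000                # samples per second
    freq = 1_000                        # tone frequency in Hz
    samples = [math.sin(2 * math.pi * freq * n / sample_rate)
               for n in range(sample_rate // 1000)]   # 48 samples = 1 ms
    print(" ".join(f"{s:+.2f}" for s in samples[:8]))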

In hundreds of years of designing and building different types of analog
control systems, we have uncovered _nothing_ that can't be done with a
discrete symbol manipulator.

So just where is your evidence that there is something happening in the
analog brain that theoretically can't be done with a discrete symbol
manipulation system?

Why do you choose to ignore the fact that the foundation signal format used
by the brain is already a discrete symbol system (spikes)?

Granted, until we uncover all the remaining mysteries, and manage to
implement a brain work-alike machine we will never know if a purely
discrete symbol manipulation system is a viable foundation for replacing
all the analog signal systems used in the brain, but nothing we currently
know about analog signal systems shows there should be any reason the
function of the brain couldn't be made completely digital. The only thing
we know know, is that it might be cheaper, to use some analog components -
but not required. And we know that we probably need to build digital
systems with far greater parallelism to equal the total computation power.
But that's all we currently have evidence to suggest.

So, I can hold up hundreds of years of engineering and scientific and
mathematical knowledge to show why we should suspect that a machine based
on concepts of discrete symbol manipulations should be able to replace any
analog function in the brain. What can you hold up as evidence to suggest
it would not be possible? Can you name a single function, that we know as
a fact that the brain is performing, that we know is impossible to
duplicate with a digital system?

Joachim Pimiskern

Jul 20, 2006, 1:57:53 AM
Xiaoding wrote:

> Recently, someone grafted rat brain cells onto a substrate, and used
> it to do something.

Prof. Peter Fromherz was able to connect single rat brain cells with a chip.
http://www.sciencedaily.com/releases/2006/06/060602172512.htm

Regards,
Joachim

Xiaoding

Jul 20, 2006, 8:13:09 AM

You have set up a straw man, so congratulations. You have conflated
"symbols" (the modern word for "magic spell"), with electrons and
magnetic force. The DSP chip actually uses analog technology, combined
with digital, as a controller.


>
> In hundreds of years of designing and building different types of analog
> control systems, we have uncovered _nothing_ that can't be done with a
> discrete symbol manipulator.

LOL! Then where's your intelligent computer?


>
> So just where is your evidence that there is something happening in the
> analog brain that theoretically can't be done with a discrete symbol
> manipulation system?

50 years of utter failure.


>
> Why do you choose to ignore the fact that the foundation signal format used
> by the brain is already a discrete symbol system (spikes)?

A completely baseless statement. We have no idea what is going on
there.

>
> Granted, until we uncover all the remaining mysteries,

You have just contradicted your previous paragraph!


> and manage to
> implement a brain work-alike machine we will never know if a purely
> discrete symbol manipulation system is a viable foundation for replacing
> all the analog signal systems used in the brain, but nothing we currently
> know about analog signal systems shows there should be any reason the
> function of the brain couldn't be made completely digital.

Nothing? Again, 50 years of utter failure points to the opposite
conclusion.

> The only thing
> we know now is that it might be cheaper to use some analog components -
> but not required. And we know that we probably need to build digital
> systems with far greater parallelism to equal the total computation power.
> But that's all we currently have evidence to suggest.

Nonsense!

>
> So, I can hold up hundreds of years of engineering and scientific and
> mathematical knowledge to show why we should suspect that a machine based
> on concepts of discrete symbol manipulations should be able to replace any
> analog function in the brain. What can you hold up as evidence to suggest
> it would not be possible? Can you name a single function, that we know as
> a fact that the brain is performing, that we know is impossible to
> duplicate with a digital system?

This made sense...in 1970! :)

Curt Welch

Jul 20, 2006, 8:23:02 AM

You are good at making fun of my posts, but where are the answers to the
questions I put forth to you?

Traveler

Jul 20, 2006, 9:12:43 AM
On 20 Jul 2006 00:06:04 GMT, cu...@kcwc.com (Curt Welch) wrote:

>Symbols work fine for describing all the laws of physics. It's the basis of
>all our understanding about your "real" brain. Why wouldn't it work to
>create human behavior in a machine? For a computer like device to not be
>able to duplicate the control functions performed by a human brain would
>imply that humans were not able to understand the operation of the brain
>(since our understanding is based on the manipulation of discrete symbols -
>both words, which are at our normal high level of existence, and pulses, which are the
>discrete symbols manipulated by the brain).

This is nonsense, Curt. I have told you this before. Pulses are not
symbols. They never were. They are temporal markers which are used to
indicate that something (an unlabeled phenomenon) just occurred. The
symbolic meaning of a pulse is 100% irrelevant to the receiving
neuron. Only its temporal relationship with other signals matters.

I've said this many times before. It is not intelligence that requires
symbol manipulation. Rather, it is symbol manipulation that requires
intelligence. My advice to you is to stop kissing Minsky's ass because
the man is obviously out to lunch on this issue. In fact, I believe
Minsky and the "symbolic school" have been a hindrance to progress in
AI over the last fifty years. I hope he (and the rest of the GOFAI
community) sees the light one of these days because he still comes
across as an authority in the field.

Louis Savain

J.A. Legris

Jul 20, 2006, 5:27:20 PM

Curt Welch wrote:
> "J.A. Legris" <jale...@sympatico.ca> wrote:
>
> > I don't think Xiaoding was talking about free will - I think he was
> > referring to computationalism - the thesis that intelligence can be
> > reduced to computation. As far as I can tell, the only things that can
> > be reduced to computation are other computations. This is the specific
> > aspect of human behaviour I mentioned: the formal manipulation of
> > symbols. Humans invented it and computers were invented to speed it up,
> > but what's that got to do with the rest of human behaviour?
>
> Symbols work fine for describing all the laws of physics. It's the basis of
> all our understanding about your "real" brain. Why wouldn't it work to
> create human behavior in a machine?

For the same reason that physicists do not "create" atomic behaviour in
a machine. They may manipulate real atoms. It is even conceivable that
they might build real atoms by assembling various components according
to a symbolic recipe, and of course, they simulate them
computationally, but simulations of atoms are not atoms.

Human behaviour in non-human machines is a simulation. A good enough
simulation would approximate human behaviour, but there's a catch:
natural phenomena are very difficult to simulate - it may well be
impossible using conventional computers because an accurate simulation
may need to account for events on the atomic or molecular scale. Unlike
human artifacts, there is no designer to consult, and the component
materials tend to be closely integrated at both microscopic and
macroscopic levels. Simulations must be simplified to avoid explosive
computational complexity. I am not saying that good enough simulations
are impossible, but that practical simulations are unlikely to be good
enough.

> For a computer like device to not be
> able to duplicate the control functions performed by a human brain would
> imply that humans were not able to understand the operation of the brain
> (since our understanding is based on the manipulation of discrete symbols -
> both words, which are at our normal high level of existence, and pulses, which are the
> discrete symbols manipulated by the brain).
>

Calling action potentials symbols is a bit of a stretch. None of them
lasts any longer than the time it takes to get to the next synapse,
where it is typically mashed together, analog style, with its
contemporaries.
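
That "mashed together, analog style" behaviour is what the textbook leaky
integrate-and-fire abstraction captures: discrete spikes summed into a
decaying membrane potential. A toy sketch - the time constant, weight,
and threshold are illustrative values only:

    # Incoming spikes add charge to a membrane potential that leaks away
    # over time; the neuron fires only when the analog sum of recent
    # spikes crosses threshold.
    def lif(spike_times, tau=10.0, threshold=1.0, weight=0.4,
            dt=1.0, t_end=50.0):
        v, t, fired = 0.0, 0.0, []
        while t < t_end:
            v *= 1.0 - dt / tau                 # leak: exponential decay
            if any(abs(t - s) < dt / 2 for s in spike_times):
                v += weight                     # each spike adds charge
            if v >= threshold:
                fired.append(t)
                v = 0.0                         # reset after firing
            t += dt
        return fired

    # Three clustered spikes produce a firing; one lone spike does not.
    print(lif([5, 6, 7, 30]))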

> We have built an endless number of non-symbolic machines, such as the
> wheel, and lever, and clocks, and steam engines, and internal combustion
> engines, and radios, and TVs, yet we have had no problem replacing all that
> analog control hardware in all these systems with low power control systems
> built out of the conceptual manipulation of discrete symbol manipulators.
> We have no problem building robots that respond to stimulus and move their
> arms and legs in complex and very analog ways using nothing but control
> systems based on the concept of discrete symbol manipulation. We have no
> problems implementing complex and extremely high frequency (many orders of
> magnitude higher than the brain works with) analog wave functions using
> machines that internally do everything with discrete symbols (DSP chips).
>

As I suggested above, simulating human artifacts is much easier than
simulating natural phenomena because the human designs were already
simplified - passive construction materials, simple dependencies and
everything written down (usually).

>
> In hundreds of years of designing and building different types of analog
> control systems, we have uncovered _nothing_ that can't be done with a
> discrete symbol manipulator.

Yes, but only if humans designed it in the first place. Over the same
period, I doubt that anyone has ever simulated a non-human made system
well enough to stand in for the real thing in its natural environment.

>
> So just where is your evidence that there is something happening in the
> analog brain that theoretically can't be done with a discrete symbol
> manipulation system?
>

The evidence is the vast gulf between the complexity of things we have
done compared to the complexity of simulating even a microbe, let alone
a brain.

> Why do you choose to ignore the fact that the foundation signal format used
> by the brain is already a discrete symbol system (spikes)?
>

Information transmission using discrete symbols is just one small
subset of discrete symbol processing. My microwave oven has digital
controls, but my capacity to ruin food with it varies continuously.

> Granted, until we uncover all the remaining mysteries, and manage to
> implement a brain work-alike machine we will never know if a purely
> discrete symbol manipulation system is a viable foundation for replacing
> all the analog signal systems used in the brain, but nothing we currently
> know about analog signal systems shows there should be any reason the
> function of the brain couldn't be made completely digital. The only thing
> we know now is that it might be cheaper to use some analog components -
> but not required. And we know that we probably need to build digital
> systems with far greater parallelism to equal the total computation power.
> But that's all we currently have evidence to suggest.
>
> So, I can hold up hundreds of years of engineering and scientific and
> mathematical knowledge to show why we should suspect that a machine based
> on concepts of discrete symbol manipulations should be able to replace any
> analog function in the brain. What can you hold up as evidence to suggest
> it would not be possible? Can you name a single function, that we know as
> a fact that the brain is performing, that we know is impossible to
> duplicate with a digital system?

There's part of the problem. We still don't know well enough how
neurons work to simulate them accurately. For example, the DNA
transcription example I gave in my previous post is still poorly
understood. From what I've read, these effects are not included in the
Blue Brain project, but they are crucial to every neuron. In principle
we should be able to simulate as closely as we want, but practical
issues get in the way: annoying little details such as finite time and
resources. And in science, practical problems that won't go away were
really theoretical problems all along.

--
Joe Legris

Curt Welch

Jul 20, 2006, 7:15:13 PM
"J.A. Legris" <jale...@sympatico.ca> wrote:
> Curt Welch wrote:
> > "J.A. Legris" <jale...@sympatico.ca> wrote:
> >
> > > I don't think Xiaoding was talking about free will - I think he was
> > > referring to computationalism - the thesis that intelligence can be
> > > reduced to computation. As far as I can tell, the only things that
> > > can be reduced to computation are other computations. This is the
> > > specific aspect of human behaviour I mentioned: the formal
> > > manipulation of symbols. Humans invented it and computers were
> > > invented to speed it up, but what's that got to do with the rest of
> > > human behaviour?
> >
> > Symbols work fine for describing all the laws of physics. It's the
> > basis of all our understanding about your "real" brain. Why wouldn't
> > it work to create human behavior in a machine?
>
> For the same reason that physicists do not "create" atomic behaviour in
> a machine. They may manipulate real atoms. It is even conceivable that
> they might build real atoms by assembling various components according
> to a symbolic recipe, and of course, they simulate them
> computationally, but simulations of atoms are not atoms.
>
> Human behaviour in non-human machines is a simulation

You sure seem hung up on this concept of a simulation.

You do understand that computers are not simulations, right? They are
real machines made out of real atoms, right? Because they are real, they
have the power to do all the same things that other "real" machines do.
There's
nothing about a computer which makes it non-real or which makes it a
simulation.

> A good enough
> simulation would approximate human behaviour, but there's a catch:
> natural phenomena are very difficult to simulate

Which is why no one would even consider trying to do it unless your purpose
was to understand neurons. But our purpose has nothing to do with neurons.
It's to reproduce an approximation of human behavior at the macro level.

It's the same reason that when we try to simulate Microsoft Word running
on a Windows PC on Intel hardware by creating a work-alike product to
run on a PowerPC Mac, we don't for a second start to talk about how hard it
is going to be to simulate the transistors in the Intel chip. It's the
entirely wrong level of abstraction.

And when people talk about creating AI for the purpose of allowing machines
to perform jobs currently only possible by humans, we don't waste any time
worrying about how hard it would be to correctly simulate the functions of
neurons because it's fairly silly to believe that would be required in
order to make a machine perform human high-level functions like
understanding language well enough to talk to customers and take orders at
a fast food drive through window.

Granted, when we design machines, we work very hard to isolate every level
of abstraction just so we can understand how it works and fix problems.
Evolution doesn't take such care because no one ever has to understand it.
This means that you can expect the job of duplicating something like the
function of Microsoft Word to be easier to do because the functionality we
care about is likely to be limited in level.

Duplicating human behavior is not going to be as simple. But still,
complex machines in the end can't work unless they use hierarchies of
functional abstraction. And the actual implementation of neurons is just
too far away from the level we are interested in duplicating to be
important. Our machine will have to duplicate the high level function the
network of neurons is performing, but it's not going to have to duplicate
neurons, just like planes don't need to duplicate the cells that make up
feathers in a bird to duplicate its flying behavior.

> - it may well be
> impossible using conventional computers because an accurate simulation
> may need to account for events on the atomic or molecular scale.

Yes, I would agree that it's likely to be impossible (or at least so
completely impractical as to make it effectively impossible) for a computer
to duplicate the function of a neuron accurately enough to allow it to
replace real neurons in a brain. But AI is not about duplicating neuron
behavior, it's about duplicating the high level function with whatever
substrate is the best for us to do that with - and the answer is going to
be transistors - not neurons.

> Unlike
> human artifacts, there is no designer to consult, and the component
> materials tend to be closely integrated at both microscopic and
> macroscopic levels.

Yes, they are. Just like our ability to walk requires integration all the
way down to the atomic level in controlling the flow of energy in and out
of the muscle cells. But we can put an internal combustion engine on a
frame with a few wheels and it still works fine to move the body around,
and there is no requirement for us to build nano-machines to make it work
like our body does it.

> Simulations must be simplified to avoid explosive
> computational complexity. I am not saying that good enough simulations
> are impossible, but that practical simulations are unlikely to be good
> enough.

Yes, I agree. No one is going to create practical AI by building neuron
simulators. It's not going to happen. What they will uncover is an
understanding of what the neurons are doing. Once that understanding is
reached, the network of neurons is going to be thrown out, and replaced
with a network of transistors, which perform the same high level functions,
using a completely different technique.

> > For a computer like device to not be
> > able to duplicate the control functions performed by a human brain
> > would imply that humans were not able to understand the operation of
> > the brain (since our understanding is based on the manipulation of
> > discrete symbols - both words, which are at our normal high level of
> > existence, and pulses, which are the discrete symbols manipulated by
> > the brain).
> >
>
> Calling action potentials symbols is a bit of a stretch. None of them
> lasts any longer than the time it takes to get to the next synapse,
> where it is typically mashed together, analog style, with its
> contemporaries.

Yeah, Traveler just posted the message saying the same thing - that I don't
know what a symbol is. You are both off your rocker. :)

When I speak a word, is that spoken word not a symbol? Spikes are not the
same as a symbol written on a piece of paper (and the most common
connotation of "symbol" is a fixed material object), but spikes are exactly
the same as spoken words and these do count as symbols.

The definition of a symbol is, "something that stands for or suggests
something else". When I say, "Bob is over there", does not the sound I
just made to form the word "Bob" act as a symbol of the person named Bob?

I don't know where you guys learned what symbols are, but spikes are
definitely symbols. They are just short lived symbols that carry a great
deal of information in the temporal domain.
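
One way to make "information in the temporal domain" concrete: encode a
value purely in the gap between two otherwise identical pulses, then
decode it from timing alone. A toy sketch; the scale factor is an
arbitrary choice:

    # Identical pulses mean different things depending only on *when*
    # they occur relative to each other.
    def encode(value, t0=0.0, scale=1.0):
        """Represent a number as two spike times."""
        return [t0, t0 + value * scale]

    def decode(spike_times, scale=1.0):
        """Recover the number from the inter-spike interval alone."""
        return (spike_times[1] - spike_times[0]) / scale

    spikes = encode(7.5)
    print("spike times:", spikes, "-> decoded value:", decode(spikes))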

> > In hundreds of years of designing and building different types of
> > analog control systems, we have uncovered _nothing_ that can't be done
> > with a discrete symbol manipulator.
>
> Yes, but only if humans designed it in the first place. Over the same
> period, I doubt that anyone has ever simulated a non-human made system
> well enough to stand in for the real thing in its natural environment.

Right, artificial teeth, and joints, and hearts, and legs, and ears, could
never act as a stand-in for the real thing. The point here is not that
any of these things even comes close to the real thing - but that they
act just fine as stand-ins to duplicate the high level functions we care
about - like how a crutch is a stand-in for a missing leg simply because
it's good
enough to keep us from falling over - which is enough function to make it
useful.

You talk as if we need to build an electronic stand-in for a human brain.
As if AI won't be solved, until we can do a brain transplant and replace a
damaged brain with our mechanical version and have it act as a workable
stand in for a brain-dead human. Though it would be cool if we could get
to that point some day, that's not what AI is about. All we need to do is
make a robot, act as a stand-in for a human, in any job, we might want a
human to do (and biological jobs like fathering children or donating blood
don't count :)).

We just need to be able to make a robot perform jobs that humans can
perform. We don't even care if their personalities are odd - as long as
their skill sets are at least as good as any human. We are not going to
have to simulate neurons to do that - at least not any more than at some
high abstraction level like the current NNs. We might have to simulate
neurons to figure out how to do it, but the end design won't have a single
real neuron simulator in it.

> > So just where is your evidence that there is something happening in the
> > analog brain that theoretically can't be done with a discrete symbol
> > manipulation system?
> >
> The evidence is the vast gulf between the complexity of things we have
> done compared to the complexity of simulating even a microbe, let alone
> a brain.

Yes, which is why no one is even going to try to simulate a real brain down
to the atomic level.

> > Why do you choose to ignore the fact that the foundation signal format
> > used by the brain is already a discrete symbol system (spikes)?
> >
> Information transmission using discrete symbols is just one small
> subset of discrete symbol processing. My microwave oven has digital
> controls, but my capacity to ruin food with it varies continuously.

:) I have that problem as well. :)

Yes, there is much we don't know about neurons and as we learn more,
simulating them will only get harder. But at some point, we will finally
grasp what the network of neurons is doing at a much higher level of
abstraction, which will allow us to understand why all that complexity is
there.

Likewise, if we wanted to build a Microsoft Word computer out of neurons,
we wouldn't do it by simulating the atomic properties of transistors in a
network of neurons. We wouldn't even have to simulate the Intel
instruction set with neurons. We won't even have to simulate the same
programming language Microsoft Word is written in. We would be free to
create a completely different type of computer to duplicate the external
functions of Microsoft word.

Intelligent machines of the future are just not going to be built by
simulating neurons. They are going to duplicate the high level functions
the network of neurons is performing for us using a very different
implementation with very different fundamental components (transistors most
likely). And these high level functions are very likely to be
implementable on computers (which are not simulations, but are in fact,
real machines performing real physical processes).

J.A. Legris

Jul 20, 2006, 8:33:43 PM
Curt Welch wrote:
> "J.A. Legris" <jale...@sympatico.ca> wrote:
> > Curt Welch wrote:
> > >
> > > Symbols work fine for describing all the laws of physics. It's the
> > > basis of all our understanding about your "real" brain. Why wouldn't
> > > it work to create human behavior in a machine?
> >
> > For the same reason that physicists do not "create" atomic behaviour in
> > a machine. They may manipulate real atoms. It is even conceivable that
> > they might build real atoms by assembling various components according
> > to a symbolic recipe, and of course, they simulate them
> > computationally, but simulations of atoms are not atoms.
> >
> > Human behaviour in non-human machines is a simulation
>
> You sure seem hung up on this concept of a simulation.
>
> You do understand that computers are not simulations, right? They are real
> machines made out of real atoms, right? Because they are real, they have
> the power to do all the same things that other "real" machines do. There's
> nothing about a computer which makes it non-real or which makes it a
> simulation.

Any time a computer is programmed to act like something else it is a
simulation. You have said that successful AI will do what humans can
do. Therefore AI is a simulation of human behaviour. The level at which
you approach it has nothing to do with whether or not it's a
simulation. Apparently your simulation is at the behavioural level.
>
[...]


>
> Duplicating human behavior is not going to be as simple. But still,
> complex machines in the end can't work unless they use hierarchies of
> functional abstraction. And the actual implementation of neurons is just
> too far away from the level we are interested in duplicating to be
> important. Our machine will have to duplicate the high level function the
> network of neurons is performing, but it's not going to have to duplicate
> neurons, just like planes don't need to duplicate the cells that make up
> feathers in a bird to duplicate its flying behavior.

But as I've said before, if airplane-level performance is a valid
comparison then AI is already there. It's fragile, inflexible and can
do a few of the things that humans can do, some quite well. And
airplanes are unlikely to be more bird-like unless they're small, meaty
and feathered.

--
Joe Legris

Curt Welch

Jul 21, 2006, 12:36:02 AM
"J.A. Legris" <jale...@sympatico.ca> wrote:

> Any time a computer is programmed to act like something else it is a
> simulation. You have said that successful AI will do what humans can
> do. Therefore AI is a simulation of human behaviour. The level at which
> you approach it has nothing to do with whether or not it's a
> simulation. Apparently your simulation is at the behavioural level.

Yeah, well, technically I really can't argue with that if that's how you
want to define it.

But, is a CD player a simulation of a record player? Is a plane a
simulation of a bird? According to your definition they are. I don't,
however, see it as useful to talk like that.

The only reason we talk about AI being a copy of a human is because humans
are the only things that currently have this feature called "intelligent
behavior". We have nothing else to point to when we try to talk about the
feature set we want to build into the machines. But once we get to the
bottom of what it takes to make machines perform all these human tasks,
they aren't going to be a copy of humans any more than a CD player is a
simulation of a record player or a plane is a simulation of a bird. They
will simply be intelligent machines (or whatever name we end up giving to
the technology).

> > Duplicating human behavior is not going to be as simple. But still,
> > complex machines in the end can't work unless they use hierarchies of
> > functional abstraction. And the actual implementation of neurons is
> > just too far away from the level we are interested in duplicating to be
> > important. Our machine will have to duplicate the high level function
> > the network of neurons is performing, but it's not going to have to
> > duplicate neurons, just like planes don't need to duplicate the cells
> > that make up feathers in a bird to duplicate it's flying behavior.
>
> But as I've said before, if airplane-level performance is a valid
> comparison then AI is already there. It's fragile, inflexible and can
> do a few of the things that humans can do, some quite well.

Yeah, I half agree. But there are very important things humans can do that
no man-made machine can currently do - no matter how much money we
spend making it big and custom designing it. It's not just a matter
of scale or performance. There are some fundamental technologies still
missing. Until we find the missing parts that allow us to create
something like a C3P0 or a Commander Data - or just a machine you can talk
to like we talk to humans, we just haven't figured this technology out.
It's just not here yet.

I do admit that since we don't know what we are missing, it could always
turn out that building devices that work 99% like a neuron might be the
only solution to creating the type of behavior we are looking for. It
could be that no standard digital-computer-like device will ever be
practical for creating one of these brains - it might require a large
amount of analog techniques. I just don't happen to believe that.

> And airplanes are unlikely to be more bird-like unless they're small,
> meaty and feathered.

And they have to taste like chicken too!

Actually, however, what they are really missing is a brain. :) When we can
build a robot bird that can do as good a job at keeping itself fed, and
exploring the environment, as a real bird, then the planes will start to be
far more bird-like. Give them the power to reproduce and evolve, and the
only important thing missing will be the taste! :)

J.A. Legris

Jul 22, 2006, 9:50:07 AM

Curt Welch wrote:
> "J.A. Legris" <jale...@sympatico.ca> wrote:
>
> > Any time a computer is programmed to act like something else it is a
> > simulation. You have said that successful AI will do what humans can
> > do. Therefore AI is a simulation of human behaviour. The level at which
> > you approach it has nothing to do with whether or not it's a
> > simulation. Apparently your simulation is at the behavioural level.
>
> Yeah, well, technically I really can't argue with that if that's how you
> want to define it.
>
> But, is a CD player a simulation of a record player? Is a plane a
> simulation of a bird? According to your definition it is. I don't however
> see it useful to talk like that.

At some level a CD player might be said to simulate a record player, but I
don't believe anyone ever designed it as such. And early attempts at
flight were failed simulations of birds, just as early attempts at AI
are failed simulations of animals. The eventual departure of airplane
structure from bird structure is exactly my point - the materials and
our ability to manipulate them constrain the design.

The idea of a bird-brain equivalent processor to make an airplane more
birdlike is a non-starter. Suppose you made a bird-robot out of
conventional materials. Of course you couldn't connect it to a real
bird brain. And a digital controller would have to be as unbirdlike as
the mechanism it must control - the dynamics, the feedback, the
sensors, the actuators, the power source, the homeostatic functions -
everything is different. Materials constrain the design from top to
bottom.

[...]

--
Joe Legris

Charlie

Jul 25, 2006, 11:39:42 PM

Curt Welch wrote:


--snip--

> So, I can hold up hundreds of years of engineering and scientific and
> mathematical knowledge to show why we should suspect that a machine based
> on concepts of discrete symbol manipulations should be able to replace any
> analog function in the brain. What can you hold up as evidence to suggest
> it would not be possible? Can you name a single function, that we know as
> a fact that the brain is performing, that we know is impossible to
> duplicate with a digital system?
>


There is a well-known set of problems that a sane human above the age
of reason can answer easily but that a computer can't: the so-called
*frame problems.* These problems are given to tax the brains of budding
programmers. The Railroad Crossing Problem, The Parking Lot Gate
Problem, The Barber Shop Problem, The Yale Shooting Problem, etc. The
questions asked in these problems have never been definitively answered
by a linear-sequential machine (computer or derivative state-machine).

What about that?

Charlie

Curt Welch

Jul 26, 2006, 2:48:49 AM

Hmm, that's an interesting question. I'm not familiar with any of those
problems by name or with the concept of "frame problems". I've just spent
some time getting up to speed with some of them, but it's late, and I need
some sleep. I'll continue my research and see if I can give you an answer
soon....

I'm having problems finding a description of the railroad crossing problem
that explains why the problem is a problem in terms I can understand. Any
pointers? Or care to explain what the problem is and why it's a problem?

Tomasso

Jul 26, 2006, 7:07:57 AM

What exactly is Marvin supposed to get?

Put your mouth where your mock is.

T.

J.A. Legris

Jul 26, 2006, 8:40:25 AM

As far as I can tell, the railroad crossing problem is a "standard"
problem referenced in the formal verification of real-time computation.
How can it be proven that an automatic controller for a railroad
crossing gate is safe and effective? It appears to be a fairly simple
task for a single track because only one event occurs at a time and
always in predictable order, but for multiple tracks serviced by one
gate it can get pretty hairy.

--
Joe Legris

Don Geddis

Jul 26, 2006, 12:54:51 PM
Curt Welch wrote:
>> Can you name a single function, that we know as a fact that the brain is
>> performing, that we know is impossible to duplicate with a digital system?

"Charlie" <cmoe...@aol.com> wrote on 25 Jul 2006 20:3:
> There is a well-known set of problems that a sane human above the age
> of reason can answer easily but that a computer can't: the so-called
> *frame problems.* These problems are given to tax the brains of budding
> programmers. The Railroad Crossing Problem, The Parking Lot Gate
> Problem, The Barber Shop Problem, The Yale Shooting Problem, etc. The
> questions asked in these problems have never been definitively answered
> by a linear-sequential machine (computer or derivative state-machine).

Curt asked for a problem that "we know is impossible" for a computer.

You provided problems that happen not yet to be solved by computers. That's
hardly the same thing.

Computers today also happen not to solve natural language, or vision, or NP
problems in polynomial time, or factor large integers in polynomial time.
But none of these are known (proven) to be impossible for computers.

And by the way: frame problems are trivially easy for computers to solve
using an exponential amount of knowledge/rules (and time). The challenge was
always in trying to program the right behavior by default, with only a few
rules (and some clever algorithms).
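A minimal sketch of that point (a toy in Python, with invented fluents and a
single action, in the spirit of the Yale Shooting Problem mentioned above):

    # Toy frame-axiom scheme: by default every fluent persists across an
    # action; exceptions are enumerated explicitly. Trivially easy, but
    # the exception table grows with (fluents x actions) - the frame
    # problem is getting the same default behavior compactly.
    effects = {('shoot', 'alive'): False,
               ('shoot', 'loaded'): False}

    def step(state, action):
        return {fluent: effects.get((action, fluent), value)
                for fluent, value in state.items()}

    state = {'alive': True, 'loaded': True}
    print(step(state, 'shoot'))   # {'alive': False, 'loaded': False}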

In any case, not an answer to Curt's question.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
The older I grow the more I distrust the familiar doctrine that age brings
wisdom. -- H. L. Mencken

Curt Welch

Jul 26, 2006, 1:29:19 PM

How does it get hairy? It seems you have to specify the problem in an odd
way before it gets hairy. If you simply design the gate so it can detect
when a train is on the track in your section, you close the gate. That's
all there is to it, and you know it works if 1) the train detector and gate
work, 2) the range of the detector is large enough for the fastest
moving train you have to deal with (aka if the trains don't go over the
speed limit), and 3) you assume all car traffic can get itself out of the
crossing as the gate closes. In practical applications, I'm sure these are
the exact assumptions used, and there's nothing hairy about it. It works
just as well with 10 tracks and 20 trains.
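A minimal sketch of that design (Python; the sensor and gate interfaces are
hypothetical, of course - the point is how little logic is needed once the
problem is framed around a train-present sensor per track):

    # Gate controller built on a train-present sensor per track: close
    # the gate if any detection section is occupied, open it only when
    # all are clear. Works unchanged for 1 track or 10.
    class Track:
        def __init__(self):
            self.occupied = False
        def train_present(self):
            return self.occupied

    class Gate:
        def __init__(self):
            self.closed = False
        def update(self, tracks):
            self.closed = any(t.train_present() for t in tracks)

    tracks = [Track() for _ in range(10)]
    gate = Gate()
    tracks[3].occupied = True
    gate.update(tracks)
    print(gate.closed)   # True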

So the question becomes, what complexity do you add to make it hairy? Do
you add less useful sensors like train enter and exit sensors instead of
using a train-present sensor? Do you add controls to signal and stop the
train if the crossing is blocked, along with car sensors? Do you add train
speed sensors and attempt to wait until the last possible safe moment to
close the gate?

Many of the pages on the web I saw were talking about the issues of
specifying the algorithm as some sort of concurrent algorithm. So I think
the issue here is that solving the control problem in terms of concurrent
algorithms can get hairy - but I haven't yet grasped how they structured
the problem in a way that actually makes it hairy. I'm still looking....

In practical engineering, you normally re-structure your problem and
solution to make sure it's not hairy instead of trying to solve a hairy
problem and make sure your solution is valid. But Charlie's question seems
to be whether a machine can solve one of these hairy problems (like humans
can do) - so I need to better understand what class of hairy problem he is
talking about.

Charlie

Aug 7, 2006, 11:56:02 AM

Curt Welch wrote:
>> Can you name a single function, that we know as a fact that the brain is
>> performing, that we know is impossible to duplicate with a digital system?


Don wrote:
>Curt asked for a problem that "we know is impossible" for a computer.


OK, I'll put it this way:

We know that simulating continuous time (an actual continuum) is
impossible for any digital machine. This is because all digital
machines progress from time to time (in either simulated or actual
sensed existence) in step-by-step (or frame-by-frame) fashion. There is
no continuity in the machine because nothing can be sensed/experienced
by it in the between-times. It does not even know if it is "alive"
or "dead" from one time to the next.

We also know that continuity is precisely the way that most humans
think about and experience ongoing existence. If there is a slight or
great discontinuity such as sleep or a period of unconsciousness, the
human is invariably aware of it at his next waking moment. Just ask
anyone.

min...@media.mit.edu

Aug 7, 2006, 3:30:39 PM
This is a good example of commonsense thinking -- and how it can go
completely wrong. (That is, at least in my view.) Consider the
opposite argument, as proposed in Section 25.4 of The Society of Mind:

Why do we have the sense that things proceed in smooth, continuous
ways? Is it because, as some mystics think, our minds are part of some
flowing stream? I think it's just the opposite: our sense of constant
steady change emerges from the parts of mind that manage to insulate
themselves against the continuous flow of time! In other words, our
sense of smooth progression from one mental state to another emerges,
not from the nature of that progression itself, but from the
descriptions we use to represent it. Nothing can *seem* jerky, except
what is *represented* as jerky.

Paradoxically, our sense of continuity comes not from any genuine
perceptiveness, but from our marvelous insensitivity to most kinds of
changes. Existence seems continuous to us, not because we continually
experience what is happening in the present, but because we hold to our
memories of how things were in the recent past. Without those
short-term memories, all would seem entirely new at every instant, and
we would have no sense at all of continuity, or of existence.

One might suppose that it would be wonderful to possess a faculty of
"continual awareness." But such an affliction would be worse than
useless because, the more frequently your higher-level agencies change
their representations of reality, the harder for them to find
significance in what they sense. The power of consciousness comes not
from ceaseless change of state, but from having enough stability to
discern significant changes in your surroundings. To "notice" change
requires the ability to resist it, in order to sense what persists
through time, but one can do this only by being able to examine and
compare descriptions from the recent past. We notice change in spite
of change, and not because of it.

Our sense of constant contact with the world is not a genuine
experience; instead, it is a form of what I call the "Immanence
illusion". We have the sense of actuality when every question asked of
our visual systems is answered so swiftly that it seems as though those
answers were already there. And that's what frame-arrays provide us
with: once any frame fills its terminals, this also fills the terminals
of the other frames in its array. When every change of view engages
frames whose terminals are already filled, albeit only by default, then
sight seems instantaneous.

THE PRESENT MOMENT: This is part of why we feel that what we see is
"present" in the here and now. But it isn't really true that whenever a
real object appears before our eyes, its full description is instantly
available. Our sense of momentary mental time is flawed; our
vision-agencies begin arousing memories before their own work is fully
done. For example, when you see a horse, a preliminary recognition of
its general shape may lead some vision-agents to start evoking memories
about horses before the other vision-agents have discerned its head or
tail. Perceptions can evoke our memories so quickly that we can't
distinguish what we've seen from what we've been led to recollect.

Any comments?

bob the builder

Aug 7, 2006, 4:41:17 PM
I guess you don't disagree here that computers can't simulate continuous
time. But instead you don't agree that continuity has to be
incorporated / is necessary for a (human-like) artificial mind, because
of the smoothness-illusion thing described below.

I'm sure the mind gives us a false impression of the world outside.
Many things we see in the world are generated by our own minds. I'm
currently staring at my computer screen under the illusion that it is a
continuously changing screen. I don't see the individual pictures my
screen shows at high speed.

But all this doesn't mean that the brain is not a continuous system. It
only means that some high-level description of the mind doesn't need
continuity; that a symbolic description doesn't need / can handle
continuity.

Charlie

Aug 7, 2006, 4:50:39 PM

I will grant that the quoted text argues the opposite viewpoint from
mine, but what is the point? What would be gained from taking that
restricted (and in my opinion, convoluted and overly complex) view?

The simplicity of it is that humans are capable of apprehending both
discrete schemes AND continuums (hence calculus), whereas digital
machines can only discern the discrete.

Traveler

Aug 7, 2006, 7:18:25 PM

[cut]

Why come up with a convoluted explanation for something that is rather
simple to understand? We sense a continuum in time and space because
all we "feel" is the spiking activity of some of our neurons. We
don't notice the non-active, in-between intervals because we can only
sense spikes. Our conscious awareness is thus trapped in a false,
illusory continuity that not even Einstein could escape.

Louis Savain

Why Software Is Bad and What We Can Do to Fix It:
http://www.rebelscience.org/Cosas/Reliability.htm

Michael Olea

Aug 7, 2006, 7:24:23 PM
min...@media.mit.edu wrote:

One thing that comes to mind is the relationship between what you describe
and Bayesian inference (of course since I am fairly steeped in the latter,
it always comes to mind for me). A second thing that comes to mind is the
notion of "predictive information" and its relationship to saliency, or
Roger Shank's idea of "failure-driven learning". Finally, there is the
question of how metaphorical hypothetical constructs (frame arrays,
k-lines, etc.) can be made into testable hypotheses.

o "Probabilities are a dead end"

The two comments I remember are that 1) they are opaque, and 2) "they don't
help much with reflective thought. Eventually, one needs to know the
reasons why and when one ought to make assumptions that one is not yet sure
about".

Graphical probability models, e.g. Bayesian Networks, encode two kinds of
information: *qualitative* information in the structure of the graph, the
conditional independence relationships between the random variables at the
nodes of the graph, and *quantitative* information: the conditional
distribution over the states of a node, given the states of adjacent nodes.
The equilibrium distribution at a node is the result both of "diagnostic
support", the lower level details from child nodes (e.g. bits of leg, seat,
and back, diagnostic of a chair) and "causal support", predictions from
parent nodes (e.g. chairs around a table are to be expected in a dining
room). Graphical probability models support causal reasoning, "recognizing,
generalizing, predicting what may happen next, and knowing what we ought to
try when expectations aren't met".
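A toy numerical sketch of how those two kinds of support combine (all the
numbers below are invented for illustration):

    # Bayes' rule combining "causal support" (room context raises the
    # prior for a chair) with "diagnostic support" (leg- and seat-like
    # parts observed). All probabilities are made up for illustration.
    prior_chair = 0.30            # P(chair), given a dining-room context
    p_parts_if_chair = 0.80       # P(see legs+seat | chair)
    p_parts_if_not = 0.05         # P(see legs+seat | not chair)

    evidence = (prior_chair * p_parts_if_chair
                + (1 - prior_chair) * p_parts_if_not)
    posterior = prior_chair * p_parts_if_chair / evidence
    print(round(posterior, 2))    # 0.87: both supports pull the same way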

Some of these ideas are debated in journal review articles available here:

"Causality: Models, Reasoning and Inference", Judea Pearl, 2000, Cambridge
University Press.
http://bayes.cs.ucla.edu/BOOK-2K/index.html

24.2 Frames of Mind

"A *frame* is a sort of skeleton, somewhat like an application form with
many blanks or slots to be filled. We'll call these blanks its *terminals*;
we use them as connection points to which we can attach other kinds of
information. For example, a frame that represents a "chair" might have some
terminals to represent a seat, a back, and legs..."

"Default assumptions fill our frames to represent what's typical"

Sounds like a sort of Bayesian net to me.

25.1 One Frame At A Time?

After some discussion of a figure/ground image that can be seen as faces or
a candle, and of the Necker cube, comes:

"Our vision-systems are born equipped, on each of several levels, with some
sort of "locking-in" machinery that at every moment permits each "part," at
each level, to be assigned to one and only one "whole" at the next level."

This "machinery" is clearly a hypothetical construct. What sort of
independent evidence might support it? What theoretical considerations
might predict it?

Random switching and optimal processing in the perception of ambiguous
signals. W Bialek & M DeWeese, Phys Rev Lett 74, 3077-3080 (1995).
http://www.princeton.edu/~wbialek/optimization_links.html

"In the case of motion estimation [53] there is nothing deep about the
statistical mechanics problems that we have to solve, but here we found
that in cases where stimuli have ambiguous interpretations (as in the
Necker cube) the estimation problem maps to a random field model. The
nontrivial statistical mechanics of the random field problem really does
seem to correlate with the phenomenology of multistable percepts. This is
interesting as a very clear example of how the idea of optimal performance
can generate striking and even counterintuitive predictions, in this case
predicting fluctuations in perception even when inputs are constant and the
signal to noise ratios are high."

The "estimation problem" is one of Bayesian inference. It has an optimal
solution that depends on prior experience (so the grouping and labeling
that takes place, and the rate of switching between alternatives, the
"locking in", depend on individual histories of interaction with the
environment) and leads to principled predictions that agree with
experiment.


o "We notice change in spite of change, and not because of it"

"The power of consciousness comes not from ceaseless change of state, but

from having enough stability to discern *significant* changes in your
surroundings" (emphasis added).

What makes changes significant? One obvious answer is "novelty", or
expectation violations. These are low probability events, so when they do
happen then -log2(p) is high. Conversely, highly predictable events are not
very informative. Naturaly, what is predictable and what is novel will
change with experience. This brings up the notion of "predictive
information", those features of the world that predict probable outcomes.
Only a vanishing fraction of information is predictive:

Predictability, complexity and learning. W Bialek, I Nemenman & N Tishby,
Neural Comp 13, 2409-2463 (2001).
http://www.princeton.edu/~wbialek/learning_links.html

Predictive information is significant.
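To make the -log2(p) measure concrete, a sketch (probabilities invented):

    # Surprisal in bits: improbable events are informative, highly
    # predictable events are not.
    from math import log2

    def surprisal_bits(p):
        return -log2(p)

    print(surprisal_bits(0.5))    # 1.0 bit: a fair coin flip
    print(surprisal_bits(0.999))  # ~0.001 bits: the expected thing happened
    print(surprisal_bits(0.001))  # ~10 bits: an expectation violation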


o "Nothing can *seem* jerky, except what is *represented* as jerky."

Several pertinent and interesting-looking papers are available here:

http://nivea.psycho.univ-paris5.fr/

But I have not had time yet to read them.

-- Michael


Traveler

Aug 7, 2006, 9:26:40 PM
On 7 Aug 2006 13:50:39 -0700, "Charlie" <cmoe...@aol.com> wrote:

>I will grant that the quoted text argues the opposite viewpoint from
>mine, but what is the point? What would be gained from taking that
>restricted (and in my opinion, convoluted and overly complex) view?
>
>The simplicity of it is that humans are capable of apprehending both
>discrete schemes AND continuums (hence calculus), whereas digital
>machines can only discern the discrete.

That's funny. Calculus equations are regularly performed with great
success on 100% discrete computers. What makes you think that calculus
implies the existence of continuity? I take the opposite view: that
humans are, in fact, incapable of apprehending the continuous for the
simple reason that continuity (infinite divisibility) is bunk. Why is
it bunk? Because it requires an infinite regress, as simple as that.
Continuity is one of the worst blunders in the history of modern
science, on a par with the flat earth hypothesis. The sooner we get rid
of that false doctrine, the better off we will be. One man's opinion,
of course.
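Whatever one makes of the metaphysics, the narrow technical point is easy
to illustrate: a finite, discrete machine approximates derivatives and
integrals to any practical accuracy. A sketch:

    # Calculus on a 100% discrete machine: a finite difference for the
    # derivative and a Riemann sum for the integral.
    from math import sin, cos, pi

    h = 1e-6
    deriv = (sin(1.0 + h) - sin(1.0)) / h   # approximates cos(1.0)
    print(deriv, cos(1.0))                  # agree to about 6 digits

    n = 100000
    dx = pi / n
    integral = sum(sin(i * dx) for i in range(n)) * dx
    print(integral)                         # ~2.0, the integral of sin on [0, pi]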

Michael Olea

Aug 7, 2006, 10:51:44 PM
N wrote:

>
> Michael Olea wrote:
>
>> o "Nothing can *seem* jerky, except what is *represented* as jerky."
>>
>> Several pertinent and interesting-looking papers are available here:
>>
>> http://nivea.psycho.univ-paris5.fr/
>>
>> But I have not had time yet to read them.

I missed the chance to quote "full of sound and fury, signifying nothing".

> reminds me I must do more on colour - at the moment there's
> broad distinctions between social tags (symbolic meaning) and
> psychological research. (Pretty sure some eeegit'll cum along
> to say whats's 'normal' but in the mean time weeeer al' free here?)

You might be interested in "Berlin & Kay's anthropological data on color
naming". Fuzzy sets. Signals and symbols...

> (ps - hurry up an spread some of them special genes of yours :)

Ahaha. :)

TODO:
...
o debug Bayesian model of VI schedules
o practice Wes Montgomery's solo in "4 on 6"
o send thank-you note to James and Linda
o replicate
o clean bathroom
o oh, and we're almost out of cilantro and chorizo
...

-- Michael

Michael Olea

Aug 8, 2006, 12:58:13 AM
bob the builder wrote:

> I guess you don't disagree here that computers can't simulate continuous
> time.

What would it mean to "simulate continuous time"? Infinite resolution? Does
an LCR circuit simulate continuous time? How about the ntpd daemon?

> ... But instead you don't agree that continuity has to be
> incorporated / is necessary for a (human-like) artificial mind, because
> of the smoothness-illusion thing described below.

Continuity is certainly a hypothesis that is entertained. Sample the world
through, say, a photoreceptor array. Photon counters. Discrete events.
Integration times. Flux. Estimated (perhaps continuous) luminance as a
function of (perhaps continuous) time. Spatiotemporal (continuous, maybe,
or maybe not) fluctuations in luminance, an image, sampled and discrete, or
a video, sampled and discrete (thank you, Nyquist), the stimulus "effects"
of an underlying "cause" - a scene, a world of (perhaps continuous) things
bathed in and reflecting sources of illumination. Nothing prevents a
discrete system from entertaining hypotheses of continuity, or even simply
assuming continuity and acting accordingly. Nothing prevents a discrete
system from "experiencing" continuity.
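To make the sampling point concrete, a sketch (Python; a 5 Hz sine sampled
at 100 Hz, well above its 10 Hz Nyquist rate; the numbers are arbitrary):

    # Discrete samples of a band-limited "continuous" signal contain
    # everything needed to reconstruct it (Shannon's sampling theorem).
    from math import sin, pi

    f_sig, f_samp = 5.0, 100.0
    samples = [sin(2 * pi * f_sig * n / f_samp) for n in range(100)]

    def sinc(x):
        return 1.0 if x == 0 else sin(pi * x) / (pi * x)

    # Whittaker-Shannon reconstruction at a time between sample points
    # (approximate here, since the window of samples is finite):
    t = 0.0137
    recon = sum(s * sinc(t * f_samp - k) for k, s in enumerate(samples))
    print(recon, sin(2 * pi * f_sig * t))   # close agreement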



> I'm sure the mind gives us a false impression of the world outside.
> Many things we see in the world are generated by our own minds. I'm
> currently staring at my computer screen under the illusion that it is a
> continuously changing screen. I don't see the individual pictures my
> screen shows at high speed.

So the experience of continuity does not imply actual continuity. Neither
does it imply that the mechanisms mediating the experience are themselves
continuous.

> But all this doesn't mean that the brain is not a continuous system. ...

What would it mean for the brain to be a "continuous system"? What sort of
experiment, if only the technology were available, would discriminate
between brain as "continuous system" and otherwise?

What do we know about brain processes? Neurotransmitters and neuromodulators
bind to receptors. Discrete events. Action potentials are generated.
Discrete events. And while there are neurons (with short axons) that respond
to stimuli with a "graded response" such responses are mediated by
subcellular reaction cascades. Discrete events. Vesicles, packets of
molecules, are released. Discrete events. Membrane channels open and close.
Discrete events. Phosphorylation and other reaction cascades occur within
cells, genes are upregulated and downregulated. Discrete events. Certainly
it can be mathematically convenient to work in terms of "fields" conceived
of as continuous - em fields, or the wave equation of qm, for example, but
such fields are also present in any "discrete" device.

> ... It
> only means that some high-level description of the mind doesn't need
> continuity; that a symbolic description doesn't need / can handle
> continuity.

So if a "symbolic" system can "handle continuity" what advantage is there to
the claim that "the brain is continious system"? What does it even mean?
What predictions can be derived from the claim?

-- Michael


Michael Olea

Aug 8, 2006, 1:27:02 AM
Charlie wrote:

> I will grant that the quoted text argues the opposite viewpoint from
> mine, but what is the point?

To understand how "the mind" works, and to reproduce aspects of such
workings.

> What would be gained from taking that
> restricted (and in my opinion, convoluted and overly complex) view?

How is the view "restricted"? I will grant that it is metaphorical,
hypothetical, and suggestive, rather than rigorous. It seems to me that
"Society of Mind" is meant to be suggestive, to spur imagination - a very
different sort of book than the theorem-and-proof style of "Perceptrons".
And in that goal it has had some success (I know of a proprietary and
confidential approach to scene analysis largely inspired by SOM).

But it is your view that seems to be "restrictive" in its failure to
entertain alternatives. Convoluted? Things need not be as they seem -- how
"complex"!

> The simplicity of it is that humans are capable of apprehending both
> discrete schemes AND continuums (hence calculus), whereas digital
> machines can only discern the discrete.

Yes, that is indeed simplistic. Not to mention wrong.

-- Michael

Michael Olea

Aug 8, 2006, 1:54:35 AM
Traveler wrote:

> Why come up with a convoluted explanation for something that is rather
> simple to understand? We sense a continuum in time and space because
> all we "feel" is the spiking activity of some of our neurons. We
> don't notice the non-active, in-between intervals because we can only
> sense spikes. Our conscious awareness is thus trapped in a false,
> illusory continuity that not even Einstein could escape.

Either that or your watch has stopped.

"Non events", such as neuron x not firing during interval T can be every bit
(so to speak) as informative as "events", such as neuron x firing during
interval T.

Convolution is, of course, critical to any cogent characterization of
retinal ganglia.

"We sense a continuum int time and space because all we "feel" are the
spiking actitivity of some of our neurons."

Ad hoc, circular explanatory fiction. Might as well have said "we sense a
continuum in time and space because we sense a continuum in time and
space".

"We don't notice the non-active, in-between intervals because we can only
sense spikes."

No more informative than:

"We don't notice the non-active, in-between intervals because we don't
notice the non-active, in-between intervals".

"Our conscious awareness is thus trapped in a false, illusory continuity
that not even Einstein could escape."

Either that or your watch has stopped.

-- Michael

Why babbling is bad and what we can do to ridicule it:
http://www.rebelscience.org/Cosas/Reliability.htm

Traveler

Aug 8, 2006, 9:19:08 AM
On Tue, 08 Aug 2006 05:54:35 GMT, Michael Olea <ol...@sbcglobal.net>
wrote:

Yeah, I get it. ahaha... Once an ass kisser, always an ass kisser.
ahahaha...

Charlie

Aug 8, 2006, 11:00:51 AM

Continuity is a superior concept and reality compared to discrete
schemes.

Discretes are placed in/on a continuum, which can accommodate any
number of discrete systems. A continuum, however, can't be located
within a discrete entity.

Discrete concepts were easier to understand, used early for counting
livestock, bushels and bales.

The continuum, characterized much later in Newton's calculus, provided
the canvas and perspective for where discretes ranked in the overall
system.

Charlie

Aug 8, 2006, 11:20:50 AM


Computers:

1. may solve symbolic formulae algorithmically (e.g., rearrange and
substitute according to rules) without knowing what the symbols mean
(Chinese Room effect), and

2. may output a numerical result given a calculus problem but it is
more often only an approximation.

Louis, if we are "incapable of apprehending the continuous," how is it
that we are discussing the point?

bob the builder

Aug 8, 2006, 2:43:50 PM
Michael Olea wrote:
> bob the builder wrote:
>
> > I guess you don't disagree here that computers can't simulate continuous
> > time.
>
> What would it mean to "simulate continuous time"? Infinite resolution? Does
> an LCR circuit simulate continuous time? How about the ntpd daemon?

simulate -> to come close enough to capture the important features. I
guess computers can get close enough. Physics engines, for example, can
display some very convincing situations. I guess computers can do the
same for mind-engines.

> > ... But instead you don't agree that continuity has to be
> > incorporated / is necessary for a (human-like) artificial mind, because
> > of the smoothness-illusion thing described below.
>
> Continuity is certainly a hypothesis that is entertained. Sample the world
> through, say, a photoreceptor array. Photon counters. Discrete events.
> Integration times. Flux. Estimated (perhaps continuous) luminance as a
> function of (perhaps continuous) time. Spatiotemporal (continuous, maybe,
> or maybe not) fluctuations in luminance, an image, sampled and discrete, or
> a video, sampled and discrete (thank you, Nyquist), the stimulus "effects"
> of an underlying "cause" - a scene, a world of (perhaps continuous) things
> bathed in and reflecting sources of illumination. Nothing prevents a
> discrete system from entertaining hypotheses of continuity, or even simply
> assuming continuity and acting accordingly. Nothing prevents a discrete
> system from "experiencing" continuity.

I agree. I only want to stress the importance of continuity. You don't
have to go all the way, but I think it's a mistake to label it as
unimportant.

> > I'm sure the mind gives us a false impression of the world outside.
> > Many things we see in the world are generated by our own minds. I'm
> > currently staring at my computer screen under the illusion that it is a
> > continuously changing screen. I don't see the individual pictures my
> > screen shows at high speed.
>
> So the experience of continuity does not imply actual continuity. Neither
> does it imply that the mechanisms mediating the experience are themselves
> continuous.

Indeed, experience of X doesn't imply X. If you see 'the mediating
mechanisms' as real brain tissue, then they are continuous. If you think
the 'mechanisms' should be some symbolic description, then no, they aren't
continuous.

> > But all this doesn't mean that the brain is not a continuous system. ...
>
> What would it mean for the brain to be a "continuous system"? What sort of
> experiment, if only the technology were available, would discriminate
> between brain as "continuous system" and otherwise?

'Continuous' and 'discrete' have meaning in mathematics, not in the real
world. I see my computer as a discrete system because I can understand
it best when I see it as a discrete system. I could see it as a
continuous system, but that would make things harder to understand, so I
don't.

> What do we know about brain processes? Neurotransmitters and neuromodulators
> bind to receptors. Discrete events. Action potentials are generated.
> Discrete events. And while there are neurons (with short axons) that respond
> to stimuli with a "graded response" such responses are mediated by
> subcellular reaction cascades. Discrete events. Vesicles, packets of
> molecules, are released. Discrete events. Membrane channels open and close.
> Discrete events. Phosphorylation and other reaction cascades occur within
> cells, genes are upregulated and downregulated. Discrete events. Certainly
> it can be mathematically convenient to work in terms of "fields" conceived
> of as continuous - em fields, or the wave equation of qm, for example, but
> such fields are also present in any "discrete" device.
>
> > ... It
> > only means that some high-level description of the mind doesn't need
> > continuity; that a symbolic description doesn't need / can handle
> > continuity.
>
> So if a "symbolic" system can "handle continuity" what advantage is there to
> the claim that "the brain is continious system"? What does it even mean?
> What predictions can be derived from the claim?

I think it's best to see the brain as a continuous system. Because of
this I think that AI can also best be seen as a continuous system.
Because a "symbolic" system can best be seen as discrete, it's less
likely to be of any use.

Traveler

Aug 8, 2006, 11:18:04 PM
On 8 Aug 2006 08:20:50 -0700, "Charlie" <cmoe...@aol.com> wrote:

>Computers:
>
>1. may solve symbolic formulae algorithmically (e.g., rearrange and
>substitute according to rules) without knowing what the symbols mean
>(Chinese Room effect), and
>
>2. may output a numerical result given a calculus problem but it is
>more often only an approximation.

That's because the computer, like the universe, is finite and
discrete. Contrary to popular wisdom, the discreteness of the universe
is the reason that it is necessarily probabilistic. Since exactness is
out of the question for certain computations, nature is forced to rely
on the only possible solution, probability. This is the reason for the
probabilistic half-life of certain subatomic particles. I'll tell you
a little-known secret: it is possible to deduce from the discreteness
of nature that there is only one speed in the universe, and that's the
speed of light. Nothing can move faster or slower!

>Louis, if we are "incapable of apprehending the continuous," how is it
>that we are discussing the point?

What the hell are you getting at? All I am saying is that one cannot
understand the nature or even the existence of something that cannot
possibly exist, given its definition.

Michael Olea

Aug 9, 2006, 4:07:09 AM
bob the builder wrote:

> Michael Olea wrote:
>> bob the builder wrote:

>> > I guess you don't disagree here that computers can't simulate continuous
>> > time.
>>
>> What would it mean to "simulate continuous time"? Infinite resolution?
>> Does an LCR circuit simulate continuous time? How about the ntpd daemon?

Hi, Bob.



> simulate -> to come close enough to capture the important features. I
> guess computers can get close enough. Physics engines, for example, can
> display some very convincing situations. I guess computers can do the
> same for mind-engines.

Let's back up a little bit. People don't "simulate continuous time". They
respond to events that occur in time (which may or may not be continuous,
an open empirical question; I doubt that the viability of AI will depend
on what happens at Planck scales). Curt's challenge was "Can you name a
single function, that we know as a fact that the brain is performing, that
we know is impossible to duplicate with a digital system?" I would have
phrased it differently. Can you name a single thing people do that is
provably impossible for a digital system to do?

Charlie's response to Curt did not answer the challenge, either way you
phrase it. His first statement is not only unsubstantiated, but, as it
stands, meaningless:

"We know that simulating continuous time (an actual continuum) is impossible
for any digital machine."

There are at least 3 issues:
1) what does it mean to "simulate" time?
2) what distinguishes simulation of discrete time from simulation of
continuous time?
3) what, if anything, does this have to do with human behavior?

Your statement above, "come close enough to capture the important features",
doesn't help much. What are the important features of time? Well, it has a
direction (ghosts of Christmas present, future, and past). So we have a
partial order over events at a given location in space.

I've written a few simulators. One was a discrete event simulator for a
"jukebox" with 4 spindles, each of which could be loaded with an optical
disk (aka "platter", each of which held about 200,000 images) and a "rack"
capable of holding about 30 platters (this was, I think, around 1990). The
jukebox would service requests for images. The requested image might be on
a platter already loaded on a spindle, or on a platter in the rack. There
are a number of times involved: mean time to fetch an image from a loaded
platter (and the variance of that time), mean and variance of the time to
fetch an image from a platter on the rack, broken down into steps:
spin-down time for a platter on a spindle (if all spindles were occupied),
mean and variance of the time to move the arm to a location on the rack
(which would depend on the location of the platter and the location of the
arm), rack to spindle transit time, platter spin-up time, and the
aforementioned image fetch-from-spun-up-platter time. So much for the
response of the jukebox to its inputs. Now we have to model the inputs.
This was a finite number of discrete sources of image fetch requests, each
source a stochastic process, generating requests at a mean rate, with some
variance (and other features, the "higher moments" of a distribution --
e.g. "long tails").

This was a simulator, obviously, because it did not actually carry out the
actions involved in fetching images. And it simulated time because the time
it took to run the simulator was nothing like the time it would take to run
the real process. It could simulate a week's worth of image requests in,
oh, as I recall, about 30 seconds. The idea was to characterise the amount
of time it would take to service requests, mean, variance, extreme values,
for example, depending on: the number of jukeboxes, the positioning
strategy of the arm (e.g. leave it at the last requested slot, or move it
to the center of the rack), the arrangement of platters on the rack as they
move between spindle and rack, the pattern of service requests, and so on.
And how much buffer space is needed to hold outstanding requests (a little
queueing theory)? How many jukeboxes do we need to handle peak and average
load? The point was to guide decisions.

Now, do you think I modeled time as discrete or continuous? Of course I
modeled it as continuous. Does that mean I could represent an interval of
time with infinite precision? Of course not. Any measurement of time has
some finite resolution. That in itself says nothing one way or another
about whether time is *conceived* of as continuous or discrete. By the way,
was this simulator "symbolic" or not? Does the answer shed any light on how
it works?
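Here, for what it's worth, is a minimal sketch of a simulator in that
style, with time modeled as a continuous float (the jukebox is reduced to
a single server, and the rates and service times are invented):

    # Minimal discrete-event simulation: Poisson request arrivals into
    # a single-server queue, time as a continuous float. All numbers
    # are invented; the real model had spindles, racks, arm moves, etc.
    import random
    random.seed(1)

    arrival_rate = 2.0     # mean requests per second
    service_mean = 0.4     # mean seconds per image fetch

    t, busy_until, waits = 0.0, 0.0, []
    for _ in range(100000):
        t += random.expovariate(arrival_rate)   # next request arrives
        start = max(t, busy_until)              # wait if the drive is busy
        waits.append(start - t)
        busy_until = start + random.expovariate(1 / service_mean)

    print(sum(waits) / len(waits))   # mean queueing delay, ~1.6 s (M/M/1)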

Now, do people "simulate continuous time"? Maybe. You can play out what-if
scenarios "in your head", just as the jukebox simulator was one big what-if
scenario. Of course, if people could do this with anything like the
precision of the discrete event jukebox simulator sketched above, there
would be no call for writing a program like that.

Skipping over Charlie's unsubstantiated and mostly meaningless statements
about *why* "any digital machine" cannot simulate continuous time, and
moving on to his second claim, also unsubstantiated and in fact
meaningless:

"We also know that continuity is precisely the way that most humans think
about and experience ongoing existence."

Again there are several issues. For one, he has changed the topic from
"simulating time" to "think about and experience ongoing existence". What
do these statements mean? And how is it we "know" they are true? How is it
we come to know, for example, anything about the way that one human, let
alone most of them, thinks about time (setting aside for the moment
"ongoing existence")? I can only imagine 3 ways (maybe you can imagine
others): 1) by the statements they make about time, 2) by the way they
respond to events, 3) by generalizing from our own personal experience.

So, to answer Curt's challenge, Charlie would have to come up with a
statement about time that a human can make that a machine, by virtue of
being "digital", cannot make. Got any candidates? Or he would have to come
up with a response to an event that a human can make that a machine, by
virtue of being "digital", cannot make. Got any candidates? Or he would
have to come up with a history of events experienced by a human, leading
the human to make a statement about another human, that, exposed to the
same history, a machine, by virtue of being "digital", cannot make. Got any
candidates?

Then Charlie goes on to make an "ask anyone" argument:

"If there is a slight or great discontinuity such as sleep or a period of
unconsciousness, the human is invariably aware of it at his next waking
moment."

Not only does he provide no evidence at all for this claim, he does not in
any way relate it to the thesis that humans can do things that machines, by
virtue of being "digital", cannot. Then he goes way off the deep end:

"Just ask anyone."

Doh!

Charlie has not answered Curt's challenge. Instead he has done exactly what
Marvin Minsky said: he has provided "a good example of commonsense thinking
-- and how it can go completely wrong."

>> > ... But instead you don't agree that continuity has to be
>> > incorporated / is necessary for a (human-like) artificial mind, because
>> > of the smoothness-illusion thing described below.

>> Continuity is certainly a hypothesis that is entertained. Sample the
>> world through, say, a photoreceptor array. Photon counters. Discrete
>> events. Integration times. Flux. Estimated (perhaps continuous) luminance
>> as a function of (perhaps continuous) time. Spatiotemporal (continuous,
>> maybe, or maybe not) fluctuations in luminance, an image, sampled and
>> discrete, or a video, sampled and discrete (thank you, Nyquist), the
>> stimulus "effects" of an underlying "cause" - a scene, a world of
>> (perhaps continuous) things bathed in and reflecting sources of
>> illumination. Nothing prevents a discrete system from entertaining
>> hypotheses of continuity, or even simply assuming continuity and acting
>> accordingly. Nothing prevents a discrete system from "experiencing"
>> continuity.

> I agree. I only want to stress the importance of continuity. You don't
> have to go all the way, but I think it's a mistake to label it as
> unimportant.

Note that neither Marvin nor I have said anything one way or the other about
the importance of the *concept* of continuity. What Marvin pointed out was
that the subjective experience of continuity says nothing at all about the
continuity of the processes that mediate that experience.

Now, if you want to "stress the importance of continuity" you have to do
more than assert it, you have to demonstrate it. And you have to give some
meaning to "you dont have to go all the way", some measure of how far you
have gone, and why that is or is not far enough.



>> > I'm sure the mind gives us a false impression of the world outside.
>> > Many things we see in the world are generated by our own minds. I'm
>> > currently staring at my computer screen under the illusion that it is a
>> > continuously changing screen. I don't see the individual pictures my
>> > screen shows at high speed.

>> So the experience of continuity does not imply actual continuity. Neither
>> does it imply that the mechanisms mediating the experience are themselves
>> continuous.

> Indeed, experience of X doesn't imply X. If you see 'the mediating
> mechanisms' as real brain tissue, then they are continuous. If you think
> the 'mechanisms' should be some symbolic description, then no, they aren't
> continuous.

There are several problems, in my opinion, of course, with this response:

1) Tissue is not "continuous", it is made up of atoms, which are mostly,
according to widely accepted physics, empty space. At the scales of
interest in neuroscience it is often convenient to treat tissue as
continuous, but at scales of interest in molecular biology it is not.

2) The continuity of brain tissue is no more germane than the continuity of
chips and wire. It is the processes that take place in tissues and chips
that is at issue.

3) Brain tissue is not, in my usage above or below, a "mediating mechanism";
it is a substrate within which mechanisms, processes, occur. Those
processes, to the extent they are known, are composed of discrete events;
binding events, for example.

4) A symbolic description is not a "mediating mechanism". An algorithm
could serve as a functional-level description of a mechanism, a process.
There is the added issue of how reasonable an approximation, at its level,
such a description is of what the hardware level implements; that is, do
the dynamics of the underlying physical system really implement an
algorithm? In known cases, they do. So, staying at the algorithm level of
description, any algorithm manipulates the values of variables. What makes
an algorithm "symbolic" or not? Is it that the variables, which necessarily
have limited range and precision, are treated as continuous or discrete? So
if an algorithm acts like an HH neuron, that is, acts as a set of partial
differential equations relating continuous variables, is it therefore less
"symbolic" than an algorithm acting more like a real neuron, with discrete
difference equations relating discrete events, like the opening and closing
of membrane channels, and the transport in and out of the cell of a
discrete number of ions?
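To make that contrast concrete, here is a leaky integrate-and-fire neuron
(a standard simplification, not HH) written as a discrete difference
equation: a nominally continuous membrane potential, updated in finite
steps, emitting discrete spike events. The parameters are generic, not
fitted to any real cell:

    # Leaky integrate-and-fire neuron via a discrete (Euler) update.
    dt, tau = 0.1, 10.0            # ms
    v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
    current = 0.15                 # constant input drive
    v, spikes = v_rest, []

    for step in range(1000):       # 100 ms of simulated time
        v += dt * (-(v - v_rest) / tau + current)
        if v >= v_thresh:          # a discrete event from a "continuous" variable
            spikes.append(step * dt)
            v = v_reset

    print(len(spikes), "spikes in 100 ms")   # ~9 with these numbers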

I'm sorry, but this whole symbolic vs discrete dichotomy is in a sorry state
of conceptual confusion. And it does nothing to illuminate the supposed
limitations of "digital" machines, or "symbolic" AI.



>> > But all this doesn't mean that the brain is not a continuous system. ...

>> What would it mean for the brain to be a "continuous system"? What sort
>> of experiment, if only the technology were available, would discriminate
>> between brain as "continuous system" and otherwise?

> 'Continuous' and 'discrete' have meaning in mathematics, not in the real
> world. I see my computer as a discrete system because I can understand
> it best when I see it as a discrete system. I could see it as a
> continuous system, but that would make things harder to understand, so I
> don't.

Having studied a little functional analysis (Banach spaces, Hilbert spaces,
etc.) and some basic point-set topology (it's a beautiful day in the
depleted neighborhood) I am familiar with typical mathematical usage of the
terms "continuous" and "discrete". The claim that the terms have no meaning
in the "real world" (as opposed, I guess, to the complex plane) is not at
all clear. Is spacetime quantized or not? Does it make physical sense to
talk about events separated by less than the Planck time or not? How can
you answer that question one way or the other without ceding the "real
world" meaningfulness of the terms? Or can you dismiss the question itself
as meaningless? If so, then on what basis?

As to whether or not you want to think of computers as discrete or
continuous systems it's really a matter of the purpose at hand. Since most
of the time I am concerned with developing software, not hardware
components, I usually find it convenient to think in terms of discrete
events. Sometimes though, I have been known to measure voltage levels. None
of which has the slightest bearing on Curt's (revised by me) challenge:
demonstrate something that humans can do that computers, by virtue of being
"digital" machines, cannot do.



>> What do we know about brain processes? Neurotransmitters and
>> neuromodulators bind to receptors. Discrete events. Action potentials are
>> generated. Discrete events. And while there are neurons (with short axons)
>> that respond to stimuli with a "graded response" such responses are
>> mediated by subcellular reaction cascades. Discrete events. Vesicles,
>> packets of molecules, are released. Discrete events. Membrane channels
>> open and close. Discrete events. Phosphorylation and other reaction
>> cascades occur within cells, genes are upregulated and downregulated.
>> Discrete events. Certainly it can be mathematically convenient to work in
>> terms of "fields" conceived of as continuous - em fields, or the wave
>> equation of qm, for example, but such fields are also present in any
>> "discrete" device.

>> > ... It
>> > only means that some high-level description of the mind doesn't need
>> > continuity; that a symbolic description doesn't need / can handle
>> > continuity.

>> So if a "symbolic" system can "handle continuity" what advantage is there
>> to the claim that "the brain is continious system"? What does it even
>> mean? What predictions can be derived from the claim?

> I think it's best to see the brain as a continuous system. ...

But you do not say *why* you think this is "best", or even what it means to
see it one way or the other.

There is another silly debate, a false dichotomy as ludicrous as symbolic vs
whatever-is-taken-to-be-not-symbolic: and that is the question of whether
brains, or more generally, neurobiological systems, are best seen as
"computational systems" or "dynamical systems". The answer is that the
question is stupid: both points of view, among others, are useful in
understanding how neurobiological systems mediate behavior.

What does it mean to view brains as dynamical systems? It means that the
methods of dynamics, identifying trajectories in phase space, basins of
attraction, limit cycles, fixed points, attractor networks, strange
attractors, separatrices, chaos, stability, saddle points, bifurcations,
phase transitions, sensitivity to boundary conditions and forcing
functions, for example, are not only applicable to the quantitative and
qualitative description of what brains do, but provide insight into why
they do it, and how what they do relates to the overt behavior of whole
organisms.
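For instance, the basic bookkeeping on a toy one-dimensional system (a
textbook example, not a neural model) looks like this:

    # Fixed points and stability of dx/dt = f(x) = x(1 - x):
    # f(x*) = 0 at x* = 0 and x* = 1; the sign of f'(x*) classifies them.
    def f(x):
        return x * (1.0 - x)

    def fprime(x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2 * h)

    for x_star in (0.0, 1.0):
        kind = "stable (attractor)" if fprime(x_star) < 0 else "unstable"
        print(x_star, kind)        # 0.0 unstable, 1.0 an attractor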

And what does it mean to view brains as computational systems? It means that
the activities of individual neurons, populations of neurons, and neural
circuits, can be understood as 1) representing variables, and 2) operating
on such variables. The variables can be scalars, vectors, vector fields,
and functions, just to name those that are easy to identify. And the
operations include addition, subtraction, multiplication, division,
integration, taking derivatives, convolution, and projection onto basis
functions, again, just to name some of those operations that are relatively
easy to identify. It further means that viewing neural activity in this way
leads to insight into how such activity relates to overt behavior. By
"insight", here and above, I mean, among other things, the generation of
testable predictions.

Got any?

> ... Because of
> this I think that AI can also best be seen as a continuous system.
> Because a "symbolic" system can best be seen as discrete, it's less
> likely to be of any use.

So, no offense, Bob, but is it at least a little clearer now why I see these
statements as so much meaningless babble?

-- Michael


J.A. Legris

Aug 9, 2006, 9:26:44 AM

Regarding Curt's (revised) challenge: demonstrate something that humans
can do that computers, by virtue of being "digital" machines, cannot
do.

Right back at ya: demonstrate something that humans can do that
computers can also do.

Of course, there are many things, but all are in the abstract realm of
information processing: calculating, storing, sorting, etc. When it
comes to mechanical, chemical and biological interactions with the
world, computers cannot do any of the things that humans can do, nor
those of any other physical system. Moreover, in this messy, concrete
world, computers cannot be relied upon to do even what other computers
do. That's why we need customer support.

So what? Big deal. AI rests on the assumption that intelligence can be
had through pure information processing.

At least it used to. Now we agree that intelligence needs the role of
the environment plus pure information processing. The environment
produces more information than we could ever hope to duplicate
artificially so we "import" the whole thing. Fine. But what about the
body? It's another source of extremely complex information. Looks like
we had better import that too. So much for multiple realisability.

Here's another route to the same conclusion. Why am I writing this post
at all? The company of strangers? The chance of a party on the beach?
Of course - to get reinforcers. Suppose we build two intelligent
beings: one is a robot made of metal, plastic and ceramics that has
been programmed to crave reinforcers like a human. The other is an
enhanced super-smart human that both craves and depends on reinforcers
for survival. Each has the open-ended ability to acquire new knowledge
(using off-line storage perhaps). Sooner or later the robot is going to
discover that many of its reinforcers are just place-holders. For
example, it seeks out the company of others even though it is perfectly
capable of surviving on its own. Inevitably, its behaviour will adjust
to this fact and "company seeking" will recede to some low probability.
The same goes for the rest of the faked reinforcers and in the limit
the robot will behave like a non-human robot. We might expect it to
eventually lose interest in many of the things that humans care about
(including self-improvement?) and to become unworthy of the term
intelligent, which is an attribute of organisms, not robots.

--
Joe Legris

mulder....@gmail.com

Aug 9, 2006, 4:18:33 PM
min...@media.mit.edu wrote:

> The power of consciousness comes not
> from ceaseless change of state, but from having enough stability to
> discern significant changes in your surroundings. To "notice" change
> requires the ability to resist it, in order to sense what persists
> through time, but one can do this only by being able to examine and
> compare descriptions from the recent past. We notice change in spite
> of change, and not because of it.

I agree that this is an interesting model to explain what our memories
do: when our short-term memories can accurately predict a situation, we
start to understand the world, form concepts and cause-effect
relationships. It is also when our predictions are easy to make that
we start to get bored by a situation and that our higher-level
reflective agents try to change their goals.
Of course memories and learning are closely related. And what I find
interesting is that when we are able to play we hardly notice that
time passes! For example, when I am programming, and am learning
about a new abstraction of a problem, I hardly know which parts of the
design will work in the first place. Only much later do I get an
impression of which mistakes were necessary to solve my problem. My
point is, before I try to resist change, I gratefully try to experience
change. So, there are times when we read from our memories (resisting
change), but equally there are times when we write to our memories
(trying to make mistakes).

Michael Olea

unread,
Aug 9, 2006, 4:18:01 PM8/9/06
to
J.A. Legris wrote:

> Regarding Curt's (revised) challenge: demonstrate something that humans
> can do that computers, by virtue of being "digital" machines, cannot
> do.

> Right back at ya: demonstrate something that humans can do that
> computers can also do.

Catch on fire.

The claim is that there are things humans can do that we know for a fact no
discrete system can do, *because* it is a discrete system. The challenge is
to prove that claim. That challenge has not been met. Maybe it should have
been "computers will never be able to get a driver's license as long as
they are painted blue".

> Of course, there are many things, but all are in the abstract realm of
> information processing: calculating, storing, sorting, etc. When it
> comes to mechanical, chemical and biological interactions with the
> world, computers cannot do any of the things that humans can do, nor

> those of any other physical system. ...

That's not true. They can catch on fire. And they can crush walnuts. They
are physical systems. What else could they be?

> ... Moreover, in this messy, concrete
> world, computers cannot be relied upon to do even what other computers
> do. That's why we need customer support.

Computers have a long way to go to achieve the reliability of, say, a VW
bug. Even so, there are a couple of rovers on Mars that are pretty
impressive in what they have achieved, despite the file system bug that
crippled Spirit for a couple of weeks.

People also need support.



> So what? Big deal. AI rests on the assumption that intelligence can be
> had through pure information processing.

I don't think intelligence is a particularly useful term. But here is a
brief statement of Pei Wang's working definition:

"In the context of AI, the concept 'intelligence' should be understood as
the general-purpose capability of adaptation to the environment when
working with insufficient knowledge and resources. Concretely, it means
that the system must only assume finite time-space supply, always open to
new tasks, process them in real time, and learn from its own experience"

-- "Artificial Intelligence: What it is, and what it should be"

He goes into more depth on his rationale in:

"Computation and Intelligence in Problem Solving"

http://www.cogsci.indiana.edu/farg/peiwang/papers.html

> At least it used to. Now we agree that intelligence needs the role of
> the environment plus pure information processing. The environment
> produces more information than we could ever hope to duplicate
> artificially so we "import" the whole thing.

I believe the point is not that environments have to be "imported" but that
optimal strategies depend on the environment. So a machine, like an
organism, has to be adapted to the environment in which it operates.

> Fine. But what about the
> body? It's another source of extremely complex information. Looks like
> we had better import that too. So much for multiple realisability.

Multiple realisability of what? I don't much doubt that to build a machine
that could exactly reproduce human behavior you would in fact have to build
a human, but that does not mean you can't build a machine that can drive a
car, or forecast the weather, or ride a motorcycle (or govern california).

> Here's another route to the same conclusion. Why am I writing this post
> at all? The company of strangers? The chance of a party on the beach?
> Of course - to get reinforcers. Suppose we build two intelligent
> beings: one is a robot made of metal, plastic and ceramics that has
> been programmed to crave reinforcers like a human. The other is an
> enhanced super-smart human that both craves and depends on reinforcers
> for survival. Each has the open-ended ability to acquire new knowledge
> (using off-line storage perhaps). Sooner or later the robot is going to
> discover that many of its reinforcers are just place-holders. For
> example, it seeks out the company of others even though it is perfectly
> capable of surviving on its own. Inevitably, its behaviour will adjust
> to this fact and "company seeking" will recede to some low probability.
> The same goes for the rest of the faked reinforcers and in the limit

> the robot will behave like a non-human robot. ...

Sounds like one of my ex girlfriends.

When you say "Inevitably, it's behaviour will adjust to this fact and
"company seeking" will recede to some low probability" doesn't that imply
that the machine in fact has some "bona fide" priorities, as opposed to the
"fake" ones "programmed" in? Isn't it also true that people cannot be
"programmed" with "fake" reinforcers, that in fact any such programming has
to be in terms of consequences of genuine significance? When I was taking
an introductory psychology class the movie "A Clockwork Orange" was popular
(at least I liked it). The professor remarked that "that was just a badly
designed behavior mod program".

> ... We might expect it to
> eventually lose interest in many of the things that humans care about
> (including self-improvement?) and would become unworthy of the term
> intelligent, which is an attribute of organisms, not robots.

So, are organisms discrete devices? Are blue whales necessarily stupid
(because they are blue)?

-- Michael


Michael Olea

unread,
Aug 9, 2006, 7:15:17 PM8/9/06
to
J.A. Legris wrote:

> Here's another route to the same conclusion. Why am I writing this post
> at all? The company of strangers? The chance of a party on the beach?
> Of course - to get reinforcers. Suppose we build two intelligent
> beings: one is a robot made of metal, plastic and ceramics that has
> been programmed to crave reinforcers like a human. The other is an
> enhanced super-smart human that both craves and depends on reinforcers
> for survival. Each has the open-ended ability to acquire new knowledge
> (using off-line storage perhaps). Sooner or later the robot is going to
> discover that many of its reinforcers are just place-holders. For
> example, it seeks out the company of others even though it is perfectly
> capable of surviving on its own. Inevitably, its behaviour will adjust
> to this fact and "company seeking" will recede to some low probability.
> The same goes for the rest of the faked reinforcers and in the limit
> the robot will behave like a non-human robot. We might expect it to
> eventually lose interest in many of the things that humans care about
> (including self-improvement?) and would become unworthy of the term
> intelligent, which is an attribute of organisms, not robots.

Take two.

Why would a machine "care" about its survival any more than it would "care"
about about the company of others? Why would survival be any less "fake"
than self-improvement, or company seeking? For that matter, why would an
organism "care" about survival, or reproduction? Because of natural
selection? Because if it didn't its kind would go extinct? That's not
really an answer, not a full explanation. My point is that natural selection
preserves designs that do have these "cares" -- that is, organisms that
are constructed in such a way that they strive to survive and reproduce.
But the fact that they do that is a consequence of how they are built, a
consequence of physics, the dynamics of an organic machine. Those machines
that happen to have those attributes, that physics, that dynamics, happen
to be the machines that, under the conditions in which evolution must
occur, come to be more populous.

A machine does what it must. Evolution is not the only way to build a
machine. If a machine "craves reinforcers" it does so because that is the
way it is built. Survival and reproduction are no more fundamental, in that
regard, than sorting and collating. So I don't see anything inevitable
about the scenario above.

-- Michael

J.A. Legris

unread,
Aug 12, 2006, 2:58:15 PM8/12/06
to

Sorry about the delay - I had an order to fill for a big customer.

Why are some stimuli primary reinforcers? One answer is they have to be
non-negotiable - "prime directives" if you will - for the good of
the organism. Otherwise, it would be too easy for combinations of
conditioned reinforcers to override the primary ones, yielding behaviour
analogous to anorexia nervosa or drug addiction - intelligent
behaviour turned back against itself.

So, a machine that is programmed to sort files by virtue of a built-in
susceptibility to the beauty of well-sorted objects (in the role of a
primary reinforcer) may need to have its existence dependent on those
files being in good order, otherwise the temptation to let them sink
into disarray while hanging out at the power outlet may prove to be too
strong to resist.

--
Joe Legris

Michael Olea

unread,
Aug 14, 2006, 1:58:32 AM8/14/06
to
J.A. Legris wrote:

> Sorry about the delay - I had an order to fill for a big customer.

I was busy, too.

> Why are some stimuli primary reinforcers? One answer is they have to be
> non-negotiable - "prime directives" if you will - for the good of
> the organism. Otherwise, it would be too easy for combinations of
> conditioned reinforcers to override the primary ones, yielding behaviour
> analogous to anorexia nervosa or drug addiction - intelligent
> behaviour turned back against itself.
>
> So, a machine that is programmed to sort files by virtue of a built-in
> susceptibility to the beauty of well-sorted objects (in the role of a
> primary reinforcer) may need to have its existence dependent on those
> files being in good order, otherwise the temptation to let them sink
> into disarray while hanging out at the power outlet may prove to be too
> strong to resist.

I think you missed my point.

Why is a rock hard?
Why is dirt soft?
Why does graphite make a good dry lubricant?

Because of the types of, and arrangement of, the atoms that make them up.
Period. Temptation is not an issue. Rocks are not tempted to be soft, dirt
is not tempted to be hard, and graphite is not tempted to be sticky. Nor
are neurons tempted to bind opiates.

When you say a machine is "programmed" with a "susceptibility" that it will be
"tempted" to forsake in favor of other things that are not "programmed" but
"primary", you are falling into a tacit assumption - that "programming" is not
like the structure of a rock, but "survival" (or "juice") is. There is
absolutely no evidence in what you say that a machine must be made of atoms
arranged in such a way that it "craves power" over sorting, any more than
material must be hard rather than soft.

I hope that helps.

-- Michael


J.A. Legris

unread,
Aug 14, 2006, 7:50:46 AM8/14/06
to

You're going to have to pound a little harder, it's still not sinking
in.

How do you account for the asymmetry between primary and conditioned
reinforcers?

--
Joe Legris

Glen M. Sizemore

unread,
Aug 14, 2006, 2:01:33 PM8/14/06
to

"J.A. Legris" <jale...@sympatico.ca> wrote in message
news:1155409094.9...@m73g2000cwd.googlegroups.com...


The distinction between primary and conditioned reinforcement is that
primary reinforcement does not depend on any particular ontogenic history.
So an answer to your first question is that some stimuli are reinforcers
because of natural selection, and these are called "primary" or
"unconditioned" reinforcers. Skinner's notion, and still a viable one, is
that conditioned reinforcers are established via classical conditioning
(i.e., "pairings" with unconditioned reinforcers), and this would provide an
answer to your question concerning conditioned reinforcers usurping too much
behavior. That is, the power of conditioned reinforcers continues to depend
on primary reinforcers. If the primary reinforcer no longer occurs, the
conditioned reinforcers cease to function. If the unconditioned reinforcer
continues to be presented (and "paired" with the conditioned reinforcer)
then the conditioned reinforcer is not really reinforcing "useless"
behavior.
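
In code, that dependence might look something like this - a minimal
sketch using the Rescorla-Wagner update as a stand-in for "pairing"
(my choice of model and numbers, not anything Skinner specified):

alpha = 0.2   # learning rate (arbitrary)
V = 0.0       # strength of the conditioned reinforcer (say, a click)

# Phase 1: the click is repeatedly "paired" with food (asymptote 1.0).
for trial in range(30):
    V += alpha * (1.0 - V)
print("after pairing:    V = %.2f" % V)    # close to 1.00

# Phase 2: the primary reinforcer no longer occurs (asymptote 0.0),
# and the conditioned reinforcer "ceases to function" -- extinction.
for trial in range(30):
    V += alpha * (0.0 - V)
print("after extinction: V = %.2f" % V)    # back near 0.00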

>
> --
> Joe Legris
>


J.A. Legris

unread,
Aug 15, 2006, 9:06:08 AM8/15/06
to

Welcome back. Summer vacation?

OK, you've spelled it out nicely. And primary reinforcers in organisms
all seem to be connected to maintaining the homeostasis of the
organism (as in eating and drinking) and the survival of the species
(as in sex and socializing).

But Michael says that reinforcers need not have any connection to the
"implementation" of the organism to be effective. This follows from the
definition of a reinforcer - it is the functional relationship between
the behaviour and the stimulus that counts. An example is mating
behaviour. A large part of intelligent behaviour (especially human)
seems to be motivated by increased chances of opportunities to mate
but, at least to a first approximation, there is no direct benefit to
an organism by achieving reproductive success - all the benefits accrue
to the heirs. As far as the individual organism is concerned, sex is
evidently sort of a "pure" primary reinforcer - it's just about
insatiable and it has almost no effect other than reinforcement (at
least for males). By my argument above, food should trump sex, but
there is little reason to believe it. Many of us would happily fast for
a few days in exchange for the realization of certain sexual fantasies.


But wait a minute. Food WILL always win out. At the point of starvation
who would give up a bite of a Big Mac for a taste of Honey? Is there
something deeper going on?

--
Joe Legris

Michael Olea

unread,
Aug 15, 2006, 2:25:40 PM8/15/06
to
J.A. Legris wrote:

> But Michael says that reinforcers need not have any connection to the
> "implementation" of the organism to be effective.

I said just the opposite, so that may explain the confusion. You seem to be
mixing two things: an argument against "multiple realizability" with an
argument for the inevitability of robots devoting more resources to their
own survival than to whatever "fake" objectives they were built to achieve
- no matter how they are implemented. It is your second claim I have been
challenging as having no foundation.

I did make a brief comment about "multiple realizability":

"I don't much doubt that to build a machine that could exactly reproduce
human behavior you would in fact have to build a human, but that does not
mean you can't build a machine that can drive a car, or forecast the
weather, or ride a motorcycle (or govern california)."

But the claim I have been challenging as having no foundation is this:

"Sooner or later the robot is going to discover that many of its reinforcers
are just place-holders. For example, it seeks out the company of others
even though it is perfectly capable of surviving on its own. Inevitably,
its behaviour will adjust to this fact and "company seeking" will recede
to some low probability."

Here you are making the assumption that a robot MUST have, call it, a
"survival instinct". The claim that this is "inevitable", made with no
reference to "implementation", amounts to a tacit argument that
reinforcement is independent of "implementation". A robot comes to realise
that it is perfectly capable of surviving on its own, without, say, seeking
the company of others. So what? What, if anything, would *cause* it to act
on that realization? Suppose the robot comes to realise it is perfectly
capable of washing cars without seeking the company of others, does this
mean that "inevitably, it's behaviour will adjust to this fact and "company
seeking" will recede to some low probability"? If not, why not? How is the
claim any different than the one you made? The only difference, it seems to
me, is that no one makes the unfounded tacit assumption that an
"intelligent" robot MUST have a "car washing instinct", but the unfounded
tacit assumption that an "intelligent" robot MUST have a "survival
instinct" (no matter how it is constructed) goes unrecognized.

-- Michael

J.A. Legris

unread,
Aug 15, 2006, 6:52:13 PM8/15/06
to


You're right, I have assumed something akin to a survival instinct.
Without one, it seems that a robot capable of learning would eventually
switch itself off to obtain the biggest reinforcer of all: escape from
those damned contingencies of reinforcement.

--
Joe legris

Michael Olea

unread,
Aug 15, 2006, 8:38:57 PM8/15/06
to
J.A. Legris wrote:

> You're right, I have assumed something akin to a survival instinct.
> Without one, it seems that a robot capable of learning would eventually
> switch itself off to obtain the biggest reinforcer of all: escape from
> those damned contingencies of reinforcement.

Doh! You did it again :-P

"Elementary chaos theory tells us that eventually all robots must run amok;
with the biting and scratching, and maiming and killing. Oy."

-- from "The chronicles of Itchy and Scratchy Land"

Glen M. Sizemore

unread,
Aug 16, 2006, 7:44:43 AM8/16/06
to
JL: Welcome back. Summer vacation?

GS: Hi Joe. There was nothing to get my dander up, and what functions as
reinforcers for my behavior changes often (see below).


JL: OK, you've spelled it out nicely. And primary reinforcers in organisms
all seem to be connected to maintaining the homeostasis of the
organism (as in eating and drinking)[]
organism (as in eating and drinking)[]

GS: This is a Hull/Spence notion (drive reduction) and doesn't hold water.
As Catania pointed out, it is unlikely that any event that is contingent on
behavior is without effect. I don't think that these data have been
published, but Rick Shull reported at an informal meeting the effects of
tossing a small, hard ball into a chamber in which rats were engaging in
food-reinforced lever-pressing; the rats pressed the lever much less (they
were interacting with - playing with - the ball) and this effect increased
over days until the rats spent very little time lever-pressing. One could
speculate that access to the ball was a reinforcer, and a powerful one to
boot. When you get to humans, there are many things that function as
reinforcers - probably primary reinforcers.

JL: []and the survival of the species (as in sex and socializing).

GS: Well, "survival of the species" is a little careless, but it is true
that sex and socializing are reinforcers, but neither these, nor food and
water appear to exhaust the possibilities.

JL: But Michael says that reinforcers need not have any connection to the
"implementation" of the organism to be effective. This follows from the
definition of a reinforcer - it is the functional relationship between
the behaviour and the stimulus that counts. An example is mating
behaviour. A large part of intelligent behaviour (especially human)
seems to be motivated by increased chances of opportunities to mate
but, at least to a first approximation, there is no direct benefit to
an organism by achieving reproductive success - all the benefits accrue
to the heirs.

GS: Well, eating has the same sort of consequence as sex from a natural
selection standpoint - reproductive success. You know that though.
Otherwise, I'm not sure that anything here is relevant to the discussion.

JL: As far as the individual organism is concerned, sex is
evidently sort of a "pure" primary reinforcer - it's just about
insatiable and it has almost no effect other than reinforcement (at
least for males). By my argument above, food should trump sex, but
there is little reason to believe it. Many of us would happily fast for
a few days in exchange for the realization of certain sexual fantasies.

GS: The modern view is that reinforcers are, for the most part, other
responses (i.e., we reinforce lever-pressing with the opportunity to eat or
drink). The original modern view (the "Premack Principle") held that animals
distribute their time among activities and more frequently occurring
responses could be used to reinforce less frequent. A "thirsty" rat will
drink more than lever-press, and we may, therefore, reinforce lever-pressing
with the opportunity to drink. If the rat is water-satiated, however, it
will lever-press more than it drinks, and the opportunity to lever-press
will reinforce drinking. The Premack Principle ultimately fell apart (sort
of - perhaps it can be saved) but the relativity of reinforcement remains in
the so-called "response-deprivation" principle. Deprivation was always
viewed as an important aspect of reinforcer efficacy, but it certainly is
not clear that Skinner could have predicted some of the modern findings. For
example, one can simultaneously reinforce response A with response B and
punish response B with response A (actually, that is consistent with the PP
and response deprivation). One can also reinforce a more frequent response
with a less frequent response. Say a person watches four hours of TV a day
and reads one hour; now you arrange it so the person must watch five hours
of TV in order to obtain one hour of reading; this will usually increase TV
watching. This ain't your Mom's Hull/Spence drive reduction view of
reinforcement.
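
Timberlake and Allison's response-deprivation condition is simple
enough to state in a few lines of code (a sketch; the function and
variable names are mine):

def response_deprivation(sched_i, sched_c, base_i, base_c):
    # A schedule requiring sched_i units of the instrumental response
    # per sched_c units of the contingent response "deprives" the
    # contingent response -- and so should increase instrumental
    # responding -- whenever performing the instrumental response at
    # its baseline level would yield less of the contingent response
    # than its own baseline level:
    return sched_i / sched_c > base_i / base_c

# The TV example above: baseline 4 h TV and 1 h reading; the schedule
# demands 5 h of TV per 1 h of reading.  5/1 > 4/1, so reading is
# deprived and TV watching (the *more* frequent response) increases.
print(response_deprivation(5, 1, 4, 1))   # True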


JL: But wait a minute. Food WILL always win out. At the point of starvation
who would give up a bite of a Big Mac for a taste of Honey? Is there
something deeper going on?

GS: Yes deprivation is always important, but even so weird things happen.
For example, if one gives rats access to food one hour a day, they do fine.
I'm not sure they gain much weight, but they do just fine. However, if you
throw a running wheel in the chamber, the animals begin to run more and
more and eat less and less, until they spend most of the one-hour
access-to-food time running. If the experiment is not halted, the animals
die.

Without axe or grinding wheel,

Glen

"J.A. Legris" <jale...@sympatico.ca> wrote in message

news:1155647168.2...@i3g2000cwc.googlegroups.com...
> Glen M. Sizemore wrote:


Charlie

unread,
Aug 16, 2006, 1:27:39 PM8/16/06
to

Curt Welch wrote:
> What I was arguing against above however was the false belief that since
> computers can only do what they are "programmed" to do, that they could
> never do what humans do, since we have "free will". Believing that is just
> ignorance - either in the understand of what a human is or in the
> understanding of what a computer is.


It is not ignorance to note that humans are superior to, and the
creators of, computers.

Humans are the problem solvers. Computers are created tools and, from
all evidence, one of the great problems of the day. This because we
can't make them do our bidding in any but a mechanistic and algorithmic
way.

Computers can't do for us what we must do for ourselves:
Think up problems and their solutions.
That is the genesis of all science and technology.

J.A. Legris

unread,
Aug 20, 2006, 10:50:05 AM8/20/06
to

Yikes! A very interesting reply. I'll get back to you soon.

--
Joe Legris

J.A. Legris

unread,
Aug 21, 2006, 3:23:09 PM8/21/06
to
I'm reading Catania. Stop pestering me with your silence.

--
Joe Legris

J.A. Legris

unread,
Aug 21, 2006, 3:25:02 PM8/21/06
to

I am what I am.

Glen M. Sizemore

unread,
Aug 21, 2006, 4:06:50 PM8/21/06
to

"J.A. Legris" <jale...@sympatico.ca> wrote in message
news:1156188189....@i3g2000cwc.googlegroups.com...

>I'm reading Catania. Stop pestering me with your silence.

Ok. :)


>
> --
> Joe Legris
>


zzbu...@netscape.net

unread,
Aug 21, 2006, 7:09:21 PM8/21/06
to

Michael Olea wrote:
> J.A. Legris wrote:
>
> > You're right, I have assumed something akin to a survival instinct.
> > Without one, it seems that a robot capable of learning would eventually
> > switch itself off to obtain the biggest reinforcer of all: escape from
> > those damned contingencies of reinforcement.
>
> Doh! You did it again :-P
>
> "Elementary chaos theory tells us that eventually all robots must run amok;
> with the biting and scratching, and maiming and killing. Oy."

But, chaos theory also says the same thing about Google,
AT&T and computer virus'.
Which is the reason why robots still control Mars, rather than AI.

J.A. Legris

unread,
Aug 24, 2006, 11:53:36 AM8/24/06
to
Glen M. Sizemore wrote:
> "J.A. Legris" <jale...@sympatico.ca> wrote in message
> news:1156188189....@i3g2000cwc.googlegroups.com...
> >I'm reading Catania. Stop pestering me with your silence.
>
> Ok. :)
>

OK, I'm back. Thanks again to you and Olea - Catania's book is quite
good ( I have the 1979 edition). Chapter 4 (Consequences of Responding:
Reinforcement) is particularly interesting - he seems to anticipate the
"response-deprivation" principle you mentioned above.

There's a little footnote on page 78 that I nearly missed: "The
detailed operation of Premack's principle depends on how probabilities
are calculated. Choice among simultaneously available responses seems
more satisfactory than the proportion of time occupied by each
response..."

The degree of choice among a set of responses can give us the average
amount of information obtained in making those choices. Can we use
information-theory to get a better definition of reinforcement?
Something like this: reinforcement strength (as a process, see p. 74)
is a function of the amount of information obtained from or
corresponding to a response in a particular context.
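
As a first pass, here is what that quantity might look like in code -
Shannon entropy over the observed distribution of responses (a sketch;
the names and numbers are mine):

from math import log2

def choice_entropy(counts):
    # Average information (bits) per choice among available responses.
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# E.g., 60 observed choices distributed over lever, wheel and spout:
print(choice_entropy([30, 20, 10]))   # ~1.46 bits
# A single dominant response carries far less information:
print(choice_entropy([58, 1, 1]))     # ~0.24 bits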

I'm still reading. Comments invited.

--
Joe Legris

Glen M. Sizemore

unread,
Aug 25, 2006, 6:49:08 AM8/25/06
to

"J.A. Legris" <jale...@sympatico.ca> wrote in message
news:1156434816.7...@74g2000cwt.googlegroups.com...

> Glen M. Sizemore wrote:
>> "J.A. Legris" <jale...@sympatico.ca> wrote in message
>> news:1156188189....@i3g2000cwc.googlegroups.com...
>> >I'm reading Catania. Stop pestering me with your silence.
>>
>> Ok. :)
>>
>
> OK, I'm back. Thanks again to you and Olea - Catania's book is quite
> good ( I have the 1979 edition). Chapter 4 (Consequences of Responding:
> Reinforcement) is particularly interesting - he seems to anticipate the
> "response-deprivation" principle you mentioned above.
>
> There's a little footnote on page 78 that I nearly missed: "The
> detailed operation of Premack's principle depends on how probabilities
> are calculated. Choice among simultaneously available responses seems
> more satisfactory than the proportion of time occupied by each
> response..."
>
> The degree of choice among a set of responses can give us the average
> amount of information obtained in making those choices.

I'm not sure what is meant by this ("degree of choice"), but the way that
Olea talks about the function of responses in entropy reduction could easily
be applied, I would guess, to some choice procedures.

> Can we use
> information-theory to get a better definition of reinforcement?
> Something like this: reinforcement strength ( as a process, see p. 74)
> is a function of the amount of information obtained from or
> corresponding to a response in a particular context.

Well, maybe. But you would have to deal with the fact that a VI schedule of
food presentation, and a VI schedule of clicks are the same from an
information standpoint, but only one will likely maintain any significant
amount of responding. But Olea deals with this by talking about subsidiary
processes involving costs and benefits, I guess.

feedbackdroid

unread,
Aug 26, 2006, 12:33:13 PM8/26/06
to

>
> I don't think that these data have been
> published, but Rick Shull reported at an informal meeting the effects of
> tossing a small, hard ball into a chamber in which rats were engaging in
> food-reinforced lever-pressing; the rats pressed the lever much less (they
> were interacting with - playing with - the ball) and this effect increased
> over days until the rats spent very little time lever-pressing.
> ...........
>


We are shocked, truly shocked, to discover that rats are not total
slaves to the lever-pressing protocols set up for them by
psychologists.

http://www.google.com/custom?q=casablanca+%22shocked+to+discover%22

Curt Welch

unread,
Aug 26, 2006, 1:46:57 PM8/26/06
to
"J.A. Legris" <jale...@sympatico.ca> wrote:
> I'm reading Catania. Stop pestering me with your silence.

I finally got my copy of it. But haven't started reading yet....

--
Curt Welch http://CurtWelch.Com/
cu...@kcwc.com http://NewsReader.Com/

Michael Olea

unread,
Aug 26, 2006, 1:51:45 PM8/26/06
to
Glen M. Sizemore wrote:


> "J.A. Legris" <jale...@sympatico.ca> wrote in message

> news:1156434816.7...@74g2000cwt.googlegroups.com...

>> OK, I'm back. Thanks again to you and Olea - Catania's book is quite
>> good ( I have the 1979 edition). Chapter 4 (Consequences of Responding:
>> Reinforcement) is particularly interesting - he seems to anticipate the
>> "response-deprivation" principle you mentioned above.
>>
>> There's a little footnote on page 78 that I nearly missed: "The
>> detailed operation of Premack's principle depends on how probabilities
>> are calculated. Choice among simultaneously available responses seems
>> more satisfactory than the proportion of time occupied by each
>> response..."

>> The degree of choice among a set of responses can give us the average
>> amount of information obtained in making those choices.

> I'm not sure what is meant by this ("degree of choice"), but the way that
> Olea talks about the function of responses in entropy reduction could
> easily be applied, I would guess, to some choice procedures.

I jotted down some comments, but I'll have to wait a bit to elaborate -- I'm
going out of town for a few days, leaving in about 10 minutes. I'll be
attending a seminar on Alex, an African Gray parrot that has been the
subject of many behavioral studies.

Anyway, "degree of choice" needs to be made precise. Is this just the number
of freely available choices, or the entropy of the distribution over
choices, or something else?

The information gained by the message that a particular choice had been
made, would be information gained by an observer. This would be the
reduction in entropy from its value before the receipt of the message, just
the entropy of the distribution over choices, to the entropy after receipt
of the message, which would be zero. It is not clear to me how this could
be applied to ranking the relative effectiveness of reinforcers. One idea
is to do a baseline study, in which choices are all freely available,
record some ratios, and use these as predictors of behavior when available
choices are made contingent on each other (no running on your wheel till
you finish your food pellets, young rat). But I'm suffering from ratio
strain -- gotta break and run.

-- Michael

J.A. Legris

unread,
Aug 26, 2006, 4:05:52 PM8/26/06
to

Here's an interesting paper by Timberlake and Allison from 1974 that
discusses exactly what you suggest, using "paired baseline sessions":

http://www.indiana.edu/~bsl/RespDeprivation.pdf

I am wondering if this could be yet another route to the matching law,
i.e. rather than maximizing the organism's expectation of reinforcement
(which includes a term for the unexpected, thereby shifting the balance
toward the leaner schedule), matching maximizes the information entropy
across available responses relative to some baseline.
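
A toy comparison of the two ideas, assuming strict matching
(B1/B2 = R1/R2) - the code and names are mine, not anything from the
literature:

from math import log2

def matching_allocation(reinforcement_rates):
    # Strict matching (Herrnstein): B_i / sum(B) = R_i / sum(R).
    total = sum(reinforcement_rates)
    return [r / total for r in reinforcement_rates]

def entropy(probs):
    # Average information (bits) in the allocation of behaviour.
    return -sum(p * log2(p) for p in probs if p > 0)

# Two VI schedules paying off at 40 and 20 reinforcers per hour:
alloc = matching_allocation([40, 20])
print(alloc)            # [0.667, 0.333] -- the matching relation
print(entropy(alloc))   # ~0.92 bits
# Plain entropy maximization would instead predict [0.5, 0.5] (1 bit),
# so the conjecture seems to need the "relative to some baseline"
# constraint to recover matching rather than indifference.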

--
Joe Legris

Message has been deleted

N

unread,
Aug 26, 2006, 9:26:14 PM8/26/06
to

J.A. Legris wrote:
> Michael Olea wrote:
> > Glen M. Sizemore wrote:
> >
Looks interesting, back later...

Curt Welch

unread,
Sep 27, 2006, 5:54:42 PM9/27/06
to
"Charlie" <cmoe...@aol.com> wrote:
> Curt Welch wrote:
> >> Can you name a single function, that we know as a fact that the brain
> >> is performing, that we know is impossible to duplicate with a digital
> >> system?
>
> Don wrote:
> >Curt asked for a problem that "we know is impossible" for a computer.
>
> OK, I'll put it this way:
>
> We know that simulating continuous time (an actual continuum) is
> impossible for any digital machine. This is because all digital
> machines progress from time to time (in either simulated or actual
> sensed existence) in step-by-step (or frame-by-frame) fashion. There is
> no continuity in the machine because nothing can be sensed/experienced
> by it in the between-times. It does not even know if it is "alive"
> or "dead" from one time to the next.
>
> We also know that continuity is precisely the way that most humans
> think about and experience ongoing existence. If there is a slight or
> great discontinuity such as sleep or a period of unconsciousness, the
> human is invariably aware of it at his next waking moment. Just ask
> anyone.

But the human brain processes all information using spike signals. This
means, the brain is totally unaware of what is happening between the
spikes. In other words, the experience of events happening in a continuous
fashion is just an illusion.

We also know that computer and TV screens aren't continuous either in the
spatial domain, or the temporal domain. They are made up of dots in the
spatial domain, and they flash from one picture to the next in the temporal
domain - there's nothing continuous about it. But yet, we perceive it as
being smooth and continuous. In other words, we are unable to actually
tell when something is continuous or not continuous. Therefore, anything
that seems continuous can't be trusted to be true just like we can't trust
our own perception of a video or movie.

If the system is in fact discrete, that means that we can't sense what is
happening between the spikes. And if we can't sense something is
happening, then we have no awareness of the gap. If you can't sense the
gap, then how would you know it's there? In other words, there's also no
reason to believe that a system built on a foundation of discrete events,
would be able to detect the gaps. And if you can't detect the gaps, then
it will naturally seem continuous.

To continue this point, what if every second, time was stopped for an hour?
Then it would move forward for another second, and stop for another hour?
What if the universe was actually working like this? How would we ever
know it was happening if time was stopped for everything? The answer is
that we wouldn't know it. We can't detect the gaps.

In other words, there's really no indication that the brain, or our
perception, is a continuous process. All indications are that it's just
the opposite - it's a discrete process - which means there's no reason to
believe a computer-like machine using discrete state changes would not be
able to duplicate the behavior of a brain - which is itself a machine built
on discrete state changes (spike signals).

Curt Welch

unread,
Sep 27, 2006, 6:08:36 PM9/27/06
to
"Charlie" <cmoe...@aol.com> wrote:
> I will grant that the quoted text argues the opposite viewpoint from
> mine, but what is the point? What would be gained from taking that
> restricted (and in my opinion, convoluted and overly complex) view?
>
> The simplicity of it is that humans are capable of apprehending both
> discrete schemes AND continuums (hence calculus), whereas digital
> machines can only discern the discrete.

The same argument was made about digital CDs never being as good as analog
LPs - because they were discrete digital systems instead of continuous
analog systems. The argument had no merit for many reasons. But the
reason people made the argument, is because they had an intuitive feel that
their perception of sound was an analog, or continuous process, so they
felt that a digital recording system would leave gaps that they, being an
analog based sensory system, would always be able to sense were missing.

This of course ignored the well known fact that sound is encoded into a
discrete signal format in the ear (spikes), so that what the brain actually
"hears" is not continuous at all - it's very much discrete messages.

This also ignored the fact that even an LP is made up of discrete molecules
so that their "encoding" of the music was in fact a discrete encoding
anyway.

In other words, their intuitive feel for what was happening was just wrong.
So they made an argument based on their intuition which was simply not
supported by any of the facts.
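
For what it's worth, the sampling theorem makes the CD case precise:
discrete samples of a band-limited signal determine it completely. A
small numerical sketch (my code; Whittaker-Shannon interpolation at
the CD rate):

import numpy as np

fs = 44100.0                     # CD sampling rate (Hz)
f = 1000.0                       # tone frequency, well below fs / 2
n = np.arange(256)               # sample indices
samples = np.sin(2 * np.pi * f * n / fs)

# Reconstruct the "continuous" signal at arbitrary times t by
# sinc interpolation of the discrete samples.
t = np.linspace(64 / fs, 192 / fs, 500)    # stay clear of the edges
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

exact = np.sin(2 * np.pi * f * t)
print(float(np.max(np.abs(recon - exact))))
# The residual is tiny, and comes only from truncating the infinite
# interpolation sum -- not from the sampling itself.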

Curt Welch

unread,
Sep 27, 2006, 6:38:34 PM9/27/06
to
"Charlie" <cmoe...@aol.com> wrote:
> Continuity is a superior concept and reality compared to discrete
> schemes.

But yet, you are using discrete symbols to make your point. Kinda ironic
don't you think?

> Discretes are placed in/on a continuum, which can accommodate any
> number of discrete systems. A continuum, however, can't be located
> within a discrete entity.
>
> Discrete concepts were easier to understand, used early for counting
> livestock, bushels and bales.
>
> The continuum, characterized much later in Newton's calculus, provided
> the canvas and perspective for where discretes ranked in the overall
> system.

Yes, as a model for understanding the universe, it has much value. But
where's the evidence that the brain is actually a continuous system and
what would that mean?

When we write books about calculus, or other continuous models, or when we
talk about it, we always do it using discrete symbols. Why, if the brain
is a continuous-based processing system, do we use a discrete symbol
system for communicating these ideas about continuity? Why is it that
we are unable to use some non-discrete communication system to define this
model of continuity?

If analog systems are so much better than digital, then why did the digital
computer become such a success and why did it replace the use of all the
older analog computers?

Maybe evolution figured out the same things we did. Digital discrete
processing systems have many advantages over analog systems. Maybe that's
why the brain is a discrete processing system like our computers. Maybe
that's why we communicate using discrete symbols instead of producing some
sort of continuous analog howling sounds in our language. Maybe that's why
all our analog systems are slowly being replaced with discrete digital
systems?

At the lowest levels, the neurons communicate with each other using
discrete events (spikes) just like we do at the high level when we use
discrete words to communicate. The brain is just not an analog device.
It's a discrete symbol processor that is forced to chop up higher
resolution continuous events into discrete symbols so it can "talk" about
it with the other parts of the brain.

Curt Welch

unread,
Sep 27, 2006, 7:03:08 PM9/27/06
to
"Charlie" <cmoe...@aol.com> wrote:

> Louis, if we are "incapable of apprehending the continuous," how is it
> that we are discussing the point?

But yet, you are using discrete symbols to describe the thing you are
talking about. How do you know the thing you are talking about exists
since you can't even use a continuous language to talk about it?

Your argument breaks down to the fact that you believe things are
continuous, yet you have no way to prove that they are. The only thing
that seems continuous in this universe is motion (or position in space).
But how do we know that all motion is not actually discrete at a level
below that which we can currently detect?

The concept of "continuous" is only that - an idea. And how do we describe
it? We talk about being able to always find a point between two points on
a line. But points are discrete events. So again, we are using the notion
of an infinite number of discrete events to define the concept of
continuous. In other words, the only way we can describe the concept is to
use discrete events. Why would this be needed if, in fact, everything was
continuous, and that the brain worked on a foundation of continuous
processing?

We can understand time in this concept by believing that for any two points
in time, there must always be a point between those two points. But how
can we prove this is true for time? We can't. No matter what tool you use
to attempt to measure time, it will always have a limit of resolution, and
below that limit, we don't know if time is continuous or not.

The concept of continuity is only a theory which we find useful as a
foundation for understanding the universe. All our understanding of the
universe however is done using discrete systems. Some are of such high
resolution that we call them analog or continuous, but yet we can't prove
if they are truly analog, or just discrete systems of a resolution beyond
our ability to find the place where there is no point between the two other
points.

Curt Welch

unread,
Sep 27, 2006, 11:07:54 PM9/27/06
to
"J.A. Legris" <jale...@sympatico.ca> wrote:

> Sorry about the delay - I had an order to fill for a big customer.

Sorry about my 2 month delay. :)

> Why are some stimuli primary reinforcers? One answer is they have to be
> non-negotiable - "prime directives" if you will - for the good of
> the organism.

Well, for biological life which has had its reinforcers selected by
evolution, the concept of "good for the organism" is valid. But for man
made life, then there is no need for the primary reinforcers to be "good
for the organism". They could simply be good for us, the creator. We
might for example give a smart bomb motivations to make it kill itself.
That is not good for the organism's survival at all. But yet it could be
good for us, so we would keep making more of these things.

> Otherwise, it would be too easy for combinations of
> conditioned reinforcers to overide the primary ones,

Conditioned reinforcers will always be able to override the primary ones.
That's because the primary ones are not predictors of future rewards, they
are always immediate rewards. A strong RL machine will always use its
predictions of future rewards to direct its actions, and never use just
the primary rewards. But, because the interesting problems are always the
ones where the system doesn't have access to enough information to make
perfect predictions, it can always be tricked into doing something stupid
by giving it a problem where it will fail to make the correct prediction
about the future.
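
A minimal sketch of that idea in reinforcement-learning terms - TD(0)
value learning on a four-state chain, a toy example of my own rather
than anything from a particular system:

# TD(0) value learning on a four-state chain; only the last state
# delivers a primary reward.  The learned values V are the system's
# predictions of future reward -- its conditioned reinforcers.
alpha, gamma = 0.1, 0.9
V = [0.0, 0.0, 0.0, 0.0]
rewards = [0.0, 0.0, 0.0, 1.0]   # primary reward only at the end

for sweep in range(500):
    for s in range(4):
        v_next = V[s + 1] if s < 3 else 0.0
        V[s] += alpha * (rewards[s] + gamma * v_next - V[s])

print([round(v, 2) for v in V])   # [0.73, 0.81, 0.9, 1.0]
# States far from the primary reward acquire value gamma^k; behaviour
# driven by V is driven by predictions, not by the primary reward
# itself.  If the final reward stops arriving, the same update drives
# these values back toward zero -- wrong predictions get corrected,
# as described above.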

> yielding behaviour
> analogous to anorexia nervosa or drug addiction - intelligent
> behaviour turned back against itself.

Well, some of that is not a failure of the system to make accurate
predictions. Some of that is simply the fact that the prime motivators are
not actually in sync with the goal of survival. They never can be. There
is no way to build a machine where its prime directives are perfectly
matched to the goal of survival. Evolution's goal is to build the best
survival machine possible, but that's only the goal. Everything it creates
is just the best it can do in that regard.

Drug addiction is caused by us messing with our hardware. This is something
you can never completely prevent. If we can create our own reward signals,
and bypass the hardware designed by evolution to create survival-based reward
signals, then we will have defeated the survival goal - but we will still
be achieving the "maximize reward" goal the brain was built for. So that's
not a failure of the brain to do what it was designed to do, or a problem
that results from not being able to accurately predict the future, but
instead, it's a failure by evolution to design a good survival machine.

This problem will always limit the power of intelligent machines. Any
machine that is smart enough to know how its own brain works, and which has
the tools to modify its own hardware, won't be able to resist tapping into
it's reward center and giving itself a constant reward stimulation - which
will cause it to go into something like a drug-induced coma.

> So, a machine that is programmed to sort files by virtue of a built-in
> susceptibility to the beauty of well-sorted objects (in the role of a
> primary reinforcer) may need to have its existence dependent on those
> files being in good order, otherwise the temptation to let them sink
> into disarray while hanging out at the power outlet may prove to be too
> strong to resist.

That's right. But we will build things like file-sorters and we won't care
if they have no survival motivations. That only makes things safer for us.
We will make sure it stays alive by keeping it plugged in so that it will
keep sorting our files for us. We just have to design it so that it can't
harm itself in its attempt to do its job.

Curt Welch

unread,
Sep 28, 2006, 12:24:19 AM9/28/06
to
"J.A. Legris" <jale...@sympatico.ca> wrote:

> Of course, there are many things, but all are in the abstract realm of
> information processing: calculating, storing, sorting, etc. When it
> comes to mechanical, chemical and biological interactions with the
> world, computers cannot do any of the things that humans can do, nor
> those of any other physical system.

Well, that's not very interesting because it's like cutting a brain out of
a person and laying it on the table and talking about how this "stupid" thing
on the table can't even feed itself.

The computer is just the controller like the brain is the controller for
our body. The only reason brains can "do" things is because they are
attached to a body with arms and legs that it can control.

If you give a computer arms and legs and sensors (aka make a robot), then
it has all the same parts a human has and if you built a computer which was
as good of an arm and leg controller as the human brain is, it would make
the robot do the same sorts of things humans do.

> So what? Big deal. AI rests on the assumption that intelligence can be
> had through pure information processing.
>
> At least it used to. Now we agree that intelligence needs the role of
> the environment plus pure information processing. The environment
> produces more information than we could ever hope to duplicate
> artificially so we "import" the whole thing. Fine. But what about the
> body? It's another source of extremely complex information. Looks like
> we had better import that too.

That's exactly why robotics is such a huge part of AI (and always has been
a part of it).

> Here's another route to the same conclusion. Why am I writing this post
> at all? The company of strangers? The chance of a party of the beach?
> Of course - to get reinforcers. Suppose we build two intelligent
> beings: one is a robot made of metal, plastic and ceramics that has
> been programmed to crave reinforcers like a human.

I like that phrase, "crave reinforcers"! :)

> The other is an
> enhanced super-smart human that both craves and depends on reinforcers
> for survival. Each has the open-ended ability to acquire new knowledge
> (using off-line storage perhaps). Sooner or later the robot is going to
> discover that many of its reinforcers are just place-holders. For
> example, it seeks out the company of others even though it is perfectly
> capable of surviving on its own. Inevitably, its behaviour will adjust
> to this fact and "company seeking" will recede to some low probability.
> The same goes for the rest of the faked reinforcers and in the limit
> the robot will behave like a non-human robot. We might expect it to
> eventually lose interest in many of the things that humans care about
> (including self-improvement?) and would become unworthy of the term
> intelligent, which is an attribute of organisms, not robots.

Well, as has been discussed in other messages, there's no such thing as a
"survival instinct" which is separate from the built in (primary)
reinforcers. If it happens to do things to help itself survive, it's only
because its primary reinforcers were picked because they would create
survival behavior.

Secondary reinforcers are never fake, or contrived stand-ins. They all
result from the system attempting to predict future primary reinforcers. A
change in the environment (which the prediction system doesn't understand
or correctly sense), can cause the predictions to be very wrong - but over
time, the predictions will adjust based on these errors and get better.

If a machine were to seek out other machines, it was either because it was
built in as some primary reinforcer - in which case the action would never
fade away unless the primary reinforcer hardware changed over time - or it
was because the system had learned from experience that there was a high
probability of getting higher rewards when you were hanging around other
machines of the same type. Most humans have learned that they can get
higher rewards by hanging around other humans. We learn to value the
things in the environment that give us rewards, and we tend to hoard and
collect, and get near, the things that can give us the most rewards.
Humans are one of the most valuable "tools" in the environment for getting
us higher rewards (but can also be one of the most dangerous parts of the
environment). The net effect however is that most humans tend to do
more good than harm, so we tend to seek them out.

Self improvement is just another way that humans learn to get more rewards
from the environment. Any RL robot should learn the same thing. The more
it knows about the environment, and the larger its environment-interacting
skill set becomes, the more rewards it can get from the environment. As
long as its primary rewards are complex and open ended enough that there is
no limit to how many rewards it can get, the system will never stop self
improving. If the primary reward was something trivial like a fixed sized
reward for pushing a button, and no more than 1 reward per second could
ever be collected, then the machine would learn to do that, and never do
anything very different from that because any change in its behavior would
only reduce the rewards (like if it went away from the button for a few
seconds to try and push another button). But, if you put it in a complex
environment with complex ways to improve itself to produce higher rates of
rewards, it would naturally continue to improve itself. But, like the
button pushing problem, the fewer rewards it finds from making changes, the
less likely it will be to change. So self improvement will only happen as
long as the environment continues to encourage it.

Curt Welch

unread,
Sep 28, 2006, 1:03:02 AM9/28/06
to
"J.A. Legris" <jale...@sympatico.ca> wrote:
> A large part of intelligent behaviour (especially human)
> seems to be motivated by increased chances of opportunities to mate
> but, at least to a first approximation, there is no direct benefit to
> an organism by achieving reproductive success - all the benefits accrue
> to the heirs. As far as the individual organism is concerned, sex is
> evidently sort of a "pure" primary reinforcer - it's just about
> insatiable and it has almost no effect other than reinforcement (at
> least for males). By my argument above, food should trump sex, but
> there is little reason to believe it. Many of us would happily fast for
> a few days in exchange for the realization of certain sexual fantasies.

This is where Dawkins' selfish gene view makes things clear. It's the gene
which is the unit of survival that evolution is acting to reinforce, not
the species, and not the individual. We are nothing but big and complex
life boats built to take our genes into the future. As gene life boats, we
are motivated to do what the gene needs us to do, in order for it to
survive into the future. Sex and reproduction are high on our priority list
because that's what the gene needs us to do - it can't last long in a
single body, so it survives by constantly splitting itself in two and
building new life boats to carry it further into the future.

> But wait a minute. Food WILL always win out. At the point of starvation
> who would give up a bite of a Big Mac for a taste of Honey? Is there
> something deeper going on?

Our motivations are balanced and optimized by the process of evolution to
maximise the chance of survival of the genes that made us. Reproduction is
high on the list, but so is eating. So are many other things like not
damaging our body. Who would have sex or eat when a wolf was making a meal
of our leg?

Reinforcement learning machines can be motivated by any condition we can
build a critic to sense. They don't have to be motivated for survival at
all. Evolution however has only one primary motivation - survival. So
evolution only creates survival machines. The gene is the primary survival
machine evolution has created. Genes in turn, have learned to work
together in societies to increase their chance of survival - with each gene
specializing in different tasks to help the society. These societies of
survival machines have in turn built large and complex intelligent
life-boats to help take the entire society into the future. We are nothing
but large complex robotic life-boats built by these survival machines, our
genes. We in turn, as RL machines designed and built by our genes, are
motivated to do what the genes need us to do. In turn, we build life boats
around us (clothes, houses, cars) to protect us. One day, we will build
advanced RL based machines to further protect us, so we can better protect
our masters - the genes which built us.

The hitch however is that an RL machine is not in fact directly motivated
by the needs of its master (our genes). We are only indirectly motivated
by their needs. We are directly motivated by the critic that functions as
the gene's proxy during our life time. The gene built the critic hardware
in us to keep us in line with their motivations. But it's impossible to
build a simple critic which is "perfect". In the long run however,
imperfections in the design will be removed and replaced. In the short
run, humans will behave in ways that are in line with the critic, but not
in line with the gene. We will for example use birth control, and have
abortions, because this is in line with the critic that the genes left
around to motivate us (the critic tells us that having lots of sex is good
but having lots of children is not as good). This is one of many examples
where the critic that motivates our RL brain is not perfectly in line
with the needs of the genes - but over time, evolution will always win, and
anything we do to reduce the odds of the gene surviving because of our
critic, will be fixed in time by the slower process of evolution.

JP

unread,
Sep 28, 2006, 3:53:43 PM9/28/06
to

Wasn't Lester Zick's differences between differences an excellent
description of a discrete system?
In a discrete system all we have are the digits and the associations
between them.
The digits are a result of the interactions between our senses and the
(internal and external) reality.
We perceive new information (digits) as a result of these interactions
and we combine it thru associations in order to reduce its
diversification.
The associations are a result of the physical (information processing
capacity) restrictions, limitations that require that the amount of
information to be processed has to be reduced to some manageable
levels.
IMO we do not process information only by using its "meaning" but
we do it by first reducing it to a manageable level of diversification
and we achieve this objective thru associations.
JP

Charlie

unread,
Sep 30, 2006, 12:09:14 PM9/30/06
to

Curt Welch wrote:
> "Charlie" <cmoe...@aol.com> wrote:
>
> > Louis, if we are "incapable of apprehending the continuous," how is it
> > that we are discussing the point?
>
> But yet, you are using discrete symbols to describe the thing you are
> talking about. How do you know the thing you are talking about exists
> since you can't even use a continuous language to talk about it?


But we do have the concepts of "continuous" in our ordinary languages:

"creates (with persistence)"
"Q while P"
"R begins (and continues)"

These are all primitive concepts and describe processes in which
continuity is implied, if not compelled. There are others.

It is our formal and informal logic languages that do not contain those
concepts or words, and in which we must "fake it" in order to describe
continuous processes while using terms that refer only to the discrete.

> Your argument breaks down to the fact that you believe things are
> continuous, yet you have no way to prove that they are. The only thing
> that seems continuous in this universe is motion (or position in space).
> But how do we know that all motion is not actually discrete at a level
> below that which we can currently detect?


If motion were not inherently continuous, what moves an object, once in
motion, from place to place? What is the mechanism? Where are the
"clockworks" such as we use for showing movie films, or those to
increment to the next instruction and fetch from memory in a computer?


> The concept of "continuous" is only that - an idea. And how do we describe
> it? We talk about being able to always find a point between two points on
> a line. But points are discrete events. So again, we are using the notion
> of an infinite number of discrete events to define the concept of
> continuous. In other words, the only way we can describe the concept is to
> use discrete events. Why, would this be needed if in fact, everything was
> continuous, and that the brain worked on a foundation of continuous
> processing?


Newton's and Leibniz' calculus is continuous and founded on the idea of
continuity of space and time. Likewise Einstein's relativity and
spacetime. The idea of evaluation is to quantify, but whenever we do,
we get a discrete result. The discrete came out of the continuous,
without which it could not have existed.

For a more complete treatment on continuous space and time, see:
http://arxiv.org/ftp/physics/papers/0310/0310055.pdf


> We can understand time in this concept by believing that for any two points
> in time, there must always be a point between those two points. But how
> can we prove this is true for time? We can't. No matter what tool you use
> to attempt to measure time, it will always have a limit of resolution, and
> below that limit, we don't know if time is continuous or not.


It is easy to prove:

Neither space nor time have any substance in themselves so on
fundamental principles they can't be discrete.


> The concept of continuity is only a theory which we find useful as a
> foundation for understanding the universe. All our understanding of the
> universe however is done using discrete systems. Some are of such high
> resolution that we call them analog or continuous, but yet we can't prove
> if they are truly analog, or just discrete systems of a resolution beyond
> our ability to find the place where there is no point between the two other
> points.


The reason we are limited in our understanding is the absence of
appropriate tools. There is only one continuous mathematics, the
calculus, whereas there are numerous discrete systems. All logic taught
in our universities are discrete varieties, with no form of continuous
logic available.

Xiaoding

unread,
Sep 30, 2006, 1:25:45 PM9/30/06
to

>
>" But the human brain processes all information using spike signals."

You don't know that. No one does. Indeed, since most of the brain is
not involved with spikes at all, the evidence goes the other way!


> This means, the brain is totally unaware of what is happening between
> the spikes. In other words, the experience of events happening in a
> continuous fashion is just an illusion.

No, not at all. We have no idea if the spikes have anything to do with
awareness.

Which brings up another question, or rather an assumption, that I think
is behind this train of thought, the assumption being that spikes are
discrete. Spikes are discrete, taken individually, but they are not
discrete taken together. Spikes are not like the switches in a digital
computer. They are massed together, to form an electrical potential.
It is this potential, the aggregate of billions of spikes, that forms
the basis of communication between two neurons. Thus they are indeed
continuous; no one spike matters in this process. The spike function
is indeed continuous, not discrete. Therefore the brain can, indeed, be
a continuous processor. Not saying it is, just pointing out that spikes
don't make it discrete.


> "To continue this point, what if every second, time was stopped for an hour?
> Then it would move forward for another second, and stop for another hour?
> What if the universe was actually working like this? How would we ever
> know it was happening if time was stopped for everything? The answer is
> that we wouldn't know it. We can't detect the gaps."


I believe that statement was shown to be false, at least
mathematically. Something to do with Zeno's paradox. It was proven
that time is continuous, that there is no such thing as "an instant of
time", where one could freeze motion. An object in motion is always in
motion, until it stops. If you could indeed stop time, what would
happen to the momentum of all the objects you just stopped? It would
go to zero. Then you would have to restart all that motion, and that
is a very hard thing to do.


>
> In other words, there's really no indication that the brain, or our
> perception, is a continuous process. All indications are that it's just
> the opposite - it's a discrete process - which means there's no reason to
> believe a computer-like machine using discrete state changes would not be
> able to duplicate the behavior of a brain - which is itself a machine built
> on discrete state changes (spike signals).

I would disagree with this conclusion. :) At least in respect to
current computer technology. I see your point, but I don't think we
know enough about the brain to draw that kind of conclusion. Given the
utter failure of computer "scientists" to produce any kind of result
at all in the last 60 years along this line, I don't hold out much
hope. :) Biology, come forth!

Xiaoding

Curt Welch

unread,
Oct 3, 2006, 10:35:27 PM10/3/06
to
"Xiaoding" <xiao...@jelly.toast.net> wrote:
> >
> >" But the human brain processes all information using spike signals."
>
> You don't know that. No one does. Indeed, since most of the brain is
> not involved with spikes at all, the evidence go's the other way!

Every neuron in the brain produces spikes, doesn't it? What is most of the
brain made up of other than neurons? Are you talking about the chemicals
floating around in the brain?

Muscles respond to spikes. Stop the spikes, and we don't move.

Sensors produce spikes. Stop them from spiking, and we sense nothing.

How is that not 100% proof that the brain processes all information using
spike signals?

I do agree that there is much chemical signaling going on (some we know
about, and probably a lot we don't yet know about). So I guess that's an
obvious proof that not all information is in the spikes. Chemicals however
are discrete events as well (they operate one molecule at a time). So it
can still be argued that the information they relay is discrete. More
important however is that the chemicals seem to be slow-speed regulators
of behavior whereas the high-speed sensory information seems to all be
relayed in the spike signals. Everything we are aware of seems to be
represented in the spike signals.

> This
> > means, the brain is totally unaware of what is happening between the
> > spikes. In other words, the experience of events happening in a
> > continuous fashion is just an illusion.
>
> No, not at all. We have no idea if the spikes have anything to do with
> awareness.

Sure we do. We have stuck probes into human brains and forced neurons to
spike. When we do that, the human reports awareness of sensations.
Likewise, we monitor the spiking of neurons and see direct correlations
between the spikes and the reported awareness of the subject.

> Which brings up another question, or rather an assumption, that I think
> is behind this train of thought, the assumption being that spikes are
> discrete. Spikes are discrete, taken individually, but they are not
> discrete taken together. Spikes are not like the switches in a digital
> computer. They are massed together, to form an electrical potential.
> It is this potential, the aggregate of billions of spikes, that forms
> the basis of communication between two neurons. Thus they are indeed
> continuous; no one spike matters in this process. The spike function
> is indeed continuous, not discrete. Therefore the brain can, indeed, be
> a continuous processor. Not saying it is, just pointing out that spikes
> don't make it discrete.

So, 1 discrete event is discrete, but a million discrete events are not
discrete? I don't understand that logic.

> > "To continue this point, what if every second, time was stopped for an
> > hour? Then it would move forward for another second, and stop for
> > another hour? What if the universe was actually working like this? How
> > would we ever know it was happening if time was stopped for everything?
> > The answer is that we wouldn't know it. We can't detect the gaps."
>
> I believe that statement was shown to be false, at least
> mathematically. Something to do with Zeno's paradox. It was proven
> that time is continuous, that there is no such thing as "an instant of
> time", where one could freeze motion. An object in motion is always in
> motion, until it stops. If you could indeed stop time, what would
> happen to the momentum of all the objects you just stopped? It would
> go to zero. Then you would have to restart all that motion, and that
> is a very hard thing to do.

"hard to do" does not mean it's not happening. :)

I can stop a video game, then resume it later, and it doesn't cause the
momentum of moving objects to stop. How do we know our entire universe is
not a video game on some computer which exists in another universe? The
answer is we don't.

> > In other words, there's really no indication that the brain, or our
> > perception, is a continuous process. All indications are that it's
> > just the opposite - it's a discrete process - which means there's no
> > reason to believe a computer-like machine using discrete state changes
> > would not be able to duplicate the behavior of a brain - which is
> > itself a machine built on discrete state changes (spike signals).
>
> I would disagree with this conclusion. :) At least in respect to
> current computer technology. I see your point, but I don't think we
> know enough about the brain to draw that kind of conclusion.

Yeah, it's only speculation. We don't know enough to prove one way or the
other what the answer is.

> Given the
> utter failure of computer "scientists" to produce any kind of result
> at all in the last 60 years along this line, I don't hold out much
> hope. :) Biology, come forth!

Yeah, if I could just finish this little project of mine, it would answer
many of these questions. :)

> Xiaoding

Curt Welch

unread,
Oct 3, 2006, 10:39:37 PM10/3/06
to
"Charlie" <cmoe...@aol.com> wrote:
> Curt Welch wrote:
> > "Charlie" <cmoe...@aol.com> wrote:
> >
> > > Louis, if we are "incapable of apprehending the continuous," how is
> > > it that we are discussing the point?
> >
> > But yet, you are using discrete symbols to describe the thing you are
> > talking about. How do you know the thing you are talking about exists
> > since you can't even use a continuous language to talk about it?
>
> But we do have the concepts of "continuous" in our ordinary languages:
>
> "creates (with persistence)"
> "Q while P"
> "R begins (and continues)"
>
> These are all primitive concepts and describe processes in which
> continuity is implied, if not compelled. There are others.

Yeah, good point. We certainly see existence as a continuous effect.

> It is our formal and informal logic languages that do not contain those
> concepts or words, and in which we must "fake it" in order to describe
> continuous processes while using terms that refer only to the discrete.

But, just because the language contains a discrete symbol that means
continuous, does that mean we can sense it? I don't think so. The idea of
something being continuous (like my hand is continuously attached to my
arm) simply means that every time we make a discrete test for the condition,
we expect it to still be true. And we make the assumption that, if we had
tested it between times, the result would have turned out to be the same.
My claim is that we have in fact defined the concept of continuous from
our discrete observations. We aren't actually able to directly observe or
experience the continuous. That continuous things exist is only a concept
we use (though a very useful one).
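
To put that claim operationally, here is a minimal sketch in Python (the
names are mine, not anything from the thread):

    def seems_continuous(test, sample_times):
        # A condition gets called "continuous" when every discrete probe
        # of it succeeds; what happens between the probes is pure
        # assumption.
        return all(test(t) for t in sample_times)

    # Probe a trivially true condition once per "second" for a minute:
    print(seems_continuous(lambda t: True, range(60)))  # True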

> > Your argument breaks down to the fact that you believe things are
> > continuous, yet you have no way to prove that they are. The only thing
> > that seems continuous in this universe is motion (or position in
> > space). But how do we know that all motion is not actually discrete at
> > a level below that which we can currently detect?
>
> If motion were not inherently continuous, what moves an object, once in
> motion, from place to place? What is the mechanism? Where are the
> "clockworks" such as we use for showing movie films, or those to
> increment to the next instruction and fetch from memory in a computer?

We have no way to prove if those clockworks are there - but hidden from us.
What for example would be different, if the universe we exist in was
nothing but a discrete computer simulation, with an accuracy down to a
billionth of a billionth of a second (or something many orders of magnitude
below what we are able to measure)? Since our ability to measure the very
small (motion and time and space) is limited, how do we know it's not
discrete at some level below what we can measure?

The assumption that it is truly continuous, is an assumption that if we
keep making higher and higher resolution observations, that there would
always be another point on the line between the last two - which would have
a location consistent with the assumption that the underlying system was in
fact continuous. But we can't prove that it is continuous unless we can
make an infinite number of measurements, each of higher resolution than the
previous - which we can't do.

We can't even say that matter exists continuously. For all we know, it
might be blinking in and out of existence at a frequency far beyond what we
(being part of the same universe) could ever detect.

> > The concept of "continuous" is only that - an idea. And how do we
> > describe it? We talk about being able to always find a point between
> > two points on a line. But points are discrete events. So again, we
> > are using the notion of an infinite number of discrete events to define
> > the concept of continuous. In other words, the only way we can
> > describe the concept is to use discrete events. Why, would this be
> > needed if in fact, everything was continuous, and that the brain worked
> > on a foundation of continuous processing?
>
> Newton's and Leibniz' calculus is continuous and founded on the idea of
> continuity of space and time. Likewise Einstein's relativity and
> spacetime. The idea of evaluation is to quantify, but whenever we do,
> we get a discrete result. The discrete came out of the continuous,
> without which it could not have existed.
>
> For a more complete treatment on continuous space and time, see:
> http://arxiv.org/ftp/physics/papers/03 10/0310055.pdf

Sure, the concept of continuity is easy to understand. It's the concept
that there is always another point on the line between the last two. But
this is a concept expressed with discrete symbols, being generated by a
human which operates on a discrete signaling system.

How can you prove that anything in this universe is actually continuous?
You can't. It's only an assumption we make because the assumption works at
many of the levels we have to deal with from day to day.

> > We can understand time in this concept by believing that for any two
> > points in time, there must always be a point between those two points.
> > But how can we prove this is true for time? We can't. No matter what
> > tool you use to attempt to measure time, it will always have a limit of
> > resolution, and below that limit, we don't know if time is continuous
> > or not.
>
> It is easy to prove:
>
> Neither space nor time have any substance in themselves so on
> fundamental principles they can't be discrete.

How can something exist if it has no substance? If you believe space has
no substance, then maybe you aren't in fact talking about the space itself,
but instead, the inverse of existence? If I have a cup on the table, and
them take it off the table, I can say that the cup does not exist on the
table. I can call that "non cup", a puc. Then I can say things like, the
puc has no substance, therefor it can't be discrete. But see how stupid
that is? I'm now talking about a puc as if it were something real, as if
it existed. And I'm trying to make the argument that this non-cup has some
type of property.

Space is nothing but a puc. It's a lack of existence. Time is the same
thing. Time doesn't exist. Time and space are only concepts created by
humans to explain the difference between the things that do exist.

> > The concept of continuity is only a theory which we find useful as a
> > foundation for understanding the universe. All our understanding of
> > the universe however is done using discrete systems. Some are of such
> > high resolution that we call them analog or continuous, but yet we
> > can't prove if they are truly analog, or just discrete systems of a
> > resolution beyond our ability to find the place where there is no point
> > between the two other points.
>
> The reason we are limited in our understanding is the absence of
> appropriate tools. There is only one continuous mathematics, the
> calculus, whereas there are numerous discrete systems. All logic taught
> in our universities are discrete varieties, with no form of continuous
> logic available.

Fuzzy logic is a form of continuous logic. But that's not important.

It's true that since most our tools are discrete, it tends to make us think
more in those terms. And it's also true that motion and space and time all
seem to be continuous to us - which means for almost all practical
applications, it's safe and valid to just assume they are continuous. But
we have no way of knowing if they are, or if they just seem to be.

Charlie

unread,
Oct 4, 2006, 8:17:38 PM10/4/06
to

Curt Welch wrote:
> "Charlie" <cmoe...@aol.com> wrote:
> > Curt Welch wrote:
> > > "Charlie" <cmoe...@aol.com> wrote:
> > >
> > > > Louis, if we are "incapable of apprehending the continuous," how is
> > > > it that we are discussing the point?
> > >
> > > But yet, you are using discrete symbols to describe the thing you are
> > > talking about. How do you know the think you are talking about exists
> > > since you can't even use a continuous language to talk about it?
> >
> > But we do have the concepts of "continuous" in our ordinary languages:
> >
> > "creates (with persistence)"
> > "Q while P"
> > "R begins (and continues)"
> >
> > These are all primitive concepts and describe processes in which
> > continuity is implied, if not compelled. There are others.
>
> Yeah, good point. We certainly see existence as a continuous effect.
>
> > It is our formal and informal logic languages that do not contain those
> > concepts or words, and in which we must "fake it" in order to describe
> > continuous processes while using terms that refer only to the discrete.
>
> But, just because the language contains a discrete symbol that means
> continuous, does that mean we can sense it? I don't think so.

All symbols are necessarily discrete. The symbol for infinity (lazy
eight), for instance, means something that is not easy to picture and
needs quite a few words to describe. It is the idea behind the symbol,
what the symbol stands for, the concept that we have agreed upon, that
has meaning for us. "Continuous" is defined as: without break,
cessation, or join, and denotes that the continuity or union of parts
is absolute and uninterrupted. Note that parts of a continuum can be
considered, as well as the whole.


> The idea of
> something being continuous (like my hand is continuously attached to my
> arm) simply means that every time we make a discrete test for the condition,
> we expect it to still be true. And we make the assumption that, if we had
> tested it between times, the result would have turned out to be the same.


Yes, that's what we do in our computers and in sampling theory. We
make the assumption that something is continuous if it is there
whenever we choose to test its presence. Due to the marvelous stability
of our electronic logic and memories, it works almost all the time.

In the time domain, "continuous" does not mean simply "every time
you test it, it (some condition) is there." Continuous means that it
is there continuously, without break, whether you test it or not. But
in order to know for sure that it is there continuously over time (if
that is a goal), something or someone must be continuously watching it
to make sure it didn't disappear (or change) during the desired
period. There is no such capability embedded in our current computers.


> My claim is that we have in fact defined the concept of continuous from
> our discrete observations. We aren't actually able to directly observe or
> experience the continuous. That continuous things exist is only a concept
> we use (though a very useful one).


Continuousness appears to be a more sophisticated concept than that of
discreteness and is most probably, as you suggest, the result of
building upon discrete observations. The concept of continuity, once
conceived however, is fundamental and easily apprehended. "Without
break" is plain and simple enough.


> > > Your argument breaks down to the fact that you believe things are
> > > continuous, yet you have no way to prove that they are. The only thing
> > > that seems continuous in this universe is motion (or position in
> > > space). But how do we know that all motion is not actually discrete at
> > > a level below that which we can currently detect?
> >
> > If motion were not inherently continuous, what moves an object, once in
> > motion, from place to place? What is the mechanism? Where are the
> > "clockworks" such as we use for showing movie films, or those to
> > increment to the next instruction and fetch from memory in a computer?
>
> We have no way to prove if those clockworks are there - but hidden from us.
> What for example would be different, if the universe we exist in was
> nothing but a discrete computer simulation, with an accuracy down to a
> billionth of a billionth of a second (or something many orders of magnitude
> below what we are able to measure)? Since our ability to measure the very
> small (motion and time and space) is limited, how do we know it's not
> discrete at some level below what we can measure?


While we don't know that for sure, what we do know from man-devised
number systems is that the discrete natural numbers (and as many other
ordinary number sets as you care to name) can each and all comfortably
reside in the continuum of real numbers, but the converse is
impossible. The continuum can then be seen as the superior and
encompassing concept.

It is logical to carry this theme onward such that space is the
continuum that can contain discrete objects, and time is the continuum
that can contain discrete events. After Einstein, we have space-time as
the continuum that can contain discrete objects and events. This
explanation is the most probable and least complex and is a natural
step. You will find most physicists hold this view.


> The assumption that it is truly continuous, is an assumption that if we
> keep making higher and higher resolution observations, that there would
> always be another point on the line between the last two - which would have
> a location consistent with the assumption that the underlying system was in
> fact continuous. But we can't prove that it is continuous, unless we can
> make an infinite number of measurements each higher than the previous -
> which we can't do.
>
> We can't even say that matter exists continuously. For all we know, it
> might be blinking in and out of existence at a frequency far beyond what we
> (being part of the same universe) could ever detect.
>

> > Newton's and Leibniz' calculus is continuous and founded on the idea of
> > continuity of space and time. Likewise Einstein's relativity and
> > spacetime. The idea of evaluation is to quantify, but whenever we do,
> > we get a discrete result. The discrete came out of the continuous,
> > without which it could not have existed.
> >
> > For a more complete treatment on continuous space and time, see:
> > http://arxiv.org/ftp/physics/papers/0310/0310055.pdf
>
> Sure, the concept of continuity is easy to understand. It's the concept
> that there is always another point on the line between the last two. But
> this is a concept expressed with discrete symbols, being generated by a
> human which operates on a discrete signaling system.
>
> How can you prove that anything in this universe is actually continuous?
> You can't. It's only an assumption we make because the assumption works at
> many of the levels we have to deal with from day to day.


We can provide a demonstration. As a test, we can operate as if the
assumptions of continuity of space, and especially of time, were true.
If things in general work better while following those assumptions,
then you might be willing to say that it was a worthwhile exercise and
worthy of serious thought and follow-up experiments.


> > > We can understand time in this concept by believing that for any two
> > > points in time, there must always be a point between those two points.
> > > But how can we prove this is true for time? We can't. No matter what
> > > tool you use to attempt to measure time, it will always have a limit of
> > > resolution, and below that limit, we don't know if time is continuous
> > > or not.
> >
> > It is easy to prove:
> >
> > Neither space nor time have any substance in themselves so on
> > fundamental principles they can't be discrete.
>
> How can something exist if it has no substance? If you believe space has
> no substance, then maybe you aren't in fact talking about the space itself,
> but instead, the inverse of existence? If I have a cup on the table, and
> them take it off the table, I can say that the cup does not exist on the
> table. I can call that "non cup", a puc. Then I can say things like, the
> puc has no substance, therefor it can't be discrete. But see how stupid
> that is? I'm now talking about a puc as if it were something real, as if
> it existed. And I'm trying to make the argument that this non-cup has some
> type of property.
>
> Space is nothing but a puc. It's a lack of existence. Time is the same
> thing. Time doesn't exist. Time and space are only concepts created by
> humans to explain the difference between the things that do exist.


There is no doubt that space-time exists. Think of space as a
near-infinite potential for accepting and housing matter and energy.
Distance in space is the separation between bits of matter. Think of
time as a potential for accepting and containing events. Duration in
time is the separation between events.


> > > The concept of continuity is only a theory which we find useful as a
> > > foundation for understanding the universe. All our understanding of
> > > the universe however is done using discrete systems. Some are of such
> > > high resolution that we call them analog or continuous, but yet we
> > > can't prove if they are truly analog, or just discrete systems of a
> > > resolution beyond our ability to find the place where there is no point
> > > between the two other points.
> >
> > The reason we are limited in our understanding is the absence of
> > appropriate tools. There is only one continuous mathematics, the
> > calculus, whereas there are numerous discrete systems. All logic taught
> > in our universities are discrete varieties, with no form of continuous
> > logic available.
>
> Fuzzy logic is a form of continuous logic. But that's not important.


Fuzzy logic is descended from set theory and is a discrete technique.


> It's true that since most our tools are discrete, it tends to make us think
> more in those terms. And it's also true that motion and space and time all
> seem to be continuous to us - which means for almost all practical
> applications, it's safe and valid to just assume they are continuous. But
> we have no way of knowing if they are, or if they just seem to be.


We have acted in the past as if space and time are continuous. If we
act so in the future, our machines will be smarter and give us less
trouble than they do now.

Curt Welch

unread,
Oct 5, 2006, 8:04:45 PM10/5/06
to
"Charlie" <cmoe...@aol.com> wrote:
> Curt Welch wrote:

> Yes, that's what we do in our computers and in sampling theory. We
> make the assumption that something is continuous if it is there
> whenever we choose to test its presence. Due to the marvelous stability
> of our electronic logic and memories, it works almost all the time.
>
> In the time domain, "continuous" does not mean simply "every time
> you test it, it (some condition) is there." Continuous means that it
> is there continuously, without break, whether you test it or not. But
> in order to know for sure that it is there continuously over time (if
> that is a goal), something or someone must be continuously watching it
> to make sure it didn't disappear (or change) during the desired
> period. There is no such capability embedded in our current computers.

Well, you must remember of course that computers are made up of the same
matter we are. There's no limit to the type of IO devices you can create
out of hardware (like a D to A converter which generates an analog
continuous function). The IO devices don't have to be purely discrete.
They are simply required to translate anything they find into discrete
symbols in order to interface with the digital parts of the hardware.

So how is a human really all that different since we translate all notions
of continuous into discrete language symbols so that we can think and talk
about them?

> Continuousness appears to be a more sophisticated concept than that of
> discreteness and is most probably, as you suggest, the result of
> building upon discrete observations. The concept of continuity, once
> conceived however, is fundamental and easily apprehended. "Without
> break" is plain and simple enough.

Yes, but it doesn't require that anything actually be continuous to create,
or to understand, or to demonstrate, that plain and simple idea. I can
make a computer display what looks to us like a continuous line on a graph,
when in fact it's nothing but discrete dots. All you need is the ability
to do low resolution sampling of an effect of much higher resolution to
demonstrate the idea of continuous.

The fact that we, as humans, use the concept of continuous in our everyday
life doesn't mean any of the things we talk about as being continuous, or
any of the examples we used, when we learned the concept, are in fact
continuous. All we really know is that they act continuous to a higher
resolution than we are able to sample them.

> While we don't know that for sure, what we do know from man-devised
> number systems is the discrete natural numbers (and as many other
> ordinary number sets you care to name) can each and all comfortably
> reside in the continuum of real numbers, but the converse is
> impossible. The continuum can then be seen as the superior and
> encompassing concept.

Yes, but math is not reality. Math is nothing but man playing with a
discrete language of absolutes to see what type of interesting fairy tales
we can create with this language. The properties of math are NOT the
properties of the universe. They are the properties of a language.

Math starts out by defining a few absolute symbols that reference nothing
(or which can reference anything). Then it builds on that by defining more
language which references the absolute symbols themselves. Then it gets
really interesting, when statements start to reference itself (recursion).
But nowhere in all that math does reality exist. It's just what happens
when you start to play with language.

Math is so useful to us because when we find parts of reality to align it
with, then anything we can derive in the language alone, we know will also
apply to that aspect of reality.

Negative numbers result from reversing the process of addition. When this
reverse process specifies a result that no addition could have produced, we
created more language to give names to the different classes of problems
which had no answer. Those place-holder names are the negative numbers.
Negative numbers just represent a class of reverse addition problems that
have no answer - but which are useful to name, because if you change the
problem by adding a large enough number to it, it does have an answer.

Multiplication is the repeated application of addition. Division is what
you get by trying to reverse the process of multiplication. The rational
numbers are what you get when you are given a reverse multiplication that
has no answer. They act as place holders for classes of reverse
multiplication problems which have no answer. So again, we created more
language to represent these problems which had no answer, but which could
produce an answer simply by multiplying by the right number.
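
A toy sketch of that "reverse operation" framing (illustrative Python;
the function name is made up):

    def reverse_add(total, addend):
        # "Which natural number x satisfies x + addend == total?"
        for x in range(total + 1):
            if x + addend == total:
                return x
        return None  # no natural-number answer; "-2" just names the problem

    print(reverse_add(5, 3))  # 2
    print(reverse_add(3, 5))  # None: the problem we name "-2"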

Square roots are another special type of reverse multiplication. And
again, it creates another class of problems which you can specify, but
which have no answer. So, again, another type of language was created to
give a shorthand name for these problems which had no answer: imaginary
numbers.

Then along came the infinite series. These are procedures, like addition,
and multiplication, and division, except they never end. Since they can't
end, the procedure again has no answer. So we create more language
to describe these procedures - we call them the irrational numbers - like
pi.

All this language started with the idea of discrete symbols, and then built
layer, after layer, after layer, of new language, which described the old
language, which at the root, described the discrete symbols we started
with. (1, 2, 3, ...).

All our mathematical concepts of continuous are based on infinite series
which can never end in reality, but can be carried out as far as needed for
any use.

Though we think of something like pi as a number of infinite precision, it
in fact is just a name we have given to these very discrete algorithms
which produce a sequence of finite-length numbers, which get closer and
closer to what we think of as the number pi, but can never reach it because
no algorithm can run forever. So again, pi, the infinite number, doesn't
exist. Pi is just more language we created to give a name to all these
different infinite procedures.
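
As an illustration (a minimal Python sketch, assuming nothing beyond the
standard Leibniz series for pi): each run is a discrete, terminating
procedure, and "pi" names the idealized procedure that never ends.

    def pi_leibniz(n_terms):
        # Partial sum of 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
        total = 0.0
        for k in range(n_terms):
            total += (-1) ** k / (2 * k + 1)
        return 4 * total

    # Each call halts after finitely many steps and emits a finite
    # approximation; only the never-ending limit is what we call "pi".
    for n in (10, 1000, 100000):
        print(n, pi_leibniz(n))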

So, let's look at numbers. We created the decimal numbering system as yet
another new language, where each place holder is a special name for 10x the
place holder to its right (aka it's a new "language" where, instead of
making up more unique symbols, we create them logically by using the old
symbols (0 to 9) in different positions in the written or spoken
language).

Every integer is defined by a finite sequence of language.

But, when we extend this notation to try and label the rationals, the
language becomes infinite for some numbers (1/3). That is, the conversion
algorithm never ends and keeps producing more and more output.

And likewise, the irrationals, like pi, were already specified by infinite
algorithms and likewise, they produce decimal language for as long as you
let them run.

So, now, using the decimal language we created, we have defined the whole
numbers, which are all finite in length, but which, if you produced a
program to try and list them all, would never end. And we have the
rationals and irrationals, which can be represented in this same language,
except they are represented by algorithms that never end, even when trying
to produce a single number like 1/3, or pi.

So, what happens when we try to answer the question of whether there are
more reals than whole numbers? Well, to start with, it's really bogus to
ask the question, because neither is countable - both are produced by
processes which never end. But we can ask the question in a similar but
different way which isn't as bogus. We can ask if there's a one-to-one
mapping between the two sets. And with the simple diagonalization
argument, we show that such a mapping can't exist, because we can specify
yet another infinite algorithm to produce a number which we can prove can't
be in the table.
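
Here is the diagonal construction sketched in Python, with a small finite
table standing in for the infinite one:

    def diagonal(table):
        # Build a digit sequence that differs from row i at position i,
        # so it cannot equal any row of the table.
        return [(row[i] + 1) % 10 for i, row in enumerate(table)]

    table = [
        [1, 4, 1, 5],
        [2, 7, 1, 8],
        [3, 3, 3, 3],
        [0, 5, 0, 0],
    ]
    print(diagonal(table))  # [2, 8, 4, 1] -- differs from every row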

But, if you step back and closely examine what has been proven, it breaks
down to this. Integers are defined as an algorithm producing an infinite
number of answers (all the integers). Reals are defined as an algorithm
producing an infinite number of algorithms, each of which produces an
infinite answer. In other words, the reals are defined by an infinite
amount of algorithm, whereas the integers are defined by a finite amount
of algorithm.

So, when you said above:

The continuum can then be seen as the superior and
encompassing concept.

Where that comes from is, in fact, the idea that an infinite-sized algorithm
is always guaranteed to encompass a finite-sized algorithm. That's all it
means. And that's something so obvious it doesn't need to be
said.

All our concepts, including "continuum", are expressed, and understood, in
terms of discrete symbols and algorithms - exactly like what computers do.

> It is logical to carry this theme onward such that space is the
> continuum that can contain discrete objects, and time is the continuum
> that can contain discrete events.

Yes, but it's pointless to do that since our entire concept of "continuum"
was created, and defined, in terms of discrete events. It comes down to
meaning nothing more than a list of N discrete events will always contain a
list of N-1 discrete events.

> After Einstein, we have space-time as
> the continuum that can contain discrete objects and events. This
> explanation is the most probable and least complex and is a natural
> step. You will find most physicists hold this view.

Yes, and it's useful. But it was created and defined by our ability to
process discrete events. And the language of math is the language of
absolutes we created, which is exactly analogous to what we try to make our
computers do (produce absolute discrete answers to absolute discrete
questions: 1+1=2, T and T is T, etc.). If we can understand all this in
terms of discrete events and symbols, then why can't computers understand
it in terms of discrete events?

> > How can you prove that anything in this universe is actually
> > continuous? You can't. It's only an assumption we make because the
> > assumption works at many of the levels we have to deal with from day to
> > day.
>
> We can provide a demonstration. As a test, we can operate as if the
> assumptions of continuity of space, and especially of time, were true.
> If things in general work better while following those assumptions,
> then you might be willing to say that it was a worthwhile exercise and
> worthy of serious thought and follow-up experiments.

Except all our concepts of continuous are defined on the idea of a
procedure which never terminates. And since you can't run a procedure
which never terminates to completion, it's impossible to test to see if
anything is continuous. We can only test it to the limits of our patience
to wait for the procedure to end. We can only say that for as long as we
have tried to test, we have found nothing to prove it's not continuous (aka
we have found no conflict with the infinite procedure).

All our behavior, unlike math, is based on probabilities, not absolutes.
Absolute truth in fact doesn't exist. We do the things that are most
likely to produce rewards. If a test has never proven wrong, then that's
the highest probability we can measure. It means the correct optimal
behavior is to assume a probability very near 1 that it will happen the
same way on the next test. This is why it's valid to act as if many of
these things will always be true - even if we can never know if they are in
fact an absolute truth.

We assume gravity will always work like it does. But tomorrow, gravity
might suddenly increase by 10% everywhere in the universe for reasons we
have no way to understand. But it's not optimal behavior to assume that
might happen, because all our evidence suggests the probability of that
happening is so low, that we should do nothing other than assume gravity
will continue to work the same.

> > Space is nothing but a puc. It's a lack of existence. Time is the same
> > thing. Time doesn't exist. Time and space are only concepts created
> > by humans to explain the difference between the things that do exist.
>
> There is no doubt that space-time exists. Think of space as a
> near-infinite potential for accepting and housing matter and energy.
> Distance in space is the separation between bits of matter. Think of
> time as a potential for accepting and containing events. Duration in
> time is the separation between events.

Space and time are derived concepts. They exist only in our head as models
of the things that do exist - the events by which time and space are marked
for us - which are the pulse signals that flow through our brain, and their
timing relative to the way our neurons change over time.

> > > > The concept of continuity is only a theory which we find useful as
> > > > a foundation for understanding the universe. All our understanding
> > > > of the universe however is done using discrete systems. Some are
> > > > of such high resolution that we call them analog or continuous, but
> > > > yet we can't prove if they are truly analog, or just discrete
> > > > systems of a resolution beyond our ability to find the place where
> > > > there is no point between the two other points.
> > >
> > > The reason we are limited in our understanding is the absence of
> > > appropriate tools. There is only one continuous mathematics, the
> > > calculus, whereas there are numerous discrete systems. All logic
> > > taught in our universities are discrete varieties, with no form of
> > > continuous logic available.
> >
> > Fuzzy logic is a form of continuous logic. But that's not important.
>
> Fuzzy logic is descended from set theory and is a discrete technique.

As is all of math. But Fuzzy logic is based on real numbers (isn't it?)
and as such, includes our concept of continuous.

> > It's true that since most our tools are discrete, it tends to make us
> > think more in those terms. And it's also true that motion and space
> > and time all seem to be continuous to us - which means for almost all
> > practical applications, it's safe and valid to just assume they are
> > continuous. But we have no way of knowing if they are, or if they just
> > seem to be.
>
> We have acted in the past as if space and time are continuous. If we
> act so in the future, our machines will be smarter and give us less
> trouble than they do now.

Yes. I agree. But I don't agree there is any evidence that even a
standard computer, using only its very discrete system of symbol
manipulation, would not be able to do exactly what the brain is doing. All
our concepts of continuous are defined using discrete symbols and discrete
algorithms. Though we have a simple intuitive understanding of continuous,
both in formal systems like math and in life in general, I claim we have
no proof that any of that understanding came from a non-discrete system.

Though spikes are clearly discrete in their existence, they carry with them
a very continuous-like time dimension - they can happen at any point in
time. And though neurons are discrete event generators, when they generate
their spikes is the result of an extremely high-resolution, time-based
chemical reaction which is fair to describe as a continuous analog
calculation. So there is no doubt that the brain is full of processes
which if not continuous, are of such high resolution, we might as well call
them continuous, or analog.

But, at the same time, it's fairly easy to prove that all these analog
processes have limited amounts of resolution, below which there exists only
noise. For the brain to be able to produce a valid reaction to something
like a cat, it must be able to separate the "cat" data from the internal
noise produced in these analog processes. It has to filter out, and
ignore, most of the noise, or else the brain could never produce useful
reactions to the data. This puts a limit on the S/N ratios of all the
signals in the brain, which means that they can all be translated to a
purely discrete signaling system without losing any of the useful
information. It's only a question of how much resolution is needed (how
many bits) to duplicate the function.
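
As a rough illustration of that bit-counting point (a sketch using the
standard ~6.02 dB-per-bit rule of thumb for quantization noise; the
example figures are hypothetical):

    import math

    def bits_needed(snr_db):
        # Detail below the noise floor is unrecoverable anyway, so roughly
        # snr_db / 6.02 bits capture all the useful information.
        return math.ceil(snr_db / 6.02)

    print(bits_needed(40))  # 7 bits for a 40 dB signal-to-noise ratio
    print(bits_needed(96))  # 16 bits, roughly CD quality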

Sound is very analog, but we have no problem using digital systems to
process it as long as we include A/D and D/A hardware to do the conversions
when needed.

Likewise, the motion of the arm or a leg of a robot is a very continuous
and analog process, yet we have no problem using digital processors to
control the actions of our robotic arms down to resolutions far greater
than any human can reproduce (less than 1/1000 of an inch for example).

Our chemical processing plants and oil refineries are a very analog and
continuous process, yet we have no problem using digital controllers to
regulate their behavior.

Evolution figured out the same thing we figured out. There are many
advantages to using discrete based signaling systems to regulate the
behavior of machines - even machines which produce very continuous
behaviors. The fact that we are experimenting with discrete based
signaling systems to attempt to produce a continuous process in a machine
like a robot, just shows that we are on the same path evolution took.

Human and animal behavior seems very free flowing and continuous to us -
especially compared to the typical behavior of our robots which tend to
produce these odd jerky mechanical motions. So I can understand why people
might have a gut feeling that humans are more analog in nature than our
computers. But we have plenty of digital systems like CD players that
produce analog behavior at such a high resolution that we can't tell
whether it's analog or digital. You can't even look at the output of a CD
player with all the tools in the world and tell if it was produce by an
analog source or a digital source because the the D to A converter is an
analog device which is simply having it's behavior regulated by digital
signals. Humans and animals work the same way. All their muscles and legs
are analog devices controlled by discrete systems just like the motors on
our robots are analog devices controlled by discrete signals. Using a
discrete signal processor like a computer is the perfect way to emulate
what evolution has done in us.

The problem with our current AI programs and robotics hardware is not in
the fact that they're based on discrete symbol-processing systems. It's
because we don't produce behavior decisions at a high enough resolution
to duplicate that free-flowing behavior the brain produces. Brains
are made up of millions of neurons acting together (and competing with each
other) in order to make the final behavior decisions. It's like having
millions or billions of different IF statements in the code working
together to create the final high-resolution decision sets that direct our
behaviors. But when we program our robots, we all too often create code
made up of only a handful of IF statements (if we are closer than 2 inches
to the wall, stop, back up, and turn left). We hand-code hundreds of IF
statements to produce the behavior in our typical robots, whereas the
brain has had billions of its IF statements tuned by learning to produce
our behavior.

That very smooth and continuous behavior humans produce comes from the work
of a statistical learning process shaping billions of logical IF statements
in the "code" that controls our behavior. The only way to equal it without
using a statistical learning process to create it, is to hand code billions
of if statements. That's not something that is going to happen any time
soon in our robots unless we develop the statistical learning processes
needed to do it for us.
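
To caricature the difference (a hypothetical sketch, not any real robot's
code):

    def hand_coded_policy(distance_to_wall):
        # A typical hand-written controller: a handful of IF statements.
        if distance_to_wall < 2.0:
            return "stop, back up, turn left"
        return "go forward"

    # A learned policy is the same idea scaled up: billions of thresholds
    # like the 2.0 above, each tuned by a reward signal instead of by hand.
    print(hand_coded_policy(1.5))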

min...@media.mit.edu

unread,
Oct 6, 2006, 6:18:26 PM10/6/06
to
Nice discussion, Curt. One might also discuss the Löwenheim-Skolem
theorem. (See
http://en.wikipedia.org/wiki/Löwenheim-Skolem_theorem.) Intuitively,
this demonstrates that every mathematical model has an equivalent
countable model, to which the same set of logical statements applies. In
this sense, we don't really need the kind of continuity that would
seem to require the whole set of "real" numbers -- but only the ones
that can be described by the sentences of the logical theory that one
is using.

Why does a countable model always exist? Because any set of symbolic
statements using a finite alphabet is countable -- and can only talk
about the parts of the model that can be described by statements in
that set. At least, that's what I think the L-S theorem means... .
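
Stated compactly (my rough paraphrase of the downward direction, assuming
a countable first-order language):

    T \text{ has an infinite model} \;\Rightarrow\;
    \exists\, \mathfrak{M}\, \bigl(|\mathfrak{M}| = \aleph_0 \,\wedge\,
    \mathfrak{M} \models T\bigr)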

As for the title of this thread, it's true that I don't "get it."
Mainly, of course, because that "it" is false!

AlphaOmega

unread,
Oct 7, 2006, 5:03:51 PM10/7/06
to

"Curt Welch" <cu...@kcwc.com> wrote in message
news:20060927175504.563$0...@newsreader.com...

> "Charlie" <cmoe...@aol.com> wrote:
>> Curt Welch wrote:
>> >> Can you name a single function, that we know as a fact that the brain
>> >> is performing, that we know is impossible to duplicate with a digital
>> >> system?
>>
>> Don wrote:
>> >Curt asked for a problem that "we know is impossible" for a computer.
>>
>> OK, I'll put it this way:
>>
>> We know that simulating continuous time (an actual continuum) is
>> impossible for any digital machine. This is because all digital
>> machines progress from time to time (in either simulated or actual
>> sensed existence) in step-by-step (or frame-by-frame) fashion. There is
>> no continuity in the machine because nothing can be sensed/experienced
>> by it in the between-times. It does not even know if it is "alive"
>> or "dead" from one time to the next.
>>
>> We also know that continuity is precisely the way that most humans
>> think about and experience ongoing existence. If there is a slight or
>> great discontinuity such as sleep or a period of unconsciousness, the
>> human is invariably aware of it at his next waking moment. Just ask
>> anyone.
>
> But the human brain processes all information using spike signals.

This is incorrect. The brain processes information using manifold mechanisms,
including EM, B and E-fields, electrochemical interactions and the soup that
is neurochemicals.

>This
> means, the brain is totally unaware of what is happening between the
> spikes.

That is also incorrect.

--
Posted via a free Usenet account from http://www.teranews.com

Charlie

unread,
Oct 8, 2006, 1:20:46 PM10/8/06
to

Curt Welch wrote:

< ...snip...>

< That very smooth and continuous behavior humans produce comes from the work
< of a statistical learning process shaping billions of logical IF statements
< in the "code" that controls our behavior. The only way to equal it without
< using a statistical learning process to create it is to hand-code billions
< of IF statements. That's not something that is going to happen any time
< soon in our robots unless we develop the statistical learning processes
< needed to do it for us.


So your answer for robotics is: having numerous enough parallel and
overlapping, but non-synchronous, sensory and motor channels to at
least appear to sense and respond over continuous time, much the same
as humans do.

But that is one of my points, thank you.


Another of my points:

Discrete processes do not work without relying upon the concepts and
reality of continuity. Continuity is as vital to discrete process as
friction is to mechanics. (Can't work without it.)

Curt Welch

unread,
Oct 8, 2006, 9:14:20 PM10/8/06
to
"AlphaOmega" <OmegaZ...@yahoo.com> wrote:
> "Curt Welch" <cu...@kcwc.com> wrote in message
> news:20060927175504.563$0...@newsreader.com...

> > But the human brain processes all information using spike signals.


>
> This is incorrect. The brain processes information using manifold
> mechanisms, including EM, B and E-fields, electrochemical interactions
> and the soup that is neurochemicals.

What are EM, B, and E fields?

There are certainly various chemical signal paths at work which are
independent of the spike signals but I'm not aware that sensory information
or effector outputs are conveyed through those pathways. Are they?

> > This
> > means, the brain is totally unaware of what is happening between the
> > spikes.
>
> That is also incorrect.

In what way?

It is true that a lack of a spike in a time period does in fact convey some
information, as the probability of when the next spike will happen is
constantly changing as time passes with no spike received. But it's the
spike itself (when it does show up) that carries the real information.

Curt Welch

unread,
Oct 11, 2006, 1:26:54 AM10/11/06
to
"Charlie" <cmoe...@aol.com> wrote:
> Curt Welch wrote:
>
> < ...snip...>
>
> < That very smooth and continuous behavior humans produce comes from the work
> < of a statistical learning process shaping billions of logical IF statements
> < in the "code" that controls our behavior. The only way to equal it without
> < using a statistical learning process to create it is to hand-code billions
> < of IF statements. That's not something that is going to happen any time
> < soon in our robots unless we develop the statistical learning processes
> < needed to do it for us.
>
> So your answer for robotics is: having numerous enough parallel and
> overlapping, but non-synchronous, sensory and motor channels to at
> least appear to sense and respond over continuous time, much the same
> as humans do.
>
> But that is one of my points, thank you.

I don't know anyone that would think otherwise. In order to make a machine
act like a human, you are damn well going to have to have multiple parallel
sensory and effector paths (at least logically).

> Another of my points:
>
> Discrete processes do not work without relying upon the concepts and
> reality of continuity. Continuity is as vital to discrete process as
> friction is to mechanics. (Can't work without it.)

The continuity I think is most important for creating human like
behavior in a machine is a very fine grained sense of time. Human behavior
is very temporal in nature - we can throw a rock and hit a moving target
while balancing ourselves on two legs. These types of behaviors can't be
learned by a machine unless it gets a very high resolution sense of time
which is an integral part of all its behavior. Our musical and rhythm
ability is a nice abstract way to demonstrate our temporal behavior - but
it's there because we can't do simple things like walk, or grab a moving
object without it.
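
A toy illustration of why decision rate matters for such behaviors
(made-up numbers; dt is the interval between behavior decisions):

    def lead_error(target_speed, dt):
        # Aiming at where the target was last sampled misses by roughly
        # the distance it moves in one decision interval.
        return target_speed * dt

    print(lead_error(10.0, 0.5))    # 5.0 m miss at 2 decisions per second
    print(lead_error(10.0, 0.001))  # 0.01 m miss at 1000 decisions per second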

Charlie

unread,
Oct 11, 2006, 2:20:48 PM10/11/06
to
Curt Welch wrote:
> "Charlie" <cmoe...@aol.com> wrote:

> > Discrete processes do not work without relying upon the concepts and
> > reality of continuity. Continuity is as vital to discrete process as
> > friction is to mechanics. (Can't work without it.)
>
> The continuity I think is most important for creating human like
> behavior in a machine is a very fine grained sense of time. Human behavior
> is very temporal in nature - we can throw a rock and hit a moving target
> while balancing ourselves on two legs. These types of behaviors can't be
> learned by a machine unless it gets a very high resolution sense of time
> which is an integral part of all its behavior. Our musical and rhythm
> ability is a nice abstract way to demonstrate our temporal behavior - but
> it's there because we can't do simple things like walk, or grab a moving
> object without it.


It is not just "a fine grained sense of time" (which implies a high
sampling rate). This is already being done in computers and it still
isn't enough. The frame problem caused by a "wait" between actions is
not solved in linear-sequential computers no matter how rapid the
sampling rate. The sense of time necessary for human-like behavior and
apprehension in machines is that of continuous time.

That is *time without interruption*.

Curt Welch

unread,
Oct 11, 2006, 6:45:50 PM10/11/06
to
"Charlie" <cmoe...@aol.com> wrote:
> Curt Welch wrote:
> > "Charlie" <cmoe...@aol.com> wrote:
>
> > > Discrete processes do not work without relying upon the concepts and
> > > reality of continuity. Continuity is as vital to discrete process as
> > > friction is to mechanics. (Can't work without it.)
> >
> > The continuity I think is most important for creating human like
> > behavior in a machine is a very fine grained sense of time. Human
> > behavior is very temporal in nature - we can throw a rock and hit a
> > moving target while balancing ourselves on two legs. These types of
> > behaviors can't be learned by a machine unless it gets a very high
> > resolution sense of time which is an integral part of all its behavior.
> > Our musical and rhythm ability is a nice abstract way to demonstrate
> > our temporal behavior - but it's there because we can't do simple
> > things like walk, or grab a moving object without it.
>
> It is not just "a fine grained sense of time" (which implies a high
> sampling rate). This is already being done in computers and it still
> isn't enough.

Isn't enough for what?

> The frame problem caused by a "wait" between actions is
> not solved in linear-sequential computers no matter how rapid the
> sampling rate. The sense of time necessary for human-like behavior and
> apprehension in machines is that of continuous time.
>
> That is *time without interruption*.

What evidence do you have to support that position?

Don Geddis

unread,
Oct 12, 2006, 3:56:53 PM
to
"Charlie" <cmoe...@aol.com> wrote on 11 Oct 2006 11:2:
> The frame problem caused by a "wait" between actions is not solved in
> linear-sequential computers no matter how rapid the sampling rate.

Well, that's true, but not important.

The "frame problem" is an issue in the logical description of state
transitions, and attempting to write compact descriptions of them with
defaults. You are correct that increasing the granularity of time doesn't
solve the frame problem.
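
For anyone who hasn't met it: in a logical encoding of action, each
operator needs axioms not only for what it changes but for every fluent
it leaves alone. A toy sketch in Python (the fluents are my own made-up
examples, not any standard formalization):

state = {"holding": False, "light_on": True, "door_open": False}

def pick_up(s):
    # Effect axiom: 'holding' becomes true.
    # Frame axioms: the light and the door are untouched - and a logical
    # description must say so explicitly, one statement per unaffected
    # fluent, for every action (including a do-nothing "wait"). That
    # bookkeeping, not the tick rate, is what the frame problem is about.
    return {"holding": True,
            "light_on": s["light_on"],
            "door_open": s["door_open"]}

state = pick_up(state)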

But that has nothing to do with the topic you're discussing, which is whether
discrete devices can deal with the real world (vs. whether some kind of
"continuous" processing is required). The frame problem is not a generic
problem of discrete devices.

-- Don
_______________________________________________________________________________
Don Geddis http://don.geddis.org/ d...@geddis.org
The first thing was, I learned to forgive myself. Then, I told myself, "Go
ahead and do whatever you want, it's okay by me."
-- Deep Thoughts, by Jack Handey

Charlie

unread,
Oct 12, 2006, 4:32:04 PM
to
Curt Welch wrote:
> "Charlie" <cmoe...@aol.com> wrote:
> > Curt Welch wrote:
> > > "Charlie" <cmoe...@aol.com> wrote:
> >
> > > > Discrete processes do not work without relying upon the concepts and
> > > > reality of continuity. Continuity is as vital to discrete process as
> > > > friction is to mechanics. (Can't work without it.)
> > >
> > > The continuity I think is most important for creating human-like
> > > behavior in a machine is a very fine-grained sense of time. Human
> > > behavior is very temporal in nature - we can throw a rock and hit a
> > > moving target while balancing ourselves on two legs. These types of
> > > behaviors can't be learned by a machine unless it has a very
> > > high-resolution sense of time which is an integral part of all its
> > > behavior. Our musical and rhythmic ability is a nice abstract way to
> > > demonstrate our temporal behavior - but it's there because we can't
> > > do simple things like walk, or grab a moving object, without it.
> >
> > It is not just "a fine-grained sense of time" (which implies a high
> > sampling rate). This is already being done in computers and it still
> > isn't enough.
>
> Isn't enough for what?


Isn't enough to solve the 'frame problem caused by a "wait" between
actions', as I expressed in my 10-11-06 post, and below.


> > The frame problem caused by a "wait" between actions is
> > not solved in linear-sequential computers no matter how rapid the
> > sampling rate. The sense of time necessary for human-like behavior and
> > apprehension in machines is that of continuous time.
> >
> > That is *time without interruption*.


> What evidence do you have to support that position?


I gave it to you already (above) and in my 10-11-06 post.
But, at the risk of being repetitive, here it is again:
*The frame problem caused by a "wait" between actions is not solved in
linear-sequential computers no matter how rapid the sampling rate.*
(See the Yale shooting problem for the complete description.)
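
For concreteness, here is the Yale shooting timeline written as a plain
Python simulation (a toy encoding of my own; the fluent names are
illustrative). The simulation trivially produces the intended outcome -
the puzzle in Hanks and McDermott's 1987 paper arises only when the same
story is axiomatized in logic with defaults, where minimizing change
permits an unintended model in which "loaded" lapses during the wait
instead of "alive" lapsing after the shot:

state = {"alive": True, "loaded": False}

def load(s):  return {"alive": s["alive"], "loaded": True}
def wait(s):  return dict(s)          # intended reading: nothing changes
def shoot(s): return {"alive": s["alive"] and not s["loaded"],
                      "loaded": False}

for act in (load, wait, shoot):       # load, wait a while, then shoot
    state = act(state)
print(state)   # {'alive': False, 'loaded': False} - the intended model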

Charlie

unread,
Oct 12, 2006, 7:17:25 PM
to

Don Geddis wrote:
> "Charlie" <cmoe...@aol.com> wrote on 11 Oct 2006 11:2:
> > The frame problem caused by a "wait" between actions is not solved in
> > linear-sequential computers no matter how rapid the sampling rate.
>
> Well, that's true, but not important.
>
> The "frame problem" is an issue in the logical description of state
> transitions, and attempting to write compact descriptions of them with
> defaults. You are correct that increasing the granularity of time doesn't
> solve the frame problem.
>
> But that has nothing to do with the topic you're discussing, which is whether
> discrete devices can deal with the real world (vs. whether some kind of
> "continuous" processing is required).


The problem we're really discussing is what is required to produce AI,
or mechanized human-level intelligence. I contend that the same
apprehension of continuous time that humans use to survive is required
in machines to reach a level of ability comparable to humans.

We are not discussing "continuous processing," which for a digital
machine is an oxymoron: linear-sequential processing is a frame-by-frame
affair - discrete steps, with nothing continuous about it.


> The frame problem is not a generic
> problem of discrete devices.


But it is a problem generic to those devices, though not to humans or
analog computers, simply because discrete devices such as digital
computers (essentially all universal Turing machines) execute their
algorithms from step to step, or frame to frame. Humans are not limited
to frame-by-frame operation.
