
Conference: Loebner and his prize


Kenny Tilton

Nov 2, 2002, 6:34:53 PM
Speaking of the Turing Test, Loebner of the Loebner prize gave a talk.
The Loebner prize is a kinda Turing test competition. One thing came up
I had not thought of. What about typos by the human? A program would not
normally make that mistake. Makes me wonder how one would normalize
responses during a TT to eliminate that factor. Loebner said not to; make the
programmers have their program "fake" typos.

Loses sight of the end goal of the TT, viz, when can we say computers
are smart? otoh, I have long maintained that if we ever create truly
intelligent systems they will screw up just like us, as in temporarily
blank on their own phone numbers. otoh^2, hardcoding an app to randomly
mistype is not how we mistype, so it is a waste of the AI community's
time to have them worrying about that. otoh^3, I got the feeling Loebner
is more interested in seeing his name in print than in promoting AI with
his prize.

I am reminded of (I think it was) Winograd discussing shrdlu.

http://hci.stanford.edu/~winograd/shrdlu/

He himself says (somewhere, not above) shrdlu is a hard-coded stunt
hence uninteresting as AI.

--

kenny tilton
clinisys, inc
---------------------------------------------------------------
""Well, I've wrestled with reality for thirty-five years, Doctor,
and I'm happy to state I finally won out over it.""
Elwood P. Dowd

sv0f

Nov 2, 2002, 10:47:34 PM
In article <3DC4628...@nyc.rr.com>, Kenny Tilton <kti...@nyc.rr.com> wrote:

>The Loebner prize is a kinda Turing test competition. One thing came up
>I had not thought of. What about typos by the human? A program would not
>normally make that mistake. Makes me wonder how one would normalize
>responses during a TT to eliminate that factor. Loebner said not to; make the
>programmers have their program "fake" typos.

Along these lines, it is my understanding that Loebner prize entries
use algorithms to simulate the rhythms of human typing.
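
Something like this toy Common Lisp sketch (my own invention, not code
from any actual entry) gives the flavor: emit each character with a
jittered delay, and occasionally fake a typo that then gets
"backspaced" away.

;; Toy simulation of human typing rhythm; the delay and typo-rate
;; parameters are made up for illustration.
(defun jittered-delay (mean)
  "Sleep for roughly MEAN seconds, +/- 50% uniform jitter."
  (sleep (* mean (+ 0.5 (random 1.0)))))

(defun type-like-a-human (string &key (mean-delay 0.15) (typo-rate 0.02))
  (loop for char across string
        do (when (< (random 1.0) typo-rate)
             ;; fake typo: a random wrong letter, a pause, a backspace
             (princ (code-char (+ (char-code #\a) (random 26))))
             (jittered-delay mean-delay)
             (princ #\Backspace))
           (princ char)
           (force-output)
           (jittered-delay mean-delay)))

;; (type-like-a-human "hello, judge") prints its argument haltingly.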

>I am reminded of (I think it was) Winograd discussing shrdlu.
>
> http://hci.stanford.edu/~winograd/shrdlu/
>
>He himself says (somewhere, not above) shrdlu is a hard-coded stunt
>hence uninteresting as AI.

Winograd appears to have had a more general loss of faith in AI,
so it is not surprising that he has this view of his own early
efforts.

Lovecraftesque

Nov 2, 2002, 11:49:02 PM
On Sat, 02 Nov 2002 15:34:53 -0800, Kenny Tilton wrote:

> What about typos by the human? A program would not
> normally make that mistake. Makes me wonder how one would normalize
> responses during a TT to eliminate that factor. Loebner said not to; make the
> programmers have their program "fake" typos.

Given the sophistication of the contending programs, that's
something they might just as well forget about. You might like to
try and have a conversation with ALICE, the winner during the last
two years; just a couple of questions will allow you to see that the
intelligence embodied in that code is nil, with or without typos.
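
The guts of such a program are scarcely deeper than this toy matcher
(hypothetical rules of my own, far cruder than ALICE's actual AIML,
but the same species of trick): scan for a known pattern, emit the
canned reply, and deflect when nothing matches.

;; Toy ALICE-style responder: substring patterns mapped to canned replies.
(defparameter *rules*
  '(("YOUR NAME" . "My name is Alice.")
    ("HELLO"     . "Hi there! How are you?")
    ("WHY"       . "Why not?")))

(defun respond (input)
  (let ((input (string-upcase input)))
    (or (loop for (pattern . reply) in *rules*
              when (search pattern input) return reply)
        ;; the classic deflection when no pattern fires
        "That is interesting. Tell me more.")))

;; (respond "Hello!")                => "Hi there! How are you?"
;; (respond "Why is the sky blue?")  => "Why not?"
;; (respond "Do you understand me?") => "That is interesting. Tell me more."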

Kenny Tilton

Nov 3, 2002, 12:57:02 AM

I think the egg came before the chicken. Winograd was able to be quite
specific (and convincing) about how un-AIish shrdlu was, so it was not
because of a general loss of faith; it fed that loss. He talked about
how it was just a bunch of hard-coded patches to handle case after case
the NL engine got wrong. That is, they got a decent result within the
incredibly small context of a block world by hard-coding, meaning
nothing they did touched on the general case of NL machine
comprehension. shrdlu would never scale.

Kenny Tilton

Nov 3, 2002, 1:03:03 AM

yup. just had a disappointing chat with the old girl:

http://alicebot.org/

Paul Wallich

Nov 3, 2002, 5:23:57 PM
In article <3DC4BB95...@nyc.rr.com>,
Kenny Tilton <kti...@nyc.rr.com> wrote:

>sv0f wrote:
>> In article <3DC4628...@nyc.rr.com>, Kenny Tilton <kti...@nyc.rr.com>
>> wrote:
>>>I am reminded of (I think it was) Winograd discussing shrdlu.
>>>
>>> http://hci.stanford.edu/~winograd/shrdlu/
>>>
>>>He himself says (somewhere, not above) shrdlu is a hard-coded stunt
>>>hence uninteresting as AI.
>>
>>
>> Winograd appears to have had a more general loss of faith in AI,
>> so it is not surprising that he has this view of his own early
>> efforts.
>
>I think the egg came before the chicken. Winograd was able to be quite
>specific (and convincing) about how un-AIish was shrdlu, so it was not
>because of a general loss of faith, it fed that loss. He talked about
>how it was just a bunch of hard-coded patches to handle case after case
>the NL engine got wrong. ie, They got a decent result within the
>incredibly small context of a block world by hard-coding, meaning
>nothing they did touched on the general case of NL machine
>comprehension. shrdlu would never scale.

FSVO "scale". One of the things that has always amazed me is the
implicit assumption that intelligence should be easy. It takes 15 years
of fairly hard work, with input devices optimized over millennia and
programming methods optimized over centuries, to make wetware
natural-language-processing machines that work worth a darn. Yet some
graduate student operating through maybe a millionth of the bandwidth is
supposed to do the job in six months or a year, and if it doesn't, that
approach "doesn't scale" or just "isn't promising". (Also note that lots
of the decisions about what scaled or didn't were made when the memories
were about a thousandth the size they are now.)

That's one of the things I think Doug Lenat has right -- not that Cyc or
its immediate derivatives will necessarily be magic solutions to the
intelligence problem, but that the scale of the undertaking is to be measured
in decades and dozens of people, just like the education of human beings.

If nothing else, this is necessary to get around the catch-22 that if
you can understand how it works, it's not intelligence -- once the
programs get big enough and have been around long enough, no one will
understand how they work.

paul

Erik Naggum

Nov 3, 2002, 6:50:04 PM
* Paul Wallich <p...@panix.com>

| If nothing else, this is necessary to get around the catch-22 that if you
| can understand how it works, it's not intelligence -- once the programs
| get big enough and have been around long enough, no one will understand
| how they work.

Maybe they eventually can tell us how we work?

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Paul Wallich

Nov 3, 2002, 11:42:00 PM
In article <32453562...@naggum.no>, Erik Naggum <er...@naggum.no>
wrote:

>* Paul Wallich <p...@panix.com>
>| If nothing else, this is necessary to get around the catch-22 that if you
>| can understand how it works, it's not intelligence -- once the programs
>| get big enough and have been around long enough, no one will understand
>| how they work.
>
> Maybe they eventually can tell us how we work?

They will, but we won't be able to understand the explanation.

Erik Naggum

Nov 3, 2002, 11:52:42 PM
* Paul Wallich

| They will, but we won't be able to understand the explanation.

I see a religion in the making...

Rob Warnock

Nov 4, 2002, 4:21:25 AM
Erik Naggum <er...@naggum.no> wrote:
+---------------

| * Paul Wallich
| | They will, but we won't be able to understand the explanation.
|
| I see a religion in the making...
+---------------

42.


-Rob

-----
Rob Warnock, PP-ASEL-IA <rp...@rpw3.org>
627 26th Avenue <URL:http://www.rpw3.org/>
San Mateo, CA 94403 (650)572-2607

Jens Axel Søgaard

Nov 4, 2002, 8:22:31 AM
Erik Naggum wrote:
> * Paul Wallich
> | They will, but we won't be able to understand the explanation.
>
> I see a religion in the making...

http://www.dina.dk/~abraham/religion/

Barry Margolin

Nov 4, 2002, 11:24:25 AM
In article <pw-002DFD.17...@reader1.panix.com>,
Paul Wallich <p...@panix.com> wrote:
>FSVO "scale". One of the things that has always amazed me is the
>implicit assumption that intelligence should be easy. It takes 15 years
>of fairly hard work, with input devices optimized over millennia and
>programming methods optimized over centuries, to make wetware
>natural-language-processing machines that work worth a darn.

You're comparing to humans, but AI can't even match animal intelligence. A
newborn horse can walk and run better than any man-made robot. A dog can
understand spoken language about as well as any computer. An ant colony is
more adaptive than a spreadsheet (whenever I see an ant on my windshield
when I'm driving, I always wonder what it does when I get to my
destination, miles away from its colony).

Yes, evolution has had millennia to work out the bugs, but we think that we
have the advantage of using intelligent design (evolution being the "blind
watchmaker") that should allow us to accomplish things more quickly.

I think what most confounds AI is that intelligence is not just a few
clever algorithms. Living things are a rube-goldberg patchwork of kludges
and special cases. Computer programmers like to look for elegant, general
solutions, but intelligence is probably an emergent property of a large
collection of specific rules (Minsky's "Community of Minds").

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Paul Wallich

Nov 4, 2002, 12:26:56 PM
In article <Zaxx9.6$zB....@paloalto-snr1.gtei.net>,
Barry Margolin <bar...@genuity.net> wrote:

>In article <pw-002DFD.17...@reader1.panix.com>,
>Paul Wallich <p...@panix.com> wrote:
>>FSVO "scale". One of the things that has always amazed me is the
>>implicit assumption that intelligence should be easy. It takes 15 years
>>of fairly hard work, with input devices optimized over millennia and
>>programming methods optimized over centuries, to make wetware
>>natural-language-processing machines that work worth a darn.
>
>You're comparing to humans, but AI can't even match animal intelligence. A
>newborn horse can walk and run better than any man-made robot. A dog can
>understand spoken language about as well as any computer. An ant colony is
>more adaptive than a spreadsheet (whenever I see an ant on my windshield
>when I'm driving, I always wonder what it does when I get to my
>destination, miles away from its colony).

It dies. That's another option that's not really available to most AI
systems -- it's called "brittleness" and gets you sent to the back of
the VC line. (There's an enormous amount of stuff that can be said about
any "AI" or robotics project that involves manipulating a real body or
the real world, but most of it is about applying the wrong design
principles and trying to substitute billions of cycles for the right
parts and a little knowledge of mechanical engineering.)

>Yes, evolution has had millennia to work out the bugs, but we think that we
>have the advantage of using intelligent design (evolution being the "blind
>watchmaker") that should allow us to accomplish things more quickly.

Of course it should. But by what factor? I think it's expecting a bit
too much that a relative handful of people working with limited
resources should be able to recapitulate evolution at 20 million times
the rate that nature managed (even allowing for mass extinctions in both
versions). If you're willing to settle for a mere million times speedup,
we should have a few centuries to go yet...

>I think what most confounds AI is that intelligence is not just a few
>clever algorithms. Living things are a rube-goldberg patchwork of kludges
>and special cases. Computer programmers like to look for elegant, general
>solutions, but intelligence is probably an emergent property of a large
>collection of specific rules (Minsky's "Community of Minds").

I agree with you there. Not necessarily rules in the sense that they're
commonly thought of, or even as Minsky wrote it, but the idea that
intelligent behavior arises from a pile of kluges makes sense to me, if
only from the fact that it's in inspired sloppiness that natural
intelligence seems to show itself most clearly.

paul

Tim Bradshaw

Nov 4, 2002, 12:57:37 PM
* Paul Wallich wrote:
> I agree with you there. Not necessarily rules in the sense that
> they're commonly thought of, or even as Minsky wrote it, but the
> idea that intelligent behavior arises from a pile of kluges makes
> sense to me, if only from the fact that it's in inspired sloppiness
> that natural intelligence seems to show itself most clearly.

And of course, once you've piled up enough kludges to build your `AI'
you find that you can't understand how it works, because it's like a
huge goto-ridden FORTRAN program, except thousands of times bigger.
So what, actually, have you gained?

--tim

sv0f

Nov 4, 2002, 1:21:22 PM
In article <Zaxx9.6$zB....@paloalto-snr1.gtei.net>, Barry Margolin
<bar...@genuity.net> wrote:

>You're comparing to humans, but AI can't even match animal intelligence. A
>newborn horse can walk and run better than any man-made robot. A dog can
>understand spoken language about as well as any computer.

This might be giving the dog too much credit (and the AI folks too little).

>Yes, evolution has had millennia to work out the bugs, but we think that we
>have the advantage of using intelligent design (evolution being the "blind
>watchmaker") that should allow us to accomplish things more quickly.

I thought "intelligent design" was the euphemism creationists use to
make their account of the origins of life more scientific sounding?

>I think what most confounds AI is that intelligence is not just a few
>clever algorithms. Living things are a rube-goldberg patchwork of kludges
>and special cases. Computer programmers like to look for elegant, general
>solutions, but intelligence is probably an emergent property of a large
>collection of specific rules (Minsky's "Community of Minds").

"Society of Mind".

Barry Margolin

Nov 4, 2002, 2:18:17 PM

Yes, evolution produces the ultimate in spaghetti code (it's even worse,
because often the same module is used for multiple, unrelated purposes,
such as the use of parts of the urinary tract for reproduction). It's like
some of the worst programmers, continually tweaking the program until it
works, rather than designing it right in the first place (by its nature,
evolution can only perform minute, incremental modification).
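
In code terms, evolution's entire repertoire amounts to something like
the following sketch (a deliberately crude hill-climber of my own; the
bit-vector genome and the fitness function are toy stand-ins):

;; Mutate one random bit; keep the change whenever it doesn't hurt.
(defun mutate (genome)
  "Return a copy of GENOME (a bit vector) with one random bit flipped."
  (let* ((copy (copy-seq genome))
         (i (random (length copy))))
    (setf (sbit copy i) (- 1 (sbit copy i)))
    copy))

(defun tweak-until-it-works (genome fitness generations)
  (loop for candidate = (mutate genome)
        repeat generations
        when (>= (funcall fitness candidate) (funcall fitness genome))
          do (setf genome candidate))
  genome)

;; Example: a thousand minute, incremental tweaks toward all ones.
;; (tweak-until-it-works
;;   (make-array 16 :element-type 'bit :initial-element 0)
;;   (lambda (g) (count 1 g))
;;   1000)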

And the folks in the Human Genome Project are trying to decompile this
mess. My suspicion is that intelligence is as complex as all the other
biologic processes combined, so it's no wonder that it's so hard for us to
duplicate (we can barely emulate human walking in machines).

Paul Wallich

Nov 4, 2002, 2:18:54 PM
In article <ey3fzuh...@cley.com>, Tim Bradshaw <t...@cley.com>
wrote:

>* Paul Wallich wrote:

You've probably gained some understanding of a bunch of the subtasks
required to implement "intelligence", but mostly you've gained the thing
itself and any of the things it can do. There are lots of tasks you
might want an embodied or disembodied AI to perform, and the tour de
force (ahem) of having added another class of sentient to the short list
currently known to exist should be worth something in itself...

paul until it hires a lawyer, of course

Barry Margolin

Nov 4, 2002, 2:33:58 PM
In article <pw-03D474.12...@reader1.panix.com>,
Paul Wallich <p...@panix.com> wrote:
>In article <Zaxx9.6$zB....@paloalto-snr1.gtei.net>,
> Barry Margolin <bar...@genuity.net> wrote:
>>Yes, evolution has had millennia to work out the bugs, but we think that we
>>have the advantage of using intelligent design (evolution being the "blind
>>watchmaker") that should allow us to accomplish things more quickly.
>
>Of course it should. But by what factor? I think it's expecting a bit
>too much that a relative handful of people working with limited
>resources should be able to recapitulate evolution at 20 million times
>the rate that nature managed (even allowing for mass extinctions in both
>versions). If you're willing to settle for a mere million times speedup,
>we should have a few centuries to go yet...

I think the high expectations resulted from the success we've had in
engineering physical devices. We can make machines that are hundreds of
times stronger or faster than anything produced in nature. This has given
us the conceit that anything nature could do, we could do better.

As it turns out, one area that nature seems to have us beat is in designing
flexibility into its products. We can make devices that excel in their
narrow area of expertise, but even the simplest, single-celled creatures
are better at adapting to unfamiliar circumstances than any computer
program or robot.

I think this is actually because this flexibility is added in the same way
in both nature and man-made things: when a situation comes up that couldn't
be handled, you add support for it. In engineering, for instance, building
codes were modified when we started building skyscrapers in
earthquake-prone areas. And this is where evolution's millennia give it
the advantage: environmental changes and species migrations mean that any
genome in existence now has survived thousands of new situations, and been
tweaked to handle them all. You can't design for the unforeseeable (if you
could, it wouldn't be unforeseeable), you can only react to it. Evolution
also had the advantage that the changes it needed to adapt to occurred
slowly; it never had to deal with quantum changes like upgrading (or worse,
switching) operating systems.

Barry Margolin

Nov 4, 2002, 2:38:07 PM
In article <none-04110...@129.59.212.53>,
sv0f <no...@vanderbilt.edu> wrote:
>In article <Zaxx9.6$zB....@paloalto-snr1.gtei.net>, Barry Margolin
><bar...@genuity.net> wrote:
>
>>You're comparing to humans, but AI can't even match animal intelligence. A
>>newborn horse can walk and run better than any man-made robot. A dog can
>>understand spoken language about as well as any computer.
>
>This might be giving the dog too much credit (and the AI folks too little).

OK, voice recognition and NL understanding have made some improvements
recently, so it's probably a little better than a dog now. But until the
past few years, the best you could generally do with voice recognition was
the equivalent of "sit" and "give me your paw".

>>Yes, evolution has had millennia to work out the bugs, but we think that we
>>have the advantage of using intelligent design (evolution being the "blind
>>watchmaker") that should allow us to accomplish things more quickly.
>
>I thought "intelligent design" was the euphemism creationists use to
>make their account of the origins of life more scientific sounding?

I intentionally used their phrase.

>>I think what most confounds AI is that intelligence is not just a few
>>clever algorithms. Living things are a rube-goldberg patchwork of kludges
>>and special cases. Computer programmers like to look for elegant, general
>>solutions, but intelligence is probably an emergent property of a large
>>collection of specific rules (Minsky's "Community of Minds").
>
>"Society of Mind".

Yeah, I knew something didn't sound right there.

Paul Wallich

Nov 4, 2002, 3:14:32 PM
In article <z0Ax9.19$zB....@paloalto-snr1.gtei.net>,
Barry Margolin <bar...@genuity.net> wrote:

>In article <none-04110...@129.59.212.53>,
>sv0f <no...@vanderbilt.edu> wrote:
>>In article <Zaxx9.6$zB....@paloalto-snr1.gtei.net>, Barry Margolin
>><bar...@genuity.net> wrote:
>>
>>>You're comparing to humans, but AI can't even match animal intelligence. A
>>>newborn horse can walk and run better than any man-made robot. A dog can
>>>understand spoken language about as well as any computer.
>>
>>This might be giving the dog too much credit (and the AI folks too little).
>
>OK, voice recognition and NL understanding have made some improvements
>recently, so it's probably a little better than a dog now. But until the
>past few years, the best you could generally do with voice recognition was
>the equivalent of "sit" and "give me your paw".

The artificial systems were much better, however, at learning "don't chew
on my shoes." (It was quite early that the voice-controlled vehicle
folks learned to recognize various exclamations as synonyms for "Stop
all.")

At this point, constrained-domain speech-to-speech translators with
vocabularies of 5,000 words and up are common, as are voice-response
systems with similar ranges. One of the interesting places where people
are making progress is in conversational repair -- if you count the
number of times humans have to repeat themselves or restate what they're
trying to say in a given conversation, you'd realize that we're not so
good at NLP either, we've just got lots of tricks to work around the
errors.

(In general, that's a place where AI efforts have bogged down, because
the programmer's tendency is to look for a route to a complete solution
rather than to attain a goal by interaction with other agents or with
the environment. But that's another rant.)

paul

sv0f

Nov 4, 2002, 4:54:19 PM
In article <pw-E391CC.15...@reader1.panix.com>, Paul Wallich
<p...@panix.com> wrote:

>The artificial systems were much better, however, at learning "don't chew
>on my shoes."

Yeah, I can't remember the last time my NLP software destroyed
any of my footwear. ;-)

sv0f

Nov 4, 2002, 4:52:48 PM
In article <z0Ax9.19$zB....@paloalto-snr1.gtei.net>, Barry Margolin
<bar...@genuity.net> wrote:

>>>Yes, evolution has had millennia to work out the bugs, but we think that we
>>>have the advantage of using intelligent design (evolution being the "blind
>>>watchmaker") that should allow us to accomplish things more quickly.
>>
>>I thought "intelligent design" was the euphemism creationists use to
>>make their account of the origins of life more scientific sounding?
>
>I intentionally used their phrase.

D'oh. I need to update my own NLP software!

Tim Bradshaw

Nov 4, 2002, 5:25:45 PM
* Barry Margolin wrote:
> And the folks in the Human Genome Project are trying to decompile
> this mess.

I seem to remember reading that there are bits of DNA which make sense
and are translated starting from several off-by-one locations, as well
as, possibly, read backwards.
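
The off-by-one idea is easy to see in code (a toy Common Lisp sketch of
mine, with a made-up sequence): the same string of bases parses into
different codons depending on the starting offset, and into yet another
set when read off the complementary strand.

;; Split DNA (a string) into 3-letter codons starting at offset FRAME.
(defun codons (dna frame)
  (loop for i from frame below (- (length dna) 2) by 3
        collect (subseq dna i (+ i 3))))

;; Reverse the strand and complement each base.
(defun reverse-complement (dna)
  (map 'string
       (lambda (base)
         (ecase base (#\A #\T) (#\T #\A) (#\G #\C) (#\C #\G)))
       (reverse dna)))

;; (codons "ATGGCATTC" 0)                      => ("ATG" "GCA" "TTC")
;; (codons "ATGGCATTC" 1)                      => ("TGG" "CAT")
;; (codons (reverse-complement "ATGGCATTC") 0) => ("GAA" "TGC" "CAT")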

> My suspicion is that intelligence is as complex as all the other
> biologic processes combined, so it's no wonder that it's so hard for
> us to duplicate (we can barely emulate human walking in machines).

Can we emulate it? If we can emulate a human, can we emulate a really
agile animal like a dog or a cat? I remember once spending some time
trying to make our dog fall over unintentionally and discovering that
I couldn't - she could rush around completely madly, leap in the air
&c &c but would essentially never fall over. And she was
significantly less rapid than some of the dogs she talked to. I fell
over quite a lot while trying this...

I have a theory that the hard part about simulating a human is
simulating a dog, and the hard part of that is simulating a sheep, and
the hard part of that is ... And basically a lot of AI has been done
completely backwards by trying to peel off this thin layer at the top
that we set so much store by (because it's all we have to stop us
being chimps).

--tim

Tim Bradshaw

Nov 4, 2002, 5:28:20 PM
* Paul Wallich wrote:
> You've probably gained some understanding of a bunch of the subtasks
> required to implement "intelligence", but mostly you've gained the
> thing itself and any of the things it can do. There are lots of
> tasks you might want an embodied or disembodied AI to perform, and
> the tour de force (ahem) of having added another class of sentient
> to the short list currently known to exist should be worth something
> in itself...

Whether it can do useful stuff depends on whether it's smarter than
people, on whether it can be disembodied, and on whether you feel
qualms about getting it to do things that are very dangerous, I
guess. The tour-de-force bit seems like an adequate reason on its
own though!

--tim

Fred Gilham

Nov 4, 2002, 7:44:53 PM

> I thought "intelligent design" was the euphemism creationists use to
> make their account of the origins of life more scientific sounding?

Having talked to certain prominent individuals in this field, I can
say with confidence that they at least think they are trying to do
science. That is, they are attempting to find a way to do experiment,
quantification, and testable theories in this area.

Unfortunately some people have the attitude that it's impossible to do
this, and therefore anything these people are doing must be bogus.
It's sadly amusing to watch so-called scientists engage in the same kind
of orthodox persecution of those outside the orthodoxy that religious
people were so often (rightly) criticized for. Human nature really
does seem to be a constant in some areas. People were talking about
pearls before swine recently. Another bit of insight from that same
remarkable personage is "...no one puts new wine into old wineskins;
if he does, the new wine will burst the skins and it will be spilled,
and the skins will be destroyed. But new wine must be put into fresh
wineskins. And no one after drinking old wine desires new; for he
says, 'The old is good.'" I think Thomas Kuhn wrote something about
this.

I think that if people think they can do science in the area of
intelligent design, they should be judged on whether it's good
science, not whether it's orthodox science.

Since this is related to the Lisp conference, I wonder what people
think of Greenblatt's talk, which started off on the wrong foot
(criticizing CLOS) and then got *more* controversial from there? I
would have been interested in his argument that the "signal
recognition particle" must have been engineered --- unfortunately he
didn't get to it, just mentioned it.

--
Fred Gilham gil...@csl.sri.com
Communism is a murderous failure.
Socialism is communism with movie stars.

Duane Rettig

Nov 4, 2002, 10:00:01 PM
no...@vanderbilt.edu (sv0f) writes:

> In article <Zaxx9.6$zB....@paloalto-snr1.gtei.net>, Barry Margolin
> <bar...@genuity.net> wrote:
>
> >Yes, evolution has had millennia to work out the bugs, but we think that we
> >have the advantage of using intelligent design (evolution being the "blind
> >watchmaker") that should allow us to accomplish things more quickly.
>
> I thought "intelligent design" was the euphemism creationists use to
> make their account of the origins of life more scientific sounding?

Ironically, I've always thought of "intelligent design" as the euphemism
that evolutionists use to explain the anomalies in their observations
without having to use the "G" word.

Duane Rettig (not speaking for my company in this article)

ozan s yigit

Nov 4, 2002, 10:45:14 PM
Fred Gilham:

> Since this is related to the Lisp conference, I wonder what people
> think of Greenblatt's talk, which started off on the wrong foot
> (criticizing CLOS) and then got *more* controversial from there?

this is the second time this was brought up, so one must ask: is it
because people think CLOS is beyond criticism (i certainly hope not)
or is it the way he criticised it? very curious. does anyone have
*detailed* notes online and willing to share?

oz
--
music is the space between the notes. -- claude debussy

Fred Gilham

Nov 5, 2002, 12:49:04 AM

> this is the second time this was brought up, so one must ask: is it
> because people think CLOS is beyond criticism (i certainly hope not)
> or is it the way he criticised it? very curious. does anyone have
> *detailed* notes online and willing to share?

He said that multiple inheritance was provably incorrect, that
multimethods were bad because they had dark corners, and that Lisp
should have used an object system like Objective C.

That was pretty much the gist of it --- his remarks on this were very
brief.

He took one question on the topic and didn't answer it.

--
Fred Gilham gil...@csl.sri.com
Ironically, not only does the imposition of relativism on society
discard the need for tolerance by eliminating all significant
differences, it also breeds intolerance of those who disagree with
relativism. That is, under the guise of tolerance, those who make
exclusive claims to truth are branded intolerant.

sv0f

Nov 5, 2002, 10:30:17 AM
In article <ey34rax...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

>I have a theory that the hard part about simulating a human is
>simulating a dog, and the hard part of that is simulating a sheep, and
>the hard part of that is ... And basically a lot of AI has been done
>completely backwards by trying to peel off this thin layer at the top
>that we set so much store by (because it's all we have to stop us
>being chimps).

I disagree. I have little interest in those aspects of human
cognition that other organisms can perform, and even perform
much better than we can. My interest is in high-level
cognition: language, problem solving, mental imagery, etc.
Having God's own knowledge of the ins and outs of a dog would
tell us nothing about those varieties of cognition unique to
humans.

Then again, I consider myself a cognitive psychologist on this
issue, and not a natural scientist or a phenomenologist.

Barry Margolin

Nov 5, 2002, 10:54:02 AM
In article <none-05110...@129.59.212.53>,
sv0f <no...@vanderbilt.edu> wrote:
>In article <ey34rax...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:
>
>>I have a theory that the hard part about simulating a human is
>>simulating a dog, and the hard part of that is simulating a sheep, and
>>the hard part of that is ... And basically a lot of AI has been done
>>completely backwards by trying to peel of this thin layer at the top
>>that we set so much store by (because it's all we have to stop us
>>being chimps).
>
>I disagree. I have little interest in those aspects of human
>cognition that other organisms can perform, and even perform
>much better than we can. My interest is in high-level
>cognition: language, problem solving, mental imagery, etc.
>Having God's own knowledge of the ins and outs of a dog would
>tell us nothing about those varieties of cognition unique to
>humans.

But are these things really "unique" to humans, or do we just do them much
better than other animals? Maybe understanding animal cognition would be a
helpful stepping stone to understanding human cognition, just as mapping
the genomes of E. coli, the fruitfly, and the mouse has provided useful
background in understanding the human genome. Certainly there are some
aspects of our cognition that are unique (like the symbolic processing that
enables our use of language), but they had to have evolved from more
primitive processes that we can find in animals, and these are likely to be
easier for us to study and understand.

Tim Bradshaw

Nov 5, 2002, 11:07:41 AM
* none wrote:
> I disagree. I have little interest in those aspects of human
> cognition that other organisms can perform, and even perform
> much better than we can. My interest is in high-level
> cognition: language, problem solving, mental imagery, etc.

I think you need to distinguish between what is interesting to you and
what is needed to understand the problem. It would be nice if these
were the same thing, but they often aren't: for instance I have no
interest *at all* in being competent at integration and differential
equations, yet it turns out, if I want to do physics, I need to be
*really* competent at these things, however boring they may be. You
can't just understand the `interesting bits' of QM or GR, you have to
understand the enormous tedious infrastructure too. That's why
general-interest books on physics are so superficial and irritating.

> Having God's own knowledge of the ins and outs of a dog would
> tell us nothing about those varieties of cognition unique to
> humans.

Well, so you claim. Language seems at least plausible, but I'm unsure
why you write off the mental imagery and problem solving abilities of
dogs. Even if you can, why is it clear that the unique abilities of
humans are not heavily dependent on the stuff that dogs (or chimps)
can do as well?

There's a physics analogy here, too actually. QM is the classic
example of a theory which simply tore down the roots of physics:
indeed physics has been divided ever since into `classical' and
`quantum' physics (and relativity is classical). There are concepts
in QM which just never cropped up until it was discovered. And yet,
if you want to really understand it, the first thing you need to do is
get *really good* at advanced formalisms for classical mechanics,
especially the Hamiltonian formalism and the calculus of variations.
If you don't understand that stuff, you'll never be able to hack QM.

--tim

sv0f

Nov 5, 2002, 12:07:37 PM
In article <ey3of94...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

>* none wrote:
>> I disagree. I have little interest in those aspects of human
>> cognition that other organisms can perform, and even perform
>> much better than we can. My interest is in high-level
>> cognition: language, problem solving, mental imagery, etc.
>
>I think you need to distinguish between what is interesting to you and
>what is needed to understand the problem. It would be nice if these
>were the same thing, but they often aren't.

This is precisely why I used the words I did. There is no royal
road to understanding human cognition. The one that approaches
"from below" is just one of many.

>> Having God's own knowledge of the ins and outs of a dog would
>> tell us nothing about those varieties of cognition unique to
>> humans.
>
>Well, so you claim. Language seems at least plausible, but I'm unsure
>why you write off the mental imagery and problem solving abilities of
>dogs. Even if you can, why is it clear that the unique abilities of
>humans are not heavily dependent on the stuff that dogs (or chimps)
>can do as well?

Kohler demonstrated the surprisingly good problem solving abilities
of apes. But these pale in comparison to what humans can do.
A scientific account of how an ape uses a stick to spear a piece of
food it can't directly reach -- an example of goal-driven and tool-using
behavior -- would be nice. It says nothing about how, say, Pythagoras
proved his theorem about right triangles.

What makes you so sure there's a continuum here?

>There's a physics analogy here, too actually. QM is the classic
>example of a theory which simply tore down the roots of physics:
>indeed physics has been divided ever since into `classical' and
>`quantum' physics (and relativity is classical). There are concepts
>in QM which just never cropped up until it was discovered. And yet,
>if you want to really understand it, the first thing you need to do is
>get *really good* at advanced formalisms for classical mechanics,
>especially the Hamiltonian formalism and the calculus of variations.
>If you don't understand that stuff, you'll never be able to hack QM.

There is another physics analogy that is more telling in my opinion.
As you know, there is strife even within the physics community on the
singular claim of the reductionist approach. This was evident
recently when Anderson and other condensed matter physicists battled
the particle physicists over the merits of sinking a huge proportion
of the US federal grant money available for physics into the
Superconducting Super Collider project. Anderson (and his colleagues)
won. He had actually articulated his position on reductionism within
and across sciences years earlier:

"The main fallacy in this kind of thinking is that the reductionist
hypothesis does not by any means imply a 'constructionist' one: the
ability to reduce everything to simple fundamental laws does not
imply the ability to start from those laws and reconstruct the
universe. In fact, the more the elementary particle physicists tell
us about the nature of the fundamental laws, the less relevance they
seem to have to the very real problems of the rest of science, much
less to those of society. The constructionist hypothesis breaks down
when confronted with the twin difficulties of scale and complexity.
The behavior of large and complex aggregates of elementary particles,
it turns out, is not to be understood in terms of a simple
extrapolation of the properties of a few particles. Instead, at
each level of complexity entirely new properties appear, and the
understanding of the new behaviors requires research which I think
is as fundamental in its nature as any other. That is, it seems to
me that one may array the sciences roughly linearly in a hierarchy,
according to the idea: The elementary entities of science X obey the
laws of science Y. [...] But this hierarchy does not imply that science
X is 'just applied Y.' At each stage entirely new laws, concepts, and
generalizations are necessary, requiring inspiration and creativity
to just as great a degree as in the previous one. Psychology is not
applied biology, nor is biology applied chemistry."

(p. 393 of Anderson, P. W. (1972). More is different: Broken symmetry
and the nature of hierarchical structure of science. Science, 177,
393-396.)

If understanding human cognition is the capstone of physical science,
then *maybe* one can take the reductionist route, try to model lower
organisms, and hope that these pieces will sum up to an adequate
account. But I doubt it.

If, however, an understanding of human cognition is viewed as a
foundation for understanding the fruits of human cognition -- how
we create literature, art, mathematics, and science -- then I
think there is nothing to be learned from studying dogs and almost
nothing to be learned from studying non-human primates.

sv0f

Nov 5, 2002, 1:02:20 PM
In article <uQRx9.3$bl4....@paloalto-snr1.gtei.net>, Barry Margolin
<bar...@genuity.net> wrote:

>But are these things really "unique" to humans, or do we just do them much
>better than other animals? Maybe understanding animal cognition would be a
>helpful stepping stone to understanding human cognition, just as mapping
>the genomes of E. coli, the fruitfly, and the mouse has provided useful
>background in understanding the human genome.

This is an empirical question. It will be answered by how fruitful
the bottom-up approach to cognition (e.g., cognitive neuroscience)
turns out to be. My hunch, and it is only a hunch, is that this
approach will be critical for helping us understand lower level
aspects of cognition, such as sensation, perception, and action.
My hunch is that it will provide only a small part of the puzzle
that is higher level cognition.

But these are just my hunches, just as those who claim that the
bottom-up approach is sufficient are offering just their opinions,
and not the only approach licensed by science.

The first chapter of Marr's (1982) book _Vision_ is very useful
in helping one sort through these issues.

sv0f

Nov 5, 2002, 1:09:35 PM
In article <u7ela02...@snapdragon.csl.sri.com>, Fred Gilham
<gil...@snapdragon.csl.sri.com> wrote:

>> I thought "intelligent design" was the euphemism creationists use to
>> make their account of the origins of life more scientific sounding?
>
>Having talked to certain prominent individuals in this field, I can
>say with confidence that they at least think they are trying to do
>science. That is, they are attempting to find a way to do experiment,
>quantification, and testable theories in this area.
>

[...]


>
>I think that if people think they can do science in the area of
>intelligent design, they should be judged on whether it's good
>science, not whether it's orthodox science.

If there is a scientific theory called "intelligent design", then
it should be judged on its scientific merits. The problem that I
have seen, and this is from talking with non-scientist fundamentalist
Christians about the topic, is that many proponents of "intelligent
design" do not want to play by the rules of science. That is, they
want their account accorded the status of scientific theory, and
thus taught in science classes across the US, but they refuse to
formalize the theory, resolve its inconsistencies with known
empirical regularities, make novel predictions, remove its
teleological components, etc. There are many philosophies of science,
but perhaps only Feyerabend's "methodological anarchism" would accord
the versions that I have seen of "intelligent design" the status of
scientific theory. This is why I termed it a mere euphemism.

Matthew Danish

Nov 5, 2002, 2:19:23 PM
On Tue, Nov 05, 2002 at 11:07:37AM -0600, sv0f wrote:
> me that one may array the sciences roughly linearly in a hierarchy,
> according to the idea: The elementary entities of science X obey the
> laws of science Y. [...] But this hierarchy does not imply that science
> X is 'just applied Y.' At each stage entirely new laws, concepts, and
> generalizations are necessary, requiring inspiration and creativity
> to just as great a degree as in the previous one. Psychology is not
> applied biology, nor is biology applied chemistry."
>
> (p. 393 of Anderson, P. W. (1972). More is different: Broken symmetry
> and the nature of hierarchical structure of science. Science, 177,
> 393-396.)

This was an interesting read, and I agree very much that there are
phenomena, in a science such as biology, which would be extremely hard
to reason about in an encompassing science, such as physics.

But you cannot let yourself forget that ultimately all the observations
you make will have some sort of underlying pattern. Otherwise you end
up doing what physicians might term "treating the symptoms and not the
disease;" that being a superficial way of dealing with a sick patient.

> If understanding human cognition is the capstone of physical science,
> then *maybe* one can take the reductionist route, try to model lower
> organisms, and hope that these pieces will sum up to an adequate
> account. But I doubt it.
>
> If, however, an understanding of human cognition is viewed as a
> foundation for understanding the fruits of human cognition -- how
> we create literature, art, mathematics, and science -- then I
> think there is nothing to be learned from studying dogs and almost
> nothing to be learned from studying non-human primates.

There is a common problem solving technique that goes: if you have a
complex problem, solve a similar but simpler problem first, and then use
the tools you created for the simpler problem to help you solve the
harder problem.

Understanding "lower organisms" may give us some of the tools needed to
understand ourselves; but in no way am I asserting that it will give us
an understanding of ourselves. I just don't think you should be so
quick to dismiss a potential opportunity. Lifeforms on Earth are not
that radically different from each other. Human beings and other
animals may share quite a lot in common, and ignoring that would only be
to your own detriment.

--
; Matthew Danish <mda...@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."

Michael Sullivan

Nov 5, 2002, 2:26:06 PM
Tim Bradshaw <t...@cley.com> wrote:

I disagree. It will depend much more on how expensive it is to produce.
There are currently *huge* numbers of low or no-skill tasks which must
be done, and for which there is serious downward pressure on wages.
Only humans (or perhaps very well trained chimps) can perform these
tasks, but just about *any* human can do so. If a low-level AI that is
no smarter than a relatively retarded human can be created
inexpensively, it would have an economic impact possibly greater than
that of the industrial revolution.

Of course, keeping the resulting watershed from becoming a mass labor
catastrophe would be an Interesting[tm] political problem.


Michael

--
Michael Sullivan
Business Card Express of CT Thermographers to the Trade
Cheshire, CT mic...@bcect.com

Michael Sullivan

Nov 5, 2002, 2:26:06 PM
sv0f <no...@vanderbilt.edu> wrote:
> In article <ey34rax...@cley.com>, Tim Bradshaw <t...@cley.com> wrote:

> >I have a theory that the hard part about simulating a human is
> >simulating a dog, and the hard part of that is simulating a sheep, and
> >the hard part of that is ... And basically a lot of AI has been done
> >completely backwards by trying to peel off this thin layer at the top
> >that we set so much store by (because it's all we have to stop us
> >being chimps).

> I disagree. I have little interest in those aspects of human
> cognition that other organisms can perform, and even perform
> much better than we can. My interest is in high-level
> cognition: language, problem solving, mental imagery, etc.
> Having God's own knowledge of the ins and outs of a dog would
> tell us nothing about those varieties of cognition unique to
> humans.

I disagree entirely. Look at the argument being made that these various
properties of sentience are quite possibly emergent from a sufficiently
complicated set of basic cognition bits. It's quite possible that we
can achieve that level of intelligence only by figuring out the nitty
gritty details of how to simulate things like vision and muscular
agility/balance. In fact, it's quite possible that the main thing we
need to simulate sentience is to give a sufficiently complicated neural
net access to a body and sensory input that is sufficiently close to that
of a really well developed animal.

When all you have is a hammer, everything looks like a nail. Even
probably non-sentient animals have access to sensory and interactivity
hardware that is many orders of magnitude more flexible than anything
under a current "AI"'s control.

IOW, a chimp's, or even a dog's brain has access to a fairly complete
set of extremely well designed tools. Your current sample of "AI"
attempts have access to a hammer. No wonder that when you talk to
A.L.I.C.E., the conversation keeps coming back to nails. I have much
more interesting conversations with my dog.

sv0f

Nov 5, 2002, 3:09:42 PM
In article <1fl6au6.6b840tyer8riN%mic...@bcect.com>, mic...@bcect.com
(Michael Sullivan) wrote:

>When all you have is a hammer, everything looks like a nail. Even
>probably non-sentient animals have access to sensory and interactivity
>hardware that is many orders of magnitude more flexible than anything
>under a current "AI"'s control.

I said nothing about Artificial Intelligence; I only mentioned cognitive
psychology.

But...

>IOW, a chimp's, or even a dog's brain has access to a fairly complete
>set of extremely well designed tools. Your current sample of "AI"
>attempts have access to a hammer. No wonder that when you talk to
>A.L.I.C.E., the conversation keeps coming back to nails. I have much
>more interesting conversations with my dog.

...perhaps your message was not intended for me?!?

("My current sample of 'AI' attempts"?)

sv0f

Nov 5, 2002, 3:38:12 PM
In article <2002110514...@lain.cheme.cmu.edu>, Matthew Danish
<mda...@andrew.cmu.edu> wrote:

>But you cannot let yourself forget that ultimately all the observations
>you make will have some sort of underlying pattern.

I vehemently agree that the central task of scientific theories "is to
make the wonderful commonplace: to show that complexity, correctly
viewed, is only a mask for simplicity; to find pattern hidden in apparent
chaos." (p. 1 of Simon, H. A. (1996) The sciences of the artificial.
Cambridge, MA: MIT Press.)

>> If, however, an understanding of human cognition is viewed as a
>> foundation for understanding the fruits of human cognition -- how
>> we create literature, art, mathematics, and science -- then I
>> think there is nothing to be learned from studying dogs and almost
>> nothing to be learned from studying non-human primates.
>
>There is a common problem solving technique that goes: if you have a
>complex problem, solve a similar but simpler problem first, and then use
>the tools you created for the simpler problem to help you solve the
>harder problem.
>
>Understanding "lower organisms" may give us some of the tools needed to
>understand ourselves; but in no way am I asserting that it will give us
>an understanding of ourselves. I just don't think you should be so
>quick to dismiss a potential opportunity. Lifeforms on Earth are not
>that radically different from each other. Human beings and other
>animals may share quite a lot in common, and ignoring that would only be
>to your own detriment.

I do not think I am being quick to dismiss anything. I divided
cognition into two realms (low-level domains such as sensation,
perception and action, and high-level domains such as language
and problem solving) and said that my hunch was that low-level
cognition will be illuminated by the reductionist strategy and
that high-level cognition will not be. I think I am the only
one in this thread who is not quickly dismissing other strategies
whole cloth.

The sentiment that I sense out there, and perhaps this is only
my paranoia, is that there are two ways to understand a domain:
the reductionist strategy, which is real science, and everything
else, which is pseudoscientific by definition.

My main goal has been to argue that reduction is a scientific
strategy, but not *the* scientific strategy. This is why I cited
Anderson, and his advocacy of the constructionist strategy.

A secondary goal of mine was to argue that, while reduction proceeds
bottom-up, there is a top-down strategy as well: One can
look at the products of human cognition (i.e., literature,
art, mathematics, science) and explain them scientifically.
A good example of this is Langley, Simon, Bradshaw, and
Zytkow's (1987) "Scientific discovery: Computational
explorations of the creative processes", but there are others.
Piaget attempted something similar in his program of "genetic
epistemology". The point here is that an adequate account of
human cognition must not just scale down to fit the facts of
biology "below"; it must also illuminate the discplines "above".
However, looking back over my other posts, I didn't really make
this point explicitly.

There is another point of relevance. My reading of the history
of science is that reduction is after-the-fact. There is a
theory of a low-level domain (say mechanics) and a theory of a
high-level domain (say thermodynamics). These theories were
hatched through the hard work of scientists working at each
level (say Newton and Carnot/Clausius, respectively). The
phenomena of each domain are understood adequately through each
of the theories.

Reduction in this case is a kind of scientific tidying up.
Statistical mechanics unifies the two theories, and gives us
comfort that we will one day be able to write down a single
physical theory which accounts for everything above it. Of
course, reduction also produces new insights for both the
low and high level domains, but these are second-order in
magnitude. However, reduction is not possible until adequate
theories of both domains come into existence. Which is to say,
independent efforts to understand high-level human cognition
must proceed by whatever means helps us understand phenomena
at this level. In particular, there is no reason to focus on
just the biological level below and to ignore intrinsic
levels of psychological science and other levels above.
Reduction, in this reading, will find its primary role later,
to unify adequate psychological accounts with adequate
biological accounts formulated by scientists working relatively
independently at the respective levels.

This is all in my opinion of course. I have consistently
made this disclaimer throughout my messages in this thread, and
also been more careful than others in partitioning cognition and
making comparative judgments of the worth of different research
strategies, so please no more characterizing me as the one making
simplistic and absolute claims.

Paul Wallich

Nov 5, 2002, 3:56:22 PM
In article <none-05110...@129.59.212.53>,
no...@vanderbilt.edu (sv0f) wrote:

It might well. Speaking from introspection and some observation (which
we know to be mostly useless for dissecting brain function) much of what
gets lauded as serendipitous creativity is the result of techniques
that would make AM snort in contempt (if only it could snort or feel
contempt). Just find a representation where ringing relatively simple
changes on known ideas gives "executable" results, or where more or less
isomorphic mappings from one domain to another ditto.

What seems to be interesting is the quality of the execution engine and
the classifier, so that you know the shape of the result you want and
can figure out quickly whether a particular set of transformations on
current reality will lead to something like that shape.

Having a rigorous explanation of how you go from a stick and some boxes
and a bunch of bananas up on a hook to an ape eating bananas might well
do most of the work toward figuring out how you go from a bunch of
squares and right triangles to a proof.

paul

sv0f

Nov 5, 2002, 4:29:11 PM
[TEST: PLEASE IGNORE]

In article <pw-FE6BF0.15...@reader1.panix.com>, Paul Wallich
<p...@panix.com> wrote:

>It might well. Speaking from introspection and some observation (which
>we know to be mostly useless for dissecting brain function) much of what
>gets lauded as serendipitous creativity is the result of techniques
>that would make AM snort in contempt (if only it could snort or feel
>contempt). Just find a representation where ringing relatively simple
>changes on known ideas gives "executable" results, or where more or less
>isomorphic mappings from one domain to another ditto.

Imagine Hofstadter and Minsky in a bathtub. Wittgenstein fetches them
a drink. The Dixie Construct laughs, sending chills up CYC's spine.

>What seems to be interesting is the quality of the execution engine and
>the classifier, so that you know the shape of the result you want and
>can figure out quickly whether a particular set of transformations on
>current reality will lead to something like that shape.

Colorless green ideas sleep furiously.

>Having a rigorous explanation of how you go from a stick and some boxes
>and a bunch of bananas up on a hook to an ape eating bananas might well
>do most of the work toward figuring out how you go from a bunch of
>squares and right triangles to a proof.

This sentence is false.

[TEST: PLEASE IGNORE]

Michael Sullivan

Nov 5, 2002, 4:26:23 PM
sv0f <no...@vanderbilt.edu> wrote:
> In article <1fl6au6.6b840tyer8riN%mic...@bcect.com>, mic...@bcect.com
> (Michael Sullivan) wrote:

> I said nothing about Artificial Intelligence; I only mentioned cognitive
> psychology.

> But...

This is all in the context of a thread about Loebner's prize, which is
essentially trying to produce a Turing test passing AI. When you said
"I am only interesting in those aspects of human cognition...", I
assumed you meant from the standpoint of understanding them well enough
to replicate them in a machine.

> >IOW, a chimp's, or even a dog's brain has access to a fairly complete
> >set of extremely well designed tools. Your current sample of "AI"
> >attempts have access to a hammer. No wonder that when you talk to
> >A.L.I.C.E., the conversation keeps coming back to nails. I have much
> >more interesting conversations with my dog.

> ...perhaps your message was not intended for me?!?

> ("My current sample of 'AI' attempts"?)

It's a US (possibly any English) colloquialism. Read 'Your' as 'The'
or 'A', and it should make more sense.

sv0f

Nov 5, 2002, 4:58:41 PM
In article <1fl6nh3.bz5rrbh3u0sxN%mic...@bcect.com>, mic...@bcect.com
(Michael Sullivan) wrote:

>This is all in the context of a thread about Loebner's prize, which is
>essentially trying to produce a Turing test passing AI. When you said
>"I am only interesting in those aspects of human cognition...", I
>assumed you meant from the standpoint of understanding them well enough
>to replicate them in a machine.

Fair enough.

>It's a US (possibly any English) colloquialism. Read 'Your' as 'The'
>or 'A', and it should make more sense.

OK, I'll go back and try again.

sv0f

unread,
Nov 5, 2002, 5:36:39 PM11/5/02
to
In article <pw-FE6BF0.15...@reader1.panix.com>, Paul Wallich
<p...@panix.com> wrote:

>>Kohler demonstrated the surprisingly good problem solving abilities
>>of apes. But these pale in comparison to what humans can do.
>>A scientific account of how an ape uses a stick to spear a piece of
>>food it can't directly reach -- an example of goal-driven and tool-using
>>behavior -- would be nice. It says nothing about how, say, Pythagoras
>>proved his theorem about right triangles.
>

>It might well. [...] Just find a representation where ringing relatively
>simple changes on known ideas gives "executable" results, or where more
>or less isomorphic mappings from one domain to another ditto.

I agree with this. Does it seem strange to others that (1) I believe
that subsymbolic, emergent computations drive large chunks of cognition
yet (2) I don't think the linguistic abilities of dogs and problem
solving abilities of apes shed any light on such computations?

Minsky's "Society of Mind" and especially Hofstadter's "Fluid Concepts
and Creative Analogies" are computational attempts to realize such
processing, and neither draws much at all on animal intelligence.
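
For concreteness, the scheduling idea at the bottom of Hofstadter's
architecture is easy to caricature in a few lines of Lisp. This is
only a caricature, not his actual code: "codelets" wait on a
"coderack" and run in urgency-weighted random order, which is where
the parallel, terraced flavor of the scan comes from.

(defstruct codelet urgency action)

(defun pick-codelet (rack)
  "Urgency-weighted lottery over the coderack."
  (let ((roll (random (float (reduce #'+ rack :key #'codelet-urgency)))))
    (dolist (c rack)
      (when (minusp (decf roll (codelet-urgency c)))
        (return c)))))

(defun run-coderack (rack steps)
  (dotimes (i steps)
    (funcall (codelet-action (pick-codelet rack)))))

;; Two competing pressures; the more urgent one fires more often,
;; but the other is never locked out.
(run-coderack
 (list (make-codelet :urgency 5 :action (lambda () (princ "scout ")))
       (make-codelet :urgency 1 :action (lambda () (princ "build "))))
 12)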

[Tim B.: Hofstadter's ideas might be particularly comforting to your
inner physicist.]

>Having a rigorous explanation of how you go from a stick and some boxes
>and a bunch of bananas up on a hook to an ape eating bananas might well
>do most of the work toward figuring out how you go from a bunch of
>squares and right triangles to a proof.

How much of the work does it do for going from a bunch of words to
a Shakespearean sonnet? A bunch of paint to "The Birth of Venus"?
(Let's not bring Jackson Pollock into this!) The axioms of Zermelo-
Fraenkel set theory to Wiles's proof of Fermat's Last Theorem?

sv0f

unread,
Nov 5, 2002, 5:33:02 PM11/5/02
to
In article <1fl6au6.6b840tyer8riN%mic...@bcect.com>, mic...@bcect.com
(Michael Sullivan) wrote:

>I disagree entirely. Look at the argument being made that these various
>properties of sentience are quite possibly emergent from a sufficiently
>complicated set of basic cognition bits. It's quite possible that we
>can achieve that level of intelligence only by figuring out the
>nitty-gritty details of how to simulate things like vision and muscular
>agility/balance.

You are speaking of Brooks's work on insect-like robots and recent
advances in cognitive neuroscience, I presume?

I have articulated my position on the bottom-up approach to conquering
human cognition (in its entirety) in other messages in this thread.
Like Anderson (1972), I believe that reductive knowledge of how the
parts work does not guarantee knowledge of how the whole works. Or
to put it more accurately, theories of level N in the hierarchy of
science (where N>=1, and N=1 for physics) cannot be mechanically
aggregated to yield theories of level N+1. Rather, I believe that
theories must be formulated relatively independently at each level
-- only in this way will they do justice to the relevant phenomena
-- and then related after the fact via reduction.

This is the reason I keep stressing the cognitive phenomena that
interest me. They are high-level aspects of thinking such as
language use and problem solving. No theories of animal intelligence
do justice to the "top end" of human cognition. This is why I do
not believe theories of human cognition should be built from the
first principles of animal cognition.

Another reference I offered is the first chapter of Marr's (1982)
_Vision_. There he explicitly confronts the inability of even
seminal findings in the neuroscience of vision to add up to a
satisfactory explanation of vision:

===
The initial discoveries of the 1950s and 1960s were not being
followed by equally dramatic discoveries in the 1970s. No
neuropsychologists had recorded new and clear high-level
correlates of perception. [...] None of the new studies succeeded
in elucidating the function of the visual cortex. [...] Suppose,
for example, that one actually found the apocryphal grandmother
cell. Would that really tell us anything much at all? It would
tell us that it existed -- Gross's hand-detectors tell us almost
that -- but not why or even how such a thing may be constructed
from the outputs of previously discovered cells. [...] As one
reflected on these sorts of issues in the early 1970s, it
gradually became clear that something important was missing that
was not present in either of the disciplines of neurophysiology
or psychophysics. The key observation is that neurophysiology and
psychophysics have as their business to describe the behavior of
cells or subjects but not to explain such behavior. What are the
visual areas of the cortex actually doing? What are the problems
in doing it that need explaining, and at what level of description
should such explanations be sought? (pp. 14-15)
===

To continue:

===
The reason for this is that the nature of the computations that
underlie perception depends more upon the computational problems
that have to be solved than upon the particular hardware in which
their solutions are implemented. To phrase the matter another way,
an algorithm is likely to be understood more readily by understanding
the nature of the problem being solved than by examining the
mechanism (and the hardware) in which it is embodied. In a similar
vein, trying to understand perception by studying only neurons is
like trying to understand bird flight by studying only feathers:
It just cannot be done. In order to understand bird flight, we have
to understand aerodynamics; only then do the structure of feathers
and the different shapes of birds' wings make sense. (p. 27)
===

Chomsky, reflecting on the failure of AI, reaches a similar
conclusion:

===
Much of the work in AI seems to me misguided, in that it is too
concrete... The AI approaches are too concrete in that they are much
too concerned with the actual algorithm -- there may be many algorithms
to realize a particular computational theory, and the study of
algorithms requires a prior understanding of the structure of the
problem being addressed. Again, this is a point that Marr has
emphasized. To assume that your program is your theory is simply to
abandon any hope for understanding what people are doing. (p. 348)
===

>In fact, it's quite possible that the main thing we
>need to simulate sentience is to give a sufficiently complicated neural
>net access to a body and sensory input that is sufficiently close to that
>of a really well-developed animal.

It's possible that you'll be able to simulate animal intelligence
this way, but I doubt it. Model building is integral to science,
e.g., Harvey's model of the heart was critical for understanding
its circulatory properties. But when scientists build models,
they embody their nascent theoretical ideas, and use the models
to understand the implications of these ideas. They don't implement
blank slates, wait a while, and then read off new theories. Or, more
colorfully, consider the following well-known koan:

===
In the days when Sussman was a novice, Minsky once came to him as he sat
hacking at the PDP-6.

"What are you doing?", asked Minsky.

"I am training a randomly wired neural net to play Tic-Tac-Toe", Sussman
replied.

"Why is the net wired randomly?", asked Minsky.

"I do not want it to have any preconceptions of how to play", Sussman said.

Minsky then shut his eyes.

"Why do you close your eyes?", Sussman asked his teacher.

"So that the room will be empty."

At that moment, Sussman was enlightened.
===
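
The koan in code, if you like (a deliberately silly sketch): even a
"randomly wired" tic-tac-toe net arrives crawling with preconceptions
-- nine inputs, nine outputs, an encoding of the board, a squashing
function -- before it has seen a single game.

(defun random-weights (rows cols)
  "Weights drawn uniformly from [-1, 1): the 'no preconceptions' part."
  (let ((w (make-array (list rows cols))))
    (dotimes (i rows w)
      (dotimes (j cols)
        (setf (aref w i j) (- (random 2.0) 1.0))))))

(defun forward (weights board)
  "BOARD is a 9-vector over {-1, 0, 1}; returns 9 move scores."
  (let ((out (make-array 9 :initial-element 0.0)))
    (dotimes (i 9 out)
      (dotimes (j 9)
        (incf (aref out i) (* (aref weights i j) (aref board j))))
      (setf (aref out i) (tanh (aref out i))))))

;; The weights are noise, but the 9x9 shape, the board encoding, and
;; TANH are all prior commitments -- closing your eyes does not empty
;; the room.
(forward (random-weights 9 9) #(0 0 1 0 -1 0 0 0 0))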

Moreover, even if your experiment worked, you wouldn't learn
anything about high-level cognition -- language understanding,
complex problem solving, etc. Otherwise, why haven't animals
raised in human environments learned language, mathematics, etc.?
(And they haven't -- researchers have raised primates alongside
their own children with disappointing results.) There is more
to high-level cognition than even superb perception and motor
capabilities.

>When all you have is a hammer, everything looks like a nail. Even
>probably non-sentient animals have access to sensory and interactivity
>hardware that is many orders of magnitude more flexible than anything
>under a current "AI"'s control.

Flexible in many ways, but limited in all the aspects of high-level
cognition that differentiate us from other species. Sure, I want
to know how Barry Bonds can hit a 95 mph fastball. But I also want
to know how writers create new prose, scientists new theories, and
programmers new programs.

>IOW, a chimp's, or even a dog's brain has access to a fairly complete
>set of extremely well designed tools. Your current sample of "AI"
>attempts have access to a hammer.

"Complete" for what purpose? Not for high-level cognition.

Nicholas Geovanis

unread,
Nov 5, 2002, 5:33:12 PM11/5/02
to
On Mon, 4 Nov 2002, Barry Margolin wrote:

> Evolution
> also had the advantage that the changes it needed to adapt to occurred
> slowly; it never had to deal with quantum changes like upgrading (or worse,
> switching) operating systems.

I can't keep up with the human-evolution schema "du jour", but....
Once upon a time it held that it was rapid, essentially "discontinuous"
climate change during one of the ice ages which selected homo sapiens and
eliminated the others. IOW the larger brain helped with the "bear warm,
wear bear" kind of problems; not to mention "plant food, food grow" and
"snow come, chase sun" (south, not west). YMMV.

> Barry Margolin, bar...@genuity.net

* Nick Geovanis       Computing's central challenge:
| IT Computing Svcs   How not to make a mess of it.
| Northwestern Univ        -- Edsger Dijkstra
| n-geo...@nwu.edu
+------------------->

Erik Naggum

unread,
Nov 7, 2002, 2:46:54 AM11/7/02
to
* Nicholas Geovanis

| Once upon a time it held that it was rapid, essentially "discontinuous"
| climate change during one of the ice ages which selected homo sapiens and
| eliminated the others. IOW the larger brain helped with the "bear warm,
| wear bear" kind of problems; not to mention "plant food, food grow" and
| "snow come, chase sun" (south, not west). YMMV.

Evolution is not about survival of the fittest, but death of the unfit,
which is quite a different story. All sorts of things survive, but when
some illness or catastrophe or other disastrous event occurs, a lot of
individuals die. It is entirely random (as far as survival pre-disaster
is concerned) which factor allows individuals to survive the disaster.

--
Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder.
Act from faith, and failure makes you blame someone and push harder.

Russell Wallace

unread,
Nov 8, 2002, 1:08:09 AM11/8/02
to
On 07 Nov 2002 07:46:54 +0000, Erik Naggum <er...@naggum.no> wrote:

> Evolution is not about survival of the fittest, but death of the unfit,
> which is quite a different story. All sorts of things survive, but when
> some illness or catastrophe or other disastrous event occurs, a lot of
> individuals die. It is entirely random (as far as survival pre-disaster
> is concerned) which factor allows individuals to survive the disaster.

This is true if the disaster is something which recurs at intervals
longer than normal evolutionary timescales. (E.g. an asteroid hitting
the Earth.)

However, most things which are disasters for an individual or even a
group, are actually quite normal, repeated occurrences on an
evolutionary timescale. For example, when an individual survives an
illness, it's usually because its ancestors survived it on previous
occasions and passed on their genes for a well-adapted immune system.

So to answer the question of what factors drove a complex adaptation
such as intelligence to appear, one often needs to look for things
which might be disasters for an individual, but would be frequent
occurrences over the length of time during which the adaptation
evolved.
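
A toy simulation makes the recurring case concrete (all numbers
invented; this is an illustration, not a model of anything real):

(defun next-generation (pop kill-prob)
  "POP is a list of T (resistant) / NIL.  Cull unresistant individuals
with probability KILL-PROB, then breed back up to the old size."
  (let ((survivors (remove-if (lambda (resistant)
                                (and (not resistant)
                                     (< (random 1.0) kill-prob)))
                              pop)))
    (when survivors
      (loop repeat (length pop)
            collect (nth (random (length survivors)) survivors)))))

(defun resistant-fraction (pop)
  (/ (count t pop) (float (length pop))))

;; Start at roughly 1% resistant.  After fifty recurrences of the
;; disaster the trait is nearly fixed, so the fifty-first finds a
;; prepared population; a one-off disaster is just a lottery.
(let ((pop (loop repeat 1000 collect (zerop (random 100)))))
  (dotimes (g 50)
    (setf pop (next-generation pop 0.3)))
  (resistant-fraction pop))
;; => typically close to 1.0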

--
"Mercy to the guilty is treachery to the innocent."
Remove killer rodent from address to reply.
http://www.esatclear.ie/~rwallace

Erik Naggum

unread,
Nov 8, 2002, 1:17:56 AM11/8/02
to
* Erik Naggum

| Evolution is not about survival of the fittest, but death of the unfit,
| which is quite a different story. All sorts of things survive, but when
| some illness or catastrophe or other disastrous event occurs, a lot of
| individuals die. It is entirely random (as far as survival pre-disaster
| is concerned) which factor allows individuals to survive the disaster.

* Russell Wallace


| This is true if the disaster is something which recurs at intervals
| longer than normal evolutionary timescales. (E.g. an asteroid hitting
| the Earth.)

Huh? What does the timescale have to do with anything?

| However, most things which are disasters for an individual or even a
| group, are actually quite normal, repeated occurrences on an evolutionary
| timescale. For example, when an individual survives an illness, it's
| usually because its ancestors survived it on previous occasions and
| passed on their genes for a well-adapted immune system.

How is this not just what I said?

| So to answer the question of what factors drove a complex adaptation such
| as intelligence to appear, one often needs to look for things which might
| be disasters for an individual, but would be frequent occurrences over
| the length of time during which the adaptation evolved.

Are you sure you read what I wrote?

I am quite puzzled by your response, which looks like it tries to refute
something, but does nothing but restate what I wrote as far as I can see.

Russell Wallace

unread,
Nov 8, 2002, 10:13:27 AM11/8/02
to
On 08 Nov 2002 06:17:56 +0000, Erik Naggum <er...@naggum.no> wrote:

>* Russell Wallace


>| However, most things which are disasters for an individual or even a
>| group, are actually quite normal, repeated occurrences on an evolutionary
>| timescale. For example, when an individual survives an illness, it's
>| usually because its ancestors survived it on previous occasions and
>| passed on their genes for a well-adapted immune system.
>
> How is this not just what I said?

You said: "It is entirely random (as far as survival pre-disaster is
concerned) which factor allows individuals to survive the disaster". I
took this to mean, in paraphrase: there is no feature of the world
prior to the disaster that would have led specifically to adaptations
preparing for it. I then pointed out that in most cases, the harmful
phenomenon will have occurred in previous generations, so when it
comes around for the Nth time, the population will have adapted to it.

That said, I may have misinterpreted you. If so, what exactly did you
mean by "It is entirely random (as far as survival pre-disaster is
concerned) which factor allows individuals to survive the disaster"?

Erik Naggum

unread,
Nov 8, 2002, 10:48:17 AM11/8/02
to
* Russell Wallace

| You said: "It is entirely random (as far as survival pre-disaster is
| concerned) which factor allows individuals to survive the disaster". I
| took this to mean, in paraphrase: there is no feature of the world prior
| to the disaster that would have led specifically to adaptations preparing
| for it. I then pointed out that in most cases, the harmful phenomenon
| will have occurred in previous generations, so when it comes around for
| the Nth time, the population will have adapted to it.

What an odd interpretation and so entirely out of context. The point is
that most mutations, which occur naturally and all the time, do not kill
individuals and neither do they specifically cause them to survive. A
host of variations survive. Which factors then let an individual survive an
illness, disaster or other catastrophe is unpredictable beforehand.
