History of AI Winter?


Chris Perkins

Apr 14, 2003, 9:22:13 PM
I was re-reading PAIP this weekend, admiring Prolog, and looking over
the TI Explorer manuals posted on LemonOdor, when I got to wondering
"What happened?" "How did a language like Lisp, with such abstractive
power and productivity, fall into disfavor or get passed by?"

Did Lisp simply never gain a significant 'mindshare'? Or did it once
have it and then lose it? If so, how?

I did not study computers in college (in the 80's) where I might have
had an introduction. I am mostly self-taught, and all I knew about
Lisp until two years ago was that it was for AI and had lots of
parentheses.

I'm sure many of you have your own Lisp history synopses. Care to
share?

Jochen Schneider

Apr 15, 2003, 7:42:27 AM
There is a text that might interest you:

Eve M. Phillips, "If It Works, It's Not AI: A Commercial Look at
Artificial Intelligence Startups". MSc thesis done under the
supervision of Patrick Winston.
<http://kogs-www.informatik.uni-hamburg.de/~moeller/symbolics-info/ai-business.pdf>

Jochen

Pascal Costanza

Apr 15, 2003, 8:46:28 AM
Chris Perkins wrote:
> I was re-reading PAIP this weekend, admiring Prolog, and looking over
> the TI Explorer manuals posted on LemonOdor, when I got to wondering
> "What happened?" "How did a language like Lisp, with such abstractive
> power and productivity, fall into disfavor or get passed by?"

My recent "theory" about this goes like this: Lisp doesn't fit on slides.

Lisp's power comes from the fact that it is a highly dynamic language.
It's hard to understand many of its features when you haven't actually
experienced them, and it's really hard to explain that dynamism on
slides and on paper (articles and books).
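
To make this concrete, here is a tiny sketch of the kind of interactive
redefinition I have in mind (a hypothetical REPL session; any Common Lisp
will do):

  ;; Define a function and use it in a running image.
  (defun greet (name)
    (format nil "Hello, ~a." name))

  (greet "world")   ; => "Hello, world."

  ;; Later, without restarting anything, redefine it at the REPL.
  ;; Every existing caller immediately picks up the new definition.
  (defun greet (name)
    (format nil "Hello again, ~a." name))

  (greet "world")   ; => "Hello again, world."

The point is not the code itself but the fact that the system stays live
while it changes underneath you - and that is exactly what a static slide
cannot show.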

I am convinced that static features are preferred over dynamic features,
because you can easily explain static features on slides and on paper,
such that even non-programmers believe they understand them. For
example, static type systems, especially when they use explicit types.
UML diagrams. Certain software development processes. You get the idea.

Many decisions about languages and tools are made by people who don't
actually write programs. The recent trend towards scripting languages is
a clear sign that programmers want something different. The term
"scripting languages" is a clever invention because it downplays the
importance of their features. "Of course, we use a serious language for
developing our components, we just use a bunch of scripts to glue them
together."

AspectJ is another such clever invention. Aspect-oriented programming
takes the ideas of metaobject protocols and turns them into something
static, into something that fits on slides.

To paraphrase what Richard Gabriel said at a conference a while ago:
It's time to vote with our feet.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Gorbag

Apr 15, 2003, 12:08:34 PM
On 4/15/03 5:46 AM, in article b7guv6$h48$1...@f1node01.rhrz.uni-bonn.de,
"Pascal Costanza" <cost...@web.de> wrote:

> Chris Perkins wrote:
>> I was re-reading PAIP this weekend, admiring Prolog, and looking over
>> the TI Explorer manuals posted on LemonOdor, when I got to wondering
>> "What happened?" "How did a language like Lisp, with such abstractive
>> power and productivity, fall into disfavor or get passed by?"
>
> My recent "theory" about this goes like this: Lisp doesn't fit on slides.
>

I think AI winter had very little to do with Lisp per se, and more to do
with government funding, certain personalities overblowing expectations, and
making folks believe that all this wonderful stuff was intimately tied with
the power of Lisp, PROLOG, and other "AI languages". But believe you me,
Lisp DID "fit on the slides." (DARPA would not have taken such an interest
in Common Lisp if they didn't think they were going to get a substantial
benefit in integrating the output from all the university AI efforts.)

Consider that the end of the Japanese fifth generation project came about in
the late 80s. This project was very much about AI and high productivity
tools like Lisp and PROLOG. The very threat that the Japanese might beat us
to critical applications of these technologies caused the government to fund
very expensive research centers like MCC in Austin, Texas (full disclaimer:
I worked there for a time until they shut down their software programs). MCC
was originally a very high overhead operation, full of folks whom I will not
name, many of whom made large, overblown claims as to what they would be able
to do in 10 years (the length of the primary MCC funding). Large
corporations (like GE, etc.) bought $1M shares in the enterprise, funding
the obscene overhead rates of 12-1(!!) just to make these researchers happy.
(While I was not there at the time, I was regaled with tales of wine at
lunch, waiters, free popcorn and soda all day long, "Fajita Fridays", etc.
Not to mention relocation packages that included assistance from the
Governor's office!) Of course, MCC was probably only the most egregious
example, and there were pockets of hyperbole from Boston to Stanford, all
eager to suck at the government teat.

And believe me, all these guys were very very public about how languages
like Lisp were going to make these very hard problems (anyone remember the
MIT summer vision project?) obscenely easy.

Needless to say, things didn't work out that way.

Had folks been more realistic (and I know there were a few sober minds at
the time who were trying to be realistic) there may not have been such a
pump of funds, but the area would not have been set back 10 years either
during the dump. The largest problem was probably not so much that Lisp is
slow, but that the algorithms of the time were not practical on the hardware
of the time. For instance, parsers would typically take anywhere from hours
to days to process a sentence (with the exception of the Tomita parser, but
this was the first in a series of 90% coverage parsers which folks at the
time were not ready to countenance). Researchers using Lisp would extol the
benefits of rapid prototyping with the essential mantra that "first we have
to figure out how to do it at all, and we can figure out how to do it fast
later". True, but beside the point if you are fundamentally in marketing
mode and need to deliver on the hype. Many programs that were incredibly
slow at the time will, with some performance work and still in Lisp, run in
"near real-time" on today's hardware. For instance, parsers that might have
taken hours will now run in less than a second using modern techniques
(everything is not a property) on fast Xeon machines, or even current
generation SPARCstations.

So, Lisp and PROLOG got dragged into AI winter because they were so closely
associated with the hype. Companies like Symbolics and TI benefited from all
the corporate and university interest in launching projects to "keep up with
the Japanese" and use expert systems like KEE, which ran tolerably slowly on
the Lisp hardware, because soon "everyone would be a knowledge engineer",
"programming was obsolete", etc. These companies went public while the hype
was still hot (though interestingly enough, had probably already peaked).
When the pendulum swung, those who had tied their fortunes to the fad du
jour were caught in the backdraft.

Some folks who were fortunate enough to be associated with pockets of work
that did not overly hype their wares were affected slightly less harshly.
Substantial portions of this work began to concentrate on efforts that were
tied to practical problems where actual results could be shown. Until the
mid 90s, it was common to build entire NLP systems, for instance, that could
only actually handle one particular scripted interaction of text, with the
purpose of showing how a technique could handle some obscure use of
language. In the mid 90s (and I am proud to say that I was associated with
this change), the emphasis became one of general coverage with less perfect
results - we could now actually start to measure coverage, effectiveness,
etc. which was impossible (or more accurately nil) in the old way of
building systems.

As a result, we are only now starting to see enough momentum that folks
outside of AAAI conferences are beginning to sense that AI is starting to
deliver. Much of the credit goes to the application of AI to systems that a
lot of folks get exposure to, such as games. Because about half the
retrenchment has
come from folks who are more interested in engineering than in scientific AI
(that is, they are more interested in creating systems with particular
features or behaviors than in exploring the mechanisms of cognition), most,
though not all of the work being done today is in non-AI languages. This is
especially true for work that has concentrated on more mathematical or
statistical approaches, such as machine vision, machine learning, and even
recently parsing. Universities also are finding their talent pool is filled
with graduate students who know languages like Java or C++, and this has
also pushed a new wave tendency for systems to be implemented in these
languages. Lisp is not dead in AI, but it tends to be concentrated in a few
areas, and even these are changing over to other languages. I expect Lisp to
continue to be used but it will not grow as fast as the area (it will
continue to lose market share). Even when I continue to use it in projects,
I usually have to sell it as a "direct executable specification language"
so there is some hope that the effort can be "thrown over the wall" at some
point.

One short perspective from the trenches,
Brad Miller

Tim Bradshaw

Apr 15, 2003, 2:27:44 PM
I think that if you want to know the history of the AI winter (rather
than what, if anything, Lisp had to do with it), then you just need to
watch XML and particularly the whole `semantic web' rubbish - history
is being rewritten as we watch.

--tim

Bob Bane

Apr 15, 2003, 4:47:28 PM
Tim Bradshaw wrote:

If we're lucky, we'll be talking about 'XML Winter' in a few years. If
we're REALLY lucky, Java will be blamed for XML Winter.

- Bob Bane

Fred Gilham

Apr 15, 2003, 6:14:32 PM

> If we're lucky, we'll be talking about 'XML Winter' in a few years.
> If we're REALLY lucky, Java will be blamed for XML Winter.

What scares me is that the above may happen --- and we'll wind up with
C-octothorp. I already had the frisson-producing experience of seeing
someone ask for a way to call ACL from C-octothorp today.

BTW, some time ago I wrote an explanation of how Lisp got where it was
for a certain individual who shall remain nameless for fear that using
his name in a posting would somehow become an invocation....

Here it is:

----------------------------------------
Well, I really think this has gone far enough, and the best thing to
do is to let i---- in on the REAL SECRET of Lisp.

He wrote:
> i try to assimilate the best of LISP.
>
> and to throw away the garbage of LISP.

The REAL SECRET of Lisp is that Lisp is ALL garbage. Yes, it's true.
It all started like this.

Back in the early '60s a bunch of mathematicians were thinking,
"Mathematicians are like farmers: they never make any money. How can
we cash in on the computer revolution?"

So they decided to invent a computer language that they could use to
impress the government and get lots of grants. They found this
language that had some claims to a mathematical foundation and decided
that with a little massaging they could get it to be virtually
incomprehensible except to those who were in the know. They came up
with things like a natural language parser that could identify the
parts of speech, and promised that in a few years they'd be able to
understand Russian. And so on. They made a few mistakes, of course.
One famous failure was a robot-arm that was supposed to catch a tennis
ball. (The assumption was that more advanced versions might be able to
THROW a tennis ball, and perhaps even throw things like hand-grenades
and so on. The military applications were "obvious".) Anyway, when
they got the generals in for a demo, they threw the ball at the arm.
As it reached up to catch the ball, the Lisp control program paused to
"reclaim memory" (heh, heh, yes, I know what it was really doing),
causing the arm to miss the ball. The generals were not impressed,
coming perilously close to penetrating the deception when the panicked
researchers actually mentioned `garbage collection'.

Nevertheless, as plots go it had a pretty nice run. It went for about
thirty years before people in the government started to catch on.
Many of the early illuminati were able to parlay their association
with Lisp into reputations that allowed them to move on to other, more
respectable endeavors. Some, such as John McCarthy, even won prizes.

More recently, creative members of the illuminati have sometimes taken
advantage of Lisp's impenetrability to profit from the dot-com craze.
One such person, in a clever application of `recursion theory' (heh,
heh, yes, I know), went around describing Lisp as a "programmable
programming language" and was able to make quite a nice pile of cash.

The people who post in this newsgroup consist of two kinds of people:
1) Ex-illuminati who are nostalgic for the good old days, and
2) Want-to-be illuminati who are hoping to revive and cash in on the
plot.

That's why newcomers are treated with such disdain (we don't want
people horning in on our action), and why all attempts to make Lisp
more comprehensible to the mainstream are rejected.

At first I thought that perhaps i---- would be a worthy member of the
illuminati. After all, few even of the early illuminati have a
writing style that gives such a tantalizing appearance of content,
while involving the reader in such mazes of bewilderment when he
attempts to actually discover that content. (Guy Steele, for example,
actually verges on comprehensibility from time to time.)

Unfortunately, for some inexplicable reason i---- insisted upon trying
to make Lisp understandable by attempting to write reader macros that
would massage its syntax into something the average person might be
comfortable with. Of course that would be fatal, making it clear to
everyone that Lisp was, as I said, completely without redeeming social
value. He thus showed that he was not, in fact, worthy of being a
part of the illuminati. Sorry, but a line has to be drawn somewhere.
Sell all the snake oil you want, but don't queer the pitch for the
rest of us.

So, i----, you are wasting your time and should probably go back to
C++ or Java, where you can get some real things accomplished. Or
something.
----------------------------------------

--
Fred Gilham gil...@csl.sri.com
America does not know the difference between sex and money. It treats
sex like money because it treats sex as a medium of exchange, and it
treats money like sex because it expects its money to get pregnant and
reproduce. --- Peter Kreeft

Kenny Tilton

Apr 15, 2003, 8:14:31 PM
Very interesting stuff. Someone should write a serious history of this
language. i think an oral history would be a blast, since so many of the
pioneers are still with us. any writers out there?

> slow,...

<gasp!> Not "was slow"? Did they have decent compilers back then?

> ... but that the algorithms of the time were not practical on the hardware

> features or behaviors than in exploring the mechanisms of cognition)...

That is my theory on why Cells (and Garnet and COSI) rule but multi-way,
non-deterministic constraint systems are used only in application
niches. The simpler, one-way systems were developed in response to
engineering demands; the multi-way systems were reaching for the stars
in understandable enthusiasm over constraints as a paradigm.

>, most,
> though not all of the work being done today is in non-AI languages. This is
> especially true for work that has concentrated on more mathematical or
> statistical approaches, such as machine vision, machine learning, and even
> recently parsing. Universities also are finding their talent pool is filled
> with graduate students who know languages like Java or C++, and this has
> also pushed a new wave tendency for systems to be implemented in these
> languages. Lisp is not dead in AI, but it tends to be concentrated in a few
> areas, and even these are changing over to other languages. I expect Lisp to
> continue to be used but it will not grow as fast as the area (it will
> continue to lose market share). Even when I continue to use it in projects,
> I usually have to sell it as a "direct executable specification language"
> so there is some hope that the effort can be "thrown over the wall" at some
> point.

Hey, maybe you can sell Lisp as a fast Python. :) Talk about coming in
thru the backdoor. The funny thing is, Norvig already completed the
C-Python-Lisp bridge with his Python-Lisp paper, and Graham just made
the connection by talking up Lisp as the language for 2100 (paraphrasing
somewhat <g>).

Am I the only one who sees here the ineluctable triumph of Lisp?

--

kenny tilton
clinisys, inc
http://www.tilton-technology.com/
---------------------------------------------------------------
"Everything is a cell." -- Alan Kay

Tim Bradshaw

Apr 15, 2003, 7:39:15 PM
* Bob Bane wrote:

> If we're lucky, we'll be talking about 'XML Winter' in a few years.

I hope so

> If we're REALLY lucky, Java will be blamed for XML Winter.

But not this. Java is a step up from C/C++, and not controlled by a
monopolist like C#.

--tim

Gabe Garza

Apr 16, 2003, 2:15:25 AM
Tim Bradshaw <t...@cley.com> writes:

Sun is less evil than Microsoft like Pol Pot is less evil than
Hitler[1].

Gabe Garza

[1] There's no way a religious conversation like Sun v. Microsoft is
going to be rationally[2] discussed. I hereby breach rationality
and in doing so prematurely invoke Godwin's law--may the soul
of this thread rest in peace.

[2] Traps have been set for the assassins.

Kenny Tilton

Apr 16, 2003, 2:51:49 AM

Oh, Christ. We need a ten-step program for Lisp Gods who are losing the
faith, succumbing to Stockholm Syndrome like Eran, Tim and Peter, Paul
and ****.

C'mon, when Java collapses, smothering beneath it the lingering embers
of (its legacy) C++, it's a free-for-all between CL and ...what? Perl?
Python? Ruby? Eiffel? Is that a fight we fear?

These ringing (not!) defenses of Java -- "A Step Up from C/C++" -- you
/do/ know the Mark Twain line about "damning with faint praise", don't you?

Paul F. Dietz

Apr 16, 2003, 7:16:22 AM
Gabe Garza wrote:
> I hereby breach rationality
> and in doing so prematurely invoke Godwin's law--may the soul
> of this thread rest in peace.

Sorry -- Godwin's law doesn't take effect if you try to invoke
it deliberately.

Paul

Tim Bradshaw

Apr 16, 2003, 7:15:26 AM
* Gabe Garza wrote:

> Sun is less evil than Microsoft like Pol Pot is less evil than
> Hitler[1].

Neither is evil. Only free software cultists talk about good and evil
in these terms, because it means they don't have to think. They are
both just companies, trying to do what companies do, which is make a
lot of money. Microsoft have become a monopolist as Sun would no
doubt like to do. Monopolies are often a bad thing, but this doesn't
mean that companies should not try to become monopolies: they should,
and the legal & regulatory system should prevent them doing so -
that's what it's *for*. There is *nothing wrong* with MS or Sun, what
is wrong is the legal / regulatory framework, which has failed the
people of the US, and also of the rest of the world so far.

Or to put it another way, if you must think in terms of good and evil:
MS's *monopoly* is evil, but MS are not. Sun's monopoly would be evil
too, if they had one (but they don't). A company attempting to
acquire a monopoly is not evil (unless they break the rules, which MS
may have done, of course). Dammit, *Cley* wants a monopoly! Am I
evil (oh, yes, I guess that's what the horns and hooves are, I'd
always wondered...).

--tim (on the internet, no one knows you're the Devil)


Tim Bradshaw

Apr 16, 2003, 7:23:24 AM
* Kenny Tilton wrote:
> Oh, Christ. We need a ten-step program for Lisp Gods who are losing
> the faith, succumbing to Stockholm Syndrome like Eran, Tim and Peter,
> Paul and ****.

Where did you get that I'm `losing the faith'. I've been trying to
make it clear that:

Faith has no place in these arguments (what do you think I'm trying
to say when I talk about `free software cultists'?);

There is a place for more than one decent language in the world,
and CL, Java and C are among the decent languages (though I think
that C++ is not, and C#, while it may be decent (I don't know) is
too tainted by its origin).

I have not lost my faith in CL: I never *had* any faith in CL because
I reserve my faith for other parts of my life. I think it's a way
cool language, and a really good solution for many problems. But I
think there are other good languages, and I definitely do not want a
CL monoculture any more than I want a C, Java, Windows or Unix
monoculture. Sorry about that.

--tim

Michael D. Sofka

Apr 16, 2003, 9:02:00 AM
Kenny Tilton <kti...@nyc.rr.com> writes:

> Very interesting stuff. Someone should write a serious history of this
> language. i think an oral history would be a blast, since so many of
> the pioneers are still with us. any writers out there?
>

History of Programming Languages II included an interesting article
by Guy Steele (IIRC---I can't find my copy).

Mike

--
Michael D. Sofka sof...@rpi.edu
C&CT Sr. Systems Programmer AFS/DFS, email, usenet, TeX, epistemology.
Rensselaer Polytechnic Institute, Troy, NY. http://www.rpi.edu/~sofkam/

Paolo Amoroso

Apr 16, 2003, 10:12:44 AM
On 14 Apr 2003 18:22:13 -0700, cper...@medialab.com (Chris Perkins) wrote:

> I was re-reading PAIP this weekend, admiring Prolog, and looking over
> the TI Explorer manuals posted on LemonOdor, when I got to wondering
> "What happened?" "How did a language like Lisp, with such abstractive
> power and productivity, fall into disfavor or get passed by?"

You may check the book "The Brain Makers". There are also a few papers
about Lisp and the AI winter at a Lisp Machine online museum site.


Paolo

P.S.
No references handy, I'm in a hurry, sorry.
--
Paolo Amoroso <amo...@mclink.it>

Arthur T. Murray

Apr 16, 2003, 11:42:55 AM
If said "AI Winter" ever existed, it
is thawing into Spring right now :-)

Mentifex
--
http://www.scn.org/~mentifex/theory5.html -- AI4U Theory of Mind;
http://www.scn.org/~mentifex/jsaimind.html -- Tutorial "Mind-1.1"
http://www.scn.org/~mentifex/mind4th.html -- Mind.Forth Robot AI;
http://www.scn.org/~mentifex/ai4udex.html -- Index for book: AI4U

Paolo Amoroso

Apr 16, 2003, 12:07:22 PM
On 16 Apr 2003 12:15:26 +0100, Tim Bradshaw <t...@cley.com> wrote:

> may have done, of course). Dammit, *Cley* wants a monopoly! Am I
> evil (oh, yes, I guess that's what the horns and hooves are, I'd

Since you use Lisp, you are an eval.


Paolo
--
Paolo Amoroso <amo...@mclink.it>

Paolo Amoroso

Apr 16, 2003, 12:07:23 PM
On Wed, 16 Apr 2003 00:14:31 GMT, Kenny Tilton <kti...@nyc.rr.com> wrote:

> Very interesting stuff. Someone should write a serious history of this
> language. i think an oral history would be a blast, since so many of the

You might check "The Brain Makers".

Paolo Amoroso

Apr 16, 2003, 12:07:23 PM
On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <gorbag...@NOSPAMmac.com>
wrote:

> tools like Lisp and PROLOG. The very threat that the Japanese might beat us
> to critical applications of these technologies caused the government to fund
> very expensive research centers like MCC in Austin, Texas (full disclaimer:

What does MCC stand for?

Kenny Tilton

Apr 16, 2003, 12:29:16 PM

Tim Bradshaw wrote:
> * Kenny Tilton wrote:
>
>>Oh, Christ. We need a ten-step program for Lisp Gods who are losing
>>the faith, succumbing to Stockholm Syndrome like Eran, Tim and Peter,
>>Paul and ****.
>
>
> Where did you get that I'm `losing the faith'.

A little extrapolation was required for that half-serious rant:

I figure if a Lispnik is praising Java for managing to become popular,
then they must be cracking under the pressure of being one of the few,
albeit happy few, to dig Lisp. Likewise for, in this case, praising a
language for being a little better than C/C++.

I sense a discouraged advocate, hence the "losing faith" poke.

Daniel Barlow

Apr 16, 2003, 12:27:50 PM
Paolo Amoroso <amo...@mclink.it> writes:

> On 16 Apr 2003 12:15:26 +0100, Tim Bradshaw <t...@cley.com> wrote:
>
>> may have done, of course). Dammit, *Cley* wants a monopoly! Am I
>> evil (oh, yes, I guess that's what the horns and hooves are, I'd
>
> Since you use Lisp, you are an eval.

I aspire to that state. I guess I'll just have to apply myself.


-dan

--

http://www.cliki.net/ - Link farm for free CL-on-Unix resources

Erann Gat

Apr 16, 2003, 11:56:42 AM
In article <3E9CFFA0...@nyc.rr.com>, Kenny Tilton
<kti...@nyc.rr.com> wrote:

> Oh, Christ. We need a ten-step program for Lisp Gods who are losing the
> faith, succumbing to Stockholm Syndrome like Eran,

This is getting surreal in so many different ways.

I'm not sure what is more bizarre: being promoted to "Lisp God", or having
people care enough about what I think that they want me to "recover" so I
can think the right things.

E.

Erann Gat

Apr 16, 2003, 1:05:54 PM
In article <3E9CA231...@nyc.rr.com>, Kenny Tilton
<kti...@nyc.rr.com> wrote:

> Very interesting stuff. Someone should write a serious history of this
> language. i think an oral history would be a blast, since so many of the
> pioneers are still with us. any writers out there?

www.dreamsongs.com/NewFiles/Hopl2.pdf

E.

Joe Marshall

Apr 16, 2003, 4:57:49 PM
Gabe Garza <g_g...@ix.netcom.com> writes:

> Tim Bradshaw <t...@cley.com> writes:
>
> > * Bob Bane wrote:
> > > If we're REALLY lucky, Java will be blamed for XML Winter.
> >
> > But not this. Java is a step up from C/C++, and not controlled by a
> > monopolist like C#.
>
> Sun is less evil than Microsoft like Pol Pot is less evil than
> Hitler[1].

It didn't take long for this thread to mention Hitler.

Joe Marshall

Apr 16, 2003, 4:59:01 PM
Paolo Amoroso <amo...@mclink.it> writes:

> On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <gorbag...@NOSPAMmac.com>
> wrote:
>
> > tools like Lisp and PROLOG. The very threat that the Japanese might beat us
> > to critical applications of these technologies caused the government to fund
> > very expensive research centers like MCC in Austin, Texas (full disclaimer:
>
> What does MCC stand for?

Micro Computer Consortium (if I recall correctly)

Marco Antoniotti

Apr 16, 2003, 7:14:47 PM

Bob Bane wrote:


If we are REALLY REALLY lucky, Perl and Python will get the blame. If
we are REALLY REALLY REALLY lucky, VB and C# will get the blame.

However, given the state of the world these days, the most probable
thing that will happen is that Lisp will get the blame :)

Cheers

--
Marco

Henrik Motakef

Apr 16, 2003, 7:39:08 PM
Marco Antoniotti <mar...@cs.nyu.edu> writes:

>> If we're lucky, we'll be talking about 'XML Winter' in a few years. If
>> we're REALLY lucky, Java will be blamed for XML Winter.
>
> If we are REALLY REALLY lucky, Perl and Python will get the blame. If
> we are REALLY REALLY REALLY lucky, VB and C# will get the blame.
>
> However, given the state of the world these days, the most probable
> thing that will happen is that Lisp will get the blame :)

After all, XML is just clumsy sexprs, no?

Regards
Henrik ;-)

Michael D. Kersey

Apr 16, 2003, 10:43:36 PM
Paolo Amoroso wrote:
>
> On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <gorbag...@NOSPAMmac.com>
> wrote:
>
> > tools like Lisp and PROLOG. The very threat that the Japanese might beat us
> > to critical applications of these technologies caused the government to fund
> > very expensive research centers like MCC in Austin, Texas (full disclaimer:
>
> What does MCC stand for?

Microelectronics and Computer Technology Corporation.
http://www.mcc.com/ is the URL of what remains.

Carl Shapiro

Apr 17, 2003, 12:35:56 AM
Paolo Amoroso <amo...@mclink.it> writes:

> On Wed, 16 Apr 2003 00:14:31 GMT, Kenny Tilton <kti...@nyc.rr.com> wrote:
>
> > Very interesting stuff. Someone should write a serious history of this
> > language. i think an oral history would be a blast, since so many of the
>
> You might check "The Brain Makers".

I certainly wouldn't. That book has a startling number of factual
errors.

Fernando Mato Mira

Apr 17, 2003, 8:43:34 AM
Henrik Motakef <henrik....@web.de> wrote in message news:<87wuhu5...@interim.henrik-motakef.de>...

Check this out for more proof:

http://www.datapower.com/products/xa35.html

Marco Antoniotti

Apr 17, 2003, 11:55:48 AM

Fernando Mato Mira wrote:

> Henrik Motakef wrote in message
> news:<87wuhu5...@interim.henrik-motakef.de>...


>
> >Marco Antoniotti writes:
> >
> >
> >>>If we're lucky, we'll be talking about 'XML Winter' in a few years. If
> >>>we're REALLY lucky, Java will be blamed for XML Winter.
> >>
> >>If we are REALLY REALLY lucky, Perl and Python will get the blame. If
> >>we are REALLY REALLY REALLY lucky, VB and C# will get the blame.
> >>
> >>However, given the state of the world these days, the most probable
> >>thing that will happen is that Lisp will get the blame :)
> >
> >After all, XML is just clumsy sexprs, no?
>
>
> Check this out for more proof:
>
> http://www.datapower.com/products/xa35.html


How come I have a sense of "deja vu"? :)

Cheers

--
Marco Antoniotti


Mario S. Mommer

Apr 17, 2003, 11:57:52 AM
Marco Antoniotti <mar...@cs.nyu.edu> writes:
> > http://www.datapower.com/products/xa35.html
>
> How come I have a sense of "deja vu"? :)

The first time it is tragedy, the second time, comedy.

Bob Bane

Apr 17, 2003, 12:24:31 PM
Henrik Motakef wrote:

> Marco Antoniotti <mar...@cs.nyu.edu> writes:
>
> After all, XML is just clumsy sexprs, no?
>

My Slashdot .sig has always been:


To a Lisp hacker, XML is S-expressions in drag.


Joe Marshall

Apr 17, 2003, 12:26:07 PM

Yeah? How about the forty-third time?

Fred Gilham

Apr 17, 2003, 12:51:52 PM

mato...@acm.org (Fernando Mato Mira) wrote:
> Check this out for more proof:
>
> http://www.datapower.com/products/xa35.html

I found the following interesting.

Q: Which XML and XSLT processors does the XA35 use? Doesn't the
XA35 use open-source or third-party software?

Absolutely not! The XA35 is purpose-built to process XML and
XSLT using our own advanced compiler technologies. DataPower
owns all of its patent-pending intellectual property and is
not restricted by the development schedules of third parties.

So not being open-source seems to be a selling point??? Well, I guess
maybe they're just claiming that they don't use other people's stuff,
so they're not subject to other people's problems.

I also notice that they want to patent compilers:

Q: What is XML Generation Three(tm) technology?

XML Generation Three(tm) or XG3(tm) is a patent pending
technology invented by DataPower to address the unique
demands of XML and XSLT processing. It is the core technology
within the XA35 XML Accelerator and all of DataPower's
XML-Aware products.

Q: Why is the XA35 so fast? How fast is it?

XG3(tm) technology compiles the operations described in a
stylesheet directly to *machine code*, the actual instructions
executed by the target CPU. This results in order-of-magnitude
performance advantage over java-based or other interpreter
systems. The dynamic nature of XSL is not lost, from the
user's perspective the output is the same --- just accelerated
by 10X or more in most cases. It is important to note that the
10x performance improvement the XA35 delivers is for both
latency and throughput, the two crucial measurements of speed.

Does this give anyone else besides me a kind of feeling of impending
doom?

--
Fred Gilham gil...@csl.sri.com
A common sense interpretation of the facts suggests that a
superintellect has monkeyed with physics, as well as with chemistry
and biology, and that there are no blind forces worth speaking about
in nature. --- Fred Hoyle

Paolo Amoroso

Apr 17, 2003, 2:13:52 PM
On 17 Apr 2003 00:35:56 -0400, Carl Shapiro <cshapi...@panix.com>
wrote:

> Paolo Amoroso <amo...@mclink.it> writes:
[...]


> > You might check "The Brain Makers".
>
> I certainly wouldn't. That book has a startling number of factual
> errors.

Could you please provide a few examples?

Marc Battyani

Apr 17, 2003, 5:14:33 PM

"Fred Gilham" <gil...@snapdragon.csl.sri.com> wrote

> I also notice that they want to patent compilers:
>
> Q: What is XML Generation Three(tm) technology?
>
> XML Generation Three(tm) or XG3(tm) is a patent pending
> technology invented by DataPower to address the unique
> demands of XML and XSLT processing. It is the core technology
> within the XA35 XML Accelerator and all of DataPower's
> XML-Aware products.
>
> Q: Why is the XA35 so fast? How fast is it?
>
> XG3(tm) technology compiles the operations described in a
> stylesheet directly to *machine code*, the actual instructions
> executed by the target CPU. This results in order-of-magnitude
> performance advantage over java-based or other interpreter
> systems. The dynamic nature of XSL is not lost, from the
> user's perspective the output is the same --- just accelerated
> by 10X or more in most cases. It is important to note that the
> 10x performance improvement the XA35 delivers is for both
> latency and throughput, the two crucial measurements of speed.
>
> Does this give anyone else besides me a kind of feeling of impending
> doom?

The sad point is that they will surely be granted some patents for this by
the USPTO.
BTW, European people should look at http://www.eurolinux.org/ to see the
current status of software patents. The news is rather alarming.

Marc


Gorbag

Apr 17, 2003, 6:39:12 PM
On 4/16/03 9:07 AM, in article nX2dPiaVFXzJmm...@4ax.com, "Paolo
Amoroso" <amo...@mclink.it> wrote:

> On Tue, 15 Apr 2003 09:08:34 -0700, Gorbag <gorbag...@NOSPAMmac.com>
> wrote:
>
>> tools like Lisp and PROLOG. The very threat that the Japanese might beat us
>> to critical applications of these technologies caused the government to fund
>> very expensive research centers like MCC in Austin, Texas (full disclaimer:
>
> What does MCC stand for?

Well, it's a lawyer and a lightbulb now, but you can still see for yourself:

http://www.mcc.com

For those of you too lazy to click through,

Microelectronics and Computer Technology Corporation.

For a short while, I think it was called MCTC, but they shortened it to MCC
since they liked it better.
>
>
> Paolo

Jeff Katcher

Apr 18, 2003, 3:55:04 PM
Carl Shapiro <cshapi...@panix.com> wrote in message news:<ouy1y01...@panix3.panix.com>...
Can you please cite examples? I had the pleasure :) of seeing much of
what the author described from a front row seat. My memory (and it's
been a while since I read it) tells me that it caught the piquancy of
the era pretty well, certainly the people that I knew anyway.

Jeffrey Katcher

Eray Ozkural exa

Apr 18, 2003, 5:11:25 PM
Gorbag <gorbag...@NOSPAMmac.com> wrote in message news:<BAC17C92.3128%gorbag...@NOSPAMmac.com>...
> I think AI winter had very little to do with Lisp per se, and more to do
> with government funding, certain personalities overblowing expectations, and
> making folks believe that all this wonderful stuff was intimately tied with
> the power of Lisp, PROLOG, and other "AI languages". But believe you me,
> Lisp DID "fit on the slides." (DARPA would not have taken such an interest
> in Common Lisp if they didn't think they were going to get a substantial
> benefit in integrating the output from all the university AI efforts.)

In my opinion the relatively slow rate of development in AI research
should not be attributed to any particular programming system. As a
matter of fact, when I look at programming languages today, I think
LISP was the first real programming language and it was C-like
languages which made it harder to write complex software! Today, IMHO,
languages such as ocaml are bound to close the gap between the
theoretical mindset a scientist must accumulate and the practical
creativity that is the mark of a good programmer. (Though, it should
be said that there is no such thing as a "perfect programming
language".)

After all, without the proper algorithm no language can help you
realize it. However, an increased effectiveness in putting theory into
functional systems will speed up development cycles. My opinion is
that this cycle right now is so long that no single research group can
see to their real goals. It might take just about 3-4 years to realize a
simple idea, which is the average time a graduate student will
stay at a university. We are probably being held back by the finite
bounds of human patience and ambition :) A physicist friend told me
that very few physicists can be in full command of both general
relativity theory and quantum theory at the same time. This might be
what happens when you push to the limits.

As you have guessed, I will say that the real problem is a theoretical
crisis. Many young researchers seem to think that when we give
something a snappy name like genetic algorithms or neural networks, it
ought to be the ultimate theoretical treatment in AI, or that when we
can program robots to do simple tasks we are actually doing science.
That is, however, ultimately wrong. And it is exactly this kind of
"Starvation of Ideas" that led to the AI winter. There was simply not
enough innovation. We lacked, I think, the pursuit of higher order
intelligence problems. We were stuck with easy problems and a few
inefficient and low quality solutions that have no chance of scaling
up.

In my opinion, however, the rebirth of machine learning research is a
good direction. A lot of people are now trying to find hard problems
and objectively evaluating a range of methods. I think we have finally
gotten rid of the "connectionist" and "symbolic" camps, which will turn
out to be the most senseless distinction in a scientific community
decades later. The mind is computational or not. That simple. And if
it is computational, there are computations that _must_ be
characterized, of course that characterization is independent of a
particular architecture.

I actually have a "slingshot argument" about this issue, but it's
going to take some time before I can put it into words.

Thanks,

__
Eray Ozkural

Michael Schuerig

Apr 18, 2003, 6:25:41 PM
Eray Ozkural exa wrote:

> And it is exactly this kind of
> "Starvation of Ideas" that led to the AI winter. There was simply not
> enough innovation. We lacked, I think, the pursuit of higher order
> intelligence problems. We were stuck with easy problems and a few
> inefficient and low quality solutions that have no chance of scaling
> up.

I'm a complete outsider to the recent history of AI, just sitting on the
fence and reading. The impression I've got is that AI as a research
program has been creating results at a steady pace. In my opinion, most
of it has nothing to do with intelligence/cognition, but that's another
matter entirely.

The real problem, from my limited point of view, was that there were a
few people who made completely overblown promises -- "there are now
machines that think...", "in 10 years..." -- and another few who were
all too willing to believe the hype. It's astonishing it took reality so
long to bite back.

Incidentally, I seem to remember reading (in this group?) that AI
logistics software saved more money during Desert Storm than DARPA had
ever spent on AI research. Can anyone confirm or disprove this claim?
How's the balance for civilian uses of AI?

Michael

--
Michael Schuerig All good people read good books
mailto:schu...@acm.org Now your conscience is clear
http://www.schuerig.de/michael/ --Tanita Tikaram, "Twist In My Sobriety"

M H

Apr 19, 2003, 4:36:52 AM
Michael Schuerig wrote:
> The real problem, from my limited point of view, was that there were a
> few people who made completely overblown promises -- "there are now
> machines that think...", "in 10 years..." -- and another few who were
> all too willing to believe the hype. It's astonishing it took reality so
> long to bite back.

The real problem was that 20 years ago people did not know how difficult
AI problems really are. In AI simple approaches often work for simple
examples. But when it comes to real-world settings with noisy, large
datasets these approaches break down. It took some time to discover
that and to come up with an alternative. In many AI fields this
alternative was the probabilistic treatment of AI problems. If you are
interested you can clearly see this paradigm shift when comparing
Norvig's PAIP to his "AI - A Modern Approach".

Interestingly, this shift in concepts also induced a shift in the tools
researchers use for programming. I would _guess_ (I don't have any hard
data here) that Matlab and C++ are now the most frequently used languages
for research in vision, speech, and machine learning.

Matthias

Michael Schuerig

Apr 19, 2003, 4:50:34 AM
M H wrote:

> The real problem was that 20 years ago people did not know how
> difficult
> AI problems really are. In AI simple approaches often work for
> simple
> examples. But when it comes to real-world settings with noisy, large
> datasets these approaches break down. It took some time to discover
> that and to come up with an alternative. In many AI fields this
> alternative was the probabilistic treatment of AI problems. If you
> are interested you can clearly see this paradigm shift when comparing
> Norvigs PAIP to his "AI - A Modern Approach".

Indeed, PAIP is a great study of historically significant AI programs,
reduced to their core. Still, even in the heyday of Logic Theorist and
General Problem Solver, even with a most optimistic outlook,
extravagant claims about intelligent computers were utterly unfounded.
I don't believe at all that it is only with hindsight that one can see that.

Michael

--
Michael Schuerig They tell you that the darkness
mailto:schu...@acm.org is a blessing in disguise.
http://www.schuerig.de/michael/ --Janis Ian, "From Me To You"

M H

Apr 19, 2003, 7:52:39 AM
Michael Schuerig wrote:
> Indeed, PAIP is a great study of historically significant AI programs,
> reduced to their core. Still, even in the heyday of Logic Theorist and
> General Problem Solver, even with a most optimistic outlook,
> extravagant claims about intelligent computers were utterly unfounded.
> I don't believe at all, that only with hindsight one can see that.

In the fields I cited (esp. vision and speech recognition) it is
regularly difficult to explain to non-experts why computers should have
difficulty solving tasks which are so simple that even a three-year-old can
do them effortlessly!

And how could one have known in advance that not logic and rule
induction but probability theory and graphs would become the tools to
build the most successful expert systems with?

If the researchers in AI had had the right concepts in the early 80s
where would we be today, taking into account all that funding AI has
received? Can you tell? I can't.

Matthias

Bulent Murtezaoglu

Apr 19, 2003, 8:32:27 AM
>>>>> "MH" == M H <M> writes:
[...]
MH> And how could one have known in advance that not logic and
MH> rule induction but probability theory and graphs would become
MH> the tools to build the most successful expert systems with?
[...]

I think we knew that all along? It isn't immediately obvious to me
that if the funding and efforts were steered away from purely symbolic
systems to hybrid probabilistic ones, say, right after Mycin, a
remarkably different sequence of events/accomplishments would have
unfolded. Direct funding would have been no substitute for the
explosion in inexpensive computing power partially brought about by
a completely different set of market forces on top of Moore's law.

The market will eventually shake off the anti-AI hype, and instead try to
pick and choose between good and bad prospects w/o knee-jerk
rejection. There is good technology out there and competent people to
apply it. Whether it is called AI or not is immaterial as far as the
development is concerned. While it is pretty clear that the party has
been over for a while, we are not left with just a hangover. That is,
the endeavour hasn't been pointless and a total loss.

cheers,

BM



Paul Wallich

Apr 19, 2003, 9:03:53 AM
In article <87u1cua...@acm.org>, Bulent Murtezaoglu <b...@acm.org>
wrote:

What is remarkable to me is how well some of the early research ideas
have held up, if only because there haven't been a lot of new ones to
replace them. Even in cases where Moore's Law has made an enormous
difference to the kinds of tradeoffs programmers must make, the general
algorithms were explored back in the mid-80s. (In the past few years
there has been some new movement perhaps.)

One thing that concerns me, though, is the apparent increase in the
opacity of much of the work being done now. Statistical recognizers and
memory-based systems can give awfully good performance on test sets (and
sometimes real-world data) but by leaving the "knowledge extraction" to
systems where the meaning of extracted features is unclear, you run all
kinds of applications risks. This argument has been made and lost
before, of course, but that's part of the general trend for the
most-computable solution to win.

paul

Paolo Amoroso

Apr 19, 2003, 11:49:27 AM
On Sat, 19 Apr 2003 00:25:41 +0200, Michael Schuerig <schu...@acm.org>
wrote:

> Incidentally, I seem to remember reading (in this group?) that AI
> logistics software saved more money during Desert Storm than DARPA had
> ever spent on AI research. Can anyone confirm or disprove this claim?

I seem to have read that that software ran on Symbolics Lisp Machines.

Frank A. Adrian

Apr 19, 2003, 1:34:46 PM
Paul Wallich wrote:

> One thing that concerns me, though, is the apparent increase in the
> opacity of much of the work being done now. Statistical recognizers and
> memory-based systems can give awfully good performance on test sets (and
> sometimes real-world data) but by leaving the "knowledge extraction" to
> systems where the meaning of extracted features is unclear, you run all
> kinds of applications risks. This argument has been made and lost
> before, of course, but that's part of the general trend for the
> most-computable solution to win.

Even human intelligence has application risks. The real issue for AI is not
whether you can eliminate application risk, but whether you can expand the
range of correct operation to human limits with similar (or lesser) risk
attributes over the domain of the given application and expand the
operation to a wide enough set of domains.

Also, the less opacity a system has, the more automaton-like the system
looks. At some point it no longer looks like AI, it's just programming.
An interesting question is whether or not you can bound the risk while
still providing the "surprising behaviors" within those bounds that seem to
characterize intelligence. Or is there an uncertainty principal at work
that says that unexpected behavior - and thus intelligence - is tied to
risk in an inherent way such that, once the risk is bounded, we no longer
see intelligence and, conversely, if we want intelligence, we need to put
up with the risk that the intelligence will not always behave optimally or
even correctly.

It is a conundrum...

faa

Frank A. Adrian

Apr 19, 2003, 1:38:59 PM
Fred Gilham wrote:

> Does this give anyone else besides me a kind of feeling of impending
> doom?

It gives me a feeling of impending horselaugh...

faa

Paul Wallich

Apr 19, 2003, 8:34:53 PM
In article <WMfoa.15$Qa5....@news.uswest.net>,

"Frank A. Adrian" <fad...@ancar.org> wrote:

> Paul Wallich wrote:
>
> > One thing that concerns me, though, is the apparent increase in the
> > opacity of much of the work being done now. Statistical recognizers and
> > memory-based systems can give awfully good performance on test sets (and
> > sometimes real-world data) but by leaving the "knowledge extraction" to
> > systems where the meaning of extracted features is unclear, you run all
> > kinds of applications risks. This argument has been made and lost
> > before, of course, but that's part of the general trend for the
> > most-computable solution to win.
>
> Even human intelligence has application risks. The real issue for AI is not
> whether you can eliminate application risk, but whether you can expand the
> range of correct operation to human limits with similar (or lesser) risk
> attributes over the domain of the given application and expand the
> operation to a wide enough set of domains.

In a lot of cases, you don't even have to do that -- it may be
sufficient in an overall system context to have an "AI" with much worse
than human performance, but at much lower cost or with much higher
throughput, as long as there's a human supervisor. But by applications
risks, I'm talking more about the overall system than just about the AI
part (more below)



> Also, the less opacity a system has, the more automaton-like the system
> looks. At some point it no longer looks like AI, it's just programming.
> An interesting question is whether or not you can bound the risk while
> still providing the "surprising behaviors" within those bounds that seem to
> characterize intelligence. Or is there an uncertainty principal at work
> that says that unexpected behavior - and thus intelligence - is tied to
> risk in an inherent way such that, once the risk is bounded, we no longer
> see intelligence and, conversely, if we want intelligence, we need to put
> up with the risk that the intelligence will not always behave optimally or
> even correctly.

At its simplest, that's Minsky's speech from '84 or so about how as soon
as something starts working reliably, it ceases to be admitted as AI. I
think in general, you're right, but with an important caveat about what
we mean by opacity. The hallmark of many operations of human
intelligence is that they can't necessarily be predicted, but they can
be followed.

I don't think the current generation of statistically-based (for lack of
a better term) AI systems is terribly well-designed in terms of letting
people explore the semantic implications of the features that get
extracted. And that opacity in (what passes for)
knowledge-representation and conclusion-drawing is where the excess risk
lies. Unless your training data is really good and really representative
of the universe of interest, you're going to get results that make Parry
vs Eliza look robust, but you're not going to know it until fairly late
in the game, and you're going to have a heck of a time figuring out what
went wrong.

Of course, most of what one hears from the outside is the horror
stories, so I'm probably biased.

paul

Eray Ozkural exa

Apr 22, 2003, 4:14:53 PM
M H <m...@nospam.org> wrote in message news:<b7rdc0$dc8$07$1...@news.t-online.com>...

> In the fields I cited (esp. vision and speech recognition) it is
> regularly difficult to explain to non-experts why computers should have
> difficulties solve tasks which are so simple even a three-year-old can
> do them effortlessly!

Also note how vision and speech recognition are not called AI nowadays
although speech recognition was something of a grand challenge in AI
when Space Odyssey was shot :)

That's because the focus of the research in those domains shifted from
the cognitive to (electrical!!!) engineering problems. IMHO real
speech recognition is AI-complete, of course. This can easily be shown
via the chain of dependencies: semantic analysis depends on pragmatics,
syntactic analysis on semantics, morphology on syntax, and phonology on
morphology. :)

Also, some of these things, after they get "trivial" to understand, are
thought not to be part of AI research any more. I don't think I
perceive an alpha-beta pruning game-playing algorithm as AI research,
because there is really very little left to explore using that model.
But still, a search algorithm is more AI than a lot of things in
computer science; it's the first part of AIMA. :)

Thanks,

__
Eray Ozkural

Eray Ozkural exa

Apr 22, 2003, 4:16:11 PM
Paul Wallich <p...@panix.com> wrote in message news:<pw-EEB233.20...@reader1.panix.com>...

> I don't think the current generation of statistically-based (for lack of
> a better term) AI systems is terribly well-designed in terms of letting
> people explore the semantic implications of the features that get
> extracted. And that opacity in (what passes for)
> knowledge-representation and conclusion-drawing is where the excess risk
> lies. Unless your training data is really good and really representative
> of the universe of interest, you're going to get results that make Parry
> vs Eliza look robust, but you're not going to know it until fairly late
> in the game, and you're going to have a heck of a time figuring out what
> went wrong.

This is the argument for the "symbolic camp". I want to point out that
this distinction is too vague to be useful. There is essentially no
difference between
1) C4.5
2) A naive bayesian learning algorithm
3) An ANN learning algorithm
4) Another LISP code that has the same goal!!!

I have an argument that shows, in theory, there is no useful
distinction to be made of the "symbolic-ity" of the models employed.
As a matter of fact a DT is just as "opaque" as an ANN if you think
about it carefully. What happens when you increase the number of input
dimensions to 1000 and training instances to 1M? What happens when you
design by hand an XOR circuit with ANN?

__
Eray Ozkural
Bilkent Univ. CS Dept. Miserable PhD student

sv0f

Apr 22, 2003, 4:54:29 PM
In article <fa69ae35.03042...@posting.google.com>,

er...@bilkent.edu.tr (Eray Ozkural exa) wrote:

>There is essentially no
>difference between
> 1) C4.5
> 2) A naive bayesian learning algorithm
> 3) An ANN learning algorithm
> 4) Another LISP code that has the same goal!!!
>
>I have an argument that shows, in theory, there is no useful
>distinction to be made of the "symbolic-ity" of the models employed.

Are you aware of papers that establish the approximate
equivalence of (1), (2), and (3)? I know there were some
empirical comparisons of ID3 and ANNs in the late 1980s
but have lost track of the literature. Any recent references
would be much appreciated.

>As a matter of fact a DT is just as "opaque" as an ANN if you think
>about it carefully. What happens when you increase the number of input
>dimensions to 1000 and training instances to 1M? What happens when you
>design by hand an XOR circuit with ANN?

Maybe I'm not thinking "carefully" enough. Isn't this just
the claim that applying any learning algorithm to a really
really really complex domain will result in a nearly
uninterpretable "solution"?

Paul Wallich

Apr 23, 2003, 10:41:06 AM
In article <fa69ae35.03042...@posting.google.com>,
er...@bilkent.edu.tr (Eray Ozkural exa) wrote:

Thanks for rediscovering Turing-equivalence ;-)

In practice, however, people who use different kinds of programming
paradigms tend to expect different things from them. So although
"symbolic" representations can be at least as opaque as artificial
neural nets (I've seen some beautiful explications of why certain
systems have high-weight connections in particular places) there's a
strong tendency toward more black-boxness in the more traditionally
numeric-intensive computational methods. Combine that with hidden
regularities in your training sets, and you can get some really
spectacular failures -- especially in cases where your statistical
algorithms appear to be doing best in finding simple classifications.
(Historical examples include the tank-recognizer that zeroed in on
picture quality and sun angle, or the mortgage assistant that found race
to be the crucial classifier for its training set.) The dog's breakfast
that is off-axis face recognition is probably a good current example.

The whole thing reminds me of the brief period when straight simulated
annealing was the hottest thing in chip layout. An acquaintance
remarked, "It's really a great technique if you know nothing at all
about the problem you're trying to solve."

paul

M H

unread,
Apr 23, 2003, 10:17:48 AM4/23/03
to
Paul Wallich wrote:
> In practice, however, people who use different kinds of programming
> paradigms tend to expect different things from them. So although
> "symbolic" representations can be at least as opaque as artificial
> neural nets (I've seen some beautiful explications of why certain
> systems have high-weight connections in particular places) there's a
> strong tendency toward more black-boxness in the more traditionally
> numeric-intensive computational methods. Combine that with hidden
> regularities in your training sets, and you can get some really
> spectacular failures -- especially in cases where your statistical
> algorithms appear to be doing best in finding simple classifications.

For some statistical classifiers there are upper bounds on the
generalization error you can derive. If you can't derive any useful
bounds you can estimate the generalization error using, e.g., some sort
of crossvalidation. Assuming that your training data samples the true
distribution generating your data you will be able to give accurate
bounds on the errors you are to expect when your system is employed.
(If you don't have properly sampled training data you shouldn't rely on
machine learning in the first place.)

Note that interpretability and classification performance are two
different goals and often you only need one. The human visual system is
an example of a classifier which shows excellent performance but really
bad interpretability. Of course, this may improve as science proceeds.

Similarly, many recent spam mail filters are based on a grossly
simplified statistical model of language. Although their classification
decisions are not easy to understand, these systems perform better than
their older keyword-regexp-based counterparts. Most people will just be
interested in a good recognition rate. They won't care that regexps are
so much easier to read.

Matthias

Paul Wallich

unread,
Apr 23, 2003, 1:20:26 PM4/23/03
to
In article <b86ecj$cfg$06$1...@news.t-online.com>, M H <m...@nospam.org>
wrote:

> For some statistical classifiers there are upper bounds on the
> generalization error you can derive. If you can't derive any useful
> bounds you can estimate the generalization error using, e.g., some sort
> of crossvalidation. Assuming that your training data samples the true
> distribution generating your data you will be able to give accurate
> bounds on the errors you are to expect when your system is employed.

and the kicker:

> (If you don't have properly sampled training data you shouldn't rely on
> machine learning in the first place.)

Perhaps you would be good enough to go and chisel this line into the
foreheads of research directors and program managers around the world?

Especially in some of the intelligence-related areas where machine
learning is currently being touted, the quality of some of the training
data would set fire to your hair.

Of course, this issue is really just another version of the "If you know
how to do it, it's not AI" problem -- the real challenge is developing
techniques that work (or that fail sensibly) for really lousy,
incomplete and nonrepresentative training sets.

paul

Bulent Murtezaoglu

unread,
Apr 24, 2003, 2:04:55 AM4/24/03
to
>>>>> "PW" == Paul Wallich <p...@panix.com> writes:
[...]
PW> Of course, this issue is really just another version of the
PW> "If you know how to do it, it's not AI" problem -- the real
PW> challenge is developing techniques that work (or that fail
PW> sensibly) for really lousy, incomplete and nonrepresentative
PW> training sets.

[I have been away from the field a long time so grains of salt are
indicated but] 'lousy, incomplete and nonrepresentative' training sets
and a general algorithm will necessarily give you something that
either does the wrong thing or is indecisive most of the time. If you
want to infer structures that are obscured or non-existent in the
training set, you'll just have to bias the system in some other way,
outside of your learning algorithm. You can't have it both ways.
Think of the extreme case where 'lousy, incomplete and
nonrepresentative' means random data. If you feed the program random
data as the training set and it still does the right thing for your
purposes, something outside of the learning algorithm must be at work.

IMHO but not too H, I'd be surprised and maybe excited if this were
not so.

cheers,

BM

Michael Sullivan

unread,
May 1, 2003, 12:27:33 PM5/1/03
to
Michael Schuerig <schu...@acm.org> wrote:

> Indeed, PAIP is a great study of historically significant AI programs,
> reduced to their core. Still, even in the heyday of Logic Theorist and
> General Problem Solver, even with a most optimistic outlook,
> extravagant claims about intelligent computers were utterly unfounded.
> I don't believe at all, that only with hindsight one can see that.

I don't believe it at all either. I've never been an AI researcher, but
I've been interested since I can remember. I was in college a bit less
than 20 years ago, and was reading and talking to people who did AI
research at the time. I never thought those problems were easy, and the
people I was reading or talking to at the time didn't think so either.

It seems to me that the only people who thought we'd have thinking
computers by 2000, that any kind of AI-complete problem was easy, were
people who didn't really understand those problems -- lay-folk without a
real appreciation for the field.

I think you had actual researchers who were overeager figuring they
could deliver *something useful and worth the investment* in 5-10 years
and talking in very optimistic terms, and outsiders reading that as
"robots that think in 20-30 years", not realizing that said researcher's
"something worth the investment" was about a millionth of the way down
the road to actual AI, if that.

In 1968 Kubrick had HAL showing up in 2001, but by 1984, if you'd
asked the people I was talking to, they'd have considered 5001 a lot
more likely.


Michael

Gorbag

unread,
May 1, 2003, 1:03:28 PM5/1/03
to
On 5/1/03 9:27 AM, in article 1fua2g2.1mwj7uu1mq4uwkN%mic...@bcect.com,
"Michael Sullivan" <mic...@bcect.com> wrote:

> Michael Schuerig <schu...@acm.org> wrote:
>
>> Indeed, PAIP is a great study of historically significant AI programs,
>> reduced to their core. Still, even in the heyday of Logic Theorist and
>> General Problem Solver, even with a most optimistic outlook,
>> extravagant claims about intelligent computers were utterly unfounded.
>> I don't believe at all, that only with hindsight one can see that.

> It seems to me that the only people who thought we'd have thinking


> computers by 2000, that any kind of AI-complete problem was easy, were
> people who didn't really understand those problems -- lay-folk without a
> real appreciation for the field.

Demonstrably false. There are several AAAI fellows who were making wild-eyed
predictions ten or more years ago about the state of the art by the year
2000. The rest of the AAAI fellows know who these guys were, so it shouldn't
be too hard to find some names if you care to look. (I'm not putting
anyone's name into a newsgroup; I don't know where my paycheck is going to
be coming from when some email of mine is dredged up on Google).

sv0f

unread,
May 1, 2003, 1:21:00 PM5/1/03
to
In article <BAD6A170.3C83%gorbag...@NOSPAMmac.com>,
Gorbag <gorbag...@NOSPAMmac.com> wrote:

>> Michael Schuerig <schu...@acm.org> wrote:
>>
>>> Indeed, PAIP is a great study of historically significant AI programs,
>>> reduced to their core. Still, even in the heyday of Logic Theorist and
>>> General Problem Solver, even with a most optimistic outlook,
>>> extravagant claims about intelligent computers were utterly unfounded.
>>> I don't believe at all, that only with hindsight one can see that.

[...]


>There are several AAAI fellows who were making wild-eyed
>predictions ten or more years ago about the state of the art by the year
>2000. The rest of the AAAI fellows know who these guys were, so it shouldn't
>be too hard to find some names if you care to look.

I've lost track of why you guys are upset.

That some AI pioneers made predictions that turned out to be
optimistic?

That the early AI technologies developed by these pioneers were
obviously unsuitable, and should have been recognized as such
back in the day?

That the early AI technologies developed by these pioneer turned
out to be unsuitable, but current technologies are much better?

That AI was and will always be pseudoscience?

Something else?!

Arthur T. Murray

unread,
May 1, 2003, 3:31:47 PM5/1/03
to
sv0f <no...@vanderbilt.edu> wrote on Thu, 01 May 2003:
> [...] I've lost track of why you guys are upset.

>
> That some AI pioneers made predictions that turned out
> to be optimistic?
>
> That the early AI technologies developed by these pioneers
> were obviously unsuitable, and should have been recognized
> as such back in the day?
>
> That the early AI technologies developed by these pioneer turned
> out to be unsuitable, but current technologies are much better?

http://www.scn.org/~mentifex/jsaimind.html is a JavaScript AI.


>
> That AI was and will always be pseudoscience?
>
> Something else?!

http://www.scn.org/~mentifex/ai4udex.html is a hyperlink index
(in two directions: AI/CS/Philosophy background and AI4U pages)
to the textbook "AI4U: Mind-1.1 Programmer's Manual" (q.v.).

Duane Rettig

unread,
May 1, 2003, 2:35:07 PM5/1/03
to
mic...@bcect.com (Michael Sullivan) writes:

> In the 1970s Kubrick had HAL showing up in 2001, but by 1984, if you'd
> asked the people I was talking to, they'd have considered 5001 a lot
> more likely.

2001 was the time setting for the movie, but not when HAL was
"born". I saved an old message from a colleage of mine, sent on
Jan 8, 1992, saying:

Just read on hackers_guild that according to "2001, A Space
Odyssey", HAL is born on 12-Jan-92, this Sunday.

So HAL was almost 9 years old when the movie was to take place...

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Tim Bradshaw

unread,
May 1, 2003, 2:55:18 PM5/1/03
to
* Michael Sullivan wrote:

> It seems to me that the only people who thought we'd have thinking
> computers by 2000, that any kind of AI-complete problem was easy,
> were people who didn't really understand those problems -- lay-folk
> without a real appreciation for the field.

No. Major people in the field made these stupid, bogus claims.

--tim

Eray Ozkural exa

unread,
May 1, 2003, 7:34:02 PM5/1/03
to
Paul Wallich <p...@panix.com> wrote in message news:<pw-45A375.10...@reader1.panix.com>...

>
> Thanks for rediscovering Turing-equivalence ;-)
>

I'm taking that as a humorous remark. :>

However, it doesn't detract from my argument. I argue not for the
computational equivalence of the four methods. (I leave it as an exercise
to the reader whether the models they produce are computationally
equivalent [seriously]. Hint: BP learning and a LISP program can produce
computationally equivalent models.) I argue that there is NO useful
qualitative distinction to be made among these wildly different ranks
of learning methods. The instances of the argument I gave clearly
show the validity of the argument schema in both directions.

> In practice, however, people who use different kinds of programming
> paradigms tend to expect different things from them. So although
> "symbolic" representations can be at least as opaque as artificial
> neural nets (I've seen some beautiful explications of why certain
> systems have high-weight connections in particular places) there's a
> strong tendency toward more black-boxness in the more traditionally
> numeric-intensive computational methods. Combine that with hidden
> regularities in your training sets, and you can get some really
> spectacular failures -- especially in cases where your statistical
> algorithms appear to be doing best in finding simple classifications.
> (Historical examples include the tank-recognizer that zeroed in on
> picture quality and sun angle, or the mortgage assistant that found race
> to be the crucial classifier for its training set.) The dog's breakfast
> that is off-axis face recognition is probably a good current example.
>

As you may have recognized, I argue against this argument, which I find
useless. Essentially, the "comprehensibility" or "communicability" of
the results is another problem, which should not be taken as a general
machine learning problem. However, in the literature it is seen as part
of the KDD process, of which the data mining or machine learning
algorithm is just one step. For instance, the overall system must be able
to present the intermediate results in a graphical and concise form to
the user, say a scientist who's trying to understand what's inside a
petabyte dataset.

Can this influence the choice of algorithm?

The answer is YES, but that cannot be used to disprove my argument. By
nature, the algorithm can be either a "connectionist" OR "symbolic"
one. _Depends on the application_. Maybe I should say that twice or
thrice.

Usually one will have to apply quite sophisticated
visualization/summarization/fuzzification algorithms to make sense of
the output of the data mining step, whatever kind of model it is. The
key here is thinking BIG. Number of training instances > 10^7. Number
of dimensions > 10^3. One should not assume that a human can make
sense of a functional model derived from such data.

> The whole thing reminds me of the brief period when straight simulated
> annealing was the hottest thing in chip layout. An acquaintance
> remarked, "It's really a great technique if you know nothing at all
> about the problem you're trying to solve."

Now I see a true programmer :) In practice only carefully crafted
algorithms can find near-optimal solutions in VLSI layout, which has
also given rise to state-of-the-art graph and hypergraph partitioning
algorithms. Similar remarks apply to nearly all "really hard"
problems.

SA was investigated but I think quickly abandoned by the VLSI
community. Actually I'm having to explain why SA should not be seen as
a generally applicable method to a not-too-clueful student right now
:/

Happy hacking,

__
Eray Ozkural

Tom Osborn

unread,
May 2, 2003, 1:20:00 AM5/2/03
to

My view (having been there at the time) is that three things went wrong.

[Generalisation warning - all comments below have exceptions].

Firstly, academic AI which didn't SCALE well was offered as the direction to
head, rather than tackling the problem of scale. Many MS students were
"trained" to build limited-domain, limited-robustness "Expert Systems" and
then were employed as ES experts.

Secondly, the methodologies didn't transfer from narrow, limited domains
very well at all. CBR was an attempt to address that, but it was a version
of amateur tinkering for a fair while. Statistics and ML had more to offer.
AI supported by decision theory and cognitive modelling also.

Thirdly, the marketplace (IT manager/buyers/users) were skeptical, defensive
and ignorant. They often still are. It's an "evolutionary" thing, like a Peter
Principle about "ideas that can work, and [therefore] we are prepared to
adopt them". Hype from AI did not help. SQL _could_ be written to
support ANY kinds of management decisions and insights discovery
(etc), but that's a fucking lot of non-transparent SQL.

Unfortunately "Prolog as a database query language" was killed by the
LISP/Prolog divide (or US/European divide). LISP fostered "algorithm
thinking" (bounded search, side effects, etc), while Prolog fostered
"data semantics and requirements thinking". I think Prolog was a better
bet for the long term... [This paragraph may cause flames. It is my opinion
having been involved in many "camps" over the years. Feel free to defend
alternative opinions, rather than attack me...].

Lastly, the symbolic vs NN/statistical/maths/decision theory WAR was
very dumb indeed.

Tom.

Mario S. Mommer

unread,
May 2, 2003, 3:59:15 AM5/2/03
to

True.

If someone needs a reference, you can find some quotes and names in
the chapter on the General Problem Solver in PAIP.

Mario.

David Longley

unread,
May 2, 2003, 5:01:44 AM5/2/03
to
In article <3eb20...@news.iprimus.com.au>, Tom Osborn <MAPStom@DELETE_
CAPS.nuix.com.au> writes


There's another possibility - namely that those pursuing "AI" had
misconceived the nature of human "intelligence" and/or skills. More
fundamentally, they had misconceived the nature of the psychological.

--
David Longley

Jeff Caldwell

unread,
May 2, 2003, 8:25:27 AM5/2/03
to
Tom,

I take it you exclude a Prolog compiler embedded in Lisp, as in Chapter
12 of PAIP. Pure Prolog because it is more efficient than a Prolog
written and embedded in Lisp, as in PAIP, or because of a deeper
semantic difference?

Jeff

Tom Osborn wrote:
>...I think Prolog was a better

Christopher Browne

unread,
May 2, 2003, 9:54:33 AM5/2/03
to
Quoth Jeff Caldwell <jd...@yahoo.com>:

> Tom Osborn wrote:
>>...I think Prolog was a better
>> bet for the long term...
>
> I take it you exclude a Prolog compiler embedded in Lisp, as in
> Chapter 12 of PAIP. Pure Prolog because it is more efficient than a
> Prolog written and embedded in Lisp, as in PAIP, or because of a
> deeper semantic difference?

The point would be that implementing systems using Prolog involves
mostly writing code that declares intent rather than declaring how to
compute things.
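
For instance, using the PAIP-style Prolog embedded in Lisp (assuming its
<- and ?- macros are loaded), the classic member relation is just two
clauses of intent:

(<- (member ?item (?item . ?rest)))
(<- (member ?item (?x . ?rest)) (member ?item ?rest))

;; The same two clauses serve several different questions:
;; (?- (member 2 (1 2 3)))    ; is 2 in the list?
;; (?- (member ?x (1 2 3)))   ; enumerate the members

Nothing there says how the answers are to be searched for; that is the
system's business, which is where the query-tuning analogy below comes in.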

In the long run (of course, Keynes observes "we're all dead"...), the
hope would be for Prolog system environments to get increasingly
efficient at picking different computational techniques, in much the
way that SQL DBMSes have gradually gotten better at tuning queries.

It may run into limits as to how much fancier the back end can get at
"tuning," and fruitful counterarguments would also fall out of claims
amounting to "Prolog isn't expressive enough to describe what we want
to describe."

Those all seem to be somewhat tenable positions...
--
If this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me
http://cbbrowne.com/info/prolog.html
"What is piracy? Piracy is the act of stealing an artist's work without
any intention of paying for it. I'm not talking about Napster-type
software. I'm talking about major label recording contracts."
-- Courtney Love, Salon.com, June 14, 2000

BK

unread,
May 2, 2003, 10:32:40 AM5/2/03
to
Paolo Amoroso <amo...@mclink.it> wrote ...

> Michael Schuerig <schu...@acm.org> wrote:
>
> > Incidentally, I seem to remember reading (in this group?) that AI
> > logistics software saved more money during Desert Storm than DARPA had
> > ever spent on AI research. Can anyone confirm or disprove this claim?
>
> I seem to have read that that software run on Symbolics Lisp Machines.


Are you talking about Ascent? Apparently they use Allegro

http://www.franz.com/success/customer_apps/scheduling/ascent.lhtml

rgds
bk

ozan s yigit

unread,
May 2, 2003, 11:14:28 AM5/2/03
to
Jeff Caldwell <jd...@yahoo.com> writes [to tom osborn]:

> I take it you exclude a Prolog compiler embedded in Lisp, as in
> Chapter 12 of PAIP.

actually there have been more interesting couplings of the two, eg. dorai's
schelog, and another scheme+prolog implementation at indiana. i suspect that
this ends up being an academic exercise with negligible value in production
use. the two can play together, but it always feels forced, substandard.
quintus is not easy to substitute in allegro, and vice versa.

oz

>
> Tom Osborn wrote:
> >...I think Prolog was a better
> > bet for the long term...
>

---
practically no other tree in the forest looked so tree-like as this tree.
-- terry pratchett

sv0f

unread,
May 2, 2003, 11:22:05 AM5/2/03
to
In article <ey3llxq...@cley.com>, Tim Bradshaw <t...@cley.com>
wrote:

>Major people in the field made these stupid, bogus claims.

IMO, major people in AI made optimistic claims, many of
which turned out to be too optimistic (hey, it was the
birth of a new field!) and some of which led to productive
research.

FWIW, the historical addendum to Newell and Simon's (1972)
"Human Problem Solving" succinctly captures the zeitgeist
in which AI came into being. Reading it, I can imagine
being a young scholar, drunk on the sudden convergence of
a half dozen fields, and making bold predictions like
Simon's infamous 1950s utterance that a computer would
be world chess champion within ten years. (He missed by
30 years, of course.)

Michael Sullivan

unread,
May 2, 2003, 11:37:02 AM5/2/03
to
Tim Bradshaw <t...@cley.com> wrote:

> * Michael Sullivan wrote:

Alright, I can believe that -- I certainly wasn't reading everything and
I wouldn't say I was current in the field then or ever.

I also phrased that more sweepingly than I should have.

What I should have said is that I don't remember taking any such claims
seriously. Basically, I agree with Michael Schuerig that it doesn't
take hindsight to call such claims "stupid and bogus". It didn't even
take PhD level knowledge at the time. As an undergraduate math major
and regular basher of computer keyboards with only a lay interest in
serious AI, it was clear to me. It's hard for me to believe that major
researchers who made such claims actually believed them, and weren't
just priming the money pump with bullshit.


Michael

Erann Gat

unread,
May 2, 2003, 1:27:01 PM5/2/03
to
In article <none-E89AF5.1...@news.vanderbilt.edu>, sv0f
<no...@vanderbilt.edu> wrote:

> Simon's infamous 1950s utterance that a computer would
> be world chess champion within ten years. (He missed by
> 30 years, of course.)

That still makes him closer to being right than the naysayers who claimed
that a computer playing grandmaster-level chess was fundamentally
impossible.

E.

Gorbag

unread,
May 2, 2003, 2:52:07 PM5/2/03
to
On 5/2/03 6:54 AM, in article b8ttao$dk1jk$1...@ID-125932.news.dfncis.de,
"Christopher Browne" <cbbr...@acm.org> wrote:

> In the long run (of course, Keynes observes "we're all dead"...), the
> hope would be for Prolog system environments to get increasingly
> efficient at picking different computational techniques, in much the
> way that SQL DBMSes have gradually gotten better at tuning queries.

You may be able to make this argument for requirements specified in formal
logic, but not in PROLOG. PROLOG uses SLD resolution, which makes it more a
programming language with a veneer of logic. There are no choices as to
resolution techniques. (A programmer can depend on the clauses being
resolved in a certain order, permitting use of CUT for instance.)

Of course, there were a lot of systems competing with PROLOG that had
different (or even formally unspecified) resolution techniques so a
programmer could NOT count on clause ordering. I am just arguing that these
systems were not PROLOG.

sv0f

unread,
May 2, 2003, 3:23:12 PM5/2/03
to
In article <gat-020503...@k-137-79-50-101.jpl.nasa.gov>,
g...@jpl.nasa.gov (Erann Gat) wrote:

Hey, I'm on your side in this one!

I'm just trying to understand the arguments of those who
appear to (but *may* not) be claiming that AI was bunk
from the start; that this should have been obvious to
anyone with an undergraduate degree in math; that no
progress has been made; that no progress is possible;
that N incorrect claims delegitimize M correct claims
(when it is not the case that N>>M); etc.

On a related note, I'm 20 pages from the end of Feng-hsiung
Hsu's "Behind Deep Blue: Building the Computer that Defeated
the World Chess Champion". It's been an engrossing read,
one I recommend.

It is striking that, in Hsu's words, Deep Thought and Deep
Blue essentially implement Shannon's and Newell/Simon's
initial insights into machine chess, but in hardware. (He
makes this comment in debunking the efforts of others to
inject more AI than is necessary into computer chess
players.) So it appears the early AI pioneers were correct
in predicting that machines would eclipse humans in chess
and correct in the method by which computers would do so.
What/all they got wrong was *when*.

Eray Ozkural exa

unread,
May 2, 2003, 3:51:25 PM5/2/03
to
"Tom Osborn" <MAPStom@DELETE CAPS.nuix.com.au> wrote in message news:<3eb20...@news.iprimus.com.au>...

> Lastly, the symbolic vs NN/statistical/maths/decision theory WAR was
> very dumb indeed.


Not having the time to respond to your other comments, I must say I
wholeheartedly agree with this statement.

Regards,

__
Eray Ozkural (exa) <erayo at cs.bilkent.edu.tr>

Eray Ozkural exa

unread,
May 2, 2003, 3:55:40 PM5/2/03
to
Greetings,

David Longley <Da...@longley.demon.co.uk> wrote in message news:<AFDizQA4...@longley.demon.co.uk>...


> In article <3eb20...@news.iprimus.com.au>, Tom Osborn <MAPStom@DELETE_

> >of amateur tinkering for a fair while. Statistics and ML had more to offer.
> >AI supported by decision theory and cognitive modelling also.
>

> There's another possibility - namely that those pursuing "AI" had
> misconceived the nature of human "intelligence" and/or skills. More
> fundamentally, they had misconceived the nature of the psychological.

David's eloquence brings hope that civilized discussion might exist.

I would like to point out that Tom essentially addresses your well
articulated concern with the phrase "AI supported by decision theory
and cognitive modelling".

Indeed, it is unimaginable to me how an AI that has no mathematical
basis of any sort, or no correct understanding of the requirements of an
advanced psychology such as that found in human beings, could be successful.

The nature of the psychological is so oft-overlooked that I am most of
the time staring awestruck at the arbitrary omission of prominent
psychological factors in AI research.

It gives me a feeling of loneliness and hopelessness that I can't
begin to describe. It's I think the same feeling when you ask a
cosmologist "Is the universe infinite?" and find out that he has never
bothered to think about it!!! What blasphemy!!

Assume you have a learning algorithm that can recognize faces. Also
assume that you have a learning algorithm that can recognize speech.
Not my classical example, but what good are these algorithms when they
cannot be bound by an autonomous system that will use such abilities
as a basis in subsequent cognitive tasks?

I am not ably expressing my deep concerns. I know how to recognize a
phone. I know how to talk. But I also know how to pick up the handset
and dial a number. I can combine all of these abilities in a conscious
way. The truth is, I have learnt all of these, none of these could be
inscribed in genes. And only a company of clownery and mockery of
naive evolutionary and behaviorist theorists would be sufficiently
uninitiated to dismiss these concerns.

I appreciate David's comments very much.

Regards,

__
Eray Ozkural <erayo at cs.bilkent.edu.tr>
CS Dept. , Bilkent University, Ankara

Christopher Browne

unread,
May 2, 2003, 5:34:53 PM5/2/03
to

.. But the nature of how they expected this to occur has changed. It
was anticipated that the programs would be "very clever."

They aren't; they use the notion of searching absolutely enormous
state spaces. And the approaches wind up being quite specialized, not
much useful to solving other problems.

The expectation was that a system that could play a good game of chess
would be good for other things; such systems turn out to have
vanishingly tiny application.
--
(concatenate 'string "cbbrowne" "@cbbrowne.com")
http://www.ntlug.org/~cbbrowne/
Avoid unnecessary branches.

Michael Sullivan

unread,
May 2, 2003, 5:26:14 PM5/2/03
to
sv0f <no...@vanderbilt.edu> wrote:

> FWIW, the historical addendum to Newell and Simon's (1972)
> "Human Problem Solving" succinctly captures the zeitgeist
> in which AI came into being. Reading it, I can imagine
> being a young scholar, drunk on the sudden convergence of
> a half dozen fields, and making bold predictions like
> Simon's infamous 1950s utterance that a computer would
> be world chess champion within ten years. (He missed by
> 30 years, of course.)

I don't think we know how many years he missed by. A computer is not
world chess champion yet. There have been a couple well publicized
matches of a computer against the strongest human player of our time,
but the circumstances have favored the computer (Kasparov was not
allowed to study many of Deep Blue's previous games, but Deep Blue was
tuned by a team of chess experts with access to all of Kasparov's
games).

It's also well known in computer go circles that programs rate quite a
bit stronger against people who have never played them before than
against people who have, because they do not learn in the same way
humans do. This is true of chess programs as well. There are a number
of grandmasters who have studied strong computer chess programs and
found ways to make them look a lot weaker, though none of them has been
granted a match with Deep Blue AFAIK.

So Deep Blue appears to be the strongest player in the world right now,
but we've only seen matches against *one* player, and not very many of
those. The researchers who are responsible for it, must know that if
you made all its games public and had it play lots of different
grandmaster opponents, that it would probably look much less strong. So
far they simply haven't agreed to let this happen. Personally, I think
they won't until they are quite sure it will stand up to the full
scrutiny, and the reason it's not happening now is that they are not
sure.

If you entered Deep Blue into a world championship competition with the
same rules that any human would face (for instance they could be touched
by their programmers only under the same conditions a human would be
allowed to consult other people, matches would be made public record to
other contestants, etc.) -- it's by no means certain that it would win.
Until this happens and it does win, I refuse to recognize DB as "world
champion" no matter how well it does against Kasparov.

Grandmaster level play has clearly been achieved, but "world champion"
has not.


Michael

Erann Gat

unread,
May 2, 2003, 5:41:26 PM5/2/03
to
In article <none-3F1699.1...@news.vanderbilt.edu>, sv0f
<no...@vanderbilt.edu> wrote:

> In article <gat-020503...@k-137-79-50-101.jpl.nasa.gov>,
> g...@jpl.nasa.gov (Erann Gat) wrote:
>
> >In article <none-E89AF5.1...@news.vanderbilt.edu>, sv0f
> ><no...@vanderbilt.edu> wrote:
> >
> >> Simon's infamous 1950s utterance that a computer would
> >> be world chess champion within ten years. (He missed by
> >> 30 years, of course.)
> >
> >That still makes him closer to being right than the naysayers who claimed
> >that a computer playing grandmaster-level chess was fundamentally
> >impossible.
>
> Hey, I'm on your side in this one!

Don't be so sure.

> I'm just trying to understand the arguments of those who
> appear to (but *may* not) be claiming that AI was bunk
> from the start; that this should have been obvious to
> anyone with an undergraduate degree in math; that no
> progress has been made; that no progress is possible;
> that N incorrect claims delegitimize M correct claims
> (when it is not the case that N>>M); etc.

The problem with the claim "AI was bunk from the start" is that the term
"AI" is not well defined. It can be taken to be a general term covering
all efforts to understand how to make an artifact that behaves
intelligently (whatever that means), or it can be taken in a narrower
sense to mean the particular efforts made by a particular group of people
at a particular time in history, typically starting with McCarthy and
Minsky and the Dartmouth Conference, peaking in the late 80's, and
on-going today under the auspices of the AAAI and other professional
societies.

On that second definition I think that there was (and still is) an awful
lot of bunk, and that certainly a lot of what's being done today under the
rubric of AI (neural nets, fuzzy logic, swarm intelligence, etc. etc.) is
easily recognizable as bunk. How much of it was recognizable back when is
not so clear. I think that the original AI folks had some basically sound
ideas, but they were so constrained by their available hardware that they
ended up going down some very unfruitful paths -- at least with respect to
the goals of AI. A lot of useful discoveries having nothing to do with AI
per se were made along the way, so it wasn't a complete waste of time.
But IMHO we're not much closer to understanding intelligence now than we
were in 1950. And what progress we have made is largely in spite of, not
because of, those who style themselves "AI researchers". (And I call
myself an AI researcher.)

Personally, I think that the two most significant results in AI to date
are Eliza and Google. Eliza is usually cited as an example of how careful
you have to be when assessing whether or not something is intelligent
because Eliza seemed to be intelligent when it "obviously" wasn't. IMO it
is far from clear that Eliza had zero intelligence. Google is the closest
thing so far to a machine that really "understands" something (in Google's
case it "understands" the concept of "importance" or "relevance" or
"authority" or something like that) and it's really nothing more than
Eliza with a huge collaboratively-built database. It is not at all clear
to me that "real" intelligence is not simply a matter of taking something
like that, adding a relatively small number of clever hacks, and scaling
up. Certainly the ability to pass the Turing Test on Usenet seems to be
within reach following this sort of strategy. (Remember Ilias?)

> So it appears the early AI pioneers were correct
> in predicting that machines would eclipse humans in chess
> and correct in the method by which computers would do so.
> What/all they got wrong was *when*.

No, I think they got something else wrong too: they all thought that
getting a machine to win at Chess would be worthwhile because it would
give you some insight into how to get a machine to do other clever things,
like carry on a conversation. They were wrong about that.

E.

Michael Sullivan

unread,
May 2, 2003, 5:39:09 PM5/2/03
to
sv0f <no...@vanderbilt.edu> wrote:

> In article <gat-020503...@k-137-79-50-101.jpl.nasa.gov>,
> g...@jpl.nasa.gov (Erann Gat) wrote:
>
> >In article <none-E89AF5.1...@news.vanderbilt.edu>, sv0f
> ><no...@vanderbilt.edu> wrote:
> >
> >> Simon's infamous 1950s utterance that a computer would
> >> be world chess champion within ten years. (He missed by
> >> 30 years, of course.)
> >
> >That still makes him closer to being right than the naysayers who claimed
> >that a computer playing grandmaster-level chess was fundamentally
> >impossible.
>
> Hey, I'm on your side in this one!

> I'm just trying to understand the arguments of those who
> appear to (but *may* not) be claiming that AI was bunk
> from the start; that this should have been obvious to
> anyone with an undergraduate degree in math; that no
> progress has been made; that no progress is possible;
> that N incorrect claims delegitimize M correct claims
> (when it is not the case that N>>M); etc.

Whoa. I've got to be one of the people you're referring to here, and I
don't think that at all.

I think a lot of very interesting progress has been made, and I'd like
to see a lot more AI research than there is now. But claims that strong
AI-complete problems would be solved in the near term were (and still
are) bogus.

Grandmaster level Chess doesn't appear to be a strong AI-complete
problem. I don't think pro 9d level Go is either, and that's a *long*
way from being solved.

I'm certainly not trying to say that Ai research is pointless and that
the AI winter was entirely justified. Only that any claims of strong-AI
being solved in my lifetime were, and still are (short of a major
unforeseen watershed), somewhere between ridiculously optimistic and
completely insane, and that it didn't take an AI researcher to see that
20 years ago.


Michael

Paul Wallich

unread,
May 2, 2003, 6:22:27 PM5/2/03
to
In article <b8uo9t$drber$3...@ID-125932.news.dfncis.de>,
Christopher Browne <cbbr...@acm.org> wrote:

> In the last exciting episode, g...@jpl.nasa.gov (Erann Gat) wrote:
> > In article <none-E89AF5.1...@news.vanderbilt.edu>, sv0f
> > <no...@vanderbilt.edu> wrote:
>
> >> Simon's infamous 1950s utterance that a computer would be world
> >> chess champion within ten years. (He missed by 30 years, of
> >> course.)
>
> > That still makes him closer to being right than the naysayers who
> > claimed that a computer playing grandmaster-level chess was
> > fundamentally impossible.
>
> .. But the nature of how they expected this to occur has changed. It
> was anticipated that the programs would be "very clever."
>
> They aren't; they use the notion of searching absolutely enormous
> state spaces. And the approaches wind up being quite specialized, not
> much useful to solving other problems.

This isn't exactly true. Although Deep Blue et al relied on brute-force
searches and ridiculous amounts of processing power, Deep Junior and its
brethren run on PC-scale machines and do in fact rely on sophisticated
evaluation.
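
To make the shared recipe concrete: stripped of the hardware and the
tuning, both kinds of program are built around something like the
following negamax alpha-beta skeleton, where MOVES-FN, APPLY-FN and
EVAL-FN are placeholders for the move generator, move application and
the hand-crafted static evaluator (not anyone's actual code):

(defun alpha-beta (position depth alpha beta moves-fn apply-fn eval-fn)
  (let ((moves (funcall moves-fn position)))
    (if (or (zerop depth) (null moves))
        (funcall eval-fn position)            ; score for the side to move
        (dolist (move moves alpha)
          (let ((score (- (alpha-beta (funcall apply-fn position move)
                                      (1- depth) (- beta) (- alpha)
                                      moves-fn apply-fn eval-fn))))
            (when (> score alpha) (setf alpha score))
            (when (>= alpha beta) (return alpha)))))))  ; prune

Deep Blue's edge was mostly in how deep and how fast it could run that
loop; Deep Junior's is mostly in what its EVAL-FN knows.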

Obviously the precise code doesn't apply immediately to other problems,
but that would seem to be asking a bit much -- you wouldn't want Boris
Spassky reading your xrays either.

I think that one of the things that went wrong with AI was an implicit
underestimation of the amount of time and supervision that humans
require to achieve human-level performance in a given field, much less
in multiple fields. Perhaps only a tiny handful of systems have been
given that kind of attention.

paul

Gorbag

unread,
May 2, 2003, 7:25:10 PM5/2/03
to
On 5/2/03 12:55 PM, in article
fa69ae35.03050...@posting.google.com, "Eray Ozkural exa"
<er...@bilkent.edu.tr> wrote:

> Greetings,
>
> David Longley <Da...@longley.demon.co.uk> wrote in message
> news:<AFDizQA4...@longley.demon.co.uk>...
>> In article <3eb20...@news.iprimus.com.au>, Tom Osborn <MAPStom@DELETE_
>>> of amateur tinkering for a fair while. Statistics and ML had more to offer.
>>> AI supported by decision theory and cognitive modelling also.
>>
>> There's another possibility - namely that those pursuing "AI" had
>> misconceived the nature of human "intelligence" and/or skills. More
>> fundamentally, they had misconceived the nature of the psychological.
>
> David's eloquence brings hope that civilized discussion might exist.
>
> I would like to point out that Tom essentially addresses your well
> articulated concern with the phrase "AI supported by decision theory
> and cognitive modelling".
>
> Indeed, it is unimaginable to me how an AI that has no mathematical
> basis of any sort, or no correct understanding of the requirements of an
> advanced psychology such as that found in human beings, could be successful.
>
> The nature of the psychological is so oft-overlooked that I am most of
> the time staring awestruck at the arbitrary omission of prominent
> psychological factors in AI research.

I'm not sure where this is coming from; AI has had a tremendous influence on
psychology (cognitively plausible models, e.g. SOAR and ACT-R), and vice
versa (Piaget, behaviorists, etc.). The literature is replete with
references in either direction.

Arguing that there is no formal mathematical model of cognition is of course
true; that is in some sense the goal, not the starting point.

You also need to define which part of AI you are involved in. Some AI folks
do AI to have a better idea of what goes on inside of humans, but many
others are interested either in artifacts that can do things we'd call
intelligent if an animal (or human) did them, and don't care HOW they do
them, or they are interested in some philosophical approach that is not
always possible to be reduced through empirical experiment. (Arguing, for
instance, that human intelligence is just a happenstance instance, and not
the only possible implementation of intelligence; further many things humans
do are not really intelligent either, so why would you want to emulate
them?) Stumbling onto intelligence through simulacrum (learning) or
happenstance (evolution) doesn't really tell you anything about
*intelligence*, only something that seems to be intelligent when you observe
it, but you cannot be sure. (And this is the foundation of the "war" between
the NN and symbolic crowd, a thread that continues to exist today because it
has not been resolved - you will never be able to tell WHY a NN does what it
does, or make predictions about what it will (not) do next. This is
particularly a problem with non-supervised systems like Reinforcement
Learning). Arguing you have the same problems with humans is beside the
point, most AI folks want to improve on humans, not recreate them.

These folks are interested in intelligence in the abstract, and psychology
doesn't enter into it. So you have to be careful which school of thought you
are doing your reading in before you start to criticize. That humans can
talk and answer a phone is interesting, but intelligent systems may not need
to have phones to talk to each-other. The design of the phone itself is far
less than optimal (consider the phone number for instance). So that there
are major branches of AI research doesn't care about the skills to answer
phones is not really surprising.

Kenny Tilton

unread,
May 2, 2003, 10:01:00 PM5/2/03
to

Christopher Browne wrote:
> In the last exciting episode, g...@jpl.nasa.gov (Erann Gat) wrote:
>
>>In article <none-E89AF5.1...@news.vanderbilt.edu>, sv0f
>><no...@vanderbilt.edu> wrote:
>
>
>>>Simon's infamous 1950s utterance that a computer would be world
>>>chess champion within ten years. (He missed by 30 years, of
>>>course.)
>>
>
>>That still makes him closer to being right than the naysayers who
>>claimed that a computer playing grandmaster-level chess was
>>fundamentally impossible.
>
>
> .. But the nature of how they expected this to occur has changed. It
> was anticipated that the programs would be "very clever."
>
> They aren't; they use the notion of searching absolutely enormous
> state spaces.

Yep, and even that is not enough, they have to bring in grandmasters to
help with the hard-coded position evaluator. Deep Blue, by playing even
with Kasparov, simply provides a measure of Kasparov's superiority:

positions-examined-by-deep
--------------------------
positions-examined-by-gary

uh-oh. :)


--

kenny tilton
clinisys, inc
http://www.tilton-technology.com/
---------------------------------------------------------------
"Everything is a cell." -- Alan Kay

David Longley

unread,
May 3, 2003, 6:44:04 AM5/3/03
to
In article <BAD84C66.3EE6%gorbag...@NOSPAMmac.com>, Gorbag
<gorbag...@NOSPAMmac.com> writes

There are *profoundly* difficult issues here - and to appreciate just
how profound they are I suggest those interested look at how McCarthy
has tried to come to grips with the fundamentals in some of his very
recent work. Personally I don't find what he has done helpful, but I do
respect that he (like some others) has taken the problem seriously.

In my view, this has a lot to do with the nature and composition of
testable statements and how these are a function of reliable logical
quantification (something I explicated here at length some years ago in
the context of science and the intensional/extensional dynamic).

Psychological terms are not just indeterminate from this perspective,
they are elements of a modus vivendi which is progressively being
replaced by an expanding web of scientific 'belief'. There can be little
doubt that science as a means of prediction and control does far better
than the corpus of folk psychological and folk physical lore which most
of us resort to in the absence of formal education. What can be
productively doubted is whether "AI" is really anything other than
engineering - anything more than an attempt to make some areas of
engineering appear more "sexy" or meritorious through a misguided
association with the "psychological".

This is hype.
--
David Longley

Eray Ozkural exa

unread,
May 3, 2003, 10:09:05 AM5/3/03
to
Gorbag <gorbag...@NOSPAMmac.com> wrote in message news:<BAD84C66.3EE6%gorbag...@NOSPAMmac.com>...

> On 5/2/03 12:55 PM, in article
> fa69ae35.03050...@posting.google.com, "Eray Ozkural exa"
> <er...@bilkent.edu.tr> wrote:
>
> I'm not sure where this is coming from; AI has had a tremendous influence on
> psychology (cognitively plausible models, e.g. SOAR and ACT-R), and vice
> versa (Piaget, behaviorists, etc.). The literature is replete with
> references in either direction.
>
> Arguing that there is no formal mathematical model of cognition is of course
> true; that is in some sense the goal, not the starting point.
>

Evidently David argues for a much stronger position than I would
admit. I think the mind is essentially computational, ie. as can be
seen in my recent "protocol stack theory of mind" argument. What I do
argue for is that there are psychological factors that we must attend
to.

> You also need to define which part of AI you are involved in. Some AI folks
> do AI to have a better idea of what goes on inside of humans, but many
> others are interested either in artifacts that can do things we'd call
> intelligent if an animal (or human) did them, and don't care HOW they do
> them, or they are interested in some philosophical approach that is not
> always possible to be reduced through empirical experiment.

I am interested in all of these. :) I think I've done everything from
studying linguistics, philosophy of mind/language to implementing
classification/clustering algorithms, to data mining... My feeling is
that without philosophy one is wandering in the dark and without
computational theories and empirical study one is lost in the
profileration of the impossible.


> (Arguing, for
> instance, that human intelligence is just a happenstance instance, and not
> the only possible implementation of intelligence; further many things humans
> do are not really intelligent either, so why would you want to emulate
> them?)

To such a thing I would say "yes", since the spectrum of life on our
planet shows how remarkably diverse intelligence can be.

> Stumbling onto intelligence through simulacrum (learning) or
> happenstance (evolution) doesn't really tell you anything about
> *intelligence*, only something that seems to be intelligent when you observe
> it, but you cannot be sure.

That I would kindly object to. The evolution argument I agree with, as
stated in my other posts. However, simulacra and learning are not
synonymous. Let me talk layer-wise. Interestingly, my layer argument
helps me answer almost any philosophical question! When one simulates
the physical layer, all he gets is a simulation of physics like CFD.
That is not in itself sufficient to arrive at a theory of mind.
However, if one attains a deliberate simulation of a key layer
(computation) by writing algorithms that in turn simulate the functional
layer, he will have effectively obtained a transformation that produces
an entire mind that is _understood_ in both the mental and the
physical....

Since simulating learning roughly corresponds to "writing algorithms
to replace the computational layer" I think you're making a mistake
here. If you had said "simulation of the physical operation of the
brain only" I would agree 100%.

> (And this is the foundation of the "war" between
> the NN and symbolic crowd, a thread that continues to exist today because it
> has not been resolved - you will never be able to tell WHY a NN does what it
> does, or make predictions about what it will (not) do next. This is
> particularly a problem with non-supervised systems like Reinforcement
> Learning). Arguing you have the same problems with humans is beside the
> point, most AI folks want to improve on humans, not recreate them.
>

I think that foundation is obsolete and at best an unrewarding
discussion. Basically, I think the NN crowd have proved themselves to be
unscientific by asserting allegiance to a certain biological
structure, and the symbolic crowd have done likewise by asserting
allegiance to certain mathematical formalisms. I don't think that is
unlike Chomsky's volumes full of bigotry.

> These folks are interested in intelligence in the abstract, and psychology
> doesn't enter into it. So you have to be careful which school of thought you
> are doing your reading in before you start to criticize. That humans can
> talk and answer a phone is interesting, but intelligent systems may not need
> to have phones to talk to each-other. The design of the phone itself is far
> less than optimal (consider the phone number for instance). So that there
> are major branches of AI research doesn't care about the skills to answer
> phones is not really surprising.

I don't think you have precisely understood what I meant. I formulated
a hard learning problem that nobody has ever formulated. I am saying
that the learning problems in real life do not fit the basic problem
descriptions in machine learning. I don't know, yet, how to resolve
this apparent discrepancy but it is there.

Another example. I stare at a scene and I see objects. Things I may
not have seen before. I'm looking at a table and seeing things:
unsupervised learning. I can tell one object apart from other. The
problem is: what do I do this with this information? I feel that I
have some other abilities, learnt traits, that help me decide what to
do with this information. For instance, there is a paper, there is
some drawing on it, partially obscured by another paper. I notice a
drawing on the paper. Next, I take the surface of the paper as its own
domain. Since the paper is partially obscured, I lift the obscuring
paper. Geometric drawings and symbols are revealed. I recognize three
distinct drawings on the paper. The drawings are geometric figures. On
the right is crude approximations of circles and line segments. On the
left a similar figure, this time some points have been marked with
symbols such as a_i. Just on top of it, I see a triangle and another
inside it, three rays meeting at a point, separating the vertices of
the triangles. Apparently, it's a Voronoi diagram left over from the
previous day's homework. The geometric drawings were part of a proof I was
trying to form.

In the most basic form my argument is that without an architecture
that puts them into proper use a learning algorithm is "ungrounded".
But you need to have the correct architecture. Simply stacking data
and algorithms won't help!!! That's where our CS mindset gets us into
a mighty trap. Look at the above example. There are several capabilities
of utmost importance that no system yet provides:
1) I can choose among data
2) I can choose among algorithms
3) I can choose output
4) I can combine algorithms to arrive at compositional solutions
5) I can learn methods
6) I know what a learnt method is and when to apply it
7) I know what I recognize
...

Among countless other things that we ignore completely in the
framework of learning. Have I mentioned how surprised I was when I
understood how lacking our theories were? I would have to write a book
to tell them all.

Cheers,

__
Eray Ozkural

Acme Debugging

unread,
May 3, 2003, 10:39:03 AM5/3/03
to
David Longley <Da...@longley.demon.co.uk> wrote in message news:<6fgMFQA0...@longley.demon.co.uk>...

> In article <BAD84C66.3EE6%gorbag...@NOSPAMmac.com>, Gorbag
> <gorbag...@NOSPAMmac.com> writes
> >On 5/2/03 12:55 PM, in article
> >fa69ae35.03050...@posting.google.com, "Eray Ozkural exa"
> ><er...@bilkent.edu.tr> wrote:

> >> The nature of the psychological is so oft-overlooked that I am most of
> >> the time staring awestruck at the arbitrary omission of prominent
> >> psychological factors in AI research.
> >
> >You also need to define which part of AI you are involved in. Some AI folks
> >do AI to have a better idea of what goes on inside of humans, but many
> >others are interested either in artifacts that can do things we'd call
> >intelligent if an animal (or human) did them, and don't care HOW they do
> >them, or they are interested in some philosophical approach that is not
> >always possible to be reduced through empirical experiment. (Arguing, for

> > <snip>

> >These folks are interested in intelligence in the abstract, and psychology
> >doesn't enter into it. So you have to be careful which school of thought you

> >are doing your reading in before you start to criticize. <snip>



> There are *profoundly* difficult issues here

What is profound? That some people place greater value on AI
simulating humans and thus seeking descriptions of the mind, while
others place greater value on AI as intelligence in the abstract
requiring little or no psychology? I see that as profoundly simple, a
question of values, of personal choice. Not to mention application
(One wishes for the best chess program possible, not one that plays
like the average human). One can try to project one's values on
others; in fact people do this incessantly. However, a newsgroup is
about the last place on Earth one should expect to succeed. Read some
political groups. I rest my case.



> In my view, this has a lot to do with the nature and composition of
> testable statements and how these are a function of reliable logical
> quantification (something I explicated here at length some years ago in
> the context of science and the intensional/extensional dynamic).

Your statement is indeed profound, I subscribe to it 100%, I am
"awestruck" that many do not. But I don't see how it applies here. One
would be arguing, "Your values should be the same as mine." While
possibly true in an objective sense, chances are almost non-existent
that it could be objective in any argument between two people in most
places. Exceptions are like parent-child, but that certainly doesn't
apply in newsgroups or any other intellectual setting I can think of,
save faith-based groups.



> Psychological terms are not just indeterminate from this perspective,
> they are elements of a modus vivendi which is progressively being
> replaced by an expanding web of scientific 'belief'.

Though faith in empirical "tests" as above is certainly a "belief," I
think this belief is assumed in this newsgroup. At least, I do not
find arguments intentionally based on faith. If so, I think one should
make that declaration at the top of any message in all-caps. Of course
we all find arguments unintentionally based on faith everywhere.
Unobjectivity, the subconscious logic/fact snatcher, the "blind-spot"
of the mind, the paradox: "People agree they are not perfect reasoners
in general, then act as if they were in particular cases," experience
obtains undue value, "in real life all the marbles are not the same"
(statistics), experience obtains undue value, etc., etc. forever and
ever. This is the main reason I seek AI reasoning totally independent
of the human mind, and I don't care that it might not be technically
as proficient or if "semantics" might need to be "solved in
documentation." Who cares how proficient a brain reasons when the
results are so unreliable due to unobjectivity? How easy it would be
for an objective kludge to do better in many important applications!
I see no end to uses broadly beneficial to humans, perhaps as
beneficial as an AI robot that cries with you on bad hair days, etc.

> There can be little
> doubt that science as a means of prediction and control does far better
> than the corpus of folk psychological and folk physical lore which most
> of us resort to in the absence of formal education.

Psychology (popularly) has such a bad name, at least where I come from
(U.S.). It has been, and still is, misused politically and in many
other ways, has at times been shown to have little or no predictive
value above simple wisdom, and this is very legitimately part of the
resistance to new psychological ideas IMHO. A psychologist today must
inevitably overcome these mistakes and continuing misuses, though
possibly not responsible for them in any way, and hope not to add to
them. Of course regarding particular cases in this newsgroup, one can
simply choose to engage and hope to prevail. Many are usually ready
and waiting at a given time. Of course each has their own "rules of
engagement" which are usually reasonable, but granting priveleged
positions is hardly ever among them. That's the #1 source of newsgroup
frustration in my experience (and probably most places).

Larry

Gorbag

unread,
May 3, 2003, 2:06:24 PM5/3/03
to
On 5/3/03 3:44 AM, in article 6fgMFQA0...@longley.demon.co.uk, "David
Longley" <Da...@longley.demon.co.uk> wrote:

> What can be
> productively doubted is whether "AI" is really anything other than
> engineering - anything more than an attempt to make some areas of
>engineering appear more "sexy" or meritorious through a misguided
> association with the "psychological".

I don't understand the negative connotation of your comment; if science is
the study of the observed universe and coming to grips with some descriptive
models that can predict what we will see, and engineering is starting with
some goal behavior and creating a system which produces that behavior, then
indeed, most of AI is about engineering not science. One can say the same
about most of "computer science" or indeed almost anything that can get you
an actual job. There is no need to be pejorative about it. But I don't think
most of AI is unnecessarily more associated with psychology than many other
things. Any technology that has some human interfaces (including, e.g., the
design of programming languages or the design of roadways) by rights ought
to have some concern for the actual users of the technology and design
accordingly. I don't think AI is, or holds itself to be, special in this
regard. (Perhaps special only in that some of the work tries to make the
model explicit and have the program reason about the model itself, rather
than leaving it implicit in rules of thumb applied by a human designer.)
But it is such reasoning about the models that does make AI somewhat a
different branch of engineering than, say, programming language design.

And it is this association with the "psychological" through models that
gives certain techniques their raison d'etre. If a program solves problems
in a way similar to the way a human thinks about how they solve problems; if a
program can then address language usage (at the pragmatics level) in a way
similar to the way a human uses language (e.g., in terms of argumentation
structure, not just word senses), then we will presumably have the tools to
build systems and services that are easier for humans to use. It is not
necessary that the mechanisms be the same as the wetware we employ; what is
key is only that a lay human's introspection of their own problem-solving
methodology is modeled. (This is still an open problem AFAIK).

I don't think such interfaces will fall out of "standard engineering
methodologies" e.g., capability maturity models pushed by SEI at CMU, or
pair programming, etc.

sv0f

unread,
May 3, 2003, 3:08:20 PM5/3/03
to
In article <b8uo9t$drber$3...@ID-125932.news.dfncis.de>,
Christopher Browne <cbbr...@acm.org> wrote:

>> That still makes him closer to being right than the naysayers who
>> claimed that a computer playing grandmaster-level chess was
>> fundamentally impossible.
>
>.. But the nature of how they expected this to occur has changed. It
>was anticipated that the programs would be "very clever."
>
>They aren't; they use the notion of searching absolutely enormous
>state spaces.

I disagree. I think from the beginning AI researchers realized
it would take plenty of knowledge (cleverness?) and plenty of search.
They simply underestimated how much of each.
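
To make "plenty of search" concrete, here is a minimal negamax sketch
in Lisp: just the bare search skeleton, with the game-specific
functions MOVES, MAKE-MOVE, and STATIC-EVAL assumed rather than shown
(they are hypothetical placeholders, not any particular program's
code). At a branching factor of roughly 35, even this simple scheme
visits on the order of 35^d positions at depth d, which is why the
state spaces get enormous so quickly; real programs add alpha-beta
pruning and a great deal of chess-specific knowledge on top.

  ;; Fixed-depth negamax. MOVES returns the legal moves of a
  ;; position, MAKE-MOVE applies one, and STATIC-EVAL scores a
  ;; position from the side to move's point of view; all three are
  ;; assumed helpers supplied by the game representation.
  (defun negamax (position depth)
    (if (or (zerop depth) (null (moves position)))
        (static-eval position)
        (loop for move in (moves position)
              maximize (- (negamax (make-move position move)
                                   (1- depth))))))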

>And the approaches wind up being quite specialized, not
>much useful to solving other problems.
>
>The expectation was that a system that could play a good game of chess
>would be good for other things; such systems turn out to have
>vanishingly tiny application.

I agree. This was especially true of expert systems. The
effort to build one was high and the result was so domain-specific
that less than was hoped (which is to say almost nothing) transferred
to other domains.

sv0f

unread,
May 3, 2003, 3:16:57 PM5/3/03
to
In article <pw-6E8DC4.18...@reader1.panix.com>,
Paul Wallich <p...@panix.com> wrote:

>I think that one of the things that went wrong with AI was an implicit
>underestimation of the amount of time and supervision that humans
>require to achieve human-level performance in a given field, much less
>in multiple fields. Perhaps only a tiny handful of systems have been
>given that kind of attention.

It's funny, this is where the diffuse nature of AI becomes
problematic. In some circles it was recognized by the early
1970s that knowledge is power, and that endowing a system
with sufficient knowledge was time and resource intensive.

Herb Simon and his collaborators (such as William Chase and
Anders Ericsson) conducted numerous studies on the number
of "chunks" of knowledge one must acquire to achieve world
class performance in a domain. Their results showed, in
domains ranging from chess to pole-vaulting, that essentially
ten years of practicing eight hours per day were required.
Simon cites Bobby Fischer as an exception to this estimate --
he achieved a Grandmaster ranking a bit more than nine years
after taking up chess. (Simon also deals with obvious
exceptions, child prodigies such as Mozart, for the
interested.)

My sense is that AI researchers who were true cognitive
scientists, and therefore students of linguistics,
psychology, anthropology, neuroscience, and other
related disciplines, understood the importance of
knowledge, learning, and development. It was the
exclusively mathematical/engineering types who were
wrong. Unfortunately, it's their work that fills the
pages of the field's flagship journal, "Artificial
Intelligence".

sv0f

unread,
May 3, 2003, 3:28:18 PM5/3/03
to
In article <1fucdsk.177ymz41s5ogg2N%mic...@bcect.com>,
mic...@bcect.com (Michael Sullivan) wrote:

>sv0f <no...@vanderbilt.edu> wrote:
>> I'm just trying to understand the arguments of those who
>> appear to (but *may* not) be claiming that AI was bunk
>> from the start; that this should have been obvious to
>> anyone with an undergraduate degree in math; that no
>> progress has been made; that no progress is possible;
>> that N incorrect claims delegitimize M correct claims
>> (when it is not the case that N>>M); etc.
>
>Whoa. I've got to be one of the people you're referring to here, and I
>don't think that at all.

I don't think anyone believes all these claims -- I was
just running them all together for rhetorical purposes.

>I think a lot of very interesting progress has been made, and I'd like
>to see a lot more AI research than there is now. But claims that strong
>AI-complete problems would be solved in the near term were (and still
>are) bogus.

What do you mean by "bogus"? Pure snake oil -- knowingly
offered exaggerations -- or scientific predictions that
turned out to be false?

>Grandmaster level Chess doesn't appear to be a strong AI-complete
>problem. I don't think pro 9d level Go is either, and that's a *long*
>way from being solved.
>
>I'm certainly not trying to say that AI research is pointless and that
>the AI winter was entirely justified. Only that any claims of strong-AI
>being solved in my lifetime were, and still are (short of a major
>unforeseen watershed), somewhere between ridiculously optimistic and
>completely insane, and that it didn't take an AI researcher to see that
>20 years ago.

I don't understand some of your terms (which I recall hearing
before in philosophical discussions, so I know you're not making
them up). Could you give me a sense of what "AI-complete",
"strong AI-complete problem", and "strong AI" mean?

Also, I think I have a different time frame than you. The
claim that machine would soon best man in chess was made
45 years ago, at the dawn of AI, when its promise and pitfalls
were unknown quantities. It is this claim and others of its
era that I was addressing.

I agree that any overly-optimistic proclamations made 20 years
ago, when the field had enough experience under its belt to
know the difficulty of problems such as machine vision and
discourse comprehension, were "completely insane," to use
your phrase, or scientifically irresponsible, to pick up the
critique(s) offered by other poster(s).

sv0f

unread,
May 3, 2003, 3:43:12 PM5/3/03
to

>On that second definition I think that there was (and still is) an awful
>lot of bunk, and that certainly a lot of what's being done today under the
>rubric of AI (neural nets, fuzzy logic, swarm intelligence, etc. etc.) is
>easily recognizable as bunk.

I agree with you here, but I wouldn't call these ideas (and others
such as elaborate logics, statistical reasoning, and decision
theory) "bunk". Rather, I think that they're mere engineering,
mere technology, and their pursuit has turned AI into a form
of applied mathematics. Probably many AI researchers are happy
about this, as the definition-theorem-example rhythm of current
AI papers fits well in the academic genre.

However, this shift has given up the "human" or "cognitive"
dimension of "Intelligence". It's also no longer scientific
(IMO).

>But IMHO we're not much closer to understanding intelligence now than we
>were in 1950.

AI produced insights for 20 years, but then stopped.

>And what progress we have made is largely in spite of, not
>because of, those who style themselves "AI researchers". (And I call
>myself an AI researcher.)

Note that the larger enterprise of Cognitive Science continues
to untangle the mechanisms of intelligence, albeit at the slow
pace of conventional science.

>Personally, I think that the two most significant results in AI to date
>are Eliza and Google. Eliza is usually cited as an example of how careful
>you have to be when assessing whether or not something is intelligent
>because Eliza seemed to be intelligent when it "obviously" wasn't. IMO it
>is far from clear that Eliza had zero intelligence. Google is the closest
>thing so far to a machine that really "understands" something (in Google's
>case it "understands" the concept of "importance" or "relevance" or
>"authority" or something like that) and it's really nothing more than
>Eliza with a huge collaboratively-built database. It is not at all clear
>to me that "real" intelligence is not simply a matter of taking something
>like that, adding a relativelty small number of clever hacks, and scaling
>up. Certainly the ability to pass the Turing Test on Usenet seems to be
>within reach following this sort of strategy. (Remember Ilias?)

Interesting. I too believe ELIZA has been monumentally
underestimated. It's interesting that you picked a nearly
knowledge-free system and one that's essentially pure
knowledge.
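
The "importance" computation at issue is link analysis of the
PageRank sort, which is easy to state even if the scale is not. A toy
sketch in Lisp, assuming a representation in which LINKS is a hash
table mapping each page to the list of pages it links to (every
linked-to page is also assumed to appear as a key; this is purely
illustrative, not Google's implementation):

  ;; Toy PageRank by power iteration over a hash table of links.
  (defun page-rank (links &key (damping 0.85) (iterations 50))
    (let* ((pages (loop for p being the hash-keys of links collect p))
           (n (length pages))
           (rank (make-hash-table :test (hash-table-test links))))
      ;; Start from a uniform distribution.
      (dolist (p pages) (setf (gethash p rank) (/ 1.0 n)))
      (dotimes (i iterations rank)
        (let ((next (make-hash-table :test (hash-table-test links))))
          (dolist (p pages) (setf (gethash p next) (/ (- 1.0 damping) n)))
          ;; Each page passes its rank to the pages it links to.
          (dolist (p pages)
            (let ((out (gethash p links)))
              (dolist (q out)
                (incf (gethash q next)
                      (* damping (/ (gethash p rank) (length out)))))))
          (setf rank next)))))

Whether that, plus a relatively small number of clever hacks and a
lot of scaling up, amounts to "real" intelligence is exactly the open
question.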

>> So it appears the early AI pioneers were correct
>> in predicting that machines would eclipse humans in chess
>> and correct in the method by which computers would do so.
>> What/all they got wrong was *when*.
>
>No, I think they got something else wrong too: they all thought that
>getting a machine to win at Chess would be worthwhile because it would
>give you some insight into how to get a machine to do other clever things,
>like carry on a conversation. They were wrong about that.

Well, my quoted statement was purely about the chess domain.

But I do agree that the lack of transfer between domains of
intelligence has been quite surprising. In the worst case, it
hints that intelligence is just a bunch of special cases or hacks.
Some neo-Darwinians think this conclusion obvious. I find it
pessimistic, although as you say, it is consistent with a half
century of history.

Christopher C. Stacy

unread,
May 3, 2003, 4:50:13 PM5/3/03
to
>>>>> On Sat, 03 May 2003 11:06:24 -0700, Gorbag ("Gorbag") writes:

Gorbag> On 5/3/03 3:44 AM, in article 6fgMFQA0...@longley.demon.co.uk, "David
Gorbag> Longley" <Da...@longley.demon.co.uk> wrote:

>> What can be
>> productively doubted is whether "AI" is really anything other than
>> engineering - anything more than an attempt to make some areas of
>> engineering appear more "sexy" or meritorious through a misguided
>> association with the "psychological".

Gorbag> I don't understand the negative connotation of your comment; if science is
Gorbag> the study of the observed universe and coming to grips with some descriptive
Gorbag> models that can predict what we will see, and engineering is starting with
Gorbag> some goal behavior and creating a system which produces that behavior, then
Gorbag> indeed, most of AI is about engineering not science. One can say the same
Gorbag> about most of "computer science" or indeed almost anything that can get you

Any field with the word "science" in it, isn't science.

David Longley

unread,
May 3, 2003, 5:19:53 PM5/3/03
to
In article <35fae540.03050...@posting.google.com>, Acme
Debugging <L.F...@lycos.co.uk> writes
>David Longley <Da...@longley.demon.co.uk> wrote in message news:<6fgMFQA015s+EwX
>O...@longley.demon.co.uk>...

>> In article <BAD84C66.3EE6%gorbag...@NOSPAMmac.com>, Gorbag
>> <gorbag...@NOSPAMmac.com> writes
>> >On 5/2/03 12:55 PM, in article
>> >fa69ae35.03050...@posting.google.com, "Eray Ozkural exa"
>> ><er...@bilkent.edu.tr> wrote:
>
>> >> The nature of the psychological is so oft-overlooked that I am most of
>> >> the time staring awestruck at the arbitrary omission of prominent
>> >> psychological factors in AI research.
>> >
>> >You also need to define which part of AI you are involved in. Some AI folks
>> >do AI to have a better idea of what goes on inside of humans, but many
>> >others are interested either in artifacts that can do things we'd call
>> >intelligent if an animal (or human) did them, and don't care HOW they do
>> >them, or they are interested in some philosophical approach that is not
>> >always possible to be reduced through empirical experiment. (Arguing, for
>
>> > <snip>
>
>> >These folks are interested in intelligence in the abstract, and psychology
>> >doesn't enter into it. So you have to be careful which school of thought you
>> >are doing your reading in before you start to criticize. <snip>
>
>> There are *profoundly* difficult issues here
>
>What is profound? That some people place greater value on AI
>simulating humans and thus seeking descriptions of the mind, while
>others place greater value on AI as intelligence in the abstract
>requiring little or no psychology?

No.

What is profoundly problematic is that outside of established research
programmes (cf. "intelligence" - IQ, Inspection Time etc.) most
"psychological terms" have questionable reference.

You'll find folk using one indeterminate term referencing another,
ostensibly for support, but in the end, the whole 'pack of cards'
constructs a very unstable structure - so unstable that it could be said
that folk who traffic in such terms ultimately don't know what they are
talking about. What they write and say may make for an interesting
(entertaining) flight of ideas, but that is perhaps all.

What many folk interested in AI tend to overlook is that the
"intelligent" behaviour they are so keen to model has itself been
acquired as algorithmic (extensional) skills. Where it is cost effective
to replace these skills with engineered alternatives this is generally
done. Recent years have seen quite a panic in middle management because
of the fall in cost of ICT. There's no need for the concept of AI, maybe
some compassion for those who feel 'de-skilled' - and no doubt such folk
really do lament that what they once considered worthy of admiration has
been reduced to a relatively "dumb", "mindless" but very reliable and
efficient set of operations executed by a system of algorithms. This
will continue - luddites notwithstanding.

People make the mistake of thinking that because there are artificial
'hearts', 'lungs', 'limbs' etc - that it makes sense to hanker after
artificial 'intelligence'.

David Longley

sv0f

unread,
May 3, 2003, 5:21:19 PM5/3/03
to
In article <u65or2...@dtpq.com>,

cst...@dtpq.com (Christopher C. Stacy) wrote:

>Any field with the word "science" in it, isn't science.

This sentence, it isn't true.

The Christian Science Monitor

Library Science

Science Magazine (http://www.sciencemag.org/)

Scientology

Creation Science

BK

unread,
May 4, 2003, 12:39:16 AM5/4/03
to
Gorbag <gorbag...@NOSPAMmac.com> wrote ...

> ... if science is the study of the observed universe and coming to grips
> with some descriptive models that can predict what we will see,

I would also include "if not predict then at least *explain* what we
see".

> and engineering is starting with some goal behavior and creating a system
> which produces that behavior, then indeed, most of AI is about engineering
> not science.

Who cares whether AI is science or "just engineering" while the
remainder of the software field doesn't even qualify to carry the name
"engineering" anymore?!

> One can say the same about most of "computer science"

Really?

Isn't it rather that most of "computer science" is not even
engineering, at least in most software-related areas?!


"Descriptive models" are very hardly used at all, at least not in a
way that they would have any effect, and the average software engineer
is less skilled at "predicting what we will see" than the average
psychic with a cristal ball.

This becomes painfully apparent when something goes wrong. In any
*real* engineering field the average engineer skilled in the art will
be able to analyse the average problem, figure out what went wrong,
and devise a plan for fixing it.

Not so in the field of "software engineering". Average problems are
widely accepted as being "too expensive to fix" because "software is
so flexible, so difficult to predict" and "practically impossible to
fix".

The average "expert advice" is widely known to be "go back to square
one and start over" which can be in various forms from "restart and
try again" to "reinstall and try again".

To add insult to injury, the overwhelming majority of those "software
engineers" don't even seem to care. Instead they hurl insults at
anybody who has the courage to ask the painful question of what happened
to "engineering". No pride.

Software *engineering*??? The term itself has become a paradox.

Perhaps AI needs to advertise itself as "true engineering" in order
to distance itself from the unscientific and un-engineering culture
that has crept into most of the rest of "computer science". Science
or not, as long as it remains true and good engineering it will
already set itself apart.

rgds
bk

Acme Debugging

unread,
May 4, 2003, 2:20:42 AM5/4/03
to
David Longley <Da...@longley.demon.co.uk> wrote in message news:<A1CivIA5...@longley.demon.co.uk>...

> In article <35fae540.03050...@posting.google.com>, Acme
> Debugging <L.F...@lycos.co.uk> writes

> >What is profound? That some people place greater value on AI
> >simulating humans and thus seeking descriptions of the mind, while
> >others place greater value on AI as intelligence in the abstract
> >requiring little or no psychology?
>
> No.
>
> What is profoundly problematic is that outside of established research
> programmes (cf. "intelligence" - IQ, Inspection Time etc.) most
> "psychological terms" have questionable reference.

I am able to agree with respect to IQ; otherwise I am unqualified to
remark. But we now seem to be in agreement on all else.

Thanks,

Larry

Eric Smith

unread,
May 4, 2003, 6:30:49 AM5/4/03
to
David Longley <Da...@longley.demon.co.uk> wrote in message news:<A1CivIA5...@longley.demon.co.uk>...

> People make the mistake of thinking that because there are artificial
> 'hearts', 'lungs', 'limbs' etc - that it makes sense to hanker after
> artificial 'intelligence'.

To prove that artificial intelligence alone is not a worthwhile goal,
consider a machine so intelligent that it finds us too dull and
refuses to communicate with us or work for us. We might be reduced to
wondering whether it's really intelligent at all.

But if we focus on the synergy of human and machine working together,
without worrying about whether the machine alone is actually
intelligent, the overall development of machine intelligence might
proceed much faster.

People who use Lisp are interested in software development, so a
natural area for us to focus on is improving software development.
Software development clearly benefits from intelligence and
machine/human synergy. If we just continue on the path of improving
that synergy, we have an excellent chance of reaching the ultimate
goals of AI sooner than anyone else, even if partly by accident.
