
Weizenbaum keynote address at U of Waterloo (long)


dave brewer, SD Eng, PAMI

Nov 1, 1986, 11:04:36 PM


The Hagey Lectures at the University of Waterloo provide an
opportunity for a distinguished researcher to address the
community at large every year. This year, Dr. Weizenbaum of
MIT was the chosen speaker, and he has just delivered two
keynote addresses entitled "Prospects for AI" and "The Arms
Race, Without Us".

The important points of the first talk can be summarized as:
1) AI has good prospects from an investment perspective, since
   a strong commitment to marketing something called AI has
   been made.
2) the early researchers did not understand how difficult
   the problems they addressed were, and so the early claims
   of what was possible were greatly exaggerated. The trend
   continues, though on a reduced scale.
3) AI has been a handle for some portion of the US military
   to hang SDI on, since whenever a "difficult" problem
   arises it is always possible to say, "Well, we don't
   understand that now, but we can use AI techniques to
   solve that problem later."
4) the actual achievements of AI are small.
5) the ability of expert systems to continuously monitor
   stock values and react has led to increased volatility
   and crisis situations in the world's stock markets
   recently. What happens if machine-induced technical trading
   drops the stock market by 20% in one day, or by 50% in one day?

The important points of the second talk can be summarized as:
1) not all problems can be reduced to computation; for
   example, how could one conceive of coding the human
   emotion of loneliness?
2) AI will never duplicate or replace human intelligence
since every organism is a function of its history.
3) research can be divided into performance-mode or theory-mode
   research. An increasing percentage of research is
   now conducted in performance mode, despite possible
   desires to do theory-mode research, since funds (mainly
   military) are available for performance-mode research.
4) research on "mass murder machines" is possible because
   the researchers (he addressed computer scientists
   directly, although extension to any technical or
   scientific discipline was implied) are able to
   psychologically distance themselves from the end use
   of their work.
5) technical education that neglects language, culture,
   and history may need to be rethought.
6) courage is infectious, and while it may not seem a
   possibility to some, the arms race could be stopped cold
   if an entire profession (i.e. computer scientists)
   refused to participate.
7) the search for funds has led to an increased rate of
   performance-mode research, and has even induced many
   institutions to prostitute themselves to the highest bidder.
   Specific situations within MIT were used as examples.
   Weizenbaum had the graciousness to ignore related (albeit
   proportionally smaller) circumstances at this
   university.
8) every researcher should assess the possible end use of
   their own research, and if they are not morally comfortable
   with this end use, they should stop their research. Weizenbaum
   did not believe that this would be the end of all research,
   but if that were the case then he would accept this result.
   He specifically referred to research in machine vision, which he
   felt would be used directly and immediately by the military for
   improving their killing machines. While not saying so, he implied
   that this line of AI should be stopped dead in its tracks.

Poster's comments:
1) Weizenbaum seemed to be technically out of date in some areas,
   and admitted as much at one point. Some of his opinions
   regarding the state of the art were suspect.
2) His background, technical and otherwise, seems to predispose
   him to dismissing some technical issues a priori, i.e. a machine
   can never duplicate a human; why? because!
3) His most telling point, and one often ignored, is that
researchers have to be responsible for their work, and should
consider its possible end uses.
4) He did not appear to have thought through all the consequences
   of a sudden end to research, and indeed many of his solutions
   appear overly simplistic in light of the complicated
   world we live in.
5) You have never seen an audience squirm as they did during the
   second lecture. A once-premier researcher addresses his
   contemporaries and tells them they are ethically and morally
   bankrupt, and every member of the audience has at least some
   small buried doubt that maybe he is right.
6) Weizenbaum intended the talks to be "controversial and
   provocative" and has achieved his goal within the U of W
   community. While I do not agree with many of his points, I
   believe that the issues raised are relevant to the entire
   world-wide scientific community, and I have posted
   for this reason.

The main question that I see arising from the talks is: is it time
to consider banning, halting, slowing, or otherwise rethinking
certain AI or technical adventures, such as machine vision, as was
done in the area of recombinant DNA?

Disclaimer: The opinions above are mine and may not accurately
reflect those of U of Waterloo, Dr. Weizenbaum, or
anyone else for that matter. I make no claims as
to the accuracy of the above summarization and advise
that transcripts of the talks are available from some
place within U of W, but expect to pay for them because
that's the recent trend.


UUCP : {decvax|ihnp4}!watmath!watdcsu!brewster
Else : Dave Brewer, (519) 886-6657

Brad Templeton

Nov 2, 1986, 2:59:32 AM
In article <26...@watdcsu.UUCP> brew...@watdcsu.UUCP (dave brewer, SD Eng, PAMI ) writes:
> 5) You have never seen an audience squirm as they did during the
> second lecture. A once-premier researcher addresses his
> contemporaries and tells them they are ethically and morally
> bankrupt, and every member of the audience has at least some
> small buried doubt that maybe he is right.

The audience "squirmed" as Dr. Weizenbaum accused, not so much because
he told us we were morally bankrupt, but because he told us we might be
so.

His constant message, or so he said, throughout the evening was,
"Computer Scientists, examine the morality of what you are doing."

I don't want to belittle this message, because it's important, but it doesn't
need a full talk. It was obvious that he had more to say than that, even
though he denied he was saying more, yet he waffled on this further ground.

An audience does not expect hints at immorality; they expect to hear a speaker's
real opinions and solid arguments. That the audience wanted an answer to
the question, "what sort of work do you think people should abandon?" is not
surprising. The questions that people like myself, Kelly & Ian! asked all
touched on serious aspects of this issue. Dr. Weizenbaum's answers spoke
of gray areas and the need for miracles. The only concrete things discussed
were the areas of computer vision & self-programming natural language
interfaces.

Perhaps most people in this field really are moral blank slates who have to
be reminded that they should consider the risks in what they do. Those who
asked the questions certainly were not, and perhaps came seeking more.
--
Brad Templeton, Looking Glass Software Ltd. - Waterloo, Ontario 519/884-7473

Randy Goebel LPAIG

Nov 2, 1986, 1:43:11 PM
Both reviews of Weizenbaum's lectures were quite polite; while it is important
to consider both the means and ends of one's work, it is unrealistic to believe
that some loosely defined community like ``computer scientists'' is all morally
bankrupt and should collectively rethink its position. Weizenbaum's motivation
is well taken... but he has no suggestion of what to do about it.

As far as science goes, Weizenbaum believes that certain things about human
intelligence are currently unexplainable, and should remain that way. There is
nothing scientific about that attitude. I don't believe that Weizenbaum created
ANY worthwhile controversy with his lectures.

Bill Trost

Nov 2, 1986, 9:48:23 PM
In article <26...@watdcsu.UUCP> brew...@watdcsu.UUCP (dave brewer, SD Eng, PAMI ) writes:
>
>The main question that I see arising from the talks is: is it time
>to consider banning, halting, slowing, or otherwise rethinking
>certain AI or technical adventures, such as machine vision, as was
>done in the area of recombinant DNA?

Somehow, I don't think that banning machine vision makes much sense. It
seems that it would be similar to banning automatic transmissions because
you can use them to make tanks. The device itself is not the hazard (as it
is in genetic research) -- it is the application.
--
Bill Trost, tektronix!reed!trost
"ACK!"
(quoted, without permission, from Bloom County)
