
RJ vs AI: Science vs Engineering? - (nf)

uiucdcs!marcel Nov 28 13:57:00 1983

In response to Johnson vs AI, and Tom Dietterich's defense:

The emergence of the knowledge-based perspective is only the beginning of
what AI has achieved and is working on. Obvious corollaries: knowledge
acquisition and extraction, representation, inference engines.

Some rather impressive results have been obtained here. One with which I
am most familiar is work being done at Edinburgh by the Machine Intelligence
Research Unit on knowledge extraction via induction from user-supplied
examples (the induction program is commercially available). A paper by
Shapiro (Alen) & Niblett in Advances in Computer Chess 3 describes the beginnings
work at MIRU. Shapiro has only this month finished his PhD, which effectively
demonstrates that human experts, with the aid of such induction programs,
can produce knowledge bases that surpass the capabilities of any expert
as regards their completeness and consistency. Shapiro synthesized a
totally correct knowledge base for part of the King-and-Pawn against
King-and-Rook chess endgame, and even that relatively small endgame
was so complex that, though it was treated in the chess literature, the
descriptions provided by human experts were riddled with gaps. Impressively,
three chess novices managed (again with the induction program) to achieve 99%
correctness in this normally difficult problem.
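
For concreteness, here is a minimal sketch in Python of the sort of induction
involved, assuming an entropy-guided, ID3-style algorithm (the family this
work grew out of). The attribute names and toy positions are invented purely
for illustration; they are not Shapiro's actual KPKR attributes.

    # Minimal sketch: entropy-guided induction of classification rules
    # from examples, ID3-style.  Attributes and positions are invented
    # for illustration only, not Shapiro's real KPKR attributes.
    from collections import Counter
    from math import log2

    def entropy(examples):
        # examples: list of (attribute-dict, class-label) pairs
        counts = Counter(label for _, label in examples)
        return -sum(n / len(examples) * log2(n / len(examples))
                    for n in counts.values())

    def split(examples, attr):
        # partition the examples by their value of one attribute
        groups = {}
        for ex in examples:
            groups.setdefault(ex[0][attr], []).append(ex)
        return groups

    def best_attribute(examples, attributes):
        # pick the attribute whose split leaves the least weighted entropy
        def cost(attr):
            return sum(len(g) / len(examples) * entropy(g)
                       for g in split(examples, attr).values())
        return min(attributes, key=cost)

    def induce(examples, attributes):
        # returns a class label, or (attribute, {value: subtree})
        labels = {label for _, label in examples}
        if len(labels) == 1:
            return labels.pop()
        if not attributes:
            return Counter(l for _, l in examples).most_common(1)[0][0]
        attr = best_attribute(examples, attributes)
        rest = [a for a in attributes if a != attr]
        return (attr, {v: induce(g, rest)
                       for v, g in split(examples, attr).items()})

    def show(tree, indent=""):
        # print the induced tree as nested if-then rules a human can read
        if not isinstance(tree, tuple):
            print(indent + "=> " + tree)
            return
        attr, branches = tree
        for value, subtree in branches.items():
            print(indent + "if " + attr + " = " + value + ":")
            show(subtree, indent + "    ")

    examples = [   # toy training data supplied by the "expert"
        ({"rook_attacks_pawn": "yes", "king_guards_pawn": "no"},  "lost"),
        ({"rook_attacks_pawn": "yes", "king_guards_pawn": "yes"}, "safe"),
        ({"rook_attacks_pawn": "no",  "king_guards_pawn": "no"},  "safe"),
        ({"rook_attacks_pawn": "no",  "king_guards_pawn": "yes"}, "safe"),
    ]
    show(induce(examples, ["rook_attacks_pawn", "king_guards_pawn"]))

The show() routine is the crux: the induced rules come out as nested
if-then statements that a human can read and criticize, which is exactly
what the next point turns on.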

The issue: even novices are better at articulating knowledge
by means of examples than experts are at articulating the actual
rules involved, *provided* that the induction program can represent
its induced rules in a form intelligible to humans.

The long-term goal and motivation for this work is the humanization of
technology, namely the construction of systems that not only possess expert
competence, but are capable of communicating their reasoning to humans.
And we had better get this right, lest we get stuck with machines that run our
nuclear plants in ways that are perhaps super-smart but incomprehensible ...
until a crisis happens, when suddenly the humans need to understand what the
machine has been doing all along.

The problem: lack of understanding of human cognitive psychology. More
specifically, how are human concepts (even for these relatively easy
classification tasks) organized? What are the boundaries of 'intelligibility'?
Though we are able to build systems that function, in some ways, like a human
expert, we do not know much about what distinguishes brain-computable processes
from general algorithms.

But we are learning. In fact, I am tempted to define this as one criterion
distinguishing knowledge-based AI from other computing: the absolute necessity
of having our programs explain their own processing. This is close to demanding
that they also process in brain-compatible terms. In any case we will need to
know what the limits of our brain-machine are, and in what forms knowledge
is most easily apprehensible to it. This brings our end of AI very close to
cognitive psychology, and threatens to turn knowledge representation into a
hard science -- not just

What does a system need, to be able to X?

but How does a human brain produce behavior/inference X, and how do
we implement that so as to preserve maximal man-machine compatibility?

Hence the significance of the work by Shapiro, mentioned above: the
intelligibility of his representations is crucial to the success of his
knowledge-acquisition method, and the whole approach provides some clues on
how a humane knowledge representation might be scientifically determined.

A computer is merely a necessary weapon in this research. If AI has made little
obvious progress it may be because we are too busy trying to produce useful
systems before we know how they should work. In my opinion there is too little
hard science in AI, but that's understandable given its roots in an engineering
discipline (the applications of computers). Artificial intelligence is perhaps
the only "application" of computers in which hard science (discovering how to
describe the world) is possible.

We might do a favor both to ourselves and to psychology if knowledge-based AI
adopted this idea. Of course, that would cut down drastically on the number of
papers published, because we would have some very hard criteria about what
constitutes a tangible contribution. Even working programs would not be
inherently interesting, no matter what they achieved or how they achieved it,
unless they contributed to our understanding of knowledge, its organization
and its interpretation. Conversely, working programs would be necessary only
to demonstrate the adequacy of the idea being argued, and it would be possible
to make very solid contributions without a program (as opposed to the flood of
"we are about to write this program" papers in AI).

So what are we: science or engineering? If both, let's at least recognize the
distinction as being valuable, and let's know what yet another expert system
proves beyond its mere existence.

Marcel Schoppers
U of Illinois @ Urbana-Champaign
