New combinations for IF (was: Is the market ...)

Jorn Barger

Dec 23, 1991, 1:23:11 PM
Here's a little overview of AI, as I understand it. Opinions
do not necessarily reflect anyone's light but my own:

One of the big exciting ideas of the 1950s in psychology was that
Cognition is Computation. Here at ILS the view is that this
didn't turn out to get us very far, because computation is
conceptually infinite-- it can be *anything*. You can write any
kind of program under the sun, but figuring out *what kind of
computation* cognition is turns out to be a whole nother kettle of
fish.

It was hoped that machines could translate languages
automatically, but "dictionary look-up" fails miserably because
words accrete shades of meaning differently in different
languages. Way too much of AI's pot of resources has gone into
trying to extract the sentence-diagrams from sentences, but this
doesn't *begin* to solve the shades-of-meaning problem.

Theorem provers and expert systems were the great hope for a
while, but even the simplest common-sense decisions require so
much background knowledge that we are nowhere near solving this in
a general way. Doug Lenat's 5-years-along CYC project (for
enCYClopedia) is in the news more and more: he's trying to spell
out all 10 zillion bits of common sense, so that smart expert
systems can be easily built on top of this base.

There are certain areas where this may be a big help, where he's
found admirable solutions for subtle problems about representing
common sense, like: what is the difference between wood-in-general
and a piece of wood? Or: how is a person like a computer program?

But Roger Schank and others have been arguing since the early 70s that
human thinking is *story* centered, and that representing human
common sense about stories takes a different approach entirely.

People in AI recognize a distinction between "neat AI" and
"scruffy AI", where neat is more mathematical and theorem-
oriented, or grammar-oriented, and scruffy looks more at
*meanings*. Roger pioneered the scruffy school during the 70s at
Yale, and achieved the first successes in machine translation,
though admittedly in only a few narrow domains.

In the book (quite readable) "Scripts, Plans, Goals &
Understanding", Schank and Abelson proposed a set of 'primitive'
acts-- ingest, expel, mtrans, ptrans, mbuild, atrans, move,
propel, grasp, speak, attend --
that they thought all stories could be reduced to. (Its original
title was "The Elements of Understanding".)
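Read today, those primitives beg for a data-structure sketch.
Here's a guess at how one might encode them, say in Python-- the
eleven names are from the book, but the frame slots (actor,
object, to) are my own simplification, not real CD notation:

```python
from dataclasses import dataclass

# The eleven primitive acts from Schank & Abelson's Conceptual
# Dependency theory, as listed above.
PRIMITIVES = {
    "ingest", "expel", "mtrans", "ptrans", "mbuild",
    "atrans", "move", "propel", "grasp", "speak", "attend",
}

@dataclass
class Act:
    """One story event reduced to a primitive act.
    (These slot names are my own simplification of CD notation.)"""
    primitive: str   # must be one of PRIMITIVES
    actor: str
    object: str
    to: str = ""     # recipient / destination, when it applies

    def __post_init__(self):
        if self.primitive not in PRIMITIVES:
            raise ValueError(f"not a primitive act: {self.primitive}")

# "John told Mary the food was cold" is an MTRANS (mental transfer):
event = Act("mtrans", actor="John", object="food-was-cold", to="Mary")
print(event.primitive)  # mtrans
```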

And they represented stories as "scripts" which tried to spell out
all the variants on some segment of human activity-- the classic
domain was restaurants. So in "CD notation" they'd spell out
facts like, first you get a table, and then the waiter comes, or
maybe a waitress, or maybe you get a tray and stand in line...
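Just to make the "variants" idea concrete, here's a toy version of
that restaurant script-- the scene and variant names are my own
paraphrase, not anything out of real CD notation:

```python
# A toy restaurant script: ordered scenes, each listing the
# acceptable variants ("tracks") the book describes.
RESTAURANT_SCRIPT = [
    ("entering", ["get a table", "get a tray and stand in line"]),
    ("ordering", ["waiter takes order", "waitress takes order",
                  "order at the counter"]),
    ("eating",   ["eat the meal"]),
    ("exiting",  ["pay the check", "pay at the register"]),
]

def matches_script(events, script=RESTAURANT_SCRIPT):
    """True if the story's events hit the script's scenes in
    order, each event being one allowed variant of its scene."""
    scene = 0
    for e in events:
        if scene < len(script) and e in script[scene][1]:
            scene += 1
    return scene == len(script)

cafeteria = ["get a tray and stand in line", "order at the counter",
             "eat the meal", "pay at the register"]
print(matches_script(cafeteria))  # True
```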

No one has managed to scale this trick up, though, to handle
reality in general. The money in AI now goes to the glitzy
technology of neural nets, and not to the hard, scruffy problem of
what are the real primitive verbs. That problem is seen as a dark
ocean with monsters and a deadly "Edge", over which many an
admirable AI-mariner has vanished.

ILS is both the AI department of Northwestern U, with grad
students and all, *and* a thinktank doing some government work but
more r&d on *training software* for big corporate clients: Arthur
Andersen accounting, Ameritech, etc.

We do a lot of experimenting with ways to arrange video clips of
businesspeople telling war stories, into databases that have
encoded summaries of the pertinent facts in the stories, so the
computer can retrieve the right story at the right time. (See
Schank's "Tell Me a Story" or "The Connoisseur's Guide to the Mind".)
So if you're trying to teach an accounting consultant how to
handle interviews with prospective clients, you might want to show
her an initial video clip and offer her a choice of directions to
go as a followup-- "hypermedia" is the usual buzzword. And rather
than having to build all those potential linkages by hand, the
hope is that there are some deep principles of meaning that will
allow the computer to calculate what should naturally follow from
each clip.

There's a deep similarity between this problem and the problem
faced by someone trying to arrange, say, a dictionary of proverbs.
Roget offered one solution that's getting pretty mouldy after 200
years, but amazingly has no challengers!

When I started working here, all the grad students were doing a
seminar with Roger where they tried to assemble a universal story-
indexing frame, or UIF-- a database format that would capture
enough story content to allow the computer to detect deep
similarities between any sort of superficially different stories--
like: do they illustrate the same proverb?

The fields in this database were for info like: who are the
characters, how are they interrelated, what are their plans and
goals, what is the outcome? I was programmer on the first UIF
project, which grew into the current Story Archive: up to 144
hours of video clips, in a laserdisc jukebox, that one can browse
via various sorts of linkages, some handbuilt and some calculated.
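To make the UIF idea concrete, here's a guessed-at sketch of such
a frame and a crude similarity score over it-- the field names
come from the list above, but the scoring rule and the two sample
stories are purely my own illustration:

```python
from dataclasses import dataclass

@dataclass
class UIF:
    """A guess at a universal story-indexing frame, using the
    fields named above: characters, relations, plans, goals,
    outcome."""
    characters: set
    relations: set   # e.g. {("mentor", "A", "B")}
    plans: set
    goals: set
    outcome: str

def similarity(a, b):
    """Crude deep-similarity score: shared plans, goals, and
    outcome, ignoring the superficial cast of characters."""
    score = len(a.plans & b.plans) + len(a.goals & b.goals)
    return score + (1 if a.outcome == b.outcome else 0)

# Two surface-different stories that illustrate the same proverb:
hare  = UIF({"hare", "tortoise"}, set(), {"race"}, {"win"},
            "loses-by-overconfidence")
exec_ = UIF({"manager", "rival"}, set(), {"race"}, {"win"},
            "loses-by-overconfidence")
print(similarity(hare, exec_))  # 3
```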

It's recognized as a project that flirts dangerously with the
legendary Edge, and to avoid falling off we've ended up locking
ourselves into a simplified format with categories like "plan A
fails because agent B lacks skill C". And even at this level
there are big problems, like: how can you spell out a complete
list of human *plans*?
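Here's what that simplified category might look like as a data
structure, with a toy index for retrieving clips by the missing
skill-- all the example entries are invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlanFailure:
    """The simplified Story Archive category quoted above:
    'plan A fails because agent B lacks skill C'."""
    plan: str
    agent: str
    missing_skill: str

    def describe(self):
        return (f"plan '{self.plan}' fails because {self.agent} "
                f"lacks skill '{self.missing_skill}'")

# A toy archive, indexed so the tutor can "retrieve the right
# story at the right time" by the skill whose absence mattered.
archive = [
    PlanFailure("close the sale", "the consultant", "active listening"),
    PlanFailure("win the contract", "the partner", "active listening"),
    PlanFailure("fix the audit", "the junior", "double-entry bookkeeping"),
]

def clips_about(skill):
    return [f for f in archive if f.missing_skill == skill]

print(len(clips_about("active listening")))  # 2
```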

One would like a *hierarchy* of plan-types, if possible,
preferably with fewer than ten branches at every node, for the
user's convenience. This of course brings us right back to the
problem of a hierarchy of story types, because every plan tells a
story.

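The branching constraint, at least, is easy enough to state as
code-- here's a toy plan-type tree (all the node names are
invented for illustration) with the under-ten check:

```python
# A toy plan-type hierarchy. The names are my own inventions,
# not a real proposal for the top-level plan types.
TREE = {
    "plan": ["get-resource", "remove-obstacle", "change-agent"],
    "get-resource": ["buy", "borrow", "make", "steal"],
    "remove-obstacle": ["persuade", "outwit", "overpower"],
    "change-agent": ["learn-skill", "recruit-helper"],
}

def well_branched(tree, limit=10):
    """True if every node keeps under `limit` branches,
    for the browsing user's convenience."""
    return all(len(kids) < limit for kids in tree.values())

print(well_branched(TREE))  # True
```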
[Up to here I'm just requoting something I wrote for another group.]

How does this relate to IF? What do you all think? I think the
top level of this ideal hierarchy ought to be just the same biological
primitives that an ALife microworld ought to offer, and that the
simplest possible IF-world will be an ALife world where you can play
one of the life-forms.
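To sketch what I mean: a bare-bones ALife world where the
player-creature's only verbs are biological primitives (move,
ingest, attend). This is a toy of my own, not anything we've
built at ILS:

```python
# A minimal ALife-as-IF sketch: the player is one creature on a
# one-dimensional grid; the world maps cell -> units of food.
class Creature:
    def __init__(self, pos=0, energy=5):
        self.pos, self.energy = pos, energy

    def move(self, step):
        """ptrans yourself; moving costs energy."""
        self.pos += step
        self.energy -= 1

    def ingest(self, world):
        """Eat whatever food is at the current cell."""
        if world.get(self.pos, 0) > 0:
            world[self.pos] -= 1
            self.energy += 3

    def attend(self, world):
        """Look around: report food at neighboring cells."""
        return {p: world.get(p, 0)
                for p in (self.pos - 1, self.pos, self.pos + 1)}

world = {2: 1}               # one unit of food at cell 2
you = Creature()
you.move(+1); you.move(+1)   # pos 2, energy 3
you.ingest(world)            # energy 6
print(you.pos, you.energy)   # 2 6
```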

Jorn Barger, Northwestern U., Chicago, Illinois.
"And crazyheaded Jorn, the bulweh born?" _Finnegans Wake_ 513.07
