AIList Digest V4 #11

AIList Moderator Kenneth Laws

Jan 22, 1986, 1:07:00 PM

AIList Digest Wednesday, 22 Jan 1986 Volume 4 : Issue 11

Today's Topics:
Query - LISP Language Standard,
Correction - Spang Robinson Report on Reasoning Systems,
AI Tools - AI and Supercomputers & MRS,
Definitions - Paradigm & Symbol,
Expert Systems & AI in the Media - Connectionist Speech Learning &
Arthur Young's System for Financial Auditing


Date: 21 Jan 86 01:18:00 PST
From: sea.wo...@ames-vmsb.ARPA
Reply-to: sea.wo...@ames-vmsb.ARPA
Subject: LISP Language Standard

I am currently involved in the definition of some loose LISP
programming standards [loose LISP sink ships]. Has anyone given any
thought to this, particularly as it applies to LISP environments,
or does anyone know of any articles on the topic?
I will be happy to collect responses and send them back out on the list.
Thank you,

S. Engle, Informatics General Co.
NASA/Ames Research Center MS 242-4
Moffett Field, CA 95035



Date: Wed, 15 Jan 86 04:18:25 cst
From: Laurence Leff <leff%smu....@CSNET-RELAY.ARPA>
Subject: Correction

[Joseph Rockmore, vice president of Reasoning Systems, says that the
Spang Robinson report on his company's agreement with Lockheed was
correct, but that the summary in AIList incorrectly identified his
company's work with "USC Kestrel Institute". He points out that
Reasoning Systems is associated with Kestrel, but that neither is
associated with USC-ISI. Laurence Leff has provided the following
additional summary in the course of resolving this matter. Contact
rock...@kestrel.ARPA for further information. -- KIL]

In my abstracts of the Spang Robinson Report, I reported parenthetically
that Reasoning Systems is commercializing the work of [...] Kestrel Institute.
That parenthetical statement was based on my own analysis of the
situation and was not included in the Spang Robinson report. My apologies
for any confusion created.

It was based on what I perceived to be a similarity between the work
and the fact that one person has moved from that organization over to
Reasoning Systems (as indicated in the address-of-authors section of
IEEE Transactions on Software Engineering). Also, quoting from
"Software Environments at Kestrel Institute" in the November 1985
issue, Volume SE-11, No. 11: "One of the authors (G. B. Kotik) is currently with
Reasoning Systems, a company founded in 1984 in order to apply the body
of basic research in knowledge-based programing to commercial problems.
Reasoning Systems develops special-purpose knowledge-based program
generators and programming environments for various domains." and
later in the same article "Toward these ends, Reasoning Systems has
developed a system called REFINE," "Although REFINE derives its
inspiration from many sources, it utilizes the principles and system
structure laid out in the CHI project."


Date: Tue 21 Jan 86 13:52:27-CST
From: CMP....@R20.UTEXAS.EDU
Subject: AI and Supercomputers

On January 17, UCSD offered a one-day program, called "Capabilities and
Applications of the San Diego Supercomputer Center", in conjunction with
the opening of their new center. One of the talks was "AI and Expert Systems
on Supercomputers" by Dr. Robert Leary, a Senior Staff Scientist at the
San Diego Supercomputer Center. I didn't attend the course but heard that
Leary's talk was preliminary and did not present any significant
applications. Further information can probably be obtained from SDSC on the UCSD
campus or from UCSD Extension. The address of UCSD is La Jolla, CA 92903.

Dallas Webster


Date: Tue, 21 Jan 86 09:52:16 est
From: Walter Hamscher <hams...@MIT-HTVAX.ARPA>
Subject: MRS

Date: Fri, 17 Jan 86 15:04:24 est
From: Tom Scott <scott%bgsu....@CSNET-RELAY.ARPA>
Subject: Two questions on knowledge-engineering software

1. Rick Dukes from Symbolics recently gave an interesting talk on
AI/KE to the Northwest Ohio chapter of the ACM. He mentioned an
expert-system-building tool, MRS, from Stanford.
* * *
Can anyone tell me about the system? What does it do?
What representation and search techniques are available through it?

It's a logic programming system written in Lisp. The principal
underlying inference engine is resolution; you can also do forward and
backward chaining. The name means `Metalevel Reasoning System'
because you can write meta-level axioms -- axioms about the base-level
knowledge. Usually these meta-axioms are used to guide the
search-based inference procedures. I hear the latest version lets one
write meta-meta-axioms, meta-meta-meta-axioms, etc. ("Anything you can
do, I can do Meta," as Brachman says).
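For readers unfamiliar with backward chaining, a minimal sketch may
help fix the idea. This is NOT MRS (which is a Lisp system with its
own syntax, a resolution engine, and meta-level control); the rule
format, predicates, and family-relations knowledge base below are
invented purely for illustration, in Python:

```python
# Minimal backward-chaining sketch. NOT MRS: the rule format and the
# predicates here are invented for illustration only.

FACTS = {("parent", "tom", "bob"), ("parent", "bob", "ann")}
RULES = [
    # grandparent(X, Z) <- parent(X, Y), parent(Y, Z)
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def unify(pattern, fact, bindings):
    """Match `pattern` (may contain uppercase variables) against `fact`."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if p[0].isupper():          # uppercase symbol = variable
            if p in b and b[p] != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def prove(goal, bindings):
    """Yield every binding set under which `goal` follows from the KB."""
    goal = tuple(bindings.get(t, t) for t in goal)
    for fact in FACTS:                        # base case: known facts
        b = unify(goal, fact, bindings)
        if b is not None:
            yield b
    for head, body in RULES:                  # backward-chain on rules
        b = unify(head, goal, {})
        if b is not None:
            subgoals = [tuple(b.get(t, t) for t in p) for p in body]
            yield from prove_all(subgoals, bindings)

def prove_all(goals, bindings):
    """Prove a conjunction of subgoals left to right."""
    if not goals:
        yield bindings
    else:
        for b in prove(goals[0], bindings):
            yield from prove_all(goals[1:], b)

print(any(prove(("grandparent", "tom", "ann"), {})))   # True
```

A real system like MRS adds full unification with variable renaming,
resolution, and meta-level axioms that steer which rule or subgoal to
try next; this sketch only handles the simple ground-query,
Horn-clause case.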

For background see Genesereth's "An Overview of Meta-Level
Architecture," AAAI-83. The Stanford Heuristic Programming Project
probably has some kind of MRS manual; there's also an `MRS Dictionary',
but that's really more of a reference tool.

Can it handle frames? Semantic networks? Certainty factors?

It can `handle' anything you can write in Lisp. Does it provide any
of these facilities directly? No, I don't think so.

How does it work as an expert-system development environment?

Good question. How does Lisp work as an expert-system environment?

For applications to troubleshooting & test generation see Genesereth,
AAAI-82; Yamada, IJCAI-83; Singh's PhD thesis from Stanford (1985);
Genesereth in AI Journal V 24 #1-3 or `Qualitative Reasoning about
Physical Systems', ed. Bobrow. It's NOT a traditional expert-system
environment a la KEE, ART, S1, DUCK, etc.

Most importantly, how does a university acquire MRS?

Jane Hsu (HSU@SCORE) should be able to tell you all about this. I
believe she's in charge of maintenance & distribution. She may refer you
on to Arthur Whitney, but try Jane first.

I think Rick told us that it was available to universities essentially
for free. If that is true, then where can we send a tape?

For some reason the figure $500 sounds right, but don't quote me.


Date: Fri, 17 Jan 86 10:43:33 PST
From: kube%cog...@BERKELEY.EDU (Paul Kube)
Subject: What's a paradigm?

A classic attempt to figure out just what the devil Kuhn means by
`paradigm' is Margaret Masterman's `The nature of a paradigm' (in
_Criticism and the Growth of Knowledge_, I. Lakatos and A. Musgrave,
eds.). She finds 21 ("possibly more, not less") senses of the term
in the first edition of _The Structure of Scientific Revolutions_;
take your pick.


Date: Wed, 22 Jan 86 02:16:17 PST
From: kube%cog...@BERKELEY.EDU (Paul Kube)
Subject: Re: What is a symbol?

>.... Newell and Simon's Physical Symbol
>System Hypothesis, that a machine that carries out processes operating on
>symbol structures has the necessary and sufficient means for general
>intelligent action, seems to be an expression of the underlying assumptions
>of the majority of work in AI.
> A symbol is a formal entity whose internal structure
> places no restrictions on what it may represent in the
> domain of interest.
>Unfortunately, when combined with the Physical Symbol System Hypothesis,
>this notion of symbol creates a problem with regard to so-called
>"connectionist" systems.

I think at least two concepts, not just one, need some work here: it
would help to have a better idea not only of what symbols are, but
also of what operating on a symbol is.

Under what one might call the Turing conception of `operating on a
symbol'-- a strong, agentive interpretation: symbols are objects that
get manipulated by a processor, e.g. written on and erased from a
tape, or shuffled from location to location--I think that it's
probably true that connectionist systems do not `operate' on symbols
that have interesting external referents. But I doubt that the
majority of workers in AI believe that in this sense `operating on
symbols' is necessary for the production of intelligent action, and so
there is no conflict with connectionism; that construal of the PSSH is
easy enough to give up. (That `operating on symbols' in the Turing
sense be sufficient for the production of intelligent action is,
however, pretty clearly an underlying assumption of work in the field;
but of course this doesn't conflict with connectionism either.)

On the other hand, a weaker interpretation of what operating on
symbols amounts to gives a PSSH that is compatible with connectionism,
not to mention being more likely to be true. Certainly what's
important about symbols for theory construction in AI is that they
have formal properties which determine their interactions with other
symbols without regard to any semantic properties they might have,
while being susceptible of being assigned semantic properties in a way
that is dependent on these interactions. (Anyway I don't think it's
helpful to require of a symbol that its `internal structure places no
restrictions on what it may represent', at least without further
specification of what counts as internal structure. Take an English
word: `symbol', say. What's between the quotes is a symbol, I'd
think, but intuitively its internal structure places pretty strong
restrictions on what it represents: try composing it of six different
letters, for example.) But then they don't need to be objects;
symbols can be states, and the formal properties which determine their
interaction (`operations' on them) can be identified with certain of
their causal properties. Now, one way a system can be in symbolic
states is to operate on symbols in the strong, Turing sense; but this
is only one way. Symbolic states can also be emergent states of a
connectionist system.

Paul Kube
Computer Science Division
U.C. Berkeley
Berkeley, CA 94720


Date: Mon, 20 Jan 86 16:57:43 mst
From: ted%nmsu....@CSNET-RELAY.ARPA
Subject: today show segment

I think that the work that was mentioned recently in the digest
from the Today Show (which I didn't see) was the speech synthesis
work which was described earlier on AIList (sketchily). I
don't remember the contact (Sejnowski??), but the machine was a
neural analog network that modified its own weights when given a
training corpus of English text with correct voice-synthesizer
outputs. Then, when given more English (it wasn't clear that this
new text had not appeared in the original training corpus), the
machine produced coherent control inputs for the voice synthesizer.
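The training regime described is supervised learning: the network's
weight changes are driven by the difference between its output and the
correct synthesizer codes. A toy sketch of that idea in Python follows,
using the simple delta rule on a one-letter input. The alphabet, the
"phoneme" labels, and the single-layer network are all invented here;
the actual system was a multi-layer network trained by
back-propagation on a sliding window of letters.

```python
import random

# Toy supervised text-to-phoneme learner using the delta rule on a
# single linear layer. The alphabet and "phoneme" labels are invented;
# this is far simpler than the real multi-layer network.
random.seed(0)

LETTERS = "abc"
PHONEMES = ["P0", "P1", "P2"]
# Training corpus: (letter, index of its correct phoneme), repeated.
CORPUS = [("a", 0), ("b", 1), ("c", 2)] * 20

def one_hot(ch):
    v = [0.0] * len(LETTERS)
    v[LETTERS.index(ch)] = 1.0
    return v

# weights[k][i]: pull of input letter i toward phoneme k
weights = [[random.uniform(-0.1, 0.1) for _ in LETTERS] for _ in PHONEMES]

def scores(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def train(epochs=10, rate=0.5):
    for _ in range(epochs):
        for ch, target in CORPUS:
            x = one_hot(ch)
            out = scores(x)
            for k, row in enumerate(weights):
                # Teacher signal: 1 for the correct phoneme, 0 otherwise.
                err = (1.0 if k == target else 0.0) - out[k]
                for i, xi in enumerate(x):
                    row[i] += rate * err * xi    # delta-rule weight update

def predict(ch):
    s = scores(one_hot(ch))
    return PHONEMES[s.index(max(s))]

train()
print(predict("a"))   # P0
```

After training, the weights have converged so that each letter drives
its correct "phoneme" output; the point of the sketch is only that the
weight changes come from an externally supplied correct answer, not
from the network hearing itself.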

Claims that ``it learns to speak the way that human babies do''
and so on are obviously bunk, since people don't initially learn
to speak by reading text, and since people also have to derive the
correlation between their motor stimulation (essentially the
voice-synthesizer control level), the sound thereby produced, and
the percepts that are returned via their ears. A measure of the
comparative difficulty is that programs which do text-to-speech
conversion extremely well have been in existence for several years
(DECtalk is the current avatar), but no program can yet even
reproduce an infant's use of auditory language. Certainly no one
can be claiming that a program that can learn to do the former
must consequently be able to learn to do the latter, much less
that the acquisition method would be the one used by human
children.

The most interesting thing is that in my original contact with the
author of the project in question (I think), he never mentioned
this sort of comparison.

Sigh... the original work was interesting, possibly even
progressive. But then along come the Today Show interviewers
looking for a BREAKTHROUGH. So they find (make) one, and we hear
about another case of AI hype. Everybody get ready for another
wave of flames.


Date: WED, 10 JAN 84 17:02:23 CDT
Subject: News Flash

Source: January 16 Wall Street Journal, first page

"CPA firm Arthur Young unveils a computer system today that uses expert
systems to help the auditor focus on areas where risk of error is greatest.
The system could mean average savings of 10% in time and money, says
Arthur Young's Robert Temkin."


End of AIList Digest
