
Reasoning agents


Phil Goetz

Jan 27, 1994, 12:00:02 PM
Notes on Using SNePS for Interactive Fiction
Phil Goetz go...@cs.buffalo.edu
Dept. of Computer Science, SUNY, Buffalo NY 14260
December 14, 1993

(ASCII version)

Comments craved.

CONTENTS

1. SNePS (skip this if it bores you)
1.1 Semantic networks
1.2 Philosophical commitments of SNePS
1.2.1 Propositions are represented by nodes
1.2.2 Intensional representation
1.2.3 Uniqueness principle
1.2.4 UVBR
2. The Goals of Using SNePS for Interactive Fiction
2.1 Interactive fiction as it could be
2.2 Interactive fiction as it is
3. SNePS and the Knowledge Representation Needs of Interactive Fiction
3.1 Interactive fiction programming methodologies
3.1.1 Object-oriented programming
3.1.2 Rule-based systems
3.2 Belief spaces
3.2.1 Viewpoint persona
3.2.2 Different beliefs
3.2.3 Hypothetical reasoning
3.3 Beliefs and actions
3.3.1 Integrating acting and inference
3.3.2 Belief revision
3.4 Disadvantages of SNePS
4. Conclusion
References


1. SNePS (skip this if it bores you)

_SNePS_, a _S_emantic _Ne_twork _P_rocessing _S_ystem
developed primarily at the University of Wisconsin, Indiana University,
and the University of Buffalo over the past 25 years,
is a knowledge representation (KR) and reasoning system.
It uses semantic networks to represent knowledge.

1.1 Semantic networks

A semantic network is usually a directed graph in which the directed edges,
or _arcs_, specify relationships between the entities represented by the
vertices, or _nodes_. In SNePS, a proposition is a closed (no
unquantified free variables) predicate logic formula, and is represented
in the network by a node. The set of arcs leading from the node
forms a caseframe, which identifies the logical predicate
in the proposition.

A semantic network representation displays a system of knowledge in
a form easier to understand than a list of propositions, because each
object of a proposition is only written once, and all the rules
concerning a given object can be found merely by noting what is connected
to that object. The distinction between semantic networks
and logic is merely one of notation.

1.2 Philosophical commitments of SNePS

1.2.1 Propositions are represented by nodes

Every proposition that can be asserted by the system must be subject to
modification. Most semantic network representations of "Jack is a fish"
would consist essentially of a node representing Jack, a node
representing the class of fish, and an arc labelled "isa" going from
<Jack> to <fish>. But how do you say "Jack is not a fish"? The SNePS
solution is to represent "Jack is a fish" by the network consisting of
a node <Jack> representing Jack, a node <fish> representing the class
of fish, and a node <M> representing the proposition that Jack is a fish.
An arc labelled "member" leads from <M> to <Jack>, and one labelled
"class" leads from <M> to <fish>.

In SNePS we can represent any proposition by a list of pairs,
where each pair is an arc label leading from the node representing the
proposition, followed by the node that the arc leads to: a base node
(something like <Jack> or <fish>), another proposition, or some other
type of node. There is no need to label a propositional node in specifying
a network. So, the proposition "Jack is a fish" can be represented by
a node whose structure is described as

(member <Jack> class <fish>)

Now you can negate the proposition by asserting

(not (member <Jack> class <fish>))
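
To make this concrete, here is a rough sketch of the idea in Python
(illustrative only; this is not SNePS code, and the class names are my own):

# Sketch (not SNePS): propositions are unlabeled nodes described by
# arc-label/target pairs, so a proposition can itself be the target of
# another node's arc (e.g. a negation).

class Base:
    """A base node such as <Jack> or <fish>."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"<{self.name}>"

class Node:
    def __init__(self, **arcs):
        # each keyword is an arc label; each value is the node it points to
        self.arcs = arcs
    def __repr__(self):
        inner = " ".join(f"{label} {target}" for label, target in self.arcs.items())
        return f"({inner})"

jack, fish = Base("Jack"), Base("fish")

# "Jack is a fish": a propositional node with member and class arcs.
m = Node(member=jack, **{"class": fish})

# "Jack is not a fish": a node whose single arc points at the proposition m.
not_m = Node(**{"not": m})

print(m)      # (member <Jack> class <fish>)
print(not_m)  # (not (member <Jack> class <fish>))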

1.2.2 Intensional representation

An unusual feature of SNePS is its commitment to intensional representations
[Maida and Shapiro 1982]. Roughly, an extensional object is one in the real
world. An intensional object is an object of thought. The _SNe_PS
_R_esearch _G_roup (_SNeRG_) argues that knowledge representations
should be intensional for several reasons. One is that unicorns and other
imaginary objects have no extensions, yet we think and talk about them.
Another is that you may have different beliefs about objects of thought that
actually, unbeknownst to you, are the same extensional object. For example,
if a man has just won the lottery but doesn't know it yet, and you ask
him "Are you rich?", he will say no, but if you ask him "Is the man who
just won the $10 million lottery rich?", he will say yes.

1.2.3 Uniqueness principle

Every concept represented in the network is represented by one and only
one node [Shapiro 1986].

1.2.4 UVBR

The _Unique Variable Binding Rule_, or _UVBR_, states that
no instance of a rule is allowed in which two distinct variables
of the rule are replaced by the same term, nor in which any variable
is replaced by a term already occurring in the rule.
The reason is that different variables used in a single rule are meant
to represent distinct concepts. Therefore, it would be an abuse of the
intended meaning of the rule to instantiate it so that two distinct concepts
were collapsed into one [Shapiro 1986]. If Jack, a single extensional object,
is thought of as filling both the agent and object roles in a rule,
then he is represented by two different intensions: one representing
Jack the agent, and one representing Jack the object.


2. The Goals of Using SNePS for Interactive Fiction

2.1 Interactive fiction as it could be

Here is a possible interactive fiction version of
_Of Mice and Men,_ mangled for my own didactic purposes:

>enter barn

The barn is quiet now. Lenny is standing near the doorway, crying.
"I didn't mean to do it, George," he sobs. The straw has been
thrown up all around him. Something like a large sack lies on the
floor behind Lenny, but you can't tell what just yet.

This is canned text, since this scene is central to the plot.

>Ask Lenny "What did you do?"

Lenny stops crying.
Lenny says, "George, nothing."

Assume that "What did you do?" is parsed so that Lenny comes up
with a proposition like (agent Lenny action kill object Sarah) as the true
answer. Lenny creates a simulated belief space of George, adds that
proposition to it, and sees that that knowledge may cause George to
hit him. This conflicts with his own self-preservation goals.
Lenny decides to return no proposition: "George, nothing."
The act of speaking has as a precondition that the speaker must not
be crying too hard to speak, so Lenny first stops crying.
This action is reported.

>go behind Lenny

Lenny stops you.

Assume that the spatial representation of the barn is such that
"go behind Lenny" makes sense. It may be best to allow each nearby
character to examine each request for action by the participant before
it is executed, in case that character wants to interfere. In this case,
Lenny simulates a world like the present one except in which George is
behind him. Lenny doesn't have to consciously realize that George will
then see the body; he simply runs the simulation: the proposition
(object Sarah property dead) will be added to the simulated George,
setting off a goal of finding a plausible cause. Assuming
(agent Lenny action kill object Sarah) can be hypothesized, this
hypothesis might cause the simulated George to hit Lenny, or shout at
Lenny. Lenny merely sees the resulting action in the simulated George,
and decides that he will prevent that.

Note that the participant (George) is simulated by pretending he is
a non-player character like Lenny.

>push Lenny

Lenny staggers back.
Behind Lenny you see dead Sarah.
Lenny says, "George, don't yell at me."

Lenny sees you see Sarah, and derives that you may yell at him.
He tries to prevent this by a request.

>hit Lenny

Lenny runs out of the barn.

Lenny is a reactive agent. When something hits him, he runs away.

2.2 Interactive fiction as it is

>enter barn

The barn is quiet now. Lenny is standing near the doorway, crying.
"I didn't mean to do it, George," he sobs. The straw has been
thrown up all around him. Something like a large sack lies on the
floor behind Lenny, but you can't tell what just yet.

>Ask Lenny "What did you do?"

Lenny isn't interested in talking.

Lenny can't talk.

>go behind Lenny

You are now behind Lenny.
You see Lenny and dead Sarah.

Lenny doesn't care what you do.

>hit Lenny

That wouldn't be nice.
You are in a barn.
You see Lenny and dead Sarah.

"That wouldn't be nice" means "I can't let you do that because it
would make me look silly." Lenny wouldn't react.
Nothing interesting happens in a world with static characters
unless you do it yourself.


3. SNePS and the Knowledge Representation Needs of Interactive Fiction

3.1 Interactive fiction programming methodologies

3.1.1 Object-oriented programming

The first adventure to use object-oriented programming was DUNGEON,
which collected special-case code around objects instead of in verb handlers.
Recently, many programmers, using systems like TADS,
have come to appreciate that object-oriented simulated worlds are easier to
maintain and debug, since items can be removed and added more easily.
Also, objects can send messages to each other.
Instead of needing a central control program to handle interactions
between objects, each object can have its own methods of dealing
with different signals. So, for example, if the signal is speech,
most objects ignore it, non-player characters (NPCs) respond to it,
and portals (e.g., doors) may pass a "muffled speech" signal
to the room they connect to, upon which the room passes the message
to every object in it.
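
A rough sketch of that message-passing style (illustrative Python, not
TADS or DUNGEON code; the objects and signal names are made up):

# Each object decides for itself how to handle a "speech" signal.

class Thing:
    def handle(self, signal, text):
        pass  # most objects ignore speech

class NPC(Thing):
    def __init__(self, name):
        self.name = name
    def handle(self, signal, text):
        if signal in ("speech", "muffled speech"):
            print(f"{self.name} hears {text!r} and turns toward the sound.")

class Room(Thing):
    def __init__(self, contents):
        self.contents = contents
    def handle(self, signal, text):
        for obj in self.contents:      # rebroadcast to everything in the room
            obj.handle(signal, text)

class Door(Thing):
    def __init__(self, connects_to):
        self.connects_to = connects_to
    def handle(self, signal, text):
        if signal == "speech":         # pass a muffled version through the portal
            self.connects_to.handle("muffled speech", text)

hall = Room([NPC("Lenny")])
door = Door(connects_to=hall)
barn = Room([Thing(), door])           # a rock (ignores it) and the door

barn.handle("speech", "What did you do?")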

3.1.2 Rule-based systems

There is an old argument in KR over whether procedural
or declarative knowledge is better [Winograd 1975].
In the eighties, the AI community in general moved toward declarative
representations [Brachman 1990]. I favor declarative knowledge for IF.

Non-player characters should be able to reason about the IF world and
formulate plans on the fly to accomplish their goals. This requires
understanding the world and being able to predict the results of their
actions. The code that specifies what effects an action will have must be
available to that character for introspection at the knowledge level
[Newell 1981], meaning the most abstract level of representation at which
precise results can be predicted, or roughly the level at which humans
symbolize problems. That is, though a character doesn't need access
to the compiler, she needs access to rules like "If X is in Y and drops Z,
then Z is in Y."

The source code of a procedural language is very difficult to reason with.
To use traditional AI reasoning techniques, which use logics,
the procedures which specify how to update the world in response to
actions should be _rules_ in a rule-based system like Prolog.

SNePS has all the advantages of rule-based systems, plus a procedural
attachment mechanism: predicates can be defined with Lisp code, not just
in terms of other rules. A rule can therefore call a Lisp function to
answer a question without the reasoner having to understand the Lisp code.
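
A rough sketch of the idea (illustrative Python, not SNePS's actual
mechanism; the predicates and facts are made up):

# Most predicates are answered by stored facts or rules, but "sum" is
# answered by calling ordinary host-language code that the reasoner
# never has to inspect.

facts = {("weight", "book"): 2, ("weight", "lamp"): 5}
attachments = {
    # predicate name -> procedure that computes the answer
    "sum": lambda a, b: a + b,
}

def ask(predicate, *args):
    if predicate in attachments:          # procedural attachment
        return attachments[predicate](*args)
    return facts.get((predicate, *args))  # ordinary fact lookup

print(ask("weight", "book"))              # 2, from the fact base
print(ask("sum", 2, 5))                   # 7, from attached code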

Rule-based systems can be written in an object-oriented manner. The rules
can be entered in groups organized around objects. Message handlers
can be expressed with rules. Very general inheritance hierarchies
are easy to make using inheritance rules.

3.2 Belief spaces

3.2.1 Viewpoint persona

I must digress to explain what the "viewpoint persona" is.
The participant interacts with the IF world through the point of view
of one character. Adventures usually address the player in second
person, as if the player were identified with that character.
A more powerful technique is to address the player
in first person, postulating a viewpoint "puppet" which
the player controls. This lets the player ask questions which the program
can answer ("Where is Belboz?"), lets the program offer advice ("Maybe
we should let the nice dragon sleep,") and lets the puppet respond to
commands with phrases such as "No way I'm going into a pool of molten lava!"
I refer to this puppet as the _viewpoint persona._

The use of a viewpoint persona is more straightforward than second-person
address. A viewpoint persona obviously requires a cognitive model of that
persona, whereas second-person address supposedly lets the participant be
his own cognitive model. That is not true. If the participant
is in a library with a door to a hallway and an undiscovered secret door
behind a bookcase, and types "Open door," you don't want the program to
ask, "Which door do you mean, the library door or the secret door?"
It must keep track of what the participant knows.
Second-person address is also odd in that the commands given by the
participant are also in second person. Typing "[You] eat the plant"
and being told "You don't like its taste" raises the eyebrows of
people unacclimated to this convention: who ate the plant, the viewpoint
persona or the participant?

3.2.2 Different beliefs

All interactive fiction architectures written to date have only one
representation of the world, which is interpreted as complete and
correct. All characters, including the viewpoint persona, refer to this
knowledge base (KB). This is inadequate.
Not all the characters in the IF world should have the same beliefs.
Some have knowledge that others don't. None, including the viewpoint
persona, should have complete knowledge. It is quite jarring, in
text adventures, to ask "Where is the green jar?" and be told
"Zane has it" if your persona hasn't yet met Zane.

The minimal architecture needed would have a KB representing the IF world,
and separate cognitive architectures for each character and for the
participant's persona. Each cognitive architecture would consist of
at least a KB mostly duplicating the world KB (hence an efficient
method of referring to, rather than duplicating, items in
the world KB or in other agents' minds is needed), plus any
special information that character has; and a planning/acting component
which moves perceived information from the world KB into its private KB,
and responds to changes in its private KB with purposive action. The
user interface would simply be an English parser which places goals on a
goal stack of the viewpoint persona. Having an agent architecture for
the viewpoint persona that can make and execute plans also lets the
participant give orders such as "Enter the kitchen" and have the persona
handle details such as unlocking and opening the door. Once we have
such a planning and acting viewpoint persona, we need it to have a
private KB, because we don't want the participant to type
"Save Marty from the flood" and have the persona
come up with the entire sequence of moves which the participant was
expected to discover.
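
A rough sketch of that architecture (illustrative Python, not SNePS; the
flat fact tuples and agent slots are my own invention):

# One world KB, plus a private KB per agent that refers to perceived
# world facts instead of copying them.

world_kb = {("in", "green jar", "Zane"), ("in", "Zane", "kitchen")}

class Agent:
    def __init__(self, name):
        self.name = name
        self.perceived = set()   # references into the world KB
        self.private = set()     # beliefs only this agent holds
        self.goals = []          # the parser pushes participant goals here

    def perceive(self, fact):
        if fact in world_kb:
            self.perceived.add(fact)

    def believes(self, fact):
        return fact in self.perceived or fact in self.private

persona = Agent("viewpoint persona")
persona.perceive(("in", "Zane", "kitchen"))

# The persona answers only from what it has perceived, so "Where is the
# green jar?" fails even though the world KB knows the answer.
print(persona.believes(("in", "green jar", "Zane")))   # False
print(persona.believes(("in", "Zane", "kitchen")))     # True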

In SNePS, this can be accomplished by giving the viewpoint persona
and all NPCs a separate belief space from the knowledge in the world model.
Every perceptual action of any agent transfers information found into
that agent's belief space. When the participant enters a room, he
implicitly performs a "look around." If it is dark, nothing is reported.
If he can see, every piece of information that is reported is also
entered into the participant's belief space, with a rule of the form

(assert forall $dest $agent $agent-context
&ant ((build member *dest class thing)
(build object *agent context *agent-context))
cq (build
act (build action (build lex "look") agent *agent dest *dest)
plan (build
action (build lex "snsequence")
object1 (build action (build lex "print")
object (build lex "You see "))
object2 (build action (build lex "withall")
vars $obj
suchthat (build object *obj in *dest)
do (build action (build lex "snsequence")
object1 (build action (build lex "print")
object *obj)
object2 (build action (build lex "believe")
object1 (build
object *obj
in *dest
*agent-context)))))))

where the final "(assert object *obj in *dest agent-context)" says that
the knowledge that <*obj> is in <*dest> is to be added to the agent's belief
space. I will use this same code for "look in the barrel," "look through
the window," etc., so all knowledge and only that knowledge reported to
the participant is available to the viewpoint persona. With the use of
object-oriented methods, this can even report false knowledge to the
viewpoint persona. For instance, if Greg is disguised as an old woman,
the "look" code would ask the object <Greg> to report its identity,
which would be given as "old woman."

3.2.3 Hypothetical reasoning

SNePS attaches to each proposition in its knowledge base a set of
hypotheses. If that set is empty, the proposition is always true.
If a character wonders what will happen if she performs an act,
a new context will be created which inherits all the knowledge of
the character, and adds the hypothesis that the character performed
that act. If she had complete knowledge of the world, reasoning in
that new space would produce the same results that performing the
action would in the IF world, but the IF world is not affected since
the derived results have an additional hypothesis in their hypothesis set.
If the character doesn't have complete and correct knowledge, or if the
world changes after her planning, she may reach a conclusion which will
prove false when she executes her plan.
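
A rough sketch of how hypothesis sets keep a hypothetical context from
leaking into the base world (illustrative Python, not SNePS internals):

# Every derived fact carries the set of hypotheses it depends on, so
# conclusions reached inside a "what if I do A?" context never become
# part of the base world.

beliefs = {}                                   # proposition -> hypothesis set
beliefs[("door", "locked")] = frozenset()      # believed unconditionally

def hypothesize(proposition):
    """Open a hypothetical context containing one extra hypothesis."""
    beliefs[proposition] = frozenset({proposition})
    return proposition

def derive(conclusion, *premises):
    """A conclusion inherits the union of its premises' hypothesis sets."""
    support = frozenset().union(*(beliefs[p] for p in premises))
    beliefs[conclusion] = support

hypothesize(("I", "unlock door"))
derive(("door", "unlocked"), ("door", "locked"), ("I", "unlock door"))

# The derived fact is only believed relative to the hypothesis:
print(beliefs[("door", "unlocked")])   # frozenset({('I', 'unlock door')})
print(beliefs[("door", "locked")])     # frozenset() -- the base world is untouched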

3.3 Beliefs and actions

3.3.1 Integrating acting and inference

NPCs need to be able to plan and act on plans to do anything interesting.
Decomposable plans, which consist of subplans, allow characters to have
reactive plans, meaning they can use different subplans depending on
the environment. This is necessary in order for them to overcome attempts
by the participant to foil their actions.

The participant's persona can also benefit from planning capability.
The planning system would take care of details like automatically picking up
a book you ask to read, or opening a box you want to look inside.

I favor a rule-based system so characters can have maximal knowledge
available to them for reasoning. For the same reason, plans and
beliefs should be integrated. Traditional planners use three different
representations -- one for the world model, one for operators or actions,
and one for plans. As a result, the system has to perform three types of
reasoning: reasoning about the world, about actions, and about plans.
The result is that the inference engines for reasoning about actions and
plans get shortchanged, and don't have the power of general inference.
In the _SN_ePS _A_cting and Inference _P_ackage (_SNAP_),
plans and actions, though distinct from beliefs and having different
denotations, are represented in the same language; hence, the same
inference rules can operate on them [Kumar and Shapiro 1991a].

So, using SNAP, you can not only execute plans,
you can reason about them, e.g., to decide which plan is best
or to guess when another character is executing a plan.
Since you can reason about plans, and plans are decomposable,
a character can take a collection of facts and derive a plan
which the IF designer never foresaw.

In most architectures, reasoning is performed by an inference engine
and acting is done by an acting executive.
The SNAP approach uses only one component. Belief is viewed
as an action, and other types of actions are supposed to be
executed in a similar manner to believing by an external acting
component [Kumar and Shapiro 1991a]. SNAP integrates beliefs
and action with the use of _transformers_. Belief-belief
transformers are ordinary inference rules. Belief-act transformers
under forward chaining have either the interpretation "once the agent
believes <B>, it intends to do <A>", e.g., "if the building is on fire,
exit," or "if the agent wants to know the truth value of <P>, perform <A>".
Under backward chaining, they specify preconditions: "If you want
to play soccer, you need to have a ball." The transformers
for forward and backward chaining are distinct. (You wouldn't want
your character to set fire to his house because he wanted to leave it.)
Action-belief transformers represent effects of actions and plan
decompositions. Action-action transformers represent sequences of actions.
Together, these transformers allow a character's beliefs to affect his
actions in ways that take into account all the rules governing the IF world,
and they let actions be governed by goals.
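
A rough sketch of the transformer idea (illustrative Python, not SNAP
itself; the rules and names are made up):

# Forward belief-act transformers: "once the agent believes B, it intends
# to do A". Backward transformers: preconditions, "to do A, the agent must
# believe B". Keeping the two directions distinct is what stops a character
# from setting fire to the house in order to leave it.

forward = {("building", "on fire"): "exit building"}
backward = {"play soccer": [("agent", "has ball")]}

class Character:
    def __init__(self):
        self.beliefs = set()
        self.intentions = []

    def believe(self, belief):
        self.beliefs.add(belief)
        act = forward.get(belief)               # forward chaining
        if act:
            self.intentions.append(act)

    def can_do(self, act):                      # backward chaining
        return all(b in self.beliefs for b in backward.get(act, []))

lenny = Character()
lenny.believe(("building", "on fire"))
print(lenny.intentions)             # ['exit building']
print(lenny.can_do("play soccer"))  # False -- precondition not believed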

In this way SNAP is like Meehan's TALESPIN.
Both have distinct representations for plans and beliefs, and a
means of executing plans, although SNAP uses a STRIPS planner and
TALESPIN does not [Kumar and Shapiro 1991b p. 129, Meehan 1980 p. 52-53].
Both have a class of inferences that connect beliefs to the acts
from which the beliefs result, inferences that connect acts to the
beliefs that enable them, inferences that connect acts to reasons why
those acts might be done, and so on [Meehan 1980 p. 42]. But TALESPIN
characters cannot reason about their plans, since rules of inference
do not operate on plan components.

The main motivation for using this planning system
in IF is so that the non-player characters can make and execute plans,
detect the player's plans, and form plans to assist or thwart the player.
What is needed is a more selective method of detecting plans that another
agent might be following (currently
SNAP returns all that are consistent with the actions taken -- which is
possibly infinite), and a way for an agent to consider what consequences
the success of another agent will have for his own goals and to choose to
help or hinder the other agent. It's not clear whether this can be done
with SNAP as it exists, merely by adding rules.

3.3.2 Belief revision

A truth-maintenance system (TMS) is needed to maintain consistency
in a rule-based world representation. Inferences will be drawn from facts;
when those facts change, those inferences must be withdrawn.
For example, I may have a rule that states that the effective weight of
an object is its intrinsic weight plus the weight of all objects on or in
it. I might add an effect to the action (put object *obj dest *dest)
which says to add the weight of <*obj> to that of <*dest>, and to subtract
the weight of <*obj> from that of its previous container. But that
side-effect approach wouldn't be consistent. If A were put in B, B were
put in C, and A were removed from B, A's weight would at first be added to
but later not subtracted from C's. A procedural world simulation would make
recursive calls to maintain consistency. To implement that recursion in
a rule-based approach, we need truth maintenance. C's weight depends on B's
weight. When B's weight changes, the assertion about C's weight will be
removed. When we once more want to know C's weight, it will be re-derived.
_SNeBR_, the _SNe_PS _B_elief _R_evision System, does this.
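
The bookkeeping involved can be sketched in a few lines (illustrative
Python, not SNeBR itself):

# Every derived fact remembers its supports; retracting a support
# retracts everything that depended on it, recursively.

supports = {}   # fact -> set of facts it was derived from

def assert_fact(fact, derived_from=()):
    supports[fact] = set(derived_from)

def retract(fact):
    supports.pop(fact, None)
    # anything supported by the retracted fact goes too
    for f in [f for f, deps in supports.items() if fact in deps]:
        retract(f)

assert_fact("weight(A)=2")
assert_fact("weight(B)=3+weight(A)", derived_from=["weight(A)=2"])
assert_fact("weight(C)=1+weight(B)", derived_from=["weight(B)=3+weight(A)"])

retract("weight(A)=2")       # A is taken out of B
print(supports)              # {} -- B's and C's weights must be re-derived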

This same action takes effect on higher levels.
Say your NPC Annie has decided to throw herself off the cliff based on her
belief that Jethro doesn't love her anymore, and Jethro comes and tells her
he loves her. Without a belief revision system, Annie will say "Good"
and throw herself off the cliff. Her plan to do so was originally
derived from a belief B, but removing B doesn't remove the inferences and
plans based on it. With a belief revision system, all the inferences and
plans based on B will also be removed when B is removed, so Annie won't
throw herself off the cliff.

Actually, this example won't work yet, because SNeBR
can't decide which belief to remove when two beliefs clash.
I hope to add a few heuristics to the belief revision
system so it can replace old information with new information.
At present it stops and asks the user which to believe.

3.4 Disadvantages of SNePS

I've been trying to implement an IF engine in SNePS 2.1.
I've come across several problems:

1. SNePS cannot easily implement a default logic.
Many times I want to say, "Do B unless C". For example,
the following rules say,
"If character <C> asks to move west, and <by(C,X)>, and <Y> is west of <X>,
retract <by(C,X)> and assert <by(C,Y)>. If that rule is not applicable,
but <in(C,L)> and <M> is west of <L>, retract <in(C,L)> and assert <in(C,M)>."

  1. west(C) & by(C,X) & westof(X,Y) -> not(by(C,X)) & by(C,Y)
  2. Default: west(C) & in(C,L) & westof(L,M) ->
              not(in(C,L)) & perform(in(C,M))

There is no mechanism for default logic in SNePS 2.1.
There is an older version of SNePS, SNePSwD, with default rules,
but it exploits bugs that have been corrected in SNePS 2.1,
so porting it to SNePS 2.1 would not be easy.

SNePS is _monotonic_: Any derived proposition will still be true
when other non-contradictory information is added to the knowledge base.
I can't say "If A then by default B, unless C." If <A> then <B>, period.
<C> doesn't have anything to say about it.

When we have a finite number of rules, we should be able to implement a
default logic if we _assume closure_: Assume that if we can not derive
<P>, <not(P)> holds. Then we can provide disjoint antecedents for each of
the rules. In the example above, the default rule would have the added
antecedent _not (exists (X Y) (by(C,X) & westof(X,Y)))_.
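
Under a closure assumption the two rules behave like negation as failure;
a rough sketch (illustrative Python, not SNePS; the facts are made up):

facts = {("in", "george", "barn"), ("westof", "barn", "yard"),
         ("by", "george", "table"), ("westof", "table", "window")}

def move_west(character):
    # specific rule: by(C,X) & westof(X,Y) -> character ends up by Y
    for fact in facts:
        if fact[0] == "by" and fact[1] == character:
            for other in facts:
                if other[0] == "westof" and other[1] == fact[2]:
                    return f"{character} is now by {other[2]}"
    # default rule, licensed only because the check above found nothing
    # (the closure assumption: "not found" is treated as "false")
    for fact in facts:
        if fact[0] == "in" and fact[1] == character:
            for other in facts:
                if other[0] == "westof" and other[1] == fact[2]:
                    return f"{character} moves from {fact[2]} to {other[2]}"
    return f"{character} cannot move west"

print(move_west("george"))   # the specific rule fires
# drop the ("by", "george", "table") fact and the default rule fires instead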

I can't ask if _exists (X Y) P(X,Y)_ because the existential
quantifier hasn't been implemented yet in SNePS, and probably won't be
for at least two years. The workaround is to use Skolem functions.
But they can only be used to derive the existence of an object,
not to derive the fact that no such object exists. So I still can't
ask if _not (exists (X Y) ...)_.

There is an alternative form of _not(exists(X) P(X))_ :
_forall (X) not(P(X))_. But I can't use such a rule in SNePS, because
SNePS doesn't assume closure. Just because it can't prove <P> doesn't
mean it will assume <not(P)>. So I generally can't effectively query
whether <not(P(X))> holds. You can prove <not(in(C,M))> if you know
<in(C,L)> and <in(C,X) -> not(in(C,Y))> (this latter rule is correct
due to UVBR). But you can't prove <not(by(X,Y))> because there is no
necessity for X to be <by> anything.

There is a need for a form of _autoepistemic logic_. An autoepistemic
logic assumes closure in an agent's mind for propositions about that agent,
since the agent should know things about himself. What we need is a logic
that would assume closure for propositions which the believing agent could
verify empirically. In this case, the character can verify that he is not
by any object by looking around himself.

Although SNePS can say
_forall (x) [P(x) -> Q]_, SNePS apparently can't say
_[forall (x) P(x)] -> Q_. So even if I could query whether
<not(by(C,X))> held, I couldn't express the correct meaning of the rule
_[forall (X) not(by(C,X))] -> P_.

I can't assert two separate rules for the two cases. There is no
guarantee what order rules will be executed in, and in the case above,
executing the wrong (default) rule (1) first destroys the preconditions
for the correct rule (2). (This is a side-effect of the "perform"
predicate, which invokes a procedure.)

I can't combine the two rules into one, because of the way variables
must be mentioned in a SNePS rule.
All variables in SNePS rules (that aren't Skolem functions) must be
universally quantified. You must place restrictions on them
in the rule antecedent, or else the variables will be matched to every
node in the database, which makes rule use very slow. (Also, the rule
is not guaranteed to work if the variable is not restricted.)
But this means that, in order for a rule to fire, bindings for all free
variables must be found. The two rules have different free variables.
If we combined them into one, it could not fire unless bindings for
_all_ the variables were found, even though they may be irrelevant.
I can work around this by saying everything
is a <thing>, and using <thing(X)> in the antecedent. However, this
makes inference very slow, since every <thing> will be matched to <X>.

There is a way to use a limited default logic. I can use SNAP to say,
"When you want to know if <not(P)> is true, do action <A>." Action <A> is
Lisp code which tries to derive <not(P)> assuming closure. This is similar
to the way Cyc implements default logic [Guha 1990]. But there are
limitations on when this can be used. First, at present a different action
must be written for each type of node <P>, since you have to know some arcs
coming into <P>. Second, the problems concerning parenthesizing
quantifications and implications still hold.

The tremendous difficulties that the combination of monotonic logic
and lack of closure create make me think that knowledge representation
researchers would not have wasted so many decades on such logics
if they had tried to use them for IF.

2. Like all logics, SNePS is poor at performing procedures on
collections. You can say "For all objects X in object Y perform Z."
But you can't say "Add the weight of all objects X in object Y."
Adding iteratively to a variable is impossible since a variable in
logic has one value, and is not free to change with every iteration
of a rule firing. You can't say "If W=N then retract W=N and
assert W=N+X" because retracting the antecedent of the rule will cause
belief revision to retract its consequent. You can write a procedural
attachment to compute the sum of the weights of all objects in object Y,
but if you then assert that the effective weight of Y is its intrinsic
weight plus that sum, that assertion depends on the effective
weights of all the objects in Y. Since you used a procedure to compute
that sum, all the assertions you used about the weights of various
objects will not be added to the support set of the derived proposition
about the effective weight of Y. This means that this latter proposition
will not be retracted if an object is removed from Y, or if the effective
weight of an object in Y changes. Sung-Hye Cho is working
on an extension of SNePS to handle reasoning about collections.
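
One conceivable workaround, sketched here in illustrative Python (this is
not an existing SNePS facility), is to have the attached procedure report
which weight facts it read, so the derived total could be given a proper
support set:

weights = {"barrel": 10, "apple": 1, "hammer": 4}
contents = {"barrel": ["apple", "hammer"]}

def effective_weight(obj):
    # returns the total weight and the list of facts the computation read
    used = [f"weight({obj})={weights[obj]}"]
    total = weights[obj]
    for item in contents.get(obj, []):
        sub_total, sub_used = effective_weight(item)
        total += sub_total
        used += sub_used
    return total, used

total, support = effective_weight("barrel")
print(total)    # 15
print(support)  # the facts a TMS would record as this conclusion's support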

3. Preconditions and effects are given for actions. Currently I
represent "John hit the ball with the bat" by
(act (action hit agent John object ball instr bat)).
This is because many actions either have preconditions concerning the agent,
or affect the agent, so we need to talk about the agent in the action.
But an action is something that can be done by many people,
and we want to be able to say Jean did the same act as John
if every slot except <agent> has the same filler.
So perhaps we should have (agent John act (action hit object ball instr bat))
and have a means of giving preconditions and effects for a different type
of action represented by that entire proposition.

4. John can't be bound to both agent and object due to UVBR.
This means that we need separate rules for the cases "John hit Sue
with the paddle" and "John hit himself with the paddle." It is significant
that our language often uses different words for reflexive actions; possibly
they do have different meanings to us. But writing multiple rules for every
action that can be performed with oneself as one of the non-agent
slot-fillers is inefficient in terms of memory and makes the knowledge base
hard to maintain. On the plus side, it automatically prevents actions such
as "Hit the hammer with the hammer."

5. SNeBR currently needs user intervention to resolve conflicts.
An NPC would not be able to retract old information if it conflicted
with new information. This would not be a problem if we used only
(infallible) deduction. But then a character could never make use
of any information you told her, because the truth value of a proposition
<P> cannot be deduced from the proposition _told(John, me, P)_.
SNePSwD can resolve conflicts without user intervention.

6. If an agent tries to perform an action, and reaches a precondition
that has no plan specifying how to satisfy it, SNAP goes into an endless
loop. The problem is that there will always be such a final precondition
unless the action ultimately succeeds. This is not likely to be fixed soon
since Deepak Kumar, the author of SNAP, claims it is a feature.
However, I think I have convinced SNeRG that you cannot restrict acting
to situations where you are guaranteed to succeed, so it will probably
eventually be fixed.

7. It is hard to program in logic. A 3-line procedure can become
a 30-line rule.

8. When SNePS crashes, it sometimes crashes at the LISP level
instead of at the SNePS level. Debugging is then impossible unless
you know how SNePS works, which you don't want to know.

9. SNePS requires Common Lisp. Public-domain versions (such as
Kyoto Common Lisp) are available, but any version
probably needs 10M of RAM. A Scott-Adams-sized IF world would need perhaps
an additional 10M (based on extrapolation from what I've implemented).
I anticipate that within 3 years it will be common for home systems
to have this much memory.

10. With the very simple rulebase I have created, and no NPCs,
the prompt-to-prompt time (between when you enter a simple command and
when you get the next prompt) is about ten minutes at first, though
it goes down to around three minutes after common inferences have been
made once. (SNePS remembers the results of past inferences,
even when they result in rules rather than definite propositions
[Choi and Shapiro 1991].) With a minimal world simulation
that dealt with sizes, capacities, stacks, and weights,
this time might increase by a factor of less than ten.
A fifteen-minute wait (longer if you don't have a 10 MIPS Sparcstation)
is not acceptable. Also, if a rule is coded inefficiently, its firing time
increases exponentially with the number of objects in the database.

On the plus side, SNePS parallelizes naturally, since
variable bindings for rules are meant to be tested in parallel.
A near-future desktop computer with 32 processors each equivalent
to a DEC Alpha might run it comfortably.


4. Conclusion

There are many reasons why SNePS in its current incarnation
is not ready to be used as an IF engine.
But I believe that systems like SNePS will supersede traditional programming
languages in the near future for interactive fiction.
All of the knowledge-representation problems in SNePS that I have listed
(except for logic being difficult to program in)
are slated to be corrected within the next 3 or 4 years,
so perhaps someday soon we will see SNePS IF.


References

Ronald J. Brachman (1990). "The future of knowledge representation,"
_Proceedings Eighth National Conference on Artificial Intelligence_
(AAAI-90), Cambridge, Mass.: MIT Press, p. 1082-1092.

Joongmin Choi and Stuart C. Shapiro (1991). "Experience-based
deductive learning," _Proc. of the 1991 IEEE Int. Conf. on Tools
for AI,_ San Jose, CA.

Ramanathan V. Guha (1990). "The representation of defaults in Cyc,"
_Proceedings Eighth National Conference on Artificial Intelligence_
(AAAI-90), Cambridge, Mass.: MIT Press, p. 608-614.

Deepak Kumar and Stuart C. Shapiro (1991a). "Architecture of an intelligent
agent in SNePS," _SIGART Bulletin,_ vol. 2 no. 4, Aug. 1991, p. 89-92.

Deepak Kumar and Stuart C. Shapiro (1991b). "Modeling a rational cognitive
agent in SNePS," _Proceedings of EPIA 91, the 5th Portuguese
Conference on Artificial Intelligence,_ eds. P. Barahona and
L. Moniz Pereira, Heidelberg: Springer-Verlag, p. 120-134.

Anthony S. Maida and Stuart C. Shapiro (1982). "Intensional concepts in
propositional semantic networks," _Cognitive Science,_ vol. 6,
Oct.-Dec. 1982, p. 291-330. Reprinted in _Readings in Knowledge
Representation,_ eds. R.J. Brachman and H.J. Levesque, Los Altos, CA:
Morgan Kaufmann, 1985, p. 169-189.

James R. Meehan (1980). _The Metanovel: Writing Stories by Computer._
NY: Garland.

Allen Newell (1981). "The knowledge level," _AI Magazine,_ vol. 2 no. 2,
p. 1-20.

Stuart C. Shapiro (1986). "Symmetric relations, intensional individuals,
and variable binding," _Proceedings of the IEEE,_ vol. 74 no. 10,
p. 1354-1363, Oct. 1986.

Terry Winograd (1975). "Frame representations and the declarative/procedural
controversy," _Representation and Understanding: Studies in Cognitive
Science,_ eds. D.G. Bobrow and A.M. Collins, NY: Academic Press, 1975,
p. 185-210. Reprinted in _Readings in Knowledge
Representation,_ eds. R.J. Brachman and H.J. Levesque, Los Altos, CA:
Morgan Kaufmann, 1985, p. 357-370.

Jason Noble

Jan 31, 1994, 6:18:35 PM

Just a brief question: why does no-one follow up on, or make comments on,
the excellent postings of Phil Goetz? Is it just because he reveals many
shortcomings of the sort of games we're probably all writing? Just because
the implementation of the type of game Phil describes would be difficult
(perhaps too difficult for the unpaid IF author to bother with) doesn't mean
we can't (a) take a few things from his ideas for implementation into our
work, and (b) comment on Phil's work.

For those of you who are unfamiliar with Phil's work (go...@cs.buffalo.edu),
back up a few articles to `Reasoning agents' and have a read.

Thank you.

---------------------------------------------------------------------------
Jason Noble |
National Centre for HIV Social Research | jno...@bunyip.bhs.mq.edu.au
Macquarie University, Sydney, Australia |
---------------------------------------------------------------------------

Phil Goetz

Feb 6, 1994, 4:37:36 PM
In article <whittenC...@netcom.com> whi...@netcom.com (David Whitten) writes:

>On a related note, I don't see why the traditional implementation of
>default reasoning doesn't work, ie:
>for each rule, have a rule number. have a predicate named ab, which gets
>the rule number as the first argument, and all the other free variables
>as the rest of the arguments.
>
>Instead of coding rule R1 : ANTECEDENT => CONCLUSION,
>you code : ANTECEDENT && NOT(AB(1,...)) => CONCLUSION.
>
>Any set of arguments that should not get the default rule behaviour would
>have the AB predicate defined, and thus regardless of the order of the
>evaluation of the rules, the default conclusions wouldn't be derived.

The main problem is that you have to assume closure in order to prove
NOT(AB(1,...)). That is, assume that if you can't prove AB(...), that
NOT(AB(...)) is true. It's called "closure" because it assumes that
your reasoner is dealing with a "closed world": everything which is
true in the world is known to the reasoner, or would be derivable
(given an infinite amount of time).
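
A rough sketch of the AB trick under a closed-world assumption
(illustrative Python; the bird example is the standard one from the
default-reasoning literature, not anything in SNePS):

# NOT(ab(1, x)) is taken to hold whenever ab(1, x) cannot be proved,
# which is only sound if the knowledge base is closed.

facts = {("bird", "tweety"), ("bird", "chilly"),
         ("ab", 1, "chilly")}      # chilly is abnormal: a penguin

def can_prove(fact):
    return fact in facts           # closure: unprovable means false

def flies(x):
    # rule 1: bird(x) && not(ab(1, x)) => flies(x)
    return can_prove(("bird", x)) and not can_prove(("ab", 1, x))

print(flies("tweety"))   # True, by default
print(flies("chilly"))   # False, the default is blocked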

If you assume closure when this
isn't true, your logic will be inconsistent. If your logic isn't set
up to deal with inconsistencies, the program will crash or behave in
a truly bizarre way (it will think everything is both true and false
at the same time). You can use a wimpier logic, called a "relevance
logic", to avoid catastrophic failure. They are strange creatures,
and anyway they're intended (I think) for situations where you _don't_
have closure.

Since the world is simulated, it really is a closed world, at least
physically. (There are things that you know that the program doesn't
know you know, so it can't reason about your thought and assume closure.)
But there are still problems.

One is that you can't model the thought-processes of agents if you
assume closure, unless you're willing to let them know everything.
Hence you can't have ignorant agents; they must know everything.
If you hide anything from them, the system may crash.

Another has to do with that phrase "given an infinite amount of time."
Closure can only be assumed when you first make every possible inference
that can be made. This is impractical, since everything the player
does can set off a VERY LONG string of inferences about what he might be
planning to do. ("Leave building." The player might be planning to
travel to Rome, or to go to a party, or to blow up the World Trade Center...)
Also, if you implement numbers with the use of a predicate numberp(n),
which returns true if n is a number (e.g. for the rule
NUMBERP(a) & NUMBERP(b) => WhenDo(sum(a,b), lisp::+(a,b)) ),
you can make an infinite number of inferences: 1 is a number, 2 is a
number, ... 234324 is a number,... Then you conclude that 1+1=2, 1+2=3,
... 3+3=6, 3+4=7,... What you want is a lazy evaluation
which doesn't try to deduce the sum of two numbers unless it's asked to.

Finally, you're not allowed to do anything in a closed world.
If your character does something, that's adding new knowledge to the
database. Then all the previous inferences may become invalid.
You can guard against this by timestamping every fact in a way
such that inconsistencies can't be introduced.

>I guess if you have a pre-processor anyway, the problem of not
>being able to reason from a FORALL or a THERE-EXISTS wouldn't be
>a problem either since a FORALL is equivalent to an Infinite AND
>and a THERE-EXISTS is equivalent to an Infinite OR,(IE: apply the
>logical operator to all entries of a set, but a preprocessor could
>just replace the FORALL or THERE-EXISTS by explicit references to
>every element in the set, with either ANDs or ORs linking the predicates
>together.
>
>ie: FORALL x: PERSON => HUMAN(X) when the only PERSONS are bob and sally
>could be preprocessed into
>
>PERSON(X)
>&& (X !=bob || (X==bob && PERSON(bob)))
>&& ( X != sally || (X==sally && PERSON(sally))) =>
>HUMAN(X)
>
><I think...>

Essentially that's how FORALL works, at least in SNePS.
It doesn't precompile; it just tries every node in the network
out as X. If you try to precompile that rule, how could it
deal with dynamic object creation? Those objects won't be mentioned
in the rules.

The problem I'm having with forall is that the forall statement itself
is a strange sort of proposition: I can assert that FORALL(X)
PERSON(X)=>HUMAN(X), but if I ask whether that forall statement is
true, SNePS doesn't know. That's because if it asserted that the statement
were true, and then I added Mr. Spock to the database, the statement
would become false. That can't happen in a monotonic logic.
Only questions like "Does P(X) => H(X) at time T?" can be answered.
That would mean that if you formed a plan based on the answer to that
question, you would have to recheck its validity every moment.

I think it should be obvious by now that, to simulate the reasoning
of a person in a complex world, the traditional closed-world
monotonic deductive logics are useless. In fact, ANY deductive
logic is insufficient, since we make plausible inferences,
not provable ones. But that's another story...

The bigger problem related to FORALL is trying to set
a variable to a function of all members of a set,
something like "Let X be the number of people in the boat."
I'm not aware of any way to do that in first-order predicate logic.
FORALL(X) IN(BOAT,X) => EQUALS(COUNTER, +(COUNTER,1))
doesn't work because that EQUALS statement is logically wrong.
"Let A=A+1" is strictly for imperative languages, not logical ones.

Thanks for responding! Now we're having fun!

Phil go...@cs.buffalo.edu

Joanne Omang

Feb 8, 1994, 6:26:44 PM
In fear and trepidation, I ask whether it's possible for a non-programmer
who is interested in interactive fictions ever to learn to write one. That
is, do authoring programs exist that have already worked out the kind of
logical problems you are discussing, perhaps in a kind of template that
someone such as myself could use, or does one have to start from scratch? I
gather you are all writing games and that these logical statements are your
tools. I refer back to a posting from Jorn a few weeks ago about the
possibility of including literary merit in these fictions, and wonder if
this is relevant here. I am a novelist with very modest computer skills but
a great deal of interest in what is clearly a new form, and I go along with
Henry James' view that art is the search for form. Is your form also in
search of art?
Joanne Omang

David Whitten

Feb 9, 1994, 11:02:26 AM
Joanne Omang (oma...@delphi.com) wrote:
: In fear and trepidation, I ask whether it's possible for a non-programmer
:

My form is in search of art, however I freely admit that the art I pursue
is the elegance of a well-written program and the beauty of a wide-coverage
ontology. Some of my goals are to determine the best way of laying out a
template so that people who are literately creative have the tools to create
interesting IF without worrying about the kind of low level interactions
we are discussing here.

Joanne, you seem to be trying to actually write stories rather than games.
In your opinion, if you had a list of 'one-dimensional' characters that
you could use to build up your story character from, would that make your
writing task easier or would you want to create your story characters
totally from prose?

What I'm envisioning is a list of base characteristics that can be used
to build up a believable non player character (NPC). Each characteristic
would make the NPC act in certain predictable ways, and draw conclusions
based on being that kind of 'person'. For example: if the character is
a DITCH-DIGGER, you wouldn't expect him to reason using long chains of
deduction, whereas you would expect a PROFESSOR to do so.

(Note: I'm using the vocation of a person here as a way to describe mindset,
not saying a DITCH-DIGGER would, by default, even 'know' what a shovel is.)


Food for thought...

David (whi...@netcom.com) (214) 437-5255

David Whitten

Feb 11, 1994, 4:48:17 PM

Thinking about this last night, I was wondering if focusing on
reasoning agents makes more sense or would focusing on reacting agents
make more sense.

IE: a reasoning agent would try to model the thought processes,
a reacting agent could just be a set of reaction-stimuli pairs.

If the goal is to make a suspension of belief actor, maybe a focus
on bundling together reactions and stimuli would make a more believable
NPC with less work and a more composable approach to making new NPCs.

Hey if the Behaviourists believed it for years, we might be able to
get a person to believe it while they play a game!

dave (whi...@netcom.com) (214) 437-5255

Neil K. Guy

Feb 12, 1994, 6:45:30 AM
whi...@netcom.com (David Whitten) writes:

>Thinking about this last night, I was wondering if focusing on
>reasoning agents makes more sense or would focusing on reacting agents
>make more sense.

>If the goal is to make a suspension of belief actor, maybe a focus


>on bundling together reactions and stimuli would make a more believable
>NPC with less work and a more composible approach to making new NPC's

I think that makes sense to me. Realistically I don't think any
software today can reasonably simulate the incredibly complex
behaviours exhibited by people. In fact, for various reasons, I rather
hope it won't, but that's another story.

Anyway, in my game in progress I'm working on simulating a little
dog, for the simple reason that dogs don't speak. My dog will do
things in response to the player's action, and does a number of things
on its own, but since it's not called upon to simulate dialogue it's
moderately more convincing than the cardboard "Mary doesn't seem to
want to talk about that" kind of pseudo-human text adventure actor.

I coded up some simple routines so that the dog has different
internal states (fearful, angry, bored, etc.) depending on various
external conditions with calls to the random number generator to make
things a little more interesting and less predictable. So the dog
starts out afraid of the player and always runs away, but eventually
gets used to her. The dog can also get bored and wanders off and
explores on its own, picking up anything small that's on the ground
and dropping it elsewhere, etc. (I think I've mentioned this in a post
to r.a.i-f before)
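
A rough sketch of that kind of register-driven dog (illustrative Python,
not the actual game code; the states and thresholds are made up):

import random

class Dog:
    def __init__(self):
        self.fear = 5        # starts out afraid of the player
        self.boredom = 0

    def each_turn(self, player_nearby, small_items_here):
        if player_nearby and self.fear > 3:
            self.fear -= 1                    # slowly gets used to her
            return "The dog runs away."
        self.boredom += random.randint(0, 2)  # randomness keeps it less predictable
        if self.boredom > 4 and small_items_here:
            self.boredom = 0
            item = small_items_here.pop()
            return f"The dog picks up the {item} and wanders off with it."
        return random.choice(["The dog scratches its ear.",
                              "The dog sniffs around."])

dog = Dog()
items = ["stick"]
for _ in range(8):
    print(dog.each_turn(player_nearby=True, small_items_here=items))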

I guess in this respect it follows a fairly traditional approach to
behaviour - the overly simplistic (IMHO) notion that organisms are
just bundles of behaviour wired to internal registers. I don't think
complex animals like dogs in real life really work this way (and I've
no idea how they do) but this is a modestly believable little
simulation. Works for me, anyway! :) Now I just have to plug in enough
random idle remarks ("The dog scratches its ear.") so that it doesn't
repeat itself too often and bore the player...

- Neil K.

Stephen R. Granade

Feb 12, 1994, 4:01:45 PM

In a previous article, whi...@netcom.com (David Whitten) says:

>
>Thinking about this last night, I was wondering if focusing on
>reasoning agents makes more sense or would focusing on reacting agents
>make more sense.
>
>IE: a reasoning agent would try to model the thought processess,
>a reacting agent could just be a set of reaction-stimuli pairs

I've been thinking along the same lines myself, as a sort of middle
ground between agents who follow a set script and agents who reason.

Eventually (when I get the time :) I would like to try to implement
something like that using TADS. The actors could respond to certain
words/items, etc. It seems like a useful compromise between the
two extremes, and might be readily programmed, considering how in IF
we artificially restrict the player's movements already.

Stephen
--
_________________________________________________________________________
| Stephen Granade | "My research proposal involves reconstructing |
| | the Trinity test using tweezers and |
| sgra...@obu.arknet.edu | assistants with very good eyesight." |

Eric Hoffman

Feb 12, 1994, 4:43:57 PM

>
>>IE: a reasoning agent would try to model the thought processess,
>>a reacting agent could just be a set of reaction-stimuli pairs
>
>I've been thinking along the same lines myself, as a sort of middle
>ground between agents who follow a set script and agents who reason.

actually, there is a rich body of literature on this topic in fields
such as robotics, motion control for computer graphics, artificial
intelligence and control theory.

a 'reactive' model doesn't need to operate at such a high level,
kicking off complicated scripts because of some occurrence, but can
generate surprisingly complicated behaviours as the synthesis of a set
of smaller reactive behaviours. These behaviours would be phrased in
terms of 'follow the player', 'avoid fire', 'grab things that look
like food', etc.

Some framework based on goals and priorities needs to be built
to stop conflicts between behaviours, prevent cycles, and to allow
higher level goals to inhibit irrelevant actions.

The nice thing about this approach is that instead of building
a huge monolithic behavioural model (or reasoning system),
small, easy to understand behaviours can be constructed and
layered to produce the desired effect. I know that at least
some of the 'Oz' work uses this approach.
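
a rough sketch of the layering idea (illustrative Python; the behaviours
and priorities are made up):

# Each behaviour either proposes an action or defers; the first one that
# fires wins, so "avoid fire" can inhibit "follow the player".

def avoid_fire(world):
    if world.get("fire_nearby"):
        return "run from the fire"

def grab_food(world):
    if world.get("food_here"):
        return "grab the food"

def follow_player(world):
    if world.get("player_visible"):
        return "follow the player"

behaviours = [avoid_fire, grab_food, follow_player]   # highest priority first

def act(world):
    for behaviour in behaviours:
        action = behaviour(world)
        if action:
            return action
    return "wander"

print(act({"player_visible": True}))                       # follow the player
print(act({"player_visible": True, "fire_nearby": True}))  # run from the fire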

Joanne Omang

Feb 12, 1994, 4:48:27 PM
David Whitten (whi...@netcom.com) writes:
>Some of my goals are to determine the best way of laying out a
>template so that people who are literately creative have the tools to create
>interesting IF without worrying about the kind of low level interactions
>we are discussing here...

>if you had a list of 'one-dimensional' characters that
>you could use to build up your story character from, would that make your
>writing task easier or would you want to create your story characters
>totally from prose ?
>What I'm envisioning is a list of base characteristics that can be used
>to build up a believable non player character (NPC). Each characteristic
>would make the NPC act in certain predicatable ways, and draw conclusions
>based on being that kind of 'person'.

Yes!! That's exactly the sort of question I need in order to begin to
understand IF. As a writer I of course want my characters as complex as real
people, with many motivational characteristics and contradictory drives.
However, even the most complex characters have primary, secondary, tertiary
levels of motive. You say "base characteristics that can be used to build up
A (emphasis added) believable nonplayer..." Yes, exactly, a character with
more than one trait. A "loading" for trait dominance I assume is the next
step, with action to be taken on the basis of C if B or A aren't factors,
and/or if external circumstances are D, E, F...it appears to get out of hand
in size almost immediately, from my dim perspective. No?
You know, I can envision virtual reality games in which a user gets to
pick five or six characteristics he or she will have, five or six the
opponent will have, the scene, the weather, the era, etc...and
eventually everybody gets to be a writer in cyberspace...
Your response is exactly, exactly the kind of thing I was hoping for, an
interest in putting together the technology to enable me and other writers
to use this new form somehow. I want very much to put together stories about
people with multiple values who make realistic decisions based on valid
aspects of their personalities and histories, which then interact with
similar people's decisions in ways I as the writer won't be able to predict
in advance until I he and then can move...it's very absorbing to
think about. Family sagas, classmates, neighbors in a stricken town, or just
a group of friends growing older... Think "The Group" in hyperfiction.
Where do we start? Can I draft something you might use to begin with, or
can you give me something to play around with first? Let's go.
Joanne Omang


Jorn Barger

Feb 12, 1994, 5:31:35 PM
Stephen R. Granade <bz...@cleveland.Freenet.Edu> wrote:
>In a previous article, whi...@netcom.com (David Whitten) says:
>>IE: a reasoning agent would try to model the thought processess,
>>a reacting agent could just be a set of reaction-stimuli pairs
>
>I've been thinking along the same lines myself, as a sort of middle
>ground between agents who follow a set script and agents who reason.

Speech-generation 'toys' like Racter are capable of some surprisingly
intelligent-appearing reactions, using incredibly simple S/R formulae.

I was shocked, talking to Racter, when I wrote "I'm bored" and it replied,
"Would you like to quit now? (y/n)" ;^/

jorn (who'd love to see some collective effort towards implementing a
Racter toolkit for TADS)
---
Here's my RacterFAQ, in case I haven't posted it lately...

[This was posted to the comp.ai.* hierarchy in June 1993, and reprinted
in the August 1993 issue of The Journal of Computer Game Design under
the title: "'The Policeman's Beard' Was Largely Prefab!"]

RACTER FAQ by Jorn Barger
-Intro
-Ordering info
-Related software
-Sample-output analysis
-Inrac compiler code sample
-Basic Inrac commands

Intro

In 1984, William Chamberlain published a book called "The Policeman's Beard
is Half Constructed" (Warner Books, NY. 0-446-38051-2, paper $9.95).
The introduction claims: "With the exception of this introduction, the
writing in this book was all done by computer."

The authorship is attributed to RACTER, "written in compiled BASIC on a Z80
with 64k of RAM." Racter (the program) was co-authored by Chamberlain and
Thomas Etter.

The introduction goes on to claim that Racter strings together words
according to "syntax directives", and that the illusion of coherence is
increased by repeated re-use of text variables.

Only the most generous interpretation of these claims will hold up under
close scrutiny. None of the long pieces in the book could have been
produced except by using elaborate boilerplate templates that are *not*
included in the commercially available release of Racter. Nor does the
Inrac language include any sort of 'syntax directive' powerful enough to
string words together into a form like the published stories.

So Racter never *adds* any coherence to the templates-- it's text-template
'degeneration' more than text generation. And this truth is further
disguised by using templates that are themselves 'wacky', leading one to
attribute to Racter a style that's really Chamberlain's.

Still, it's a fine piece of work! The Macintosh version (at least)
includes speech synthesis, and is lots of fun. And the Inrac compiler, for
inputting your own templates, is quite an elaborate achievement that
deserves considerable honor for pioneering this genre, and will surely
*someday* inspire better successors.

------------------------------------------------------------------------
Ordering info

RACTER is still available in MS-DOS and Macintosh versions for about $50,
from:

John D Owens
INRAC Corp./ Nickers International Ltd.
12 Schubert Street, Staten Island NY 10305
718-448-6283 or (fax) 718-448-6298

The Inrac compiler is $200 (MS-DOS only), and the Inrac manual alone is
$25.

Be warned that the Mac version is an orphaned, *single-sided* 400k disk
that still bears the 1985-era Mindscape label, and is copy-protected
(meaning it must always be booted-from), with an *ancient* version of
System, that works fine on SEs or 512s, but not on my SE/30 at all. (I will
do more tests. Owens says nobody's complained, but I have to wonder.)
Owens says there's little hope of a Mac update because those source files
were lost. I expect when the Mindscape edition runs out, there'll be some
way to copy it exactly, but it will likely still be 400k, etc.

Also, the print driver on the Mac version didn't do linefeeds right with
the Imagewriter 2 I tested it on, so there may be no easy way to save your
output.

But the voice-synthesis (Mac only? I haven't checked this) is splendid, and
this is still great fun, despite all the caveats.

The Inrac compiler comes with a disastrous antique text editor called "E".
The manual is about 100 unindexed pages in a 3-ring binder, but reasonably
well-written, especially by 1985 standards. Anyone interested in AI
history should grab a copy.

There's plenty of untapped capability in Inrac, too, I think, if you're
patient enough to master the messy syntax. (see samples below)

Owens claims (phone conversation, June 25 1993) that the program he's
selling as Racter is capable of generating all the stories in "Policeman's
Beard". Examination of the included text files indicates very strongly
that this cannot be true. Using Inrac with the right 'boilerplate' text
certainly *could* do so, but Chamberlain's claims about how little
boilerplate was involved still appear highly exaggerated.

-------------------------------------------------------------------------
Related software

There's a mail-order software retailer called "Mindware" whose address
is: Mindware, 1803 Mission St, Ste 414, Santa Cruz, CA 95060-5292,
phone numbers 800-447-0477 or 408-427-9455 (fax: 408-429-5302).

They sell a range of 'human potential' software, entirely for MS-DOS and
Windows (no Mac), including Racter, but also the winners of the 1991
and 1992 Loebner Prizes for AI, a 'Turing test' hosted I believe by
the Boston Computer Museum.

The 1991 winner was PC Therapist III, which "learns everything you
say to it." It sells for $54.95, and is available with "a cute
talking head and voice synthesis" for $5 more, as PC Therapist IV.
"PC Therapist IV is an excellent learning aid. Type in 50 or 100
sentences about a topic and the software becomes a suitable teacher on
that topic." (Yeah, right... ;^)

The 1992 winner was PC Therapist by Joseph Weintraub, which "draws on
a knowledgebase of 6000 sentences... that specifically deals with
[human] *relationships*." This one sells for $89.95 and includes
the "original" Eliza, for comparison.

-------------------------------------------------------------------------
Actual Racter output

Babbitt, along with other enthusiasts, married a runner, and consequently
L. Ron Hubbard married Schubert, the confused feeler, himself who was
divorcing L. Ron Hubbard's Tasmanian devil. Then elegance prevailed. Poor
Babbitt! But that's how enthusiasts are. I wonder if muddleheads like
strength?

Policeman's Beard 'output'

At all events my own essays and dissertations about love and its endless
pain and perpetual pleasure will be known and understood by all of you who
read this and talk or sing or chant about it to your worried friends or
nervous enemies. Love is the question and the subject of this essay. We
will commence with a question: does steak love lettuce? This quesion is
implacably hard and inevitably difficult to answer. Here is a question:
does an electron love a proton, or does it love a neutron? Here is a
question: does a man love a woman or, to be specific and to be precise,
does Bill love Diane? The interesting and critical response to this
question is: no! He is obsessed and infatuated with her. He is loony and
crazy about her. That is not the love of steak and lettuce, of electron
and proton and neutron. This dissertation will show that the love of a man
and a woman is not the love of steak and lettuce. Love is interesting to
me and fascinating to you but it is painful to Bill and Diane. That is
love!

Actual underlying template, reconstructed

Key:
<text variables>
(redundancies explicitly added by Chamberlain, by repeating a text-
variable type, apparently for camouflage)

<Intro phrase> my own (essays) about love and its (endless) pain and
pleasure will be (understood) by all of you who read this and (talk) about
it to your (<worried> <friends>). Love is the (subject) of this
<essay>. We will <begin> with a question: does <meat> love <vegetable>?
This quesion is (<implacably> <hard>) to answer. (Here is a question:
does a man love a woman or, (to be specific), does <man> love <woman>?)
The (interesting) response to this question is: no! (He is (infatuated)
with her.) That is not the love (of <meat> and <vegetable>). This
<essay> will show that the love of a man and a woman is not the love of
<meat> and <vegetable>. Love is (interesting) to me and you but it is
painful to <man> and <woman>. That is love!
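
Purely to make the 'degeneration' point concrete, here is a toy sketch (in
Python, with invented word lists and variable names; nothing like Inrac's
actual machinery) of how filling such a template works:

import random

# Hypothetical word lists standing in for Racter's tagged vocabulary files.
WORDS = {
    "meat": ["steak", "bacon", "mutton"],
    "vegetable": ["lettuce", "celery", "spinach"],
    "man": ["Bill", "Henry"],
    "woman": ["Diane", "Wendy"],
    "essay": ["essay", "dissertation"],
}

TEMPLATE = ("Love is the subject of this {essay}. Does {meat} love "
            "{vegetable}? Does {man} love {woman}? This {essay} will show "
            "that the love of {man} and {woman} is not the love of {meat} "
            "and {vegetable}.")

def fill(template, words):
    # Each text variable is chosen once and re-used; that repetition is
    # where most of the apparent coherence comes from.
    chosen = {key: random.choice(options) for key, options in words.items()}
    return template.format(**chosen)

print(fill(TEMPLATE, WORDS))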

-------------------------------------------------------------------------
Inrac compiler code sample

Here's a code sample from the compiler book, followed by a detailed trace
of what it should do. It *doesn't* illustrate all of Inrac's functions. (I
don't have the compiler, just the manual, so in places I've had to guess
its output):

*story code section
a %PEOPLE #
b >HERO*person[&P] >VILLAIN*person[&N] #
c $VILLAIN #RND3 bit robbed hit $HERO , #
d but $HERO just #RND3 smiled laughed shrugged . #
' new:
e $VILLAIN snarled >X=Saint,HERO "> $X , I presume <*. #
f "That's a !Y=VILLAIN;esque remark" replied $HERO .
g >X*person !Y=X,hoo

=== *story code section =================================================

This is just a header.

=== a %PEOPLE # =========================================================

This temporarily loads the "PEOPLE" file, a long list of names tagged for
gender, goodness or badness, and field-of-relevance (politics, literature,
etc.)

These tags are implemented as short strings like "MPn2", called
'identifiers', where the first letter is always reserved for (eg) gender,
the second for goodness, etc.

=== b >HERO*person[&P] >VILLAIN*person[&N] # ============================

"s create">
The ">"s create two variables (called "cells") named HERO and VILLAIN, and
initialize them with a randomly chosen good person ("P" for positive) and
bad person ("N"). (It's also easy to request two good people, with
assurances that the second will be different from the first.)

"[&P]" is an identifier-patternmatch command. The "&" is a wildcard,
needed because the property of goodness/ badness is assumed as assigned to
the *second* letter.

Inrac supports a maximum of 128 variables.

=== c $VILLAIN #RND3 bit robbed hit $HERO , # ==========================

Joseph Stalin robbed Mother Theresa,

The witty PEOPLE file helps a lot.

#RND3 means choose one of the next three words, randomly.

=== d but $HERO just #RND3 smiled laughed shrugged . # ==================

but Mother Theresa just laughed.

=== ' new: ==============================================================

??? I haven't found the explanation of this yet.

=== e $VILLAIN snarled >X=Saint,HERO "> $X , I presume <". # ============

Joseph Stalin snarled, "Saint Mother Theresa, I presume".

" assigns a n">
">" assigns a new variable X, and $X prints it. This is demo code; it
could be done more simply.

=== f "That's a !Y=VILLAIN;esque remark" replied $HERO . ===============

"That's a Joseph Stalinesque remark" replied Mother Theresa.

The "!" is for inline assignment *and* printing. Y now equals "Joseph
Stalinesque"

=== g >X*person !Y=X,hoo ===============================================

X is reassigned to some new person. I *think* this will print
Jane Fonda hoo
(Don't ask me why!)


--------------------------------------------------------------------------
Basic Inrac commands

#RND2
#RND[any integer]
#RND [no integer indicates entire rest of sentence]

From the indicated set of words, choose one randomly.

# goto next line (normal continue)
* insert indicated line(s) (or randomly chosen line from range)
#* goto indicated line (or randomly chosen line from range)
## repeat same line (rarely used, because of infinite-loop danger)

?? get input from user
?pattern search inputline for pattern
/iffound \ifnotfound if-then-else syntax
?pattern+ search forward from current point
?pattern- search backward from point
?pattern+[integer] compare pattern to specific word relative to point
?&pattern match any word ending in pattern
?pattern& match any word beginning with pattern

If a search is successful, one can assign variables to the two sentence-
fragments in the split, or to the word-part matched by "&".

variable.offset

Conjugations of verbs (and 'conjugations' of nouns, too) are handled by a
p-list:
have 1 has 3 had 5 had
where the numbers stand for properties, preset to represent first person,
plural, past tense, or whatever. Various tricks allow one to maintain
agreement among conjugable forms.
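
A rough sketch of the same idea in Python (the property codes here are my
own guesses for illustration, not Inrac's actual ones):

# A p-list maps property codes to forms; here 1 = first person, 3 = third
# person singular, 5 = past tense (the codes are illustrative only).
PLIST = {"have": {1: "have", 3: "has", 5: "had"}}

def conjugate(verb, prop):
    return PLIST.get(verb, {}).get(prop, verb)   # fall back to the base form

print(conjugate("have", 3))   # -> has
print(conjugate("have", 5))   # -> had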


=----------=- ,!. --=----=----=----=----=----=----=----=----=----=----=
Jorn Barger j't Anon-ftp to genesis.mcs.com in mcsnet.users/jorn for:
<:^)^:< K=-=:: -=-> Finnegans Wake, artificial intelligence, Ascii-TV,
.::.:.::.. "=i.: [-' fractal-thicket indexing, semantic-topology theory,
jo...@mcs.com /;:":.\ DecentWrite, MiniTech, nant/nart, flame theory &c!
=----------= ;}' '(, -=----=----=----=----=----=----=----=----=----=----=

Steven McQuinn

unread,
Feb 12, 1994, 6:10:15 PM2/12/94
to
Is anybody applying case-based reasoning to IF?

Steven....@m.cc.utah.edu

Phil Goetz

unread,
Feb 12, 1994, 9:27:36 PM2/12/94
to
In article <neilg.7...@sfu.ca> ne...@fraser.sfu.ca (Neil K. Guy) writes:
>whi...@netcom.com (David Whitten) writes:
>
>>Thinking about this last night, I was wondering if focusing on
>>reasoning agents makes more sense or would focusing on reacting agents
>>make more sense.
>
>>If the goal is to make a suspension of belief actor, maybe a focus
>>on bundling together reactions and stimuli would make a more believable
>>NPC with less work and a more composible approach to making new NPC's

The SNAP (also known as SNeRE, BDI, and OK) acting system, which integrates
reasoning and acting in SNePS, lets you write reactions: A WhenDo(X,Y) rule
says that "when X happens, do Y." Since this is in a forward- and backward-
chaining logic, X can be complicated if you like (e.g. "when the first
woman is elected president"), and Y can be a plan ("escape from Alcatraz").
When applied to robots, the rules can be very simple, e.g.
"WhenDo(value(sensor X) > value(sensor Y), increment(output1))",
or they can involve abstract reasoning.
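
For those who've never seen SNePS, the flavour of a when-do rule can be
sketched in a few lines of Python (this is only an illustration, not the
SNAP machinery, which does forward- and backward-chaining inference over
the conditions):

# Each rule pairs a condition (a test over the world state) with an action.
rules = [
    (lambda w: w["sensor_x"] > w["sensor_y"],          # "when X happens..."
     lambda w: w.update(output1=w["output1"] + 1)),    # "...do Y"
]

def tick(world, rules):
    # One forward-chaining step: fire every rule whose condition holds now.
    for condition, action in rules:
        if condition(world):
            action(world)

world = {"sensor_x": 5, "sensor_y": 2, "output1": 0}
tick(world, rules)
print(world["output1"])   # -> 1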

> I coded up some simple routines so that the dog has different
>internal states (fearful, angry, bored, etc.) depending on various
>external conditions with calls to the random number generator to make
>things a little more interesting and less predictable. So the dog
>starts out afraid of the player and always runs away, but eventually
>gets used to her. The dog can also get bored and wanders off and
>explores on its own, picking up anything small that's on the ground
>and dropping it elsewhere, etc. (I think I've mentioned this in a post
>to r.a.i-f before)

Two good articles to read would be:

Pattie Maes (1993). "A bottom-up mechanism for behavior selection
in an artificial creature." In _From Animals to Animats 2_,
Proc of the 2nd Internat. Conf on Simulation of Adaptive Behavior,
ed. Meyer, Roitblat, & Wilson, MIT Press, p. 238-246.
Pattie's address is pat...@ai.mit.edu
She uses spreading activation in a network of behaviors and emotions.

Joseph Bates, A. Bryan Loyall, & W. Scott Reilly (May 1992).
"An architecture for action, emotion, and social behavior."
Carnegie Mellon Tech Report CMU-CS-92-144, expected to appear
in Proc of the 4th Euro Workshop on Modelling Autonomous Agents
in a Multi-Agent World, S. Martino al Cimino, Italy, July 1992, published
by Elsevier/North Holland. You can buy this from
School of Comp Sci, Carnegie Mellon, Pittsburgh PA 15213, for about $2?

Jocelyn Paine

unread,
Feb 13, 1994, 10:20:42 AM2/13/94
to
In article <HOFFMAN.94...@tesuji.cmf.nrl.navy.mil>,
hof...@cmf.nrl.navy.mil (Eric Hoffman) writes:
[ in reply to suggestion that a reasoning agent might be implemented
as a set of stimulus-reaction pairs ]

>
> actually, there is a rich body of literature on this topic in fields
> such as robotics, motion control for computer graphics, artificial
> intelligence and control theory.
>
> a 'reactive' model doesn't need to operate at such a high level,
> kicking off complicated scripts because of some occurrence, but can
> generate surprisingly complicated behaviours as the synthesis of a set
> of smaller reactive behaviours. These behaviours would be phrased in
> terms of 'follow the player', 'avoid fire', 'grab things that look
> like food', etc.
>
> Some framework based on goals and priorities needs to be built
> to stop conflicts between behaviours, prevent cycles, and to allow
> higher level goals to inhibit irrelevant actions.
>
> The nice thing about this approach is that instead of building
> a huge monolithic behavioural model (or reasoning system),
> small, easy to understand behaviours can be constructed and
> layered to produce the desired effect. I know that at least
> some of the 'Oz' work uses this approach.
>
Yes. Some people term this "nouvelle AI", in contrast with "classical
AI". The classical AI approach was that rational thought (as in
game-playing, language translation, theorem-proving) was the most
important mental activity to understand and imitate. If you wanted to
build an autonomous agent, be it a robot vacuum cleaner or an animated
Lenny, you'd start by deciding on a "language of thought" in which its
goals and beliefs could be expressed. Usually, this would be some kind
of logic. However, you would disguise that fact by the habit, when
publishing your papers, of _drawing_ propositions as collections of
nodes, boxes, and arrows, rather than writing them in standard
mathematical notation. This, combined with frequent use of ill-defined
words like "frame", "is" and "a", would ensure continual and rather
fruitless debate about whether your notation really _was_ logic. Still,
it helped to keep the AI journals in business.

Your agent's goals and beliefs would look very similar to one another in
that they both described states of the world. The difference would be
that beliefs described the world as it was now; goals described it as
the agent "wanted" it to be. Actually, the average agent had very few
goals of its own, preferring to derive them from the analysis of
sentences like "Put the large red thing on the other thing below it on
the small green wobbly thing on the blue thing under it".

Then you'd design a reasoning engine which could, when required, pull
old goals and beliefs out of memory and infer new ones. Some of these
new beliefs would be "plans" - beliefs about sequences of actions which,
if the agent obeyed them, would change the world in such a way that one
or more of its goals became true. You'd soon discover that planning,
being combinatorially explosive, would blow up your garbage collector
unless you restricted the agent's world to no more than five primitive
actions, four objects, and six locations. You'd also discover that your
agent faced great difficulty in distinguishing relevant inferences from
irrelevant ones. Lure it into a room containing a bomb on a cart, and it
would still be working out whether the wheels could rotate faster in
radians per second than the bomb weighed in pounds when ...

To the reasoning engine, you'd then attach two subsidiary modules. One
would read in the agent's raw sense data, resolve uncertainties and
inconsistencies, and encode the results in the internal logical
language, merging them with the rest of the agent's beliefs. One-third
of this work entailed proving that new percept DUSTSPECK-#05632
(t=12:03:00) was the same object as DUSTSPECK-#05631 (t=12:02:55). The
other two-thirds entailed garbage-collecting dustspecks #00000 through
#05630.

The other module would take the agent's plans and convert them into
motor actions, running millisecond-to-millisecond control to ensure
that, despite friction, hysteresis, and motor slip, all the limbs ended
up where the plan said they should. (True, one can ignore most of this
in IF.) Both modules relied on large amounts of parallel
number-crunching, and hence were only of peripheral interest to the true
AI-er - especially because your hardware was not yet fast/cheap enough
to run anything more complex than the version 0.01 proof-of-concept
demo. However, that didn't matter: you were certain that within ten
years, hardware advances would guarantee more than enough power for all
your agent's perceptual and motor needs. In the meantime, you could
always resort to the miracle of video-splicing.

OK, well maybe I'm caricaturing a bit. But then, the point of a
caricature is to emphasise differences. In general, the nouvelle AI
stuff tries to do away with internal representations, the cognitive use
of logic, and the complete decoding of perceptions - not least, the need
to identify every object in sight. I don't think much of it has reached
the standard AI textbooks yet, but it is available in numerous technical
reports and conference proceedings. One chapter of "Artificial Life" by
Steven Levy gives a non-technical account of the origin of these ideas
about reactive agents, and of "subsumption architectures", one of the
earliest implementations. Even if you're not into AI or Alife, I'd
recommend this book just for the thrill of discovering something new.

However, I'm not convinced that it's any more productive to apply
nouvelle AI to the design of IF agents than it is to apply classical
symbolic AI. It's not too hard to take the behaviours "follow light",
"wander randomly", "steer out of corners" and "seek food" and combine
them into a robot which can wander around the carpet and avoid getting
stuck under the sofa. But it's a much bigger jump to go from "follow the
player", "avoid fire" and "grab things that look like food" to
Machiavelli. And it's an even bigger jump to decide _what_ your
primitive behaviours should be in the first place.

This point is implicit in a lot of nouvelle AI research when the
researchers talk about their systems displaying emergent behaviour.
After all, this phrase is usually taken to mean behaviour that can't be
predicted (for various reasons, possibly including limits to simulation
speed) from that of the system's components. I recently went to a talk
given by Dave Cliff of Sussex University. According to him, many of the
nouvelle AI enthusiasts (he mentioned MIT) have discovered that the
"programming" involved in building useful robots from simple behaviours
is just too complex to be feasible. His response is that the only
alternative is to _evolve_ his robots; and he's had some interesting
results. I'm not using this as a proof that your suggestion is
infeasible, but I'd like to see some ideas about design methods.

Jocelyn Paine

Avrom Faderman

unread,
Feb 13, 1994, 4:30:22 PM2/13/94
to

I've lost large bits of the original posts, so forgive me if I make
some references without quotation or citation.

In article <HOFFMAN.94...@tesuji.cmf.nrl.navy.mil> hof...@cmf.nrl.navy.mil (Eric Hoffman) writes:
>
>>
>>>IE: a reasoning agent would try to model the thought processes,
>>>a reacting agent could just be a set of reaction-stimuli pairs
>>
>>I've been thinking along the same lines myself, as a sort of middle
>>ground between agents who follow a set script and agents who reason.
>
>actually, there is a rich body of literature on this topic in fields
>such as robotics, motion control for computer graphics, artificial
>intelligence and control theory.
>
>a 'reactive' model doesn't need to operate at such a high level,
>kicking off complicated scripts because of some occurrence, but can
>generate surprisingly complicated behaviours as the synthesis of a set
>of smaller reactive behaviours. These behaviours would be phrased in
>terms of 'follow the player', 'avoid fire', 'grab things that look
>like food', etc.

I think this idea is very likely to work for _many_
"characters"--animals and monsters, in particular. When someone
mentioned that the S-R theory convinced the behaviorists for all those
decades, I think much of what did it was their observations of
animals--together with their knowledge of our own evolutionary
history. Certainly, at least unless you _know_ the animal well, it's
fairly easy to be fooled into regarding just about all animal behavior as
simple tropism--and fooling the player (or at least a player already
fairly willing to suspend disbelief) is all that needs to be done.

I think we'll want more, though, even in the near term, from our more
complex characters. I think that, as intelligence of action and
complexity of apparent emotion go up, what can be done "easily" with
a reactive agent would quickly be overtaken by what could be done with
the same amount of difficulty with reasoning agents. Anything that
needs to talk, anything that needs to cooperate (with either the
player or other NPCs), anything that needs to solve puzzles, anything
that needs to undergo emotional development over the course of the
story--they all call for something deeper than S-R.

(Question: Has anyone tried to incorporate a modification of a
program designed for conversation--an Eliza-type program--into IF as
an NPC? How big and unwieldy are the state of the art Turing Test
programs--like the one that won that competition last year? Would it
be impractical to use them to handle conversation? It would certainly
be more convincing than "XXX has nothing to say about that" alternated
with set scripts).

--
Avrom I. Faderman | "...a sufferer is not one who hands
av...@csli.stanford.edu | you his suffering, that you may
Stanford University | touch it, weigh it, bite it like a
CSLI and Dept. of Philosophy | coin..." -Stanislaw Lem

Jocelyn Paine

unread,
Feb 13, 1994, 5:44:32 PM2/13/94
to
In article <CL55I...@acsu.buffalo.edu>, go...@cs.buffalo.edu (Phil Goetz) writes:
> In article <neilg.7...@sfu.ca> ne...@fraser.sfu.ca
> (Neil K. Guy) writes:
>
>> whi...@netcom.com (David Whitten) writes:
>>> [ Is it easier/more useful to build reactive agents or reasoning
>>> agents? ]

>>
>
> The SNAP (also known as SNeRE, BDI, and OK) acting system, which
> integrates reasoning and acting in SNePS, lets you write reactions: A
> WhenDo(X,Y) rule says that "when X happens, do Y." Since this is in a
> forward- and backward- chaining logic, X can be complicated if you
> like (e.g. "when the first woman is elected president"), and Y can be
> a plan ("escape from Alcatraz"). When applied to robots, the rules can
> be very simple, e.g. "WhenDo(value(sensor X) > value(sensor Y),
> increment(output1))", or they can involve abstract reasoning.
>
Several expert system toolkits also provide facilities for writing
"when-do rules", particularly those that advertise themselves as
including truth maintenance systems or frame-based systems. It's also
possible to implement rule-based systems of this kind in Prolog (or
Lisp, though I think it's less work in Prolog), though you need good
knowledge of compiling methods to make them efficient.

Incidentally, Prolog would seem to have other advantages for IF; is
anyone using it?

>> I coded up some simple routines so that the dog has different
>>internal states (fearful, angry, bored, etc.) depending on various
>>external conditions with calls to the random number generator to make
>>things a little more interesting and less predictable. So the dog
>>starts out afraid of the player and always runs away, but eventually
>>gets used to her. The dog can also get bored and wanders off and
>>explores on its own, picking up anything small that's on the ground
>>and dropping it elsewhere, etc. (I think I've mentioned this in a post
>>to r.a.i-f before)
>
> Two good articles to read would be:
>
> Pattie Maes (1993). "A bottom-up mechanism for behavior selection
> in an artificial creature." In _From Animals to Animats 2_,
> Proc of the 2nd Internat. Conf on Simulation of Adaptive Behavior,
> ed. Meyer, Roitblat, & Wilson, MIT Press, p. 238-246.
> Pattie's address is pat...@ai.mit.edu
> She uses spreading activation in a network of behaviors and emotions.
>

I implemented this (in Prolog) when building a simple reactive agent as
part of my AI teaching kit. Maes' spreading-activation architecture is
very easy to understand, and not much harder to implement. However, be
warned! The rate at which activation spreads and decays depends on
several parameters, and small changes in their values can result in
large changes to the network's behaviour: for example, in the extent to
which it is sensor-driven vs goal-driven, or the extent to which it is
influenced by its past history vs by changes in sensory input.

Unfortunately, the effect of these parameters also depends on the
configuration of your network. This means that a given ratio of (say)
the alpha and gamma parameters might make your simple "find food if
hungry" network almost entirely sensor-driven (reducing the influence of
the goals almost to zero), but not have the same effect on your more
complex "betray king when seeking power" network. In other words, you
must tune the parameters individually for every network you build.
Furthermore, although it's possible to say qualitatively how changing a
parameter in a given direction will affect behaviour, the only way to
make more detailed predictions is to try out the network itself.
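
If you want to see what I mean, here is a drastically simplified sketch in
Python (no predecessor, successor, or conflicter links, so it is far cruder
than Maes' actual network; all weights and behaviour names are invented):

# A toy behaviour network in the spirit of the spreading-activation idea.
# SENSOR_WEIGHT and GOAL_WEIGHT play the role of the tuning parameters
# discussed above; DECAY controls how much past activation persists.
SENSOR_WEIGHT, GOAL_WEIGHT, DECAY, THRESHOLD = 1.0, 2.0, 0.5, 1.5

behaviours = {
    "pick_up_food": {"pre": {"see_food"}, "adds": {"holding_food"}, "act": 0.0},
    "eat":          {"pre": {"holding_food"}, "adds": {"not_hungry"}, "act": 0.0},
}

def step(state, goals):
    # Inject activation from currently true propositions and from goals,
    # after decaying whatever activation was left over from last time.
    for b in behaviours.values():
        b["act"] *= DECAY
        b["act"] += SENSOR_WEIGHT * len(b["pre"] & state)
        b["act"] += GOAL_WEIGHT * len(b["adds"] & goals)
    # Executable behaviours are those whose preconditions all hold now.
    runnable = [(n, b) for n, b in behaviours.items() if b["pre"] <= state]
    runnable = [(n, b) for n, b in runnable if b["act"] >= THRESHOLD]
    if runnable:
        name, best = max(runnable, key=lambda nb: nb[1]["act"])
        best["act"] = 0.0          # reset after firing
        state |= best["adds"]
        return name
    return None

state, goals = {"see_food"}, {"not_hungry"}
for _ in range(4):
    print(step(state, goals), sorted(state))

Even at this scale, nudging DECAY or GOAL_WEIGHT changes when, and whether,
anything fires.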

This is not a criticism of Maes' work. She mentions these problems in
another paper (I don't have the reference, but I can post it later).
This describes how she made the tuning automatic: by building another
network whose job was to tune the first one. However, it does indicate
that you will find it very difficult to (for example) strike the right
balance between your agents' impulsively acting on new data vs staying
faithful to their original goals. I think it's difficulties like this
that make so many people prefer classical AI.

> [ Phil then cites a second reference, not to Maes' work. ]
>
Jocelyn Paine

David Whitten

unread,
Feb 13, 1994, 10:59:52 PM2/13/94
to
Joanne Omang (oma...@delphi.com) wrote:


I think a good starting point is to get a list of "stereotyped" characters.

Then we get a list of what characteristics those characters have that
make them fit the stereotype.

Then we can figure out how their characteristics can shape their reactions.

I must admit I'm a better programmer than novelist, perhaps you can start to
draw up some of these lists, and I can start to figure out how to program
them. Of course, any input from the other r.a.if people would be great.

Some adjectives that immediately come to mind are: madcap, brainy,
high-society, bohemian, organized, scandalous, diplomatic, materialistic,
authoritarian, sentimental, opportunist, mischievous, collaborator, risque,
dupe, steadfast, ostentatious, idiot

Maybe some of the pop-psychology books would give us some good lists
of stereotypical behaviour....

Let's do it!
Dave (whi...@netcom.com) (214) 437-5255

Jocelyn Paine

unread,
Feb 14, 1994, 7:32:33 AM2/14/94
to
In article <1994Feb13.2...@Csli.Stanford.EDU>,
I think I agree. Much of the impetus for nouvelle (reactive) AI came
from roboticists and computer-vision people. Their main concerns were
the difficulty of perception and action, and these are almost absent in
IF. As I mentioned in an earlier post, one of the assumptions behind
(most) classical AI was that the agent's perceptions could deliver a
complete logic-based description of the world, down to the identity of
each and every object. Which is still impossible, in all but the most
contrived settings, for the current state of computer vision and robot
sensing.

However, IF is different. Firstly, the world is less detailed. Secondly,
all IF engines already have knowledge about the world encoded in just
the kind of logic-based relational form that classical AI agents need.
They need this in order to be able to run their simulation - to ensure
that if X is in Y, and you move Y, X goes with it. So the IF engine can
work out what it is possible for each agent to see, hear etc; can bias
the result according to the agent's character (a paranoid might believe
that whenever he hears a distant bang it's a gunshot or a bomb); and can
then insert the results into the agent's memory. I think Phil described
something like this in his original posting on SNePS.

Also, action is less complex. Absent are problems of calculating joint
positions and compensating for motor inaccuracies. Instead, when the
agent acts, he will usually either be speaking, or performing an atomic
action like "pick up" or "shoot". So the classical AI assumption that
perception and action are subsidiary modules which just feed input to,
or output from, the reasoning engine, is more justifiable than it is in
the real world.

But there are still difficulties, and one of these is planning. Another
assumption of early classical AI was that one could write a
general-purpose algorithm which could generate a sequence of actions to
accomplish any reasonable goal, given a table of primitive actions and
their effects together with a description of the current state of the
world. Such planners turned out to be very difficult to write (one gets
lots of problems with interaction between subgoals), and were
combinatorially explosive in the time they took to generate plans.

One alternative approach is for the agent not to solve each problem from
scratch, but to work from experience. Every time the agent solves a
problem, it stores its solution. When it encounters a new problem, it
looks for already-solved problems that were analogous, and tries to
adapt its old solution. That's case-based reasoning.
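
In miniature, and leaving out the hard part (adapting the old solution),
case-based retrieval looks something like this Python sketch with invented
problem features:

# A bare-bones case memory: store (problem-features, solution) pairs and
# retrieve the stored case whose features overlap most with the new problem.
case_memory = []

def store(features, solution):
    case_memory.append((frozenset(features), solution))

def retrieve(features):
    features = set(features)
    if not case_memory:
        return None
    best = max(case_memory, key=lambda case: len(case[0] & features))
    return best[1]

store({"locked_door", "have_key"}, ["unlock door", "open door"])
store({"locked_door", "no_key"}, ["find key", "unlock door", "open door"])

# A new, analogous problem: re-use (really: adapt) the closest old solution.
print(retrieve({"locked_chest", "have_key"}))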

Another is a kind of hybrid between classical planning and reactive AI.
When the agent encounters a problem, it makes a "sketchy plan" -
stringing together, as it were, just a few ideas about what to do next.
But within that framework, any more detailed selection of actions is
driven reactively, within preset constraints about which actions it's
sensible to combine with which. James Firby at Yale has done work along
these lines, and I've tried re-implementing some of it in Prolog.
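
Again in miniature, and emphatically not Firby's actual system, the idea is
roughly this (all names invented):

# Hypothetical hybrid: the plan is a list of abstract steps; each step is
# expanded at run time by picking whichever concrete action currently applies.
sketchy_plan = ["get_to_kitchen", "acquire_food"]

methods = {
    "get_to_kitchen": [
        (lambda w: w["door_open"],          "walk through door"),
        (lambda w: not w["door_open"],      "open door"),
    ],
    "acquire_food": [
        (lambda w: w["fridge_stocked"],     "take food from fridge"),
        (lambda w: not w["fridge_stocked"], "beg the cook"),
    ],
}

def next_action(plan, world):
    step = plan[0]
    for applies, action in methods[step]:
        if applies(world):
            return action
    return None

world = {"door_open": False, "fridge_stocked": True}
print(next_action(sketchy_plan, world))   # -> open door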

Yet another is to change the assumption that the agent builds a
viewpoint-independent model of the world. Instead of allocating a unique
token to each apple or each assassin, you instead have a "deictic"
representation: "the apple I'm holding now", or "the assassin who is
threatening me now". That's not unreasonable; the agent will probably
react to all apples or all assassins in much the same way, so further
distinction between them is pointless. The book "Vision, instruction and
action" by David Chapman describes an agent that fights and survives
inside a video game, and that relies on this kind of representation.

To summarise, I think there's still a place for rational agents which
build models of the world, reason using logic, and so on. The
difficulties of perception and action which made them infeasible in the
real world are absent in IF. But if you apply classical methods of
planning and controlling inference, you'll hit a combinatorial
explosion. Researchers have realised this, and some of them have
investigated systems that are not purely reactive, but that still differ
from the classical AI model. These are worth following up.

>
> Avrom I. Faderman | "...a sufferer is not one who hands
>

Jocelyn Paine

Jason Noble

unread,
Feb 17, 1994, 12:09:54 AM2/17/94
to
I agree with a previous poster that you can do a great deal with simple
reactive agents rather than reasoning agents, because you have a player who
is usually quite willing to suspend disbelief. (Think of your own first
experience of Zork or even Eliza - a simple program is convincing until
something drastic goes wrong). I also think that very simple reasoning of
sorts is not out of the question in a home-brewed TADS (or other system)
game.

With this in mind, I would like to offer up for debate the following piece
of pseudo-code (probably a daemon, in TADS terms) for controlling actors
(note that `me' in the pseudo-code refers to the actor):

==========================================================

1. Get list of all items and other actors in the room. Add each of these
(plus location and time seen) to a personal memory list, possibly bumping
other items out of a finite memory space.

2. Consider the player's last command: am I an object of it? Store the
player command in memory also, to build a simple model of player behaviour
(if `draw sword' always precedes `attack Bob with sword', the actor should
learn to run away when the player draws his sword).

3. Examine own status (this would be a goal stack of sorts): return the next
action in the current goal sequence, or the first action of the next
sequence in the stack.

4. Calculate weights for all my possible behaviours (all of the calculations
in this stage would be subject to modification by emotional and other
indices of the actor):

4a. One group of possible behaviours (PB's) would be simple responses to a
player command; these are especially likely if the player has addressed or
referred to me in his/her command. For example, if the player said `Ask Bob
where the screwdriver is', and I know where it is (through my memory list),
a highly likely response should be "Err, I think it's on the kitchen table.
Was yesterday, anyway." Another feature at this level would be monitoring
of the player's last 10 or last 50 commands, to check for repetition. If
the player asks the same thing twice in a row, the actor should be very
likely to say "Didn't you just ask me that?" If I am already annoyed with
the player, I'll probably say "Look, will you just get lost!".

4b. Another group of PB's would be assessed by little mini-daemons: if I am
hungry I will probably pick up and eat food items, if I am short of cash I
may take or steal valuable items, if I find the player in the Temple of
Arghh I will almost certainly initiate my `threaten and harass' script. In
a more complex fashion, if the player has just typed `draw sword', and I
have learned where that's likely to lead (or a friendly fellow actor has
told me, ie. shared experiences of the player with me) then I'll be likely
to execute a `run away from player' script. This would be subject to
modification by scores like loyaltyToPlayer, though. To extend the example
above, let's say that early in the game the player and I had a scuffle and I
have learned to run away when he draws his sword, but then the player
befriended me with gifts of food and gemstones, so now I'm not so alarmed
when he draws his weapon.

4c. Finally, the third group of PB's is a set with one member: the next item
in the current script from the goal stack, such as "move one room further on
my journey to the west gate", or "say to the player, `Die, infidel!'", or
"tell any allies in the room that the player has murdered Prince Foobar", or
"attack the strongest actor in the room". I am using scripts here to mean
something a lot like the Schank and Abelson term; also a lot like scripts in
ALAN.

5. Once the weights for the various possible behaviours have been
calculated, basically choose the one with the highest weighting, with a
small chance of choosing the second one, a smaller chance of choosing the
third, etc. (A concrete sketch of this biased choice follows step 8 below.)
This degree of randomness would vary by actor: Mr Spock or Mr Data would
probably not incorporate randomness at all, whereas Charles Manson certainly
would. (Makes it easy to simulate insane characters: just make them do less
likely things consistently: > kill krusty / You strike Krusty a vicious
blow. He offers you a handkerchief.)

6. Execute the appropriate behaviour.

6a. If this behaviour invokes a full script, two things may happen. If it's
a script of medium personal significance, then it will probably push the
previous script down the stack, for later resumption. For example, I am
"walking to the west gate" (a long journey), I'm hungry, I come to a food
shop, I execute my `buy food' script over a number of turns and then resume
my walk script. If the script is sort of catastrophic, such as "go quickly
to Castle Anthrax and warn the king of the player's treachery", then it will
probably push everything out of the goal stack, and become THE script for
now. Note that the goal stack probably only needs to run 2-4 levels deep,
because people don't stack like computers do.

6b. If the behaviour is a simple response (ie. takes one turn to do) then it
is executed and the next turn the actor will (probably, barring new
circumstances) resume the previous script. For example, I am walking to the
west gate, the player greets me, I simply greet him back and next turn keep
walking.

6c. If the `best', ie. highest scoring, behaviour turns out to be simply
executing the next step in a script, then it is executed, but there is a
quick check and maybe some output if the player addressed or referred to me.
For example, I am hurrying to the west gate with great urgency, and the
player throws me a frisbee. Normally, I'm an easy-going guy and I would
catch it, but my urgent `go to west gate' script overrides this impulse.
The player would see "You throw the frisbee to Bob. Bob strides off down
the west gate road. He's clearly too busy to catch frisbees."

7. Update all counters, emotional state indicators, etc. This is
particularly relevant if the player has addressed or referred to me in their
command. For example, if the player has asked me to do something, my
annoyanceWithPlayer score might be incremented (whether I carry out the
request or not?). If the player just snatched an item from me, or insulted
me, my loyaltyToPlayer score would go down. Also run all the physical
stuff, I suppose, like hunger and tiredness and whatever.

8. Actor daemon complete, pass control on.
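
To make step 5 concrete, here's a little sketch of the biased choice in
Python rather than TADS (the behaviours and weights are invented):

import random

# Toy version of step 5: weights for the possible behaviours have already
# been calculated; pick mostly the best one, occasionally a lower-ranked one.
def choose_behaviour(weighted, randomness=0.2):
    # weighted: list of (behaviour, weight) pairs; randomness = 0 gives
    # Mr Spock, a high value gives Charles Manson.
    ranked = sorted(weighted, key=lambda bw: bw[1], reverse=True)
    for behaviour, _ in ranked:
        if random.random() > randomness:
            return behaviour
    return ranked[-1][0]

pbs = [("answer player", 0.7), ("keep walking", 0.5), ("run away", 0.1)]
print(choose_behaviour(pbs))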

Fred M. Sloniker (L. Lazuli R'kamos)

unread,
Feb 18, 1994, 3:51:55 PM2/18/94
to
I won't use the terms 'reasoning agent' or 'reactive agent', mainly because
I don't know what they mean. However, if what I think they mean is true,
this is a topic I'm interested in; I want to write an adventure game in which
the characters interact with each other even without 'prompting' from the
player. (I've abandoned ADL as an authoring system because it won't allow
'actors' to, if they don't have anything currently queued to do, run a routine
to determine what they 'want' to do; I pondered Inform, but regretfully
declined, partially because I'd probably have to code it myself, and partially
because interpreters for the Amiga require WB2.0, which I don't have-- yet.
I'd like to try TADS, but $40 is expensive for me, and I'm not confident about
their Amiga support... but I digress.)

Jason Noble wrote:

>With this in mind, I would like to offer up for debate the following piece
>of pseudo-code (probably a daemon, in TADS terms) for controlling actors
>(note that `me' in the pseudo-code refers to the actor):
>
>==========================================================
>
>1. Get list of all items and other actors in the room. Add each of these
>(plus location and time seen) to a personal memory list, possibly bumping
>other items out of a finite memory space.

You could implement this by having the 'examined' attribute of rooms, items,
and actors be a list, not a simple flag: mark on the room what actors have
been there, on the object what actors have seen it, and on the actor what
actors know, say, his name. (If the actor has some dark secret, make an
object to represent it, which only that actor (probably) 'knows' at the start
of the game.)
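
Something like this sketch, in Python rather than any particular IF
language (all names invented):

# Sketch of 'examined' as a list rather than a flag: each room, item, and
# actor records which actors know about it.
knowledge = {
    "wrench":      {"seen_by": {"Harry"}},
    "dark_secret": {"seen_by": {"Bob"}},      # only Bob 'knows' it at start
}

def notice(actor, thing):
    knowledge[thing]["seen_by"].add(actor)

def knows_about(actor, thing):
    return actor in knowledge[thing]["seen_by"]

notice("player", "wrench")
print(knows_about("player", "wrench"))      # -> True
print(knows_about("player", "dark_secret")) # -> False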

[Point 2, how the actor reacts to the player's actions, snipped]

Something I'm looking for in my own games is actor parity; that is, that the
only difference between the actor the player controls and the other actors,
out of character, is that the player's actor listens to the keyboard rather
than to an internal program. If properly coded, this means two things:

(1) The player can choose to play any of the characters (or, at least, any
the programmer deems interesting), and the other characters will continue
to function without player input. Perhaps the player can even jump from
character to character...

(2) NPA's (non-player actors) can interact with *each other*. Tell Bob to
get the wrench; Bob doesn't mind, but it's Harry's wrench, and you can watch
the two of them have an argument over the wrench before Harry gets pointed in
your direction...

>3. Examine own status (this would be a goal stack of sorts): return the next
>action in the current goal sequence, or the first action of the next
>sequence in the stack.

I'd prefer a weighted goal system over a stack. (Sure, the captain is
*supposed* to have been on the bridge five minutes ago, but getting shot at
by terrorists sort of takes precedence, don't you think?) The character
would start with certain motivations, but as he learned more about the
adventure, priorities would be established, or even usual activity altered
(if the captain learns the first mate is one of the terrorists, after you've
heroically taken them out, his first action is going to be to call the bridge
and have him relieved of duty. And when the first mate learns he's been
found out, he may well decide holding the bridge crew at gunpoint with his
fellow conspirators takes precedence over compiling ship status reports. At
which point *you* get to decide what takes precedence for your secret agent:
defeating the mutineers, possibly blowing your cover, or hiding out like the
scared tourist you're supposed to be and possibly getting killed?)

>4. Calculate weights for all my possible behaviours (all of the calculations
>in this stage would be subject to modification by emotional and other
>indices of the actor):

Oops. You do have a weight system. (:3 In that case, as long as the goals
are a list, not a stack, we're jake (you can order things by giving appropriate
priorities).

>For example, if the player said `Ask Bob where the screwdriver is', and I
>know where it is (through my memory list), a highly likely response should be
>"Err, I think it's on the kitchen table. Was yesterday, anyway."

That's a kettle of fish I hadn't considered (knowledge being out of date).
Possibly there should be a time-out on knowledge of some topics, or a marking
of the knowledge as 'old'. For instance, if the media darling hasn't seen
the screwdriver:

"Screwdriver? Whatever are you blithering about? I can't be bothered with
such details. Ask my cameraman."

If he has recently:

"Oh, that old thing. Try in the banquet hall, that's where I saw it last."

If he hasn't seen it recently:

"That old thing? It's around here somewhere, I'm sure. Have you tried the
tool shed? Carl, do you know where it is?"

"I've got it in my toolkit, Lara."

(Another example of NPA interaction.)

>Another feature at this level would be monitoring of the player's last 10 or
>last 50 commands, to check for repetition. If the player asks the same thing
>twice in a row, the actor should be very likely to say "Didn't you just ask
>me that?" If I am already annoyed with the player, I'll probably say "Look,
>will you just get lost!".

Emotion arrays! (:3 How the actor feels about each of the other actors,
starting with some initial value (depending on in-character experience) and
modified by interactions between the characters. (If the player's actor is
particularly handsome, and successfully flirts with the shy wallflower, she's
more likely to climb out on that lightning rod to save him later. If, on the
other hand, he flirts with the ice princess, she'll decide he's just the
patsy she needs to recover the Eye of Pork Gumbo... and what happens if the
player talks someone *else* into chatting her up? He may have to rescue the
poor victim later, if he's at all a gentleman...)
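
In sketch form (Python, invented names and numbers), that's just a table
keyed on pairs of actors:

# A toy emotion array: how each actor feels about each other actor,
# nudged up or down by interactions.
feelings = {
    ("wallflower", "player"):   2,
    ("ice_princess", "player"): -3,
}

def interact(actor, other, delta):
    key = (actor, other)
    feelings[key] = feelings.get(key, 0) + delta

interact("wallflower", "player", +2)      # successful flirting
interact("ice_princess", "player", -1)    # she's not impressed

def will_help(actor, other, threshold=3):
    return feelings.get((actor, other), 0) >= threshold

print(will_help("wallflower", "player"))   # -> True
print(will_help("ice_princess", "player")) # -> False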

Any other comments? (:3

---Fred M. Sloniker, stressed undergrad
L. Lazuli R'kamos, FurryMUCKer
laz...@u.washington.edu

"Fascinating." "Intriguing." "*Fascinating*." "*INTRIGUING*." ...

Steven McQuinn

unread,
Feb 26, 1994, 9:45:59 PM2/26/94
to
In article <1994Feb14.123233.20325@oxvax>
po...@vax.oxford.ac.uk (Jocelyn Paine) writes:


> One alternative approach is for the agent not to solve each problem from
> scratch, but to work from experience. Every time the agent solves a
> problem, it stores its solution. When it encounters a new problem, it
> looks for already-solved problems that were analogous, and tries to
> adapt its old solution. That's case-based reasoning.


What appeals to me about case-based reasoning is its use of scripts for
expected actions in specified scenes. Of course, scripts can be juggled
and overlaid to deal with unexpected action.


> Another is a kind of hybrid between classical planning and reactive AI.
> When the agent encounters a problem, it makes a "sketchy plan" -
> stringing together, as it were, just a few ideas about what to do next.
> But within that framework, any more detailed selection of actions is
> driven reactively, within preset constraints about which actions it's
> sensible to combine with which. James Firby at Yale has done work along
> these lines, and I've tried re-implementing some of it in Prolog.


On the strength of your valuable postings, I am reading your book on
Prometheus, the Prolog-based toolkit for expert system development.
Could Prometheus be adapted to serve as an IF engine?


> Yet another is to change the assumption that the agent build a
> viewpoint-independent model of the world. Instead of allocating a unique
> token to each apple or each assassin, you instead have a "diectic"
> representation: "the apple I'm holding now", or "the assassin who is
> threatening me now". That's not unreasonable; the agent will probably
> react to all apples or all assassins in much the same way, so further
> distinction between them is pointless. The book "Vision, instruction and
> action" by David Chapman describes an agent that fights and survives
> inside a video game, and that relies on this kind of representation.


The above almost sounds like a hybrid between a rule based system and a
case-based reasoning system. Is this accurate? Is such a concept
feasible?

> To summarise, I think there's still a place for rational agents which
> build models of the world, reason using logic, and so on. The
> difficulties of perception and action which made them infeasible in the
> real world are absent in IF. But if you apply classical methods of
> planning and controlling inference, you'll hit a combinatorial
> explosion. Researchers have realised this, and some of them have
> investigated systems that are not purely reactive, but that still differ
> from the classical AI model. These are worth following up.

> Jocelyn Paine


Wonderful stuff. Thank you, Ms. Paine. All I ask for is, More...More...

Steven....@m.cc.utah.edu

Phil Goetz

unread,
Feb 28, 1994, 5:32:14 PM2/28/94
to
>In article <1994Feb14.123233.20325@oxvax>
>po...@vax.oxford.ac.uk (Jocelyn Paine) writes:
>
>> Yet another is to change the assumption that the agent build a
>> viewpoint-independent model of the world. Instead of allocating a unique
>> token to each apple or each assassin, you instead have a "diectic"
>> representation: "the apple I'm holding now", or "the assassin who is
>> threatening me now". That's not unreasonable; the agent will probably
>> react to all apples or all assassins in much the same way, so further
>> distinction between them is pointless. The book "Vision, instruction and
>> action" by David Chapman describes an agent that fights and survives
>> inside a video game, and that relies on this kind of representation.

How is this different from the ordinary predicate logic approach,
in which you have rules that say "If X is an assassin and X is
attacking me now, then do such and such about X"? In neither approach
is the original, general knowledge associated with a particular assassin.

Phil go...@cs.buffalo.edu

Jocelyn Paine

unread,
Mar 2, 1994, 10:42:14 AM3/2/94
to
In article <CLyH9...@acsu.buffalo.edu>, go...@cs.buffalo.edu (Phil Goetz) writes:
>>In article <1994Feb14.123233.20325@oxvax>
>>po...@vax.oxford.ac.uk (Jocelyn Paine) writes:
>>
>>> Yet another is to change the assumption that the agent build a
>>> viewpoint-independent model of the world. Instead of allocating a unique
>>> token to each apple or each assassin, you instead have a "deictic"
>>> representation: "the apple I'm holding now", or "the assassin who is
>>> threatening me now". That's not unreasonable; the agent will probably
>>> react to all apples or all assassins in much the same way, so further
>>> distinction between them is pointless. The book "Vision, instruction and
>>> action" by David Chapman describes an agent that fights and survives
>>> inside a video game, and that relies on this kind of representation.
>
> How is this different from the ordinary predicate logic approach,
> in which you have rules that say "If X is an assassin and X is
> attacking me now, then do such and such about X"? In neither approach
> is the original, general knowledge associated with a particular assassin.
>
Let me sound a caution first. I haven't read very much on deictic
representations; in particular, I have not attempted to do a formal
logical analysis of the things. Neither have I read one, though they are
starting to appear. So comments from anyone with more experience than
mine are welcome.

However, part of the answer (I think) is implicit in your question. You
give a rule which contains a variable X. This will, at various points in
the agent's life, be bound to one token or another representing an
assassin. Note that I said "one or another". The classical AI assumption
has been that you might want several assassin-tokens to coexist. So you
need to distinguish between them. This has usually been done by making
the tokens contain two parts: a type-indicator ('assassin'), and a
number (or atom, or whatever) which is different for every token. So, in
Prolog, you might create terms such as
assassin(0), assassin(1), ...
or perhaps
individual(assassin,0), individual(assassin,1), ...

The numbers are not completely arbitrary, but tie in with the assumption
that your agent should build an absolute (God-centered, not
agent-centered) world-model. This keeps track of all the objects the
agent has perceived so far, and each token points at a particular object
(always the same one) within this model. When the agent perceives
something new, it has to work out whether any of its perceptions refer
to objects already described by this model. Suppose, for instance, that
the model contains an assertion of the form
in( assassin(1), room(27) ) ,
and the agent moves into room 28 and sees an assassin; it has to work
out whether this is assassin(1) or not. (Phil probably knows this
already, but it may be of interest to some of the other readers.) If so,
it can update the assertion to
in( assassin(1), room(28) ).
If not, it must create a new token for the new assassin, and add a new
assertion:
in( assassin(2), room(28) ).

A big problem with this approach is that you don't always have enough
information to work out whether the newly-seen object really is new or
not. So perhaps you make an assumption about its identity (that it's
assassin 1 in room 28). To be consistent, you then have to remove
assassin 1 from room 27; if it actually was assassin 2 in room 28,
you've created an incorrect model. Or you try to find some way to
represent the fact ``it's either assassin 1 or assassin 2 in room 28 but
I don't know which''. This leads to all sorts of difficulties with
representing and inferring from disjunctions.
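
In a crude sketch (Python rather than Prolog; the gamble on identity is
the point):

# Crude sketch of the bookkeeping described above: a God's-eye model that
# assigns a numbered token to each assassin, and has to decide whether a
# newly seen assassin is an old token or a new one.
model = {"assassin(1)": "room(27)"}
next_id = 2

def saw_assassin(room, assume_same=None):
    # If we assume it is an existing token, relocate it (and risk being
    # wrong); otherwise mint a new token.
    global next_id
    if assume_same is not None:
        model[assume_same] = room
    else:
        model["assassin(%d)" % next_id] = room
        next_id += 1

saw_assassin("room(28)", assume_same="assassin(1)")   # gamble on identity
print(model)   # assassin(1) is now in room(28) - wrong if it was a new one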

Now to the deictic approach. I'll quote from the end of {\it Situated
Agents Can Have Goals} by Pattie Maes, from {\it Designing Autonomous
Agents} edited by Maes (MIT press 1990; ISBN 0-262-63135-0).

We try to avoid the need for variables altogether by using {\it
deictic representation}, i.e. using only {\it indexical-functional
aspects} to describe relevant properties of the immediate
environment: objects in the environment are internally represented
relative to the purposes and circumstances of the agent. The
spray-painting module for example only has to be instantiated with
one parameter, namely 'the-sprayer-I-am-holding-in-my-hand'. ...

The idea of indexical-functional aspects is particularly interesting
for autonomous agents because it makes more realistic assumptions
about what perception can deliver. In particular, it does not demand
that the perceptual system produces the identity and exact location
of objects. The absence of variables [like those in your rule] does
constrain the language one can use to communicate with the system
[i.e. the rules or other program] but not in too strong a way. All
it requires is a new way of thinking about how to tell an agent what
to do. More specifically, one does not use unique names of objects
when specifying goals. Instead, goals are represented in terms of
indexical or functional constraints on the objects involved. For
example, one would not tell the agent to go to location(X,Y), but
one would tell it that the goal is to be a location that is a
doorway (a small area where it is possible to "go through" a wall).

This quote comes at the end of an article on a spreading-activation
planner. One of the differences between it and classical planners (e.g.
Strips) is this. In a classical planner, the pre- and post-conditions
would be built from propositions; each proposition being a predicate
whose arguments are variables. These variables would, during planning,
be bound to particular doors, rooms, paint-sprayers, etc. In Maes'
planner, the propositions are no-argument predicates such as
sander_on_table
sander_in_hand
Because they are relative to the agent, they may actually refer to
different objects at different times. However, the functional role of
these objects is always the same, so it doesn't matter.
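
A sketch of the contrast, in Python with invented propositions: the rules
below mention no individuals at all, only agent-relative aspects.

# Rules keyed on deictic, no-argument propositions rather than on tokens
# for particular individuals.
state = {"assassin_threatening_me", "holding_apple"}

deictic_rules = [
    ({"assassin_threatening_me"}, "run away"),
    ({"holding_apple", "hungry"}, "eat the apple I'm holding"),
]

def react(state, rules):
    return [action for conditions, action in rules if conditions <= state]

print(react(state, deictic_rules))   # -> ['run away']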

> Phil go...@cs.buffalo.edu
>
Jocelyn Paine

Jocelyn Paine

unread,
Mar 2, 1994, 11:45:49 AM3/2/94
to
In article <2kp1h7$5...@u.cc.utah.edu>, steven....@m.cc.utah.edu (Steven McQuinn) writes:
> In article <1994Feb14.123233.20325@oxvax>
> po...@vax.oxford.ac.uk (Jocelyn Paine) writes:
>
>> One alternative approach is for the agent not to solve each problem from
>> scratch, but to work from experience. Every time the agent solves a
>> problem, it stores its solution. When it encounters a new problem, it
>> looks for already-solved problems that were analogous, and tries to
>> adapt its old solution. That's case-based reasoning.
>
> What appeals to me about case-based reasoning is its use of scripts for
> expected actions in specified scenes. Of course, scripts can be juggled
> and overlaid to deal with unexpected action.
>
It seems to me that there are at least three ways to think of scripts:
1) As a set of instructions - a program - which the agent executes
directly in a given situation. This program will contain sequencing,
conditionals, and loops (``shampoo, rinse, and repeat until
clean. Apply conditioner if hair too dry.'').
Perhaps the program statements are annotated, e.g. with pre- and
post-conditions or some other way of specifying their purpose.
2) As a template which can be matched against a partially-perceived
situation to help fill in gaps. Classical example: ``John went down
the aisle and put a can of tuna in his basket''. Matching this
against a supermarket script tells us that the basket was probably
made of wire, etc. There has been some work on using scripts for
understanding newspaper stories, for example.
3) As a sketchy description of how to behave in a situation, to be
filled in by some run-time planning.

If you think of scripts as programs, you can see that overlaying them
won't be all that easy. After all, if we have programs P and Q, and we
want to achieve some combination of their effects, what general rules
exist for doing so by merging the code of P and Q?

>> Another is a kind of hybrid between classical planning and reactive AI.
>> When the agent encounters a problem, it makes a "sketchy plan" -
>> stringing together, as it were, just a few ideas about what to do next.
>> But within that framework, any more detailed selection of actions is
>> driven reactively, within preset constraints about which actions it's
>> sensible to combine with which. James Firby at Yale has done work along
>> these lines, and I've tried re-implementing some of it in Prolog.
>
> On the strength of your valuable postings, I am reading your book on
> Prometheus, the Prolog based toolkit for expert system development.
> Could Prometheus be adapted to serve as an IF engine?
>

To prevent anyone being misled by this, I'd better say a bit about the
history of Prometheus. It was developed in 1987 or so by an Oxford
company called Expert Systems International. Main market: people wanting
to write expert systems, but needing (a) an easy way to build the user
interface (menus, etc), and (b) a language higher-level than Prolog.
Prometheus was designed by Tony Dodd, not by me: Tony did a nice job of
finding a higher-level notation that would map to Prolog and that could
be implemented without too much inefficiency. My role was to write a
user guide and reference manual and to do some debugging. This I carried
out as a piece of consultancy. At that time, the manual was only
intended for people who'd bought a copy of the system.

ESI was owned by another company, Chemical Design Ltd. Shortly after I
finished the manual, CDL put them into receivership, and obtained the
rights to the manual and much else. (CDL also ended up owing me several
thousand for the consultancy - still not paid, on the grounds that they
are a different company from ESI. Perfectly legal, but not nice.) What
happened next was that ESI revived itself as a new, independent
company. This new company wanted to ensure that customers could still
get copies of the manual, and arranged for it to be printed as a book by
Intellect, the publisher. Somewhere along the way, this history failed
to find its way into the book, thus leaving readers somewhat uncertain
about the purpose of the book and the status of Prometheus. So: the book
is actually a tutorial guide and reference manual for a commercial
expert systems toolkit.

If you want a copy of Prometheus, you could try ordering it from what is
now Expert Systems Ltd, Oxford. If not, just treat the book as a useful
example of how to take some of the pain out of Prolog, or of how to
integrate object-oriented programming with logic. It is by no means a
final answer to the latter problem - but in my view, no-one has managed
to do this properly yet, because almost no-one understands what objects
are. It's not too hard, by the way, to write a ``compiler'' that will
convert a reasonable subset of Prometheus notation into Prolog.

>> Yet another is to change the assumption that the agent build a
>> viewpoint-independent model of the world. Instead of allocating a unique
>> token to each apple or each assassin, you instead have a "deictic"
>> representation: "the apple I'm holding now", or "the assassin who is
>> threatening me now". That's not unreasonable; the agent will probably
>> react to all apples or all assassins in much the same way, so further
>> distinction between them is pointless. The book "Vision, instruction and
>> action" by David Chapman describes an agent that fights and survives
>> inside a video game, and that relies on this kind of representation.
>
> The above almost sounds like a hybrid between a rule based system and a
> case-based reasoning system. Is this accurate? Is such a concept
> feasible?
>

I've put another posting on this, as a followup to a question by Phil
Goetz. Briefly, I think the dimension of deictic vs.
viewpoint-independent representation is ``orthogonal to''
(language-designer's jargon for ``independent of'') the dimension of
case-based vs. rule-based. In a deictic representation, your
propositions describe objects via their role relative to the agent:
``the apple I'm holding now'' or ``the apple I last ate''. In a
viewpoint-independent representation, you describe objects as unique
individuals: a God's-eye view. As far as I can see, you could decide to
use either in (for example) a case-based reasoner or a Strips planner;
the enthusiasts for deictic representations say that they involve less
search and less perceptual decoding than viewpoint-independent
representations.
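
(Again purely illustrative, and not anybody's published notation - a small
Python sketch of the same situation described both ways:)

    # Viewpoint-independent (God's-eye) model: unique tokens for individuals.
    world_model = {
        ("in", "assassin1", "room27"),
        ("in", "assassin2", "room28"),
        ("holding", "player", "apple3"),
    }

    # Deictic model: objects are indexed by their role relative to the agent,
    # so the same marker may denote different individuals at different times.
    deictic_state = {
        "the-assassin-threatening-me-now": True,
        "the-apple-I-am-holding-now":      True,
    }

The deictic agent never has to decide which assassin it is looking at; the
price is that it cannot distinguish them later.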

>> To summarise, I think there's still a place for rational agents which
>> build models of the world, reason using logic, and so on. The
>> difficulties of perception and action which made them infeasible in the
>> real world are absent in IF. But if you apply classical methods of
>> planning and controlling inference, you'll hit a combinatorial
>> explosion. Researchers have realised this, and some of them have
>> investigated systems that are not purely reactive, but that still differ
>> from the classical AI model. These are worth following up.
>
>> Jocelyn Paine

> Wonderful stuff. Thank you, Ms. Paine. All I ask for is, More...More...
>

Thanks for the vote of confidence. Could we have some remarks from other
AI-ers...?

> Steven....@m.cc.utah.edu
>
Jocelyn Paine

Alan Cox

unread,
Mar 2, 1994, 3:37:45 PM3/2/94
to

I did a totally script-based approach to the characters in Personal Nightmare;
basically it doesn't work except for triggering goals. Things like a top-level
script of "5pm go to pub, drink, 7pm leave pub" work. When you script
things like
east, open door, east, throw up

What happens when J. Random Player turns up is

Hall.
Mr Smith is here
>
Mr Smith goes east
>e
Passage. The door is shut
Mr Smith opens the door
>close door
Mr Smith looks terminally confused and writes the programmer a bug report.
>
Mr Smith is violently ill
>

By the time you cover these cases (especially once objects are in) - e.g.
your detective strides in, announces the murder, and then finds a player
stole his handcuffs 6 moves ago so he can't lock away the prisoner - it's easier to
a) do it goal-directed, or
b) help write free Unix systems instead.

Alan
iii...@pyr.swan.ac.uk
Linux Networking Co-Ordinator

Phil Goetz

unread,
Mar 3, 1994, 6:10:06 PM3/3/94
to
In article <1994Mar2.2...@swan.pyr>, Alan Cox <iii...@swan.pyr> wrote:
>
>I did a totally script-based approach to the characters in Personal Nightmare;
>basically it doesn't work except for triggering goals. Things like a top-level
>script of "5pm go to pub, drink, 7pm leave pub" work. When you script
>things like
> east, open door, east, throw up
>
>What happens when J. Random Player turns up is
>
>Hall.
>Mr Smith is here
>>
>Mr Smith goes east
>>e
>Passage. The door is shut
>Mr Smith opens the door
>>close door
>Mr Smith looks terminally confused and writes the programmer a bug report.
>>
>Mr Smith is violently ill
>
>By the time you cover these cases (especially once objects are in) - e.g.
>your detective strides in, announces the murder, and then finds a player
>stole his handcuffs 6 moves ago so he can't lock away the prisoner - it's easier to

These problems arise if you use the standard STRIPS planner, which plans an
entire sequence of moves, then executes them. I used a

<<< talking about _Inmate_ again mode ON >>>

hierarchical planner in _Inmate_ (1986). Say the detective wants to
arrest the criminal. He has a script which looks like this:

get handcuffs
find criminal
put handcuffs on criminal

He pushes the first subplan, "get handcuffs", onto his plan stack.
When it's his turn to do things, he pops the first element from his
plan stack and tries to do it. If the handcuffs are in the room,
he does "get handcuffs". If that fails, he makes a plan to get the
handcuffs, which looks like

find handcuffs
get handcuffs

(I let the characters know where everything is in _Inmate_.)
When he pops "find handcuffs", he runs a program which calculates
the shortest path to the object the handcuffs are in. If there
is no path, this fails, and all plans with this as a subplan fail.
(I didn't allow for opening doors, but if I had, it would try
opening a door as an alternate plan.) If there is a path, he pushes
the whole sequence of e.g. "w. n. e. s. u. e. d. e." on his plan stack.
When he arrives there, he tries "get handcuffs". If they're not there,
he tries "find handcuffs" again. If they're in a locked container,
he tries "unlock <container> with its key". If he doesn't have the
key, he pushes on his stack a plan "find key; get key".
(The characters also know which keys go to which locks.)

<<< talking about _Inmate_ again mode OFF >>>

The basic idea is that you push high-level plans on the stack,
and when obstacles get in the way, you replan or abort.
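
(A minimal Python sketch of the plan-stack idea - emphatically not the
actual _Inmate_ code; the helpers try_action and repair_plan_for are
invented stand-ins for the game-specific action and replanning routines:)

    def agent_turn(plan_stack, try_action, repair_plan_for):
        """Pop one step and attempt it. On failure, push a repair plan and
        then the failed step again, so the step is retried afterwards."""
        if not plan_stack:
            return
        step = plan_stack.pop()
        if try_action(step):
            return
        repair = repair_plan_for(step)
        if repair is None:
            # Crude stand-in for "all plans with this as a subplan fail".
            plan_stack.clear()
            return
        plan_stack.append(step)         # retry the step after the repair
        for sub in reversed(repair):    # push so the first repair step runs next
            plan_stack.append(sub)

    # The detective's top-level script, pushed in reverse so that
    # "get handcuffs" is popped first.
    detective_stack = []
    for s in reversed(["get handcuffs", "find criminal",
                       "put handcuffs on criminal"]):
        detective_stack.append(s)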

Phil go...@cs.buffalo.edu

Phil Goetz

unread,
Mar 3, 1994, 6:53:16 PM3/3/94
to
In article <1994Mar2.154214.20825@oxvax>,

Jocelyn Paine <po...@vax.oxford.ac.uk> wrote:
>When the agent perceives
>something new, it has to work out whether any of its perceptions refer
>to objects already described by this model. I.e. suppose the model
>contains an assertion of the form
> in( assassin(1), room(27) ) ,
>and the agent moves into room 28 and sees an assassin, it has to work
>out whether this is assassin(1) or not. If so,

>it can update the assertion to
> in( assassin(1), room(28) ).

In IF we cheat and let the agent know the token identity
of the objects it sees. :)

>If not, it must create a new token for the new assassin, and add a new
>assassin:
> in( assassin(2), room(28) ).
>
>A big problem with this approach is that you don't always have enough
>information to work out whether the newly-seen object really is new or
>not. So perhaps you make an assumption about its identity (that it's
>assassin 1 in room 28). To be consistent, you then have to remove
>assassin 1 from room 27; if it actually was assassin 2 in room 28,
>you've created an incorrect model. Or you try to find some way to
>represent the fact ``it's either assassin 1 or assassin 2 in room 28 but
>I don't know which''. This leads to all sorts of difficulties with
>representing and inferring from disjunctions.

Side note not really related to agents for IF, but to the long-term
goals of AI: Yes, horrible difficulties, but they are similar to
difficulties that you find in reading natural lang text when you come
across an ambiguity that will be resolved later. So any complete
cognitive architecture must be able to deal with these problems anyway.

All this stuff involving figuring out whether assassin 1 is still
in room 28 is above and beyond anything deictic processing can do.
You can hack your reasoner to conclude "maybe assassin 1 is in room 27,
and maybe he's in room 28" with a good logic, and leave it there.
Whatever you do is icing on the cake compared to what you get with
deictic reasoning.
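
(An invented sketch of the bookkeeping in the quoted example - one global
set of assertions, updated under an identity assumption:)

    model = {("in", "assassin1", "room27")}

    def see_assassin_in(room, assume_same_individual):
        if assume_same_individual:
            # Keep the model consistent: he must have left room 27 ...
            model.discard(("in", "assassin1", "room27"))
            model.add(("in", "assassin1", room))
            # ... which is simply wrong if it was really assassin 2.
        else:
            # A crude stand-in for the disjunction "assassin 1 or 2 is here",
            # which a real reasoner would then have to infer from.
            model.add(("in", ("assassin1", "or", "assassin2"), room))

    see_assassin_in("room28", assume_same_individual=True)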

>Now to the deictic approach. I'll quote from the end of {\it Situated
>...
>However, the functional role of
>these objects is always the same, so it doesn't matter.

I think it is evident that insects use deictic planning (except
maybe honeybees when they communicate the direction to some flowers).
If food appears in front of them, they eat it. They don't
mate for life; whenever female insect F is near a male insect M,
F = mate(M). They can't reason their way out of a paper bag
(as evidenced by Japanese beetles in Maryland, which we
trap in hanging plastic bags which hundreds of them go into and
stay in until they starve), because they don't have a concept of
places other than their present location.

It is evident that people don't always reason this way; I may want
to give a Christmas present to my brother, not to just anybody;
I may want to go to Duff's, not just any white adobe building.
If you want to build one reasoning system, not two, and you want
it to exhibit behavior other than "out of sight, out of mind",
you'll have to use a more powerful representation than deictic rep.

I'm not saying that deictic reasoning doesn't have a place in IF.
But remember that most of the work done by "animats" researchers is geared
towards developing insects, and might not apply to designing characters for IF.

>Jocelyn Paine

Phil go...@cs.buffalo.edu
