
NPC planning


Steven Grady

Mar 22, 1994

(My apologies to comp.ai readers for a neophyte's question.)

Can anyone give me a reference on the traditional AI approach to the
following problem? I remember discussion of this topic from when I
was reading about AI a few years ago, but I don't know what the
current thinking is.

(Summarized, the problem is how to find the best solution to
a subgoal, when the subgoal interacts with later goals in an overall
plan.)

Suppose the NPC problem domain is that of a butler in a house.
The master of the house (presumably a player) asks the butler
to bring him an orange. The house is laid out as follows:

-----------------------------------------
| Living Room | Hallway |  Dining Room  |    M: Master (player)
|             |                         |    B: Butler (NPC)
|             |         |---- ----------|    *: orange
|  M  B                 | *             |    +: door
|             |         +               |
|             |         |    Kitchen    |
-----------------------------------------

The obvious implementation is to decompose into sub-goals, e.g.: Go to
kitchen, get orange, go to living room, give orange to Master. Fine.
But an additional requirement is that the Butler should come up with
the shortest plan possible. If the door is open, it's easy -- LR
(Living Room) -> H -> K, get orange, K->H->LR, give orange. If the
door has to be opened every time it is to be passed through, then
(ignoring the orange itself) either LR->H->open->K->open->H->LR or
LR->H->DR->K->DR->H->LR will be equally efficient (assume opening a
door takes the same amount of time as moving between rooms), and thus
the NPC can legitimately choose either path. But if the door stays
open, then the fastest path is clearly: LR->H->open->K->H->LR, because
the opening saves time when coming back.

My question is how to generate that plan. If the satisfaction of the
original goal is by decomposition to subgoals, and the traversal from
place to place is by something like an A* algorithm, then the
relationship between the two subgoals of going from LR to K and from K
to LR is lost. So, for instance, to get from LR to K, the NPC may
choose to go by way of the dining room, as that is just as fast as
going through the door. Such a plan is reasonable from the limited
view of the subgoal, but it is incorrect for the overall goal.

What I don't understand is how to incorporate information about later
goals while solving an earlier one. I suspect that something like a
"serial A*" algorithm, which would find the best path through state
space to reach a _sequence_ of interdependent, dynamic nodes, is what I
want. I'm also not sure it's possible. If not, perhaps I have to
use an alternative mechanism.
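
Concretely, the flavor of what I mean is a single search over a
combined state -- (room, door open, holding orange, delivered) -- so
the door's persistence is visible across what would otherwise be
separate subgoals. A minimal sketch (my own Python, all names
invented), though it presumably blows up in bigger domains:

import heapq

MOVES = {  # room -> rooms reachable without using the kitchen door
    'LR': ['H'], 'H': ['LR', 'DR'], 'DR': ['H', 'K'], 'K': ['DR'],
}

def successors(state):
    room, door_open, has_orange, delivered = state
    for nxt in MOVES[room]:
        yield ('go ' + nxt, (nxt, door_open, has_orange, delivered), 1)
    # The H<->K doorway: free if the door is already open, else open it.
    if room in ('H', 'K'):
        other = 'K' if room == 'H' else 'H'
        if door_open:
            yield ('go ' + other, (other, True, has_orange, delivered), 1)
        else:
            yield ('open door', (room, True, has_orange, delivered), 1)
    if room == 'K' and not has_orange:
        yield ('get orange', (room, door_open, True, delivered), 1)
    if room == 'LR' and has_orange and not delivered:
        yield ('give orange', (room, door_open, has_orange, True), 1)

def plan(start=('LR', False, False, False)):
    # Uniform-cost search (A* with h = 0); optimal, since all costs are 1.
    frontier, seen, counter = [(0, 0, start, [])], set(), 1
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if state[3]:                    # orange delivered -> goal reached
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for action, nxt, step in successors(state):
            heapq.heappush(frontier, (cost + step, counter, nxt, path + [action]))
            counter += 1

print(plan())  # cost 7: go H, open door, go K, get orange, go H, go LR, give

This finds the open-door plan, because returning through the open door
costs 1 step while the dining-room detour costs 2.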

Obviously, a brute force approach would "work" here (i.e. doing a
breadth-first search of all possible actions until the goal of the
Master having the orange is satisfied), but it's both inelegant and
probably computationally infeasible. A blackboard approach makes some
sense to me, but I don't know enough about that kind of architecture
to say whether it is applicable here.

I assume this is exactly the kind of thing AI folks were dealing
with 10 or 15 years ago, so my guess is there are solutions out there.
Please let me know. Thanks.
--
Steven
gr...@xcf.berkeley.edu
"Oh oh! No more buttered scones for me, Mater,
I'm off to play the grand piano!"

Kutluhan Erol

Mar 23, 1994

Although AI planning systems worry about finding "good" plans,
finding optimal plans is usually deemed too difficult.

Optimal plans can be found by using an A*-like search algorithm; it
does not really matter whether the search is in state space or in
plan space, as long as a complete strategy is used. However, in most
cases the available admissible heuristics might degenerate to
breadth-first search, expanding a huge number of nodes. AI planning
systems usually prefer to expand nodes close to a solution, not
necessarily to the optimal solution.
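
For instance (a sketch of mine, not from any particular system), over
a state (room, door_open, has_orange, delivered) for Steven's butler
domain:

def h_zero(state):
    return 0   # admissible, but A* degenerates to breadth-first search

def h_remaining(state):
    # Lower bound: one step for each task still outstanding.
    room, door_open, has_orange, delivered = state
    if delivered:
        return 0
    return 1 + (0 if has_orange else 1)   # admissible, prunes almost nothing

Both heuristics never overestimate, so A* stays optimal; neither comes
close to the pruning needed to make optimal planning cheap in general.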

Existing AI planning systems usually hope that the interactions among
subgoals will be limited, and they look for quick "fixes" whenever
harmful interactions are detected, as with the "critics" used in
hierarchical task network (HTN) planning. HTN planners try to find
relatively short plans by using goal phantomization and task merging,
but they do not try to find optimal plans. As Steven Grady points out
in his posting, optimal planning requires dealing not only with
harmful interactions, but also with positive interactions -- or
rather "enabling-condition interactions," as they are called in the
literature. Gupta and Nau have a nice paper that discusses enabling
conditions in detail.

In a complexity analysis of STRIPS-style planning, Nau, Subrahmanian
and I examined the complexity of finding plans of length at most k,
where k is an integer encoded in binary and presented to the planner
as part of its input. Except for the case where operators are
restricted to have empty delete-lists and no negative preconditions
(and thus no harmful interactions between operators at all), deciding
whether there exists a plan of length at most k was no harder than
deciding whether any plan exists. Note, however, that even when two
problems are in the same complexity class, one might still take
considerably longer to solve than the other.


Here are some papers that discuss these issues in detail:

Q. Yang, D. S. Nau, and J. Hendler. Merging separately generated
plans with restricted interactions. Computational Intelligence,
8(2):648-676, February 1992.

Naresh Gupta and Dana S. Nau. On the complexity of blocks-world
planning. Artificial Intelligence, 56(2-3):223-254, August 1992.

K. Erol, D. Nau, and V. S. Subrahmanian. Complexity, decidability and
undecidability results for domain-independent planning. Technical
Report CS-TR-2797, UMIACS-TR-91-154, SRC-TR-91-96, Computer Science
Department and Institute for Systems Research, University of
Maryland, College Park, MD, 1991.

Kutluhan Erol, Dana S. Nau, and V. S. Subrahmanian. On the complexity
of domain-independent planning. In Proc. AAAI-92, pages 381-387,
1992.


Kutluhan Erol.

Ben Bongalon

Mar 24, 1994

One way to solve the problem while minimizing computational complexity is to
intersperse planning and plan execution. In your example, instead of generating
a plan that solves the problem completely, you can first create a plan to get
the orange, execute that plan, create another plan to bring the orange to the
master, and execute the second plan. Another benefit of this approach is that
it takes advantage of side-effects caused by previous plans without explicitly
specifying those side-effects to the planner.

Going back to your example, if the plan generated to get the orange is
LR->H->openKitchenDoor->K->pickupOrange, executing this plan would yield a
state where B (the butler) is in the kitchen holding the orange and the
kitchen door is open. Now, when you generate the plan to bring the orange to
the master, you take advantage of the fact that the kitchen door is already
open and generate the plan K->H->LR->giveOrangeToMaster, then execute that
plan.

Notice that the planner didn't have to explicitly import from the 1st plan the
fact that the kitchen door is open. This knowledge was implicitly derived from
loading the world state after the 1st plan was executed.
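
The loop itself is tiny. A rough sketch (my own Python; plan and
execute stand for whatever planner and world simulator you have):

def achieve(goals, world, plan, execute):
    # Each subgoal is planned from the CURRENT world state, so side
    # effects of earlier plans (e.g. the open kitchen door) carry over.
    for goal in goals:            # e.g. [holding_orange, master_has_orange]
        for step in plan(world, goal):    # plan only this subgoal...
            world = execute(world, step)  # ...but act on the real state
    return world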


As a starting point, you may want to look into Peter Norvig's book,
"Paradigms of AI Programming," for examples of how to implement a
planner. See the section on GPS.


regards,

Ben Bongalon

Steven Grady

Mar 24, 1994

bong...@td2cad.intel.com (Ben Bongalon) writes:
>Going back to your example, if the plan generated to get the orange is
>LR->H->openKitchenDoor->K->pickupOrange, executing this plan would yield a
>state where B (the butler) is in the kitchen holding the orange and the
>kitchen door is open. Now, when you generate the plan to bring the orange to
>the master, you take advantage of the fact that the kitchen door is already
>open and generate the plan K->H->LR->giveOrangeToMaster, then execute that
>plan.

If the plan you specified above were generated, this would be fine.
The problem is that by considering the goal of getting the orange
in isolation, an equally likely initial plan would be:
LR->H->DR->K->pickupOrange. Unfortunately, although this is
an optimal plan for the subgoal of getting the orange (as is the
plan you described above), it is _not_ part of the optimal plan of
bringing the orange back to the master. This is exactly the problem
I wish to solve -- by breaking the problem down into subgoals, the
relationship between the steps of the goals is lost.

I spent a few hours in a computer bookstore or two today, and bought
Winston's book on AI, so I have a slightly better understanding of the
state of the art. (They also had MIT's collection of Planning papers,
and if I continue to pursue this problem, I'll probably pick that up...)

As I suspected, the problem of interdependent subgoals in planning has
been recognized for quite a while (in particular, the Sussman anomaly
seems relevant), but I'm surprised that relatively little progress has
been made (and what progress there is seems, offhand, to require
painfully convoluted implementations, but perhaps I just had naive
expectations...). But I'll keep thinking about this problem.
--
Steven
"The great thing about Ken is that he's almost totally stupid."

Erik Max Francis

Mar 24, 1994

bong...@td2cad.intel.com (Ben Bongalon) writes:

> One way to solve the problem while minimizing computational complexity is to
> intersperse planning and plan execution. In your example, instead of generating
> a plan that solves the problem completely, you can first create a plan to get
> the orange, execute that plan, create another plan to bring the orange to the
> master, and execute the second plan.

This makes sense anyway, since interactive fiction is dynamic:
another character or actor is likely to interfere with the original
plan, intentionally or unintentionally.


Erik Max Francis, &tSftDotIotE ...!uuwest!alcyone!max m...@alcyone.darkside.com
USMail: 1070 Oakmont Dr. #1 San Jose, CA 95117 ICBM: 37 20 N 121 53 W __
AGCTACTGTACGTACGTTTGCACGTATGCTGTGCAXTGCATACTGACATCGTGACTGATCTGCATGACTTGCA / \
"Omnia quia sunt, lumina sunt." (All things that are, are lights.) \__/

Erik Max Francis

Mar 27, 1994

gr...@sting.Berkeley.EDU (Steven Grady) writes:

> This is exactly the problem
> I wish to solve -- by breaking the problem down into subgoals, the
> relationship between the steps of the goals is lost.

Someone else a little while ago was talking about something that
might be of interest to you. It involved an "action stack," where the
desired actions were pushed onto the stack, and popped off as they
were being executed. More general actions like GET TO THE KITCHEN
would be supplanted by pushing the sequence of moves onto another
stack. When popping these moves off and executing them, if there was
a problem, you'd alter the sequence by pushing further actions onto
the action stack. For instance, if you bump into a door, you push
OPEN DOOR onto the action stack. If you don't have the key to that
door, then you push FIND KEY onto the action stack. If the key is
hidden, you push SEARCH onto the action stack, and so on.
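
A minimal sketch of the idea (my Python; the action interface with
unmet_precondition/fix/execute is invented, not from the original
post):

def run(stack, world):
    while stack:
        action = stack.pop()
        missing = action.unmet_precondition(world)
        if missing:
            stack.append(action)          # retry the blocked action later
            stack.append(missing.fix())   # e.g. OPEN DOOR, FIND KEY, SEARCH
        else:
            world = action.execute(world)
    return world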

This is a general overview of the concept, naturally, but the basic
idea is still there. If the original poster would follow up and give
more details (or correct any misperceptions that I produced) . . .

Ben Bongalon

Mar 28, 1994

gr...@sting.Berkeley.EDU (Steven Grady) writes:

>My hope is to find an algorithm which will allow the NPC to have
>information about later steps available while planning its first step.
>
>And yes I know that always achieving an optimal plan is "hard" (probably
>NP-hard), but I'm hoping that I don't have to be quite as suboptimal
>as my above problem suggests.


Try non-linear planning.


good luck,
Ben Bongalon

Wes Modes

Mar 29, 1994

gr...@sting.Berkeley.EDU (Steven Grady) writes:

>My hope is to find an algorithm which will allow the NPC to have
>information about later steps available while planning its first step.

>And yes I know that always achieving an optimal plan is "hard" (probably
>NP-hard), but I'm hoping that I don't have to be quite as suboptimal
>as my above problem suggests.

I've been lurking on this thread and wondering about the AI
concepts that were being bandied about. For IF NPCs, you
do not have to have true intelligence, only the
_appearance_ of intelligence.

For instance, rather than having the NPC compute the
optimum or near-optimum path to a goal, why can't the
programmer provide the NPC with scripts that map out
reasonable (even optimum) behavior? In most IF, the
domains are so limited that it is possible to work out a
large number of the possible goals and sub-goals ahead of
time.
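
For example, even a hand-authored route table buys optimal-looking
behavior at lookup cost (a sketch; the table contents are mine, for
the butler example from earlier in the thread):

ROUTES = {                            # (from, to) -> scripted action list
    ('LR', 'K'): ['go H', 'open door', 'go K'],
    ('K', 'LR'): ['go H', 'go LR'],   # the door is already open going back
}

def fetch_item(npc_room, item_room, routes=ROUTES):
    return (routes[(npc_room, item_room)] + ['get orange']
            + routes[(item_room, npc_room)] + ['give orange'])

print(fetch_item('LR', 'K'))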

I am interested in talking about how to code this knowledge
in IF. For instance:

What properties would an NPC have that would influence
his/her behavior? (food frequency, drink frequency, etc.)

What would be the best way for a game to code an NPC's
goals and subgoals? (lists of actions, tables that tell
the optimum way to get from here to there, etc.)

How would the game store an NPC's multiple goals?
(stacks or lists, etc.)
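
One possible encoding, just to make the questions concrete (every
field name and value here is invented):

npc = {
    'hunger_rate': 0.1,             # drives that influence behavior
    'thirst_rate': 0.3,
    'goals': ['serve master'],      # stack: most urgent goal on top
}

npc['goals'].append('fetch orange')    # a new subgoal takes priority...
current_goal = npc['goals'][-1]        # ...and is pursued first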


Wes Modes
mo...@mport.com
Santa Cruz, California

Fredrik Ekman

Mar 30, 1994

In article <CnFq7...@admin.mport.com> mo...@admin.mport.com (Wes Modes) writes:

> For IF NPCs, you
> do not have to have true intelligence, only the
> _appearance_ of intelligence.

What is the difference between a really intelligent NPC and one that
only appears intelligent? And what is the optimal planner but an
appearance of intelligence? Remember, an intelligent human seldom
(if ever) plans optimally.

Given the state of AI today, it is simply not possible to make
anything more than a mockup of intelligence, and it will always be
very easy to see through the illusion. Thus, every step towards
making NPCs more intelligent should be investigated.

I'm not saying that optimal planning is the way to go, but settling
for less just because we don't need real intelligence is NOT an
option.

/F

Steven McQuinn

Apr 7, 1994

In article <D91FE.94M...@krokis.pt.hk-r.se>
d9...@krokis.pt.hk-r.se (Fredrik Ekman) writes:

> What is the difference between a really intelligent NPC and one that
> only appears intelligent? And what is the optimal planner but an
> appearance of intelligence? Remember, an intelligent human seldom
> (if ever) plans optimally.
>
> Given the state of AI today, it is simply not possible to make
> anything more than a mockup of intelligence, and it will always be
> very easy to see through the illusion. Thus, every step towards
> making NPCs more intelligent should be investigated.
>
> I'm not saying that optimal planning is the way to go, but settling
> for less just because we don't need real intelligence is NOT an
> option.


In fiction, an illusion may well be "less" than it pretends to be, but
it is also quite _different_ from what it pretends to be. Take, for
example, the Eliza program: it certainly provides less than
intelligent processing of input, but it is also very different from
being a therapeutic listener.

In essence, Eliza is a synthetic actor of limited range, not a
synthetic therapist of limited ability. The program works for people
who play along with the premise, becoming actors themselves in a joint
improvisation. (It also works for narcissists who regard the rest of
humanity as merely an audience.)

I think AI would have more to offer interactive fiction if it
concentrated on developing intelligent synthetic actors rather than
intelligent characters. I'd like to see AI focus on the ingenuity of
effective illusion; I am less interested in, and less optimistic
about, AI creating an authentic alternative reality.

Traditional fiction, most would agree, creates a representation of
life, not a laboratory for artificial life. Interactive fiction may
seem to blur this distinction. Whereas the reader or viewer accepts the
author's representation in the spirit of suspended disbelief, the IF
participant wants to plunge into an interactive world and help shape
its destiny. Still, this is the plunge of playfulness into the pool of
Let's Pretend.

My hunch: For the near future, the best route to that playful,
interactive world will be carefully crafted illusion aided and abetted
by AI.

Steven McQuinn
