It's an interesting look at "simulationist" interactive fiction (as we
throw the term around here) -- how the designers of Deus Ex tried to
use that concept, how they succeeded and failed, what they've
learned, and what they're trying for Deus Ex 2.
We bat the ideas around a lot, but this article has a lot of detail
from an actual published game, which is why I bother to mention it.
Fascinating article. Given the scope of a textually-described world, I
wonder how much the advanced world model of TADS 3 will evolve to
incorporate more of the "simulationist" approach? This would certainly
produce a schism between the "canned" response crowd and the deeper
simulationist crowd -- certainly you wouldn't be able to play these games on a
Palm Pilot -- and the more cycles you can squeeze in the better.
But the biggest difference to me between the two approaches is that sims are
driven either turn-based or in realtime, while IF is turn-based with a
very narrow response-time window (hence the accusations that WorldClass was
slow). Graphical games, such as Call To Power II, are turn-based, and as the
game grows in complexity the turn stretches out to incorporate more game
object activity, some of which can take several seconds, even
minutes, per turn. To the IF community this kind of response time is
unacceptable.
On the other hand, limiting behaviors to locales means passing messages to
each object within the locale to see if it wants to "react". Stimulus /
response then becomes more complicated as each individual object determines
its own response and the number of objects increases. Some of this could be
reduced, perhaps, by a "lemming" effect, in which a class of objects
responds similarly to a given stimulus and this reaction is cached by the
first individual object to make its response determination - such as a flock
of birds reacting to the report of a gun - but then how do we explain the
odd straggler that lags behind, only to be caught, cooked, and eaten?
Plenty of food for thought here.
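The class-level caching idea above (the "lemming effect") could be sketched like this in Python; every class and method name here is invented purely for illustration:

```python
# Hypothetical sketch of the "lemming effect": the first object of a
# class to work out its response to a stimulus caches it for the whole
# class, so the rest of the flock reuses the answer.

class Bird:
    _response_cache = {}  # per-class cache: stimulus -> response

    def react(self, stimulus):
        cache = type(self)._response_cache
        if stimulus not in cache:
            # Only the first bird pays for the full decision.
            cache[stimulus] = self._decide(stimulus)
        return cache[stimulus]

    def _decide(self, stimulus):
        return "flee" if stimulus == "gunshot" else "ignore"

class Straggler(Bird):
    _response_cache = {}  # stragglers decide for themselves

    def _decide(self, stimulus):
        return "lag behind"  # ...and get caught, cooked, and eaten

flock = [Bird() for _ in range(100)] + [Straggler()]
responses = [bird.react("gunshot") for bird in flock]
```

The odd straggler is just a subclass with its own cache and its own decision, which sidesteps the "how do we explain the straggler" problem by letting it opt out of the flock's cached response.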
So we need VMs and world models that are written to be efficient.
>On the other hand, limiting behaviors to locales means passing messages to
>each object within the locale to see if it wants to "react". Stimulus /
>response then becomes more complicated as each individual object determines
>its own response and the number of objects increases.
The mud system I worked on in '93 addressed this problem with
a generic "subscription list", which is faster if most objects
don't react to any given stimulus (on average).
Every time an object moves, it registers itself with
its parent container for all the stimuli it wants
to react to. Object movements are actually very
rare in MUDs (except player movement) and in IF--typically
at most one per turn, except the odd TAKE ALL.
Note that if you move an object which contains other objects,
those other objects don't reregister, so it becomes the job
of the container to register for all stimuli that its children
are interested in.
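The registration scheme described above could be sketched like so in Python; all the names (Container, Thing, receive, and so on) are invented stand-ins, not anything from the mud in question:

```python
# Minimal sketch of a subscription list: an object registers with its
# parent container for the stimuli it cares about, so a stimulus only
# has to visit subscribers rather than every object present.

class Container:
    def __init__(self):
        self.subscribers = {}  # stimulus name -> list of interested objects

    def subscribe(self, stimulus, obj):
        self.subscribers.setdefault(stimulus, []).append(obj)

    def unsubscribe(self, obj):
        for subs in self.subscribers.values():
            if obj in subs:
                subs.remove(obj)

    def stimulate(self, stimulus, *info):
        # Only subscribers hear about it; on average most objects don't.
        for obj in list(self.subscribers.get(stimulus, ())):
            obj.receive(stimulus, *info)

class Thing:
    def __init__(self, name, interests=()):
        self.name = name
        self.interests = list(interests)
        self.location = None
        self.noticed = []

    def move_to(self, container):
        # Movement is rare, so re-registering on each move is cheap.
        if self.location:
            self.location.unsubscribe(self)
        self.location = container
        for stimulus in self.interests:
            container.subscribe(stimulus, self)

    def receive(self, stimulus, *info):
        self.noticed.append(stimulus)

room = Container()
paper = Thing("paper", interests=["flaming-object-arrives", "liquid-object-arrives"])
rock = Thing("rock")  # subscribes to nothing
paper.move_to(room)
rock.move_to(room)
room.stimulate("flaming-object-arrives")
```

The container-registers-for-its-children refinement from the post would add one more step: on `move_to`, a container would also subscribe to its new parent for the union of its children's interests.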
More food for thought: the puzzles in my game "The Weapon" are
almost all based around NPC sensing, and this was directly influenced
by my experience with sense-passing in "Thief: The Dark Project"
(which is referenced several times in the aforequoted presentation
for its simulationist NPC sensing). However, I had to hack it all
since Inform has no NPC sense model. I'm hoping that with
something like Tads3's sense-passing model these sorts of behaviors
will be more consistent and predictable and yet still be usable
for puzzles like that. If so, it points out the advantage of a
more complex world model--more space in which to create puzzles
which fit naturally with the world (by 'world' I mean the dynamic
interactive world the player perceives, rather than that descibed
by the static text)--an advantage Harvey describes, though you
might wonder whether it's applicable to IF.
True, many objects would be relatively uninvolved in the environment in
which they exist. Most of the time "stimulus" is below the "response"
threshold and so we have continuity. For example, sunlight falling on a rock
might warm the rock, which might be apparent to the NPC's senses, but a
breeze blowing over the rock might not register any response at all. I know
it's a poor example, but there are physical thresholds to stimulus/response
that would make this registration approach very practical.
> Every time an object moves, it registers itself with
> its parent container for all the stimuli it wants
> to react to. Object movements are actually very
> rare in MUDs (except player movement) and in IF--typically
> at most one per turn, except the odd TAKE ALL.
The TADS 3 model is similar to this, I believe. This sounds like a very good
approach.
> Note that if you move an object which contains other objects,
> those other objects don't reregister, so it becomes the job
> of the container to register for all stimuli that its children
> are interested in.
This is certainly a model worth looking into.
> More food for thought: the puzzles in my game "The Weapon" are
> almost all based around NPC sensing, and this was directly influenced
> by my experience with sense-passing in "Thief: The Dark Project"
> (which is referenced several times in the aforequoted presentation
> for its simulationist NPC sensing). However, I had to hack it all
> since Inform's has no NPC sense model. I'm hoping that with
> something like Tads3's sense-passing model these sorts of behaviors
> will be more consistent and predictable and yet still be usable
> for puzzles like that.
We're getting closer to finding out.
>If so, it points out the advantage of a
> more complex world model--more space in which to create puzzles
> which fit naturally with the world (by 'world' I mean the dynamic
> interactive world the player perceives, rather than that descibed
> by the static text)--an advantage Harvey describes, though you
> might wonder whether it's applicable to IF.
Yes, since the "dynamic interactive world" is conveyed to the player's
imagination through the static text -- static in the sense that gradations
of pixels giving the illusion of light, color, shadow, form, and substance
are more easily updatable than textual components whose contexts are not
graphical images, but sentences and paragraphs.
It's far easier to explicitly outline the specific responses the
author wants to bring forth from their planned "interactive events" to
further the story, and that's ok if they're comfortable with canned
content and (multi)linear stories. This "scripting" technique is
convenient for non-professional programmers who really just want to
tell their tale in a fun, interactive way.
I know a lot of people might get mad at reading what I just said, so
let me just say that I like to play text-based adventures (IF). I play
my fair share, but I don't see them as "interactive fiction"; they're
just canned linear text adventures, sometimes with puzzles and
dungeons, and others with more npc interaction. They're fun and they
have nice ideas, but I don't want to mimic what these games do.
I'd like to see a text-based world where the story was dynamic. There
might be an overlying plot, some urgent crisis going on, but where I
can pretty much do what I want. I might choose to do something with
the Crisis, but I might just choose to do some kind of side quest
instead, and by that time one major Crisis arc has been completed by
someone else, or maybe the bad guy gets to keep the diamonds because of
my inaction. The story plot is a chain of dynamically linked nodes,
where the nodes are similar to scripted sequences but are able to be
pieced together in a number of ways. I complete node D, and the plot
is rehashed to reflect this. I don't know how to do this yet; I
haven't looked into it, and it may require more resources than what
Inform or TADS can provide to do it competently, but I still want to try.
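One possible shape for those dynamically linked plot nodes, as a hedged Python sketch (PlotNode, world_turn, and all the node names are made up for illustration, not a real design):

```python
# Sketch of "plot as dynamically linked nodes": each node lists the
# nodes it unlocks, and the world advances Crisis nodes the player
# ignores on its own, so inaction has consequences.

class PlotNode:
    def __init__(self, name, unlocks=()):
        self.name, self.unlocks = name, list(unlocks)
        self.done = False
        self.done_by = None

    def complete(self, actor):
        self.done, self.done_by = True, actor

class Plot:
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def available(self):
        # A node is available once nothing that unlocks it remains undone.
        blocked = set()
        for n in self.nodes.values():
            if not n.done:
                blocked.update(n.unlocks)
        return [n for n in self.nodes.values()
                if not n.done and n.name not in blocked]

    def world_turn(self):
        # Each turn, NPCs resolve one available Crisis node the
        # player has left untouched.
        for n in self.available():
            if n.name.startswith("crisis"):
                n.complete("npc")
                return n

plot = Plot([
    PlotNode("crisis-heist", unlocks=["crisis-getaway"]),
    PlotNode("crisis-getaway"),
    PlotNode("side-quest"),
])
plot.nodes["side-quest"].complete("player")  # player wanders off...
plot.world_turn()                            # ...so an NPC advances the Crisis
```

"Rehashing" the plot then falls out of `available()`: completing node D simply changes which nodes the graph offers next, to the player or to the NPCs.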
I want to see the text generated. Just like the parser takes in a
natural language sentence from the player and tokenizes it to derive
meaning, I want the npc to be able to take a few tokens, choose the
delivery form and generate a natural language response. A lot of work,
sure. But I'd like to see the technology for this, so that I have a
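At its very crudest, the token-to-sentence idea could look like this template-based sketch (all templates and names are invented, and real natural-language generation is far harder than picking from canned forms):

```python
# Crude sketch: the NPC hands a few tokens to a generator, which
# chooses a delivery form and renders a sentence.

import random

TEMPLATES = {
    ("warn", "fire"): [
        "{speaker} shouts, 'Fire! Get out!'",
        "'Smoke... there's a fire,' {speaker} says, backing away.",
    ],
    ("greet",): [
        "{speaker} nods at you.",
    ],
}

def generate(speaker, *tokens, rng=random):
    forms = TEMPLATES.get(tuple(tokens))
    if not forms:
        return f"{speaker} says nothing."
    # "Choosing the delivery form" is just a random pick here.
    return rng.choice(forms).format(speaker=speaker)

line = generate("The guard", "warn", "fire")
```

This is template filling, not generation from meaning; it only shows where the tokens would enter the pipeline.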
These are the hardest parts of what I'd like to see. They require
competent programmers to come up with viable frameworks and such. I
realize most of the readers of this group either like the current form
of the text games being written to date and don't want them to change, or
do not feel they have the skills to implement advances on their own.
If anyone has been thinking of doing these things and wants a hand,
write to me and say so. I don't use TADS that often and I'm not
involved in AI programming at all, but I'd like to see these things
being worked on and can help somewhat. If anyone has come across any
neat algorithms/ideas that would be of help in these areas, can you point
them out to me?
In the meantime, I'm gonna concentrate on experimenting with
stimuli/response based object interaction, and npc awareness. And a
few other things.
Sean, how does a "subscription list" work? I've just downloaded "The
Weapon".
Kevin, if you're reading this, is realtime interaction going to be
easy to implement in TADS-3? Do you have any idea what kind of load
the VM can take if realtime processing was occurring, with, say, npc
needs-based goal-oriented decision-making and pathfinding taking place
and object stimuli/response checking, and whatever else? Would we be
talking 10s of seconds per click on a Pentium III?
Anyone else have a game based on npc sensing?
Do you have any decent links to npc awareness (sensing) and/or
stimuli/response based interaction articles/algorithms? Would you like
to outline a rough sketch of what idealized "aware" objects would be
like and how they interact with each other? I like the idea that
Kevin discussed about having a threshold for the responses.
ps DX2 and Thief3 sound like they're going to be very good. Probably
18 months away though, which is always the problem when reading
technology articles. :)
> I want to see the text generated. Just like the parser takes in a
> natural language sentence from the player and tokenizes it to derive
> meaning, I want the npc to be able to take a few tokens, choose the
> delivery form and generate a natural language response. A lot of work,
> sure. But I'd like to see the technology for this, so that I have a
Of course, I wish you the best of luck with this, but I think
I will be old and wrinkled by the time this is achieved. I'm
a student of Computer Science, majoring in Language Technology
and NLP (Natural Language Processing) and as a supposed expert
on the subject, I don't think these goals stand a chance of being
achieved anytime soon.
Not that I'd want to stop you trying of course, scientists are
usually wrong anyway (I'm serious, really). The main problem in
generating language that actually means something is just that:
meaning. Your idea of 'choosing a few tokens' and 'generating
a natural language response' is feasible in some ways, but the
problem is that's only the start of it... (apart from the
question -how- this supposed NPC is going to choose)
Curious about your ideas though.
"The Weapon" is all hacks. I wouldn't really want to try doing
subscription lists in Inform; building real data structures is
a little too complicated.
The system we used on the mud in '93 wasn't ideal, but basically
you could say to any object "subscribe to stimulus X", something like
   someobj.subscribe($someStimulus, foo);
this would cause someobj to add 'foo' to a list of objects
associated with '$someStimulus'. Then later someone would call
   someobj.stimulate($someStimulus, ...other info...)
and someobj would iterate through its list of things that
receive it, and call each of them in turn.
We used this system for the following behaviors that I can recall
(we'd only implemented a small part of our world model when we stopped):
object arrives
object leaves
object begins flying
object stops flying
flaming object arrives
liquid object arrives
Objects that were flammable would watch for a flaming object
to arrive. Objects that needed to know about liquid (such
as flaming objects, liquid objects, or things like paper that
would get soaked) would watch for liquid objects. We built a
pressure-plate system that would watch how many items were in
a room, subtracting out the ones that were flying, using the
first four--indeed, we allowed any object to subscribe to these
lists without being part of the contents, but normally you
subscribed to your container's lists.
I can easily imagine expanding this to have objects request
a threshold--"it must be THIS hot before you should tell
me about it". The containers could track these thresholds
and only call if the threshold is exceeded.
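That threshold idea could bolt onto the subscription lists along these lines (a Python sketch; Room, Watcher, and the numbers are all invented):

```python
# Threshold-filtered stimuli: the container tracks each subscriber's
# threshold and only notifies when a stimulus exceeds it, so sub-
# threshold stimuli (a breeze over a rock) cost nothing.

class Room:
    def __init__(self):
        self.watchers = {}  # stimulus -> [(threshold, obj), ...]

    def subscribe(self, stimulus, obj, threshold=0):
        self.watchers.setdefault(stimulus, []).append((threshold, obj))

    def stimulate(self, stimulus, intensity):
        for threshold, obj in self.watchers.get(stimulus, []):
            if intensity >= threshold:
                obj.notify(stimulus, intensity)

class Watcher:
    def __init__(self):
        self.felt = []

    def notify(self, stimulus, intensity):
        self.felt.append((stimulus, intensity))

room = Room()
paper, rock = Watcher(), Watcher()
room.subscribe("heat", paper, threshold=50)   # paper chars easily
room.subscribe("heat", rock, threshold=500)   # rock barely cares
room.stimulate("heat", 100)                   # warm, but not rock-warming
```

Keeping the lists sorted by threshold would let the container stop early once the intensity falls below the next threshold, if that ever mattered for speed.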
The commercial game Thief actually had a little system called
"Act/React" which was organized around the same issue: allowing
objects to affect other objects, instead of being player-centric.
It was used for things like letting water extinguish torches, and
it used a generic stimulus-response model, although it was a 3D
game so there were different models of spatiality. (Actually I'm
not sure it was ever used for anything other than when two things
came in contact via collision.)
Back to the mud, we actually used a separate system for passing sense
data--sense data doesn't tend to have "physics-y" cause-and-effect so
much as NPC reactions, and that system allowed propagating sense data
from room to room, which was rather more complex. The main important
component was to pack up the information describing some event
which had occurred (the player put the ball in the box) not just
as a string, but as a little object which knew how to print the
string from the player's pov, possibly from others' povs, and
also could provide the raw info in an easily digestible form to
the NPC. They wouldn't describe what action the player had
*attempted*, the way before/after/react_after do in Inform;
instead they would describe each physical thing that occurred
that was visible. If attempting to 'get X' causes a collapse
that drops a boulder into the room, an event, say,
"moved(boulder,room,player,TRUE)" , would be generated:
moved ! an object moved to a new location
boulder ! the object that moved
room ! the place where it ended up
player ! the actor who triggered it
TRUE ! TRUE if it was unintentional (usually false)
and then you could write some code along the lines of
if (agent == player && unintentional)
"^", (The) self, " says, ~Way to move ", (adjthatorthose) obj,
", you loser.~";
except that this comment would actually be issued as ANOTHER little
packet of sense data, and the player would only get to hear it if
the sense data managed to propagate to the player via sound propagation.
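The "little object" event packet described above might look roughly like this in Python (MovedEvent, SpeechEvent, and perceive are all invented stand-ins for whatever the mud actually used):

```python
# Sketch of sense-data packets: an event carries raw, NPC-digestible
# fields, knows how to print itself from different points of view, and
# reactions to it are themselves new packets rather than raw strings.

class MovedEvent:
    def __init__(self, obj, dest, actor, unintentional=False):
        self.obj, self.dest = obj, dest
        self.actor, self.unintentional = actor, unintentional

    def describe(self, pov):
        # Render differently for the actor vs. an onlooker.
        if pov == self.actor:
            return f"You watch the {self.obj} crash into the {self.dest}."
        return f"A {self.obj} crashes into the {self.dest}."

class SpeechEvent:
    def __init__(self, speaker, words):
        self.speaker, self.words = speaker, words

    def describe(self, pov):
        if pov is self.speaker:
            return f"You say, '{self.words}'"
        return f"{self.speaker.name} says, '{self.words}'"

class NPC:
    def __init__(self, name):
        self.name = name
        self.heard = []

    def perceive(self, event):
        self.heard.append(event.describe(self))
        if isinstance(event, MovedEvent) and event.unintentional:
            # The comment goes out as ANOTHER packet, which would only
            # reach the player if the sound propagated that far.
            return SpeechEvent(self, f"Way to move that {event.obj}, you loser.")

guard = NPC("The guard")
boulder = MovedEvent("boulder", "room", actor="player", unintentional=True)
reply = guard.perceive(boulder)
```

The point is that the same event renders as "You watch..." for the actor and "A boulder crashes..." for everyone else, while the NPC reads the raw fields (obj, actor, unintentional) rather than parsing text.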
I think a major issue is that the world model is very simple and depends
heavily on textual description and narrative that only the player readily
understands. The data structure that an NPC relies on for their knowledge of
the world is quite limited. Narrative in most adventures is written as blocks
of text. If the world model was rich enough for a narrator routine to
generate narrative on the fly it would be a different matter. But producing
interesting narrative from data is quite a challenge. Mind you, if you crack
it you will have a major start on getting NPCs to come out with reasonable
dialogue.
As for NPCs behaving in response to stimuli, NPCs need to have goals and
common sense. Perhaps a simple scenario is to escape from some danger.
Survival overrides any other more subtle goals that are harder to program.
The NPC still needs to know what sort of things are dangerous and what
actions or inactions are dangerous though. Unfortunately people have vast
amounts of knowledge that we take for granted. Giving an NPC the bits that
fit a simplified world model is a massive task.
For example suppose you are in a room and the game tells you that you smell
smoke. Then if you look, you see smoke coming under a door. How do you
represent smoke in a game and make an NPC wary of it? Or a dog growling? Or
all sorts of other stimuli that you might want to put into an adventure?
Perhaps some dangerous stimulus object that gives the appropriate text
message to the player and tells NPCs I'm dangerous. Or I'm a warning of a
situation that is about to become more dangerous.
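The dangerous-stimulus suggestion could be sketched like so (the danger values, threshold, and all the names are invented; a real game would need far richer common-sense data):

```python
# A stimulus object prints text for the player and separately tells
# NPCs, via a machine-readable field, how dangerous it is. Survival
# overrides subtler goals once danger crosses a threshold.

class Stimulus:
    def __init__(self, player_text, danger=0):
        self.player_text = player_text
        self.danger = danger  # side channel for NPCs, not shown to player

SMOKE_UNDER_DOOR = Stimulus("You smell smoke; wisps curl under the door.",
                            danger=7)
GROWLING_DOG = Stimulus("A dog growls somewhere nearby.", danger=4)

class NPC:
    DANGER_THRESHOLD = 5

    def __init__(self):
        self.goal = "chat with player"

    def sense(self, stimulus):
        # Survival outranks every other goal past the threshold.
        if stimulus.danger >= self.DANGER_THRESHOLD:
            self.goal = "escape"
        return self.goal

npc = NPC()
npc.sense(GROWLING_DOG)      # wary, but keeps chatting
npc.sense(SMOKE_UNDER_DOOR)  # drops everything and flees
```

A "warning of a situation about to become more dangerous" would just be a stimulus whose danger value rises over time, or one that raises the danger of some other stimulus.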
You mentioned diamonds above. Would the villain refuse to give them to you
because you programmed in don't give the diamonds, or because he knows they
are valuable and covets them? As opposed to a piece of coal which is a less
valuable form of carbon.
Tim Partridge. Any opinions expressed are mine only and not those of my employer