Modeling Personality

David Graves

Jan 4, 1991, 1:25:34 PM
A little while back I received e-mail from Charles Hughes, asking about a
posting I made to this notes group. With his permission, I am posting his
question and my reply so that we can all participate.

In reference to my posting on Artificial Personality, Charles asked: "How big
do you think the model has to be to sufficiently simulate interaction in a
limited world? (Let's make the limit the rules of a game, say, along the lines
of an Infocom adventure)".

In my opinion, this is an excellent question. There is no simple answer,
so we may develop some good debates around this one. (Perhaps "De Bates" at
CMU will even offer an opinion). :-)

It seems to me that the most critical aspect in simulating interpersonal
interaction in a limited world is to have a consistent model. If you choose
to represent some physical, mental, or emotional phenomenon in your computerized
world, you should "achieve closure" by being reasonably complete. If you can't
implement a consistent and complete (although simplified) model of that
particular phenomenon, then you should leave it out entirely. An incomplete
model serves only to frustrate the user when they can't perform activities
that seem reasonable in that context.

Let me give an example. Let's say you want to represent "fire" in your
computer model. You would need to implement semantic routines to handle
user-initiated commands like "light" and "extinguish". You would need to
alter the logic of your "look" routine (assuming that you decided to model
illumination and darkness). You would need to add a "flammable" attribute
to all objects in your world that could be burned, or, if you have a class
hierarchy, you could attach the "flammable" attribute to the reasonable
classes of objects. You need a "burning" attribute, to indicate when an
object is on fire. You probably would want to model reduction in mass
over time, since nothing burns forever. This implies the creation of a
routine to finally dispose of the burning object, when it is completely
consumed. It might be reasonable to create an "ashes" object when a burning
object is fully consumed.

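The fire model described above could be sketched as follows. This is only a
minimal illustration in Python; all of the names (WorldObject, WoodenObject,
Ashes, tick) are hypothetical, not from any real IF engine.

```python
# A minimal sketch of the "fire" model: a "flammable" attribute attached at
# the class level, a "burning" attribute per object, mass that decreases
# each turn, and a disposal routine that replaces a consumed object with ashes.

class WorldObject:
    flammable = False          # class-level default; subclasses override

    def __init__(self, name, mass):
        self.name = name
        self.mass = mass
        self.burning = False

    def light(self):
        if not self.flammable:
            return f"The {self.name} won't catch fire."
        self.burning = True
        return f"The {self.name} is now burning."

    def extinguish(self):
        if not self.burning:
            return f"The {self.name} isn't on fire."
        self.burning = False
        return f"You put out the {self.name}."

    def tick(self):
        """Called once per turn: burning objects lose mass, and a fully
        consumed object is replaced by an Ashes object."""
        if self.burning:
            self.mass -= 1
            if self.mass <= 0:
                return Ashes(self.name)   # disposal routine
        return self

class WoodenObject(WorldObject):
    flammable = True            # "flammable" attached to a class of objects

class Ashes(WorldObject):
    def __init__(self, source):
        super().__init__(f"ashes of the {source}", 0)
```

Note that "flammable" lives on the class, so a whole category of objects
burns without per-object bookkeeping, while "burning" and mass are
per-object state.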
Lots of work, eh? And this is the bare minimum! Your user is going to think of
things you didn't, so you can't just write a few hard-coded rules about
specific objects. You need general rules that can be applied in ways you
didn't think of. What if someone lights a rope tied at the edge of a
cliff? Your "burn-completion" routine had better know that the rope has
been "cut" and call the routine that moves the load at the other end of the
rope down to the bottom of the chasm. (Assuming, of course, that you decided
to model "rope", which is another huge can of worms).

Okay, back to the original question. How big should the model be to
sufficiently simulate interaction in a limited world? Perhaps now my point
is clearer: it must be sufficiently big to thoroughly implement the features
you have selected. (When Lincoln was asked "How long should a man's legs be?"
he replied, "Long enough to reach the ground".)

I can anticipate your next question. "Okay, Dave, I promise to implement
the features of my interpersonal interaction software in a thorough manner. But
what features should I implement? How do I begin to construct a model of
personality and interaction?" I can't tell you the details about the model
of personality which Tim Brengle and I developed, but I can tell you about other
people's models! In his "Trust and Betrayal" game, Chris Crawford used three
attributes to model the emotional state between the player and the simulated
characters: affinity, trust, and fear. Each actor had a record of how much
it loves/likes/dislikes/hates you, how much it trusts you, and how much it
feels vulnerable to you. Chris designed an icon-based language which allowed
limited interpersonal communication between actors. You could talk about
feelings or pass information. ("I like you. I trust you. I'll tell you
a weakness of my enemy Chucky, if you tell me about that creep Freddy").
Connecting the data model with the communication model was a rule base of
about 80 rules, like "If I don't trust this guy, then I will refuse to
exchange information with him" and "If he insults me, then I will like him
less; if I am not fearful of him and I hate him, then I will insult him back".
This game was well designed -- you can do the things that you would expect to
do, without running into the "dead ends" that show so frequently in poorly
designed games.
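The affinity/trust/fear data model and a couple of the rules quoted above
could be sketched like this. The thresholds and field names here are invented
for illustration; they are not Crawford's actual rule base.

```python
# Sketch of a three-attribute emotional model plus two rules in the style of
# the ~80-rule base described above. Values are assumed to range -10..+10.

from dataclasses import dataclass

@dataclass
class Feelings:
    affinity: int = 0   # love/like/dislike/hate
    trust: int = 0
    fear: int = 0       # how vulnerable this actor feels toward you

def will_share_info(f: Feelings) -> bool:
    # "If I don't trust this guy, then I will refuse to exchange
    #  information with him."
    return f.trust > 0

def react_to_insult(f: Feelings) -> str:
    # "If he insults me, then I will like him less; if I am not fearful of
    #  him and I hate him, then I will insult him back."
    f.affinity -= 2
    if f.fear <= 0 and f.affinity < 0:
        return "insult_back"
    return "ignore"
```

Each rule reads the shared emotional state and either gates a communication
act or updates the state, which is what connects the data model to the
icon-based communication model.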

In summary, the world is big, and you can only program for about 2,000 hours
a year, so choose your model carefully. Start as small as possible, because
you need to implement features thoroughly if you want the result to be a
true "system".

The Sanj-Machine aka Ice

Jan 6, 1991, 10:49:08 PM

Who has source code for Infocom-style interactive fiction?

There was some talk about this here. Adventure
Definition Language and one other system came up as development systems.

Also, is there a good, reputable and reliable book that explains how
to implement MUDs? If you can't create believable fake people, how about
allowing real ones to join you? Why not both?



"No one had the guts... until now!"
$anjay $ingh Fire & "Ice" ssingh@watserv1.[u]waterloo.{edu|cdn}/[ca]
ROBOTRON Hi-Score: 20 Million Points | A new level of (in)human throughput...
"The human race is inefficient and therefore must be destroyed."-Eugene Jarvis

David Graves

Jan 8, 1991, 2:39:11 PM
Since posting this basenote on models of personality, I've been asked to
provide references for more information on Chris Crawford's game, described
above. The best reference I can recommend is the game itself. It is called
"Trust and Betrayal". It runs on the Mac. Look for it at your local software
store.

(I have no vested interest in this game. I recommend it only as an example
of a software model of personality and interpersonal communication).
