Reasoning Agents


Jocelyn Paine

Mar 1, 1994, 9:28:46 AM
I've just been reading a technical report from Sussex University
entitled "Multiple Personality and Computational Models" by Margaret
Boden. Well, one obvious idea: why not build a game in which there's an
NPC which suffers from Multiple Personality Disorder, and where you have
to psychoanalyse and cure it? Engage in an interview; try to tease out,
by indirection, those traumatic events such as parental sexual abuse
which may have caused our NPC, twenty years ago, to split off part of
itself into a separate personality. (You can have the idea free, but if
you make a game based on the concept, I want part of the profit.)

But perhaps that's premature until we know how to build believable
single-personality NPCs. One line of work that I haven't said anything
about in my AI postings is that on motivations, AI, and the cognitive
effect of emotions. It originated (as far as I know) with Aaron Sloman,
Keith Oatley, and Philip Johnson-Laird, and there are now several
research groups working on it.

A small example. We always have several goals in mind, although some of
them may be "latent" much of the time. Physiology-driven goals such as
sex, food, and sleep; other goals concerning career prospects,
secure accommodation, and so on. (There are some AI-ers who believe that
we don't really represent goals at all, and that the notion is merely
one derived from our naive "folk-psychological" understanding of our own
minds. But I shan't take that stance here.) In normal situations, we're
free to focus on one goal, pursuing it to some depth without interruption.

But if danger appears, then things change! There's been one threat;
others may follow. We become anxious, and (as everyone knows) the
feeling of anxiety saps our concentration. We twitch from goal to goal,
and from this part of the environment to that, being now unable to keep
our mind on any one problem. This can be explained by saying that if
there's been one threat, others may follow. Hence it's sensible for any
rational agent to be alert to anything that indicates a new threat; to
anything that offers a way out from present and future threats; and to
any pending solutions that could be suspended (to make more time for
this attention-switching) or modified (for use in the new problems). So
the cognitive function of anxiety is to change the behaviour of our
current planning, making it more suitable for this new, more dangerous,
environment. The emotion and motivation people have tried to explain
other emotions in the same way.
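This shift in planning behaviour can be pictured with a toy agent whose deliberation depth collapses once a threat is perceived. Everything below (the class, the depth numbers, the percept strings) is my own illustration, not Sloman's actual architecture:

```python
class AnxiousAgent:
    """Toy sketch: a threat flips the agent from deep single-goal
    planning into shallow, frequent switching among all its goals."""

    def __init__(self, goals):
        self.goals = list(goals)
        self.anxious = False

    def perceive(self, percepts):
        # One threat has appeared; others may follow, so stay anxious.
        if "threat" in percepts:
            self.anxious = True

    def step(self, percepts):
        """Return (goal, deliberation_depth) pairs for this cycle."""
        self.perceive(percepts)
        if self.anxious:
            # Twitch from goal to goal: every goal gets a shallow look,
            # leaving time over to watch for new threats and exits.
            return [(g, 1) for g in self.goals]
        # Calm: concentrate on one goal, elaborating it deeply.
        return [(self.goals[0], 10)]
```

Before any threat, each cycle elaborates a single goal to full depth; after one "threat" percept, every subsequent cycle skims all goals shallowly, which is the attention-switching the paragraph describes.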

The report I was reading ties this up with the problem of multiple
personality; but it's also worth reading as an entry to the emotion and
motivation field (it gives a bibliography). You can obtain it by
anonymous ftp; please FTP outside English working hours, to reduce the
load on their machine. The
report is in pub/reports/csrp/csrp299.txt.Z, and is compressed plain
text. Here are a few paragraphs.

A human mind includes a number of motives, often competing for the
person's attention and for the use of their hands and time. Certain
general questions therefore arise, about how the limited mental
attention and bodily resources can be allocated between the various
motives, and how these motives can be scheduled relative to each other.
We must distinguish motives of differing urgency, insistence, and
importance: speaking of "strong" and "weak" motivation is not enough.
Compare, for instance, the urgency of a motive to buy bread just as the
bakery is about to shut, the insistence of a motive to locate a shop in
the neighbourhood selling one's favourite type of bread, and the
importance of a motive to eat some food every day.
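The bakery example's three distinctions can be kept apart simply by giving a motive three separate numeric dimensions instead of one "strength". The scores below are my own illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Motive:
    name: str
    urgency: float      # how soon it must be satisfied
    insistence: float   # how often it demands attention
    importance: float   # how much it matters overall

# The bakery examples from the passage, scored illustratively:
buy_bread = Motive("buy bread before the bakery shuts", 0.9, 0.4, 0.3)
find_shop = Motive("locate a shop selling one's favourite bread", 0.1, 0.8, 0.2)
eat_daily = Motive("eat some food every day", 0.3, 0.3, 0.9)

motives = [buy_bread, find_shop, eat_daily]
# A single strong/weak scale would force one ordering; three
# dimensions give three different ones:
most_urgent    = max(motives, key=lambda m: m.urgency)
most_insistent = max(motives, key=lambda m: m.insistence)
most_important = max(motives, key=lambda m: m.importance)
```

Each of the three maxima picks out a different motive, which is exactly why "strong" and "weak" are not enough.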

These distinctions, and many others, are made in a recent
computational account of multiple motivation [Beaudoin & Sloman, 1993;
Sloman, 1990, 1991]. Aaron Sloman discusses a control system for
scheduling motives in a teleologically consistent way, one which is
sensitive also to mere preferences, to changes in belief, to
teleologically appropriate emotions, and to shifting moods. His theory,
and similar ones [e.g. Johnson-Laird, 1988], cannot yet be fully
implemented in a computer model. But the computational concepts used in
it help us to think clearly about what is involved in choosing between
our many aims and preferences.

Sloman outlines mechanisms for selecting between competing motives,
mechanisms which take seriously distinctions such as those listed above.
That's not to say that detailed problem-solving goes on whenever there
is a motivational clash. On the contrary, an urgent motive is one which
has to be satisfied quickly, so that careful consideration of evidence
and alternatives would be out of place. If the bakery is about to shut,
you have no time to luxuriate in thinking just what sort of bread you
want to buy: simply, you must get in there. If the motive is both urgent
and important (escaping from an approaching tiger), the time available
for thought is even shorter.

On the other hand, an important but non-urgent motive may
continually be placed at the head of a priority-queue, for action or for
deliberation. (The more insistent the motive is, the more often this
will happen.) Accordingly, when the system has no more urgent need, it
will consider how the relevant goals might be achieved. This process may
require complex problem-solving, involving evaluations of various kinds
(such as personal preferences and moral codes), inference from stored
beliefs, means-end analysis, and contingency-planning.
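A minimal sketch of that two-track scheduling: urgency short-circuits deliberation, importance orders a priority queue, and insistence sets how often a motive surfaces for consideration. The threshold and the encoding of insistence as a period are assumptions of mine, not the report's:

```python
import heapq
from collections import namedtuple

Motive = namedtuple("Motive", "name urgency insistence importance")

URGENCY_THRESHOLD = 0.8  # assumed cutoff, not from the report

def schedule(motives, tick):
    """One scheduling cycle. Urgent motives are acted on at once,
    without deliberation; otherwise the most important motive that
    surfaces this cycle is taken up for deliberation."""
    urgent = [m for m in motives if m.urgency >= URGENCY_THRESHOLD]
    if urgent:
        # No time for careful consideration of evidence: act.
        return ("act", max(urgent, key=lambda m: m.urgency))
    # Insistence as frequency (assumed > 0): insistence 0.5 surfaces
    # every 2nd tick, 0.25 every 4th, and so on.
    def period(m):
        return max(1, round(1 / m.insistence))
    surfacing = [m for m in motives if tick % period(m) == 0]
    if not surfacing:
        return ("idle", None)
    # heapq is a min-heap, so negate importance to pop the largest.
    heap = [(-m.importance, m.name, m) for m in surfacing]
    heapq.heapify(heap)
    return ("deliberate", heap[0][2])
```

With an approaching tiger in the motive set, the scheduler returns an "act" decision immediately; with only non-urgent motives, the important eat-every-day motive wins deliberation whenever its insistence lets it surface.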

Means-end planning allows for one and the same action to be done,
at different times, under the control of different motives. So buying
bread, for instance, may be done to appease one's own hunger, to seduce
a hungry girl, to make money, to compete for a prize, to glorify God, or
even to court a princess [Boden, 1981].

To discover which motive is underlying the observed action, we need
to put it into an intentional context that provides a narrative covering
the person's behaviour. Does the bread-buyer immediately eat the bread
himself? Does he carry it round the corner with a lascivious smirk,
depositing it in the lap of a beautiful starving girl? Does he try to
sell the loaf (and others) at a profit to people who were unable to go
to the bakery? Does he eat fifty loaves at one sitting, to get into
``The Guinness Book of Records''? Does he bless the bread, and use it as
the Eucharist? Or does he recount the tasks set to him by the king,
whose daughter's hand is promised to the first suitor who can complete
them? Such intentional patterns must be identified in someone's
behaviour if we are to understand what moves them, what they are really
doing, in behaving in "one and the same" way at various times. The more
often these patterns are repeated, the more we see them as stable
dispositions, aspects of character or personality.

Moreover, in normal minds (and in well-designed computational
architectures), two or more motives may be approached, or even achieved,
by the same activity. A man who wishes to seduce a starving girl may be
peckish himself, and share some of the bread as she wolfs it down. A
system which wanted many different things, but could pursue only one
goal at a time, would be unable to achieve this sort of _rapprochement_
between different motives. Control of its behaviour would flit from one
motive to another, appearing excessively single-minded over brief
periods of time. There would be much wasted effort: not only unnecessary
repetitions (due to its not being able to kill two birds with one
stone), but self-defeating actions too, wherein a goal or sub-goal that
has already been achieved is later undone in pursuit of some quite
different end.
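The "two birds with one stone" selection can be made explicit by mapping each candidate action to the set of motives it advances and choosing the action that advances the most at once. The action table here is invented for illustration:

```python
# Hypothetical table: which motives each candidate action advances.
ACTIONS = {
    "buy bread and share it": {"appease own hunger", "feed the girl"},
    "buy bread for oneself":  {"appease own hunger"},
    "give away one's lunch":  {"feed the girl"},
}

def best_action(active_motives, actions=ACTIONS):
    """Pick the action advancing the most currently active motives:
    the rapprochement a one-goal-at-a-time system cannot manage."""
    return max(actions, key=lambda name: len(actions[name] & active_motives))
```

When both hunger and seduction are active, the shared-bread action beats either single-motive alternative; a system pursuing one goal at a time would never consider it.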

The similarity of such behaviour to certain aspects of MPD is
evident. In general, one can think of mental dissociation as resulting
from various types of compromise or breakdown in the complex control
system that informs and "unifies" the normal mind. (The word ``unifies''
needs scare-quotes because the motivational and cognitive unity, or
consistency, of normal minds is by no means perfect.)

Independent, alternating, and perhaps even competing motivational
structures will very likely arise if the usual control-mechanisms for
integrating motives break down. Differential access to memories, which
plays an important criterial role in individuating "personalities" or
"alternates", may also arise in this way. If we think of the mind as a
computational system, we can see how it is possible for some memories to
be shared between several (or all) alternates, and for others to be
accessible only to one (or two ...). Reciprocal and non-reciprocal
co-consciousness, too, can be understood in these terms. Whereas McDougall
had to resort to positing "telepathic" (non-energetic) communication
between the monads forming the society of the mind, we can distinguish
computational (non-energetic) access and control of various specific ...
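The differential memory access and (non-)reciprocal co-consciousness in the quoted passage can be stated in purely computational terms, as an access relation over memories. The names and the table are illustrative only:

```python
# Hypothetical access table: which alternates can reach which memories.
ACCESS = {
    "childhood summer":    {"Alice", "Beth"},          # shared memory
    "the traumatic event": {"Beth"},                   # one alternate only
    "yesterday's walk":    {"Alice", "Beth", "Cara"},  # widely shared
}

def reachable(alternate, access=ACCESS):
    """Memories the given alternate can access."""
    return {m for m, who in access.items() if alternate in who}

def co_conscious(a, b, access=ACCESS):
    """a is co-conscious of b if a can reach everything b can.
    Non-reciprocal when this holds one way only; no 'telepathic'
    channel between monads is needed, just computational access."""
    return reachable(b, access) <= reachable(a, access)
```

Here Beth is co-conscious of Alice (she reaches every memory Alice can, plus the trauma), but Alice is not co-conscious of Beth, which is the non-reciprocal case.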
