
What's this thing we call AI?


ems...@mindspring.com

Sep 16, 2001, 3:02:37 AM
Reading the AI in IF thread for some time, I've been struck by how
much people's internal concept of "real" AI seems to differ from the
most commonly accepted measuring stick thereof.

Simply: the Turing test looks purely at output. If the output is
indistinguishable from human output, the AI passes. (Granted, this is
not necessarily so useful for IF, especially in the sense that
currently we don't have natural language parsing; but suppose we were
to modify the Turing Test for IF so that a Turing-Quality Game is one
in which, if we imagine that we are only allowed to pass
IF-instruction-style messages under the door to a human Game Master
who then writes out responses, those responses are fully convincing.
They may be terse, but they don't ever reflect any foolish
misunderstandings of a non-human kind. Natural language parsing may
be desirable and achievable at some point, but this is not part of my
definition of a Turing-Quality Game.)

What people have seemed mostly to be saying here, however, is that
something is "real" AI only if it "really" understands what it is
being told, "really" has feelings, and so on. <insert rapid descent
into sophistry and philosophical conundra about the human brain.>
This is an argument based not on output but on underlying mechanism,
but unfortunately without any kind of clear definition of what that
mechanism should be.

If we can back off from the more problematic aspects of this
definition, what we get is, I think, something like this:

AI is systematic abstraction. The less explicit the programmer has to
be, the more the AI is able to 'fill in' from generalities, the better
the AI. This is not an absolute which allows us to say something is
or is not AI; at best, it only describes a system as more or less
intelligent on an AIQ scale. It also pertains to some things for
which artificial intelligence is perhaps not the term that leaps most
immediately to mind, in the sense that a good abstract simulationist
system allows the definition of objects without explicitness about
what happens to each particular one when it burns, for instance.

Likewise, it looks more AI-ish if I can write

if (somecondition)
ThrowTemperTantrum();

than

if (somecondition)
{ remove vase; move shards to location;
"Lisa hurls the priceless vase across the room! Wow she's
furious!!!";
}

Of course, it is still possible that ThrowTemperTantrum will turn out
to be coded as:

[ ThrowTemperTantrum;
remove vase; move shards to location;
"Lisa hurls the priceless vase across the room. Wow, she's
furious!";
! (well, at least the punctuation is better.)
];

in which case I would presumably only be calling it from that one
place in the code. But simply expressing it this way encourages the
development of better AI, because pretty soon it will occur to me to
make Lisa possibly get mad in more than one location, so I'll get

[ ThrowTemperTantrum;
if (vase in location)
{ <vase smashing code...> }
else if (china_dog in location)
{ <death to china dogs code...>
}
];

and from there one approaches

[ Tantrum x;
! (by now I will also have learned to make my routine name shorter)

! locate a fragile item in scope
x = FindBreakable();

! run its smashing code
x.smash();

! describe the event
print (The) CurrentNPC, " throws ", (the) x, " at the wall...";
];

whereby one gradually approaches the kind of abstraction that makes
the NPC behavior more flexible and interactive than it was initially
designed to be.

***

Within this rubric (AI as abstraction), we see the following things
postulated as constituting AI of a type useful for IF:

1) the ability to generate correct English sentences representing an
idea (the idea somehow being stored in abstract form within the
program.) I continue to think that this is one of the most difficult
AI tasks imaginable, well beyond our current capacities. Partly, we
just don't know enough about how human language works; the Chomskian
concept that we start with a Deep Structure of a sentence and then
work it out into a grammatical form is belied by the fact that I often
begin a sentence without knowing precisely how I intend to finish it.
And then, of course, language abounds in nuance, irony, subtlety
impossible to codify. If and when this kind of AI is developed, I
think it will be by another method entirely than programmatic means
[*]. So much for that.

[*] I'm sure there is an official term to refer to what I mean, known
to those who have had any instruction in computer science. Since I am
not of the elect, I don't know what this term is, so I will have to
redefine from scratch. What I call 'programmatic means' is giving the
program a set of instructions and a set of data on which to perform
the instructions, where the instructions are supposed to model the
system we have in mind completely (emotions, natural language, plot
development, whatever.) The opposite would be a system by which the
computer would be trained up with positive and negative reinforcements
to develop its own instruction set, which would then be obscure to the
creator. Note also that my lack of computer science knowledge extends
to a complete ignorance about how to achieve this. I have heard the
phrase 'neural net,' for instance. Sounds like Magic to me.

[Digression on the footnote: IF languages and approaches in general so
far seem geared towards what I am calling programmatic means, and not
(to the best of my knowledge) to possess ideal tools for the-- er--
the other kind of development, for which I also do not know a useful
term. I guess I'll call it 'inductive,' lacking anything better. For
this reason it may be reasonable to level against them the complaint
that they are unsuitable for AI development; I don't know. I'm
inclined to think that

a. There are two goals to AI: to produce responses that are consistent
and sensible, and to produce responses that are unexpected by the
author. {More on this later.}

b. It is easier to make AI that does the former with programmatic
means, easier to make AI that does the latter by inductive ones.
Hence the chatbots, who accumulate data constantly and reincorporate
it, sometimes say things that are remarkably clever and funny, but
often spout gibberish. The clever funny things are attractive, but
inadequate to carry the needs of a plotted story or puzzle, in my
opinion.

c. For the above reasons, we are more likely to do programmatic
development of AI for interactive fiction, and more likely to find the
fruits thereof immediately useful, though it is perhaps possible also
to 'train up' a child AI, in inductive ways, that would eventually
possess a sufficient world model and emotional subtlety to be used in
an IF game. This would have to be a process of many many years of
work, however, and a new development system, too.]

2) the ability to replicate "natural" emotions, including some
internal representation of emotional state and a set of responses
which will convey this state back to the player. In a crude sense,
this can be done easily already; it is a matter of creating a machine
with a number of states and asking it to choose an action based on the
combination of those states. The sentence-thrower proposed on this
group some time ago was an example of such an implementation. (I
sketch a state machine of this kind just after this list.)

3) the possession of independent goals, together with the ability to
determine dynamically a path to carrying those goals out. RAP is an
example of the attempt to do this, which can be expanded by
increasingly refined world modelling and the definition of more
complex actions. In a sense this is the closest to physical
simulationism, but it still presumes a teleological approach (work out
how to get to goal y from point x) rather than an etiological one
(work out what happens from point x when player does thing z).

3a) the possession of conversational goals, including the ability to
draw conclusions from abstractly-encoded data [primitive reasoning]
and to look for new information.

4) a game-master AI capable of guiding the development of the story to
some kind of satisfactory narrative shape, without that shape having
been anticipated precisely by the game's author. Again, the buds of
this exist, in the sense that there are multilinear games; I-0 is the
simplest possible example, in that the plot branches and reconverges
through a relatively small number of options. It is possible to build
a more complex kind of multilinear game with many more states than
this, which nonetheless keeps track of whether certain crisis points
have been reached and relentlessly manages game events to push the
player in the appropriate direction; this is what I've been working on
for City of Secrets. This is still not the kind of Plot God AI
anticipated by the more freewheeling imaginations, but it *is* an
increase-in-abstraction.
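
To make point 2 concrete: a minimal sketch of such a state machine, in
entirely hypothetical code (the lisa object, her numeric moods, and the
provoke hook are invented for the example; Tantrum is the routine from
above), might be:

Object lisa "Lisa"
  with anger 0, trust 0,   ! crude numeric emotional state
  provoke [;               ! called when the player does something rude
      self.anger = self.anger + 1;
      if (self.anger > 2 && self.trust == 0) Tantrum();
      else if (self.anger > 2)
          "Lisa takes a deep breath and counts to ten.";
      else
          "Lisa's smile tightens slightly.";
  ],
  has animate female;

Every state added multiplies the combinations of responses the author
has to proofread, which is the real cost of this approach.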

***

We have also seen it suggested that something is AI when it is able to
produce results unanticipated by the programmer. By this definition,
any of my buggy code is exquisite AI, but presumably we are also to
understand, "results that are nonetheless consistent and desirable."

I would argue that in fact we will never be able by programmatic means
to make an AI which will produce behavior which is startling to a mind
capable of understanding the system perfectly and completely; even an
AI programmed with a full lexicon will only produce sentences from the
words in the lexicon, and thus not expand essentially beyond the
(enormous) scope laid out for it by its creation.

The real issue is that there is a threshold of complexity at which
the author of the program is unlikely to be able any longer to
anticipate precisely what will happen, not because the logic is
impossible to follow, but because there are so many factors that the
data cannot easily be remembered. Moreover, the logic itself must be
based on subtle decisions-- artistic decisions, one might say, as
opposed to scientific ones-- about what constitutes a 'good' plot, a
'likely' emotional response, and so on. These may in fact be
expressed within the program data as numbers and formulae, but the
management of said formulae nonetheless rests upon taste.

And here enters the mystical spark of seeming intelligence that
distinguishes a system we would like to refer to as AI from a
masterfully abstract simulationist world model. The latter type of
system is intended to make the world behave consistently according to
the predictable physical laws with which we are familiar, so that
paper burns, fires spread, a lack of oxygen suffocates a fire; the
human player knows in advance what *should* happen, and the question
is whether the system possesses the sophistication to bear out that
expectation. Contrariwise, there is a 'right' way to do the
programming: it is possible to sit down and draw up a modelling system
in which physical constants are accurately represented, for instance,
or the Ideal Gas Law made to function. The model will be most
restricted by the amount of storage and processing power at its
command and by the problems of parsing and describing in language the
results of these mathematically precise actions.

The construction of a plausible AI for emotional and/or conversational
output, however, requires a lot of fine-tuning. One of the things
that makes NPC-writing such a towering pain in the ass from my point
of view is precisely that I can't always tell what's going to happen,
and the more sophisticated the system designed to control the NPCs,
the less certain it is; I keep having to go back to the system library and/or
the object data and massage the constants, or add handling for some
weird exception in the behavior that I want to achieve. The NPC
system I am now working with, Galatea's child or grandchild, allows
the NPC to pursue emotional and informational goals (find out this;
work towards that emotional result; convey the other thing), though of
course as the author I still have to decide what goals it should
pursue when. And it's still, as someone once called Galatea, "really
just a sentence-thrower"; it merely has a larger *library* of
sentences to throw and a more complicated set of rules for selecting
them.

Its more sophisticated grandchild system, assuming I ever write it,
might allow me to define different types of NPC character that would
deal with and prioritize such goals in different ways, so that one
will be more emotionally focused and another more philosophically
inclined. And the child of that, perhaps, might be better at choosing
its own goals and subgoals. And so on.

The thing is that from each level of abstraction the next level is
relatively approachable, in terms of system design; on the other hand,
each one requires a considerably greater training time-- more writing
of conversation pieces, more tweaking to make sure the conversation
pieces dovetail properly, and then a kind of editorial gloss that goes
over the top to make sure that I haven't mechanized all the soul out
of the result, because it is paradoxically easy to wind up making
sophisticatedly-planned NPCs whose actions are far less interesting
than the totally-preprogrammed sort.

Correspondingly, I think it is probably possible (though I have not
given it as much thought and have not a solution off the top of my
head) to develop the Plot-Manager-AI to a new level of abstraction
beyond the scene-management that I already do. I also imagine that
the resultant game would take me years to write, even at my most
IF-devoted. [Well, I hear some of you saying, why not write an AI
capable of coming up with plot ideas and material on its own, so that
you wouldn't have to give it that much data to work with? To you I
say: *** You have missed the point entirely. ***]

Nonetheless, I think it is possible to approach better AI than we now
have, on the definition of AI I've offered, and leading to results
that are closer to fulfilling the Turing-Test-For-IF. I *don't* think
that we must (or, indeed, would find ourselves able to) do so by
completely throwing out our existing methodologies and attempting to
come up with Perfect AI From Scratch.

Sadly, I realize that what I've proposed here is not especially
glamorous, and that it is hard to distinguish my sedate concept of AI
development from, well, Regular Old Programming. Sigh. But look:
part of it has to do, not only with good programming practices and
attempting to generalize one's code, but also with the choice of
goals, to model successively more complex behavior and, moreover, to
do modeling that is in some way qualitative rather than quantitative
(as opposed to the physical-world-modelling discussed above. Not, er,
that I'm anti-world-modelling.)

At heart this *is* an art as well as a technical process, the province
of the Muses as well as of Athena. It may be a technical trick to
encode for the computer instructions for how to serve your notion of
good plotting or plausible conversation or delightsome wittiness, but
the ideas themselves are aesthetic.

</verbose>

Jason Melancon

Sep 16, 2001, 6:26:08 AM
Here's an article which may be interesting to you, if not precisely
helpful. http://www.zmag.org/chomsky/pp/

Yes, I'm a fan. So sue me.

--
Jason Melancon

Jon Ingold

Sep 16, 2001, 8:31:43 AM
Re: Emily's post. I agree with a lot of this. The idea of a Plot AI is a
nice one, the first prototype perhaps being Christminster. I've often
thought a nice idea for puzzle games with randomly strewn keys would be
to keep track of which key in the game the player has been longest in
finding, and giving them that one inside the Carved Ebony Chest, to
minimise frustration. I've never done this, though, because it strikes
me as a weak gimmick and a poor substitute for just designing properly.
However, for story-based games the idea works a lot better, as the key
(the scenes one experiences) is the reward itself. So, if the player
hasn't heard from Abigail for a while, then when he listens at the
drainpipe it's the illicit conversation between her and Lord Charles
that the player will hear. Assuming it's possible for the story to
continue with Abby and her lover stuck in a drainpipe - in both games
you have to be able to justify the presence of the Chancellor's Key in
the skeleton's mouth or romance in the plumbing.

However, to take a more technical approach to the topic:

The thing that most AI gets hung up on is the idea of learning - you
have a program designed to try things in a fairly random way, then gauge
how successful each approach/combination of actions was, and then
weight the choices based on these, making better approaches more likely.
Now the reason this is used, really, is for things that people don't
actually know how to do. The simple robots that are being built and
trained to process camera footage are being asked to do it because
no-one has a good way of picking 3d objects out of two camera feeds. The
very first AI-approaches [to my knowledge] were used to solve
complicated systems of equations, by approaching the solution in a
haphazard way - used in turbulence studies, for instance. (I think this
eventually was given up in favour of the "pseudo-genetic approach"; with
solutions being coded into "genetic strings", intermingled on each
"breeding session" with other individuals deemed "successful enough",
with some chance of mutational "bit-error". In truth that really is
exactly the same thing as the first approach, only along with the
weighting of random, extant possibilities, there is the possibility of
the creation of other possibilities. The thing to note, of course, is
that those possibilities all come from one, pre-defined space).
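
(To be concrete, here is a toy sketch of that weighting -- hypothetical
code, every name invented -- which is nothing more than a table over a
pre-defined action space:

! Toy sketch: weighted random choice over a fixed set of actions,
! with weights bumped on success. All names are hypothetical.
Constant NUM_ACTIONS 4;
Array action_weight --> NUM_ACTIONS;

[ InitWeights i;
    for (i = 0 : i < NUM_ACTIONS : i++) action_weight-->i = 1;
];

[ PickAction total roll i;
    total = 0;
    for (i = 0 : i < NUM_ACTIONS : i++) total = total + action_weight-->i;
    roll = random(total);            ! random(N) gives 1..N
    for (i = 0 : i < NUM_ACTIONS : i++) {
        roll = roll - action_weight-->i;
        if (roll <= 0) return i;
    }
];

[ Reward i;                          ! call this when action i worked out
    action_weight-->i = action_weight-->i + 1;
];

Note that the pre-defined space is visible right in the declaration:
nothing PickAction does will ever invent an action NUM_ACTIONS+1;
reinforcement only reshuffles weight inside the space the author laid
out.)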

Anyway. How is that relevant to IF? Quite simply, it isn't. I could
program an AI with a hundred switch statements, which could negotiate
its way across a map finding keys in odd places and unlocking doors,
etc. etc. as it goes. Or I could set up a learning AI which, upon
encountering something it doesn't know what to do with, categorises it
based on what it can observe (if we're within Inform code, it could be
"this has the door attribute, the heavy attribute, the lockable
attribute", or indeed, "this has a before property, so it might be
tricky") then tries things in an essentially random way until it gets
through. In the process of this, it's quite possible that it'll end up
with a heavy probabilistic bias toward dealing with locked doors by
throwing keys at them; indeed, depending how involved you make it, it
may learn to try the key which is in the object's with_key property.

But the outcome from these approaches is entirely identical; well,
nearly - one will run happily from scratch, and the other will monkey
about for a few thousand turns before being able to do anything
properly. (For instance, if it encounters a locked door, it'll not
only have to work out how to unlock it and what to unlock it with, but it'll
also have to try going through it again when it's open and realise that
an open door is passable. We can't be allowed, if we're playing fair, to
preprogram that either. Otherwise it's just another switch statement).
Well, actually, I suppose the learned one might try some more strange
things, occasionally PULL doors instead of opening them, HIT boxes
rather than just OPENing them, as it has encountered some vases
en-route; and these things may make it more entertaining to observe.
That's a matter of taste, I suppose.

But then the question of the original thread was really "Is IF a good
medium in which to explore AI technology"? In which case, we're not
really talking about making good games at all. However, again, I'd still
be forced to argue that no, AI is not well-served by the IF medium and
the reason is this - paucity of data. An Inform object can have, what,
48 attributes? Most will have at maximum eight. So your AI will be fine
as it learns what doors are, or what verb to apply to containers which
lack the open attribute. And once it has finished with all of those,
there is Nothing Left. With the little camera-robots scurrying around in
MIT, once they deal with the large blocks of colour they can try and
refine the image to cope with smaller blocks of colour. They can take
the existing data produced, monkey with the program's fine-tuning, and
set it running again, to become ever more precise. In IF this would mean
having to do that, and also having to refine your entire model world as
well.

In a way, that makes sense too. The thing that makes any puzzle game
interesting is the fact that it's not a perfect simulation. If every
puzzle was a box to be OPENed, the game would be very boring. Instead,
it's a box to be opened by means of a lever through a ring on the lid
resting on a curious indented stone on the floor. It's not just PUTting
things on tables, it's PUTting the magic amulet of Ra on the magic table
of Ra, by a player who doesn't know what to expect. And no AI will be
able to handle those until it can differentiate between all those
different types of object. So either you simulate it _all_ (Class
MagicalAmulet, Class Lever, Class StrangelyIndentedRock) or you write a
program which can recognise words from the game's text.

Ah.... now maybe that would work? Give your AI two learning goals - the
first to learn to interact with objects in the essentially mundane
process of plonking the Verb Set onto the Noun Set until something
useful happens; the second to try and associate those random actions
with the words in the text. You'd need to train with thousands, possibly
hundreds of thousands, of scenarios, enough that it can compare the line
"The rock is basalt, set firmly into the ground, with a strange
indentation on its surface. It is rather like a Giant's finger-pad
ReadOMatic" and "This is a large whale-bone; perhaps a shoulder-blade.
Along the top runs an indentation where the tendons would have been" and
correlate the word "indentation" with its "PUT <long thin object> ON"
abilities. And not correlate "The", "its", "a". So, for advancing the
theory of AI: if all this was successful you'd end up with a little
IF-being able to open doors and crack boxes by "reading" their
descriptions, correlating important descriptive factors. You'd have the
ultimate puzzle-game player.
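
(The correlator itself could start as a bare co-occurrence table. A
hypothetical sketch, with all names invented:

! Toy sketch: count how often a description word co-occurred with a
! verb that later succeeded on that object. Names are hypothetical.
Constant MAX_PAIRS 500;
Array pair_word  --> MAX_PAIRS;  ! dictionary word from the description
Array pair_verb  --> MAX_PAIRS;  ! action that worked on the object
Array pair_count --> MAX_PAIRS;
Global num_pairs = 0;

[ NoteSuccess w v i;
    for (i = 0 : i < num_pairs : i++)
        if (pair_word-->i == w && pair_verb-->i == v) {
            pair_count-->i = pair_count-->i + 1;
            rtrue;
        }
    if (num_pairs < MAX_PAIRS) {
        pair_word-->num_pairs = w;
        pair_verb-->num_pairs = v;
        pair_count-->num_pairs = 1;
        num_pairs++;
    }
];

Run raw, this correlates "The" with everything, which is the stop-word
problem above; the crudest fix is to divide each count by how often the
word occurs at all.)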

Would it be "intelligent"? In the context of his world, the IF-puzzle
game, yes, he would be. In any other meaningful sense, probably not. He
certainly wouldn't be portable anywhere else without just retraining him
again (a thing to note about biological and also pseudo-biological
processes is that they are Specific, hugely, utterly Specific. My brain, for
instance, is very specific in that I can't speak Norwegian, and if you
dropped me in the Amazon I'd die in .39 seconds. The _learning_
capability of the brain, on the other hand, isn't specific, but that's
the training algorithm, not the finished product. That said, I'm rather
hoping my brain is not a finished product quite yet.)

But then -- maybe if you're going to start training your little being to
understand the words too, IF suddenly becomes a worse environment for
AI? There are several thousand words in English, say, and a lot meaning
the same thing. There are lots of ways of saying things by deliberately
not using the right words. Hell, I could describe my indented rock by
saying "The rock isn't not not unidented" which could confuse any
purely-correllating AI. Whereas, at least with camera input, the space
of possibile input at each level of resolution is not too big; and you
have near-infinite ability to increase that resolution, and so easily
step up the capabilites of your being. That's the equivalent of training
your AI on IF-Postman Pat and graduating him to IF-Tolstoy (in, of
course, the Russian).

Of course, none of this is really relevant to game writing....
unless....

Does anyone fancy writing the above AI? And then "reversing" him? And
using him to generate the perfect puzzle game? Perhaps from all this
we can construct, rather than the perfect NPC, the perfect AC [Author
Character] or GDC [Game Design Character]. Could we generate the
perfect, biological-formula approach to writing the perfect (probably
puzzle-based, admittedly) game?

Or would its output be formulaic, boring, pre-programmed? Just a little
too repetitive? Somehow, "paradoxically", come out as less interesting
than a game written from scratch by a free-wheeling imagination?

The simple truth of NPC design I think is that people are better at
producing people than algorithms. Yes, it's a real hassle to code four
hundred switch statements, but the output is so much better than a
system of NPCs whose wires stick out for all to see. Yes, it's a real hassle
to catch every combination; to ensure that every leap and aria of the
player's is met with a funny look. But isn't that better than twenty-two
characters from a boxed set, who all give the same funny look at every
action listed in their curious_action property?

Well, I think so. My favourite NPC of all time is Edward from
Christminster. He didn't have a lot to say, and he tended to repeat
himself. But all that was okay, because I was playing a text game at the
time, and so didn't read his repetitions too carefully, or his
dismissals. And when he did say things, they were interesting, they were
well caricatured, and they made me laugh. Whereas Billy said one such
thing ("Are you sad that you are just your girlfriend's computer, Jon?")
and a whole lot of garbage. And Galatea teased me with the promise of
interesting conversation, but it seemed that the line of questioning I
was intrigued by hadn't been programmed in. I would have been far
happier with a directed conversation and only four topics, I think.

Perhaps I'm an NPC luddite. But I can't help thinking that Tiffany --
er, I mean Trent -- from Leather Goddesses of Phobos was very cool when
he did stuff, and totally ignorable when he had nothing to say. The
Demon from Curses was a very good NPC, because he was clever, and I read
everything I got him to say with a knowing smile. One player said he
spent a good while in The Mulldoon Legacy chatting up Officer Cat
Dowobbly-Ftang, and I know how few things she has to say. But it doesn't
matter, because it's not about the numbers. It's about the words.

Finally I add to this my current sig, which sums it up somewhat.

Jon

----

"I suppose it depends which you wish to create:
- an adventure game
- a friend"
-- Alex Warren, raif, on advanced parsers


OKB -- not okblacke

Sep 16, 2001, 12:53:04 PM
ems...@mindspring.com (ems...@mindspring.com) wrote:
<a lot of interesting stuff>

Cool. Yeah. I agree with a lot of what you said here, and, to a
substantial extent, these are the reasons why I think AI in IF, though it might
be interesting, is probably not going to happen for some time -- and even if
there is AI-IF, it still might not be the best IF around.

Here's my thinking. We're all pretty much in agreement that games tend to
be either puzzle-based or story-based. And, as you say, when we talk about
very sophisticated programming, we're generally talking about either world
modelling (simulationism) or AI. Now, I do think that a sooper-dooper
simulationist library could be very helpful in implementing some killer
puzzles. But I don't think AI is going to be very helpful in implementing a
good story, simply because if I'm trying to write a story, I don't want some
intelligent NPC fouling it up with his own ideas on how it ought to turn out.
(I'm passing over the question of whether AI would be useful for puzzles, or
whether simulationism would be useful for stories, in any substantial measure,
because I think neither is the case.)

All this is pretty much why I really like menu-based conversation systems
-- specifically, menu-based conversation systems that let you choose what
you're actually going to say, not what you're going to ask about. I like them
as a player, because they let me see exactly what I'm going to say; I don't
like typing "ASK JOE ABOUT BILL" and not knowing how the character is going to
phrase his question. I like them as an author, because they let me ignore what
isn't relevant (by leaving things off the menu) and condense what is (by
writing 3 or 4 menu options that are diverse in their meanings and intents, as
opposed to many similar ones).

Basically, AI would add a new dimension to the "player vs. author"
consideration. It would become "player vs. author vs. NPC", with the author
increasingly dropping out altogether. There are good points to authorial
control and good points to player freedom, but it seems like letting the game
be largely controlled by an NPC would take a lot of the fun out of writing and
playing, leaving only the NPC-as-toy.

--OKB (Bren...@aol.com) -- no relation to okblacke

"Do not follow where the path may lead;
go, instead, where there is no path, and leave a trail."
--Author Unknown

OKB -- not okblacke

Sep 16, 2001, 12:55:51 PM
"Jon Ingold" j...@ingold.fsnet.co.uk
>I've often
>thought a nice idea for puzzle games with randomly strewn keys would be
>to keep track of which key in the game the player has been longest in
>finding, and giving them that one inside the Carved Ebony Chest, to
>minimise frustration.

This idea of letting the game silently adjust itself to move the player
and the story along is something I'm really interested in (and I've spouted off
about it here at length in the past). I have a WIP that does a somewhat
simplistic job of this.

Gadget

Sep 16, 2001, 2:22:17 PM
On 16 Sep 2001 00:02:37 -0700, ems...@mindspring.com
(ems...@mindspring.com) wrote:

<AI article>

I've been thinking about your article and it reminded me about
something I read about the design of an intelligent opponent in, for
example, a game of Tic Tac Toe. How do you let the computer make
decisions that are constructive towards a certain goal: in this case
winning the game.

The article used what it called the 'minimax' method:

The program looks at the situation on the gameboard and compiles a
list of possible moves for itself. Then each move in the list is
awarded a score: high if it will advance the situation in favour of
the program, low if it will have a less positive effect on the state of the
game. Then the same is done with the possible moves of the player.
Finally, the move is chosen that has the maximum positive effect for
the computer with the minimum advantage for the player.

For my own tic tac toe game this was rather easy to implement. Now I
was thinking about the possibility of implementing something like this
in IF.
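
As a sketch -- hypothetical code, with EvaluateBoard, a static scoring
routine, left undefined -- the one-ply version for tic tac toe looks
like this:

! One-ply sketch: score each of my moves by my gain minus the
! player's best reply. EvaluateBoard is a hypothetical heuristic.
Array board -> 9;                ! 0 = empty, 1 = computer, 2 = player

[ BestReply who best i s;
    best = -100;
    for (i = 0 : i < 9 : i++)
        if (board->i == 0) {
            board->i = who;              ! try the reply
            s = EvaluateBoard(who);
            board->i = 0;                ! take it back
            if (s > best) best = s;
        }
    return best;
];

[ ChooseMove bestmove bestscore i s;
    bestmove = -1; bestscore = -100;
    for (i = 0 : i < 9 : i++)
        if (board->i == 0) {
            board->i = 1;                ! try my move
            s = EvaluateBoard(1) - BestReply(2);   ! max me, min him
            board->i = 0;
            if (s > bestscore) { bestscore = s; bestmove = i; }
        }
    return bestmove;
];

Recursing into BestReply instead of stopping after one ply gives full
minimax; for tic tac toe the game tree is small enough that this is
easy.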

Let's say we make the setting a castle. The player must enter the
castle, find the magic spells and vanquish the evil sorcerer who lives
in the castle. (And let's not discuss originality here ;-)

Now, let's make the sorcerer an NPC with a number of possible actions:
he can cast a set of spells. He cannot, however, see the player since
the player's level of magic is not high enough, and he cannot directly
kill the player. He can, however, open and close doors and spring
certain traps.

Now we could make the sorcerer move around in the castle, with the
awareness that he has to stop the player from reaching a certain
location to obtain a certain powerful spell that would destroy him.
The sorcerer now can evaluate his situation in each turn and see how
he can best advance his own goal and to hinder the player to the point
of stopping him. To make it fair, the sorcerer can only perform one
action per turn, for example: check where the player is, close a door,
open a door, spring a trap or whatever. The sorcerer also has full
knowledge of the geography of the castle, so he can make educated
guesses of where the player could go next. (If the PC is in a room
with two doors, the Sorcerer has a 50% chance of guessing which way
the PC will go).

Now we have a kind of IF chess going: each player (the sorcerer has
now become an active player) moves through the castle one move at a
time, as does the player. Each has his own goal and the playing field
has just become very dynamic.
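
In hypothetical Inform terms, the sorcerer's turn could be a daemon
that scores a small set of candidate actions and performs only the best
one (HinderScore, PerformSorcererAction and the sorcerer_actions
container are stand-ins, not real library code):

! Sketch of the sorcerer's turn: one greedy, minimax-style ply over
! his available actions. All names here are hypothetical.
Object sorcerer "evil sorcerer"
  with daemon [ best bestscore s a;
      bestscore = -1; best = 0;
      objectloop (a in sorcerer_actions) {   ! close door, spring trap...
          s = HinderScore(a);  ! how much this slows the player's progress
          if (s > bestscore) { bestscore = s; best = a; }
      }
      if (best) PerformSorcererAction(best); ! strictly one action per turn
  ],
  has animate;

StartDaemon(sorcerer); in Initialise would set him running, and keeping
his options as objects in a container makes the action list easy to
extend.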

One thing is lacking in this design: the sorcerer cannot learn from
his experience. Using the 'minimax' method he can only see in the
'now' and in the 'soon after' but not in the past. He can't determine
what kind of strategy the player is using or what kind of personality
the player displays in his choices. This lack of learning would mean
for me that the sorcerer has no *real* intelligence. But he would be
acting with a certain level of intelligence, comparable to, say, a
simple chess computer.

The problems I foresee with this design are ones of complexity: the
more possible actions the PC and NPC can perform, the more difficult it
will be for the NPC to calculate what the best possible next move will
be.

On a gameboard like in tic tac toe, the options are limited enough to
let the computer think several moves in advance. In an IF setting, the
gameboard is also a model world with simulation elements that all
complicate matters.

Still, I think the implementation of such an intelligent opponent
could lead to an interesting and unpredictable game...

-------------
It's a bird...
It's a plane...
No, it's... Gadget?
-------------------
To send mail remove SPAMBLOCK from address.

Sean T Barrett

Sep 17, 2001, 4:33:51 AM
ems...@mindspring.com <ems...@mindspring.com> wrote:
>We have also seen it suggested that something is AI when it is able to
>produce results unanticipated by the programmer.
[snip]

>And here enters the mystical spark of seeming intelligence that
>distinguishes a system we would like to refer to as AI from a
>masterfully abstract simulationist world model. The latter type of
>system is intended to make the world behave consistently according to
>the predictable physical laws with which we are familiar, so that
>paper burns, fires spread, a lack of oxygen suffocates a fire; the
>human player knows in advance what *should* happen, and the question
>is whether the system possesses the sophistication to bear out that
>expectation.

While I basically agree with this entire post, I thought I
would point out that even a pure simulationist system can
result in "emergent complexity", where combination of rules
produce unexpected results. For example, the rules of chess are
incredibly simple compared to the "emergent" rules; the
rules of chess are about moving and capturing; there is
no notion of "defending" or protecting, nor of value for
the pieces; these emerge from the near-optimal strategy
for playing the game.

Or to use another example, I mentioned in my blue-sky
post that in IF and commercial games, the moment-to-moment
is interactive and player-ordered, but the large scale
tends to be a story. One of my ex-coworkers was fond
of telling a story about an experience one player of
Ultima Underworld had, where the player got in a fight
with a goblin and then ran, found a door, got on the
other side of the door (and perhaps spiked it shut),
only to, moments later, discover that he had locked
himself into a room with a skeleton warrior, a rather
tough opponent at that stage of the game--and that
moment, that experience, had never been planned by
the designers. And yet this was all programmatic
simulationist logic. (Most players didn't have anywhere
near as neat an experience, since there was no
'gamemaster AI' driving it; this was just chance.)

Whether you accept this as an emergent, player-driven
small-scale "story" or not is a separate question.

Sean

Sean T Barrett

Sep 17, 2001, 4:43:55 AM
Jon Ingold <ji...@cam.ac.uk> wrote:
>The thing that most AI gets hung up on is the idea of learning

The belief that AIs should work by learning is not a universal
AI belief; it is a premise that locks you into certain models
of AI from the moment you make that assumption.

>But then the question of the original thread was really "Is IF a good
>medium in which to explore AI technology"? In which case, we're not
>really talking about making good games at all. However, again, I'd still
>be forced to argue that no, AI is not well-served by the IF medium and
>the reason is this - paucity of data.

Actually, there are some who might argue with you. Douglas Hofstadter's
AI book "Fluid Concepts and Creative Analogies" (or something like that)
is about an approach to AI that exhibits emergent behavior (you can't
predict what it will come up with) and works best with limited
"micro-worlds"--in fact his whole premise is that things like Cyc
and the widespread bias against microworlds in the AI community
are exactly what's preventing us from getting anywhere with AI, since
it's forcing us to solve a much harder problem than we need to solve.

Games offer a built-in microworld with simple discrete rules, no
worries about people using "Big Mac" as a metaphor so the knowledge
base needs to have 200 facts about McDonalds, and no worry about
sense data processing. (However, I don't see any obvious IF use
for the analogy-making DH's programs do, except possibly in the
area of storytelling.)

SeanB

Sean T Barrett

Sep 17, 2001, 4:57:00 AM
OKB -- not okblacke <bren...@aol.comRemove> wrote:
>Here's my thinking. We're all pretty much in agreement that games tend to
>be either puzzle-based or story-based. And, as you say, when we talk about
>very sophisticated programming, we're generally talking about either world
>modelling (simulationism) or AI. Now, I do think that a sooper-dooper
>simulationist library could be very helpful in implementing some killer
>puzzles. But I don't think AI is going to be very helpful in implementing a
>good story, simply because if I'm trying to write a story, I don't want some
>intelligent NPC fouling it up with his own ideas on how it ought to turn out.

Being "story-based" is different from "writing a story". Some of us
(hmm, at least one of us anyway) think that the power of the interactive
medium as a medium compared to other non-interactive media comes
from, what a surprise, interactivity. The puzzle-based storyless games
leverage interactivity well; the story-based games don't leverage
interactivity at all--at least on the level of the story.

To express a very simple example of this: if we want the player's
interaction to affect the story deeply, we need to branch (and
not remerge later). If we want 32 decision points, there are
2^32 possible stories. No human is going to write all 2^32 of
those stories out. But a human could sit there with another
human and make it up as they go--only ever writing one story,
but it's the story that accounts for what the player wanted to
do. So one use for AI in IF would be to take that "storyteller"
role--to invent the unfolding plot as the 32 decision points are
visited so that the result is still a good story.

To me that's obviously something a human can't do--write 2^32 stories,
create that experience by "hand"--and to me it's obviously a "more
interesting" sort of IF than the plain linear story IF. Maybe Photopia
wouldn't work as well as static fiction as it does as IF, and maybe
Photopia would still be better than a lot of AI-plotted story-driven
games like I just characterized, but I suspect that Photopia couldn't
compete with the best of the AI-plotted story-driven games should such
things really be plausible and ever exist. Maybe I'm wrong there,
or maybe it's just personal opinion, but it seems to me that
if you're an interactive medium, the story that is good no
matter what you do, but which varies depending on what you do,
is better. Yes you can try to do simpler "depends on what you do",
with branches that merge, etc., but then return to the human "interactive
storyteller" who doesn't need to work with those constraints.

I agree that AI for NPCs is of more questionable utility,
since they could easily wreck stories or wreck puzzles due
to their unpredictability.

SeanB

Jon Ingold

Sep 17, 2001, 6:10:42 AM
Some spoilers for Hunter in Darkness

> Maybe I'm wrong there,
> or maybe it's just personal opinion, but it seems to me that
> if you're an interactive medium, the story that is good no
> matter what you do, but which varies depending on what you do,
> is better.

I disagree with this. Imagine you played a game where whatever you did,
no matter what you did, a story unfolded. You play it once, and so
that's as good as the only story - it makes, in truth, no damn
difference to the player if there is the possibility of 2^32 stories
because he'll play one. Okay, if he plays again, he might think "that's neat",
but (a) the first one he plays will seem "definitive" I should think,
and more strongly (b) if it's really 2^32 then they're going to have to
be quite different, and so the experience would be more like a book with
the same first chapter and the rest - quite randomly - written by a
different author.

And so what the player experiences is a story in which no matter what he
does it all keeps moving. Which - I suspect - might feel very, very
railroaded. Paradoxically, I think that if you cover every option with a
new story, your player is going to feel like it doesn't matter what he
does, because there's always going to be a story there, so he may as
well just type z.

The cure for this sort of thing is parts where the game does not
progress at all, time stands still effectively, until the player does
something. If that something is in character, it will immerse him
further; or it may have other effects (see the bit in Photopia, or heh,
the business with the form in 9:05).

Of course, some of this depends on your definitions of the "good story".
If your game-design storyteller AI takes all this into account, he can
ensure that all his stories have sufficient "moments" to keep the player
thinking he is expected to keep up, and that's okay. Well, I'm still a
sceptic, but in theory it all works.

Second problem - the UNDO command. How many people played only _one_
ending of So Far? I'd bet a very large amount of money that everyone in
the entire world has reached an even number of endings (oh, hang on,
what happens if you just type Z? I can't remember, but I suspect I tried
it). A lot of players are going to find things deeply unsatisfying if
they find that going left and going right in a room produce entirely
different things which never tie up. There is a test case of this -
Hunter, In Darkness. Was the variation at the very start "worth it"? I
suspect that case doesn't hold up though, because I don't think I know
of anyone who managed to get anywhere going toward the chasm&rope at the
start. I suspect people went that way, restarted, went the other way and
forgot about it.

Jon


Davey

Sep 17, 2001, 8:53:10 AM
People diss the Turing Test all the time, even though it is a test that has
yet to be passed!

I find this odd. If it's so easy, why haven't we seen it yet?


<ems...@mindspring.com> wrote in message
news:a69830de.01091...@posting.google.com...

Gabe McKean

Sep 17, 2001, 10:12:25 AM
Davey wrote in message ...

>People diss the Turing Test all the time, even though it is a test that has
>yet to be passed!
>
>I find this odd. If it's so easy, why haven't we seen it yet?

Who said that the Turing Test was easy?

P.S. Please only quote the parts of a post that are relevant to your reply,
especially when you're dealing with a short reply to a very long post.


Matthew T. Russotto

Sep 17, 2001, 11:01:52 AM
In article <scr9qtogjqvbcnnni...@4ax.com>,

Gadget <gad...@SPAMBLOCKhaha.demon.nl> wrote:
}On 16 Sep 2001 00:02:37 -0700, ems...@mindspring.com
}(ems...@mindspring.com) wrote:
}
}<AI article>
}
}I've been thinking about your article and it reminded me about
}something I read about the design of an intelligent opponent in, for
}example, a game of Tic Tac Toe. How do you let the computer make
}decisions that are constructive towards a certain goal: in this case
}winning the game.

Tic Tac Toe is a poor example because there's a simple non-losing
strategy that doesn't require searching the game tree.
--
Matthew T. Russotto russ...@pond.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."

OKB -- not okblacke

Sep 17, 2001, 11:35:17 AM
buz...@world.std.com (Sean T Barrett) wrote:
>Being "story-based" is different from "writing a story". Some of us
>(hmm, at least one of us anyway) think that the power of the interactive
>medium as a medium compared to other non-interactive media comes
>from, what a surprise, interactivity. The puzzle-based storyless games
>leverage interactivity well; the story-based games don't leverage
>interactivity at all--at least on the level of the story.
>
>To express a very simple example of this: if we want the player's
>interaction to affect the story deeply, we need to branch (and
>not remerge later). If we want 32 decision points, there are
>2^32 possible stories.


Ooer. I hardly think that "altering the story" per se is the end-all and
the be-all of interactivity. One alternative would be a game in which there
are many things to do but only one ending (or perhaps one "real" ending and
several "death" endings). (Suppsedly "My Angel" was like this, although I only
played it once, so this is hearsay.) Another approach would be a game in which
there are many "extra" things which add to the story but are not required to
end the game. Neither of these necessarily requires writing 2^32 different
stories -- they just require a bit of planning so that everything fits
together.

David Brain

Sep 17, 2001, 12:26:00 PM
In article <Wmmp7.2333$f5.135885@news>, D...@G.com (Davey) wrote:

> People diss the Turing Test all the time, even though it is a test that
> has
> yet to be passed!
>
> I find this odd. If it's so easy, why haven't we seen it yet?

But it seems that the Turing Test has been passed several times - *if* the
test is limited to a "micro-world" as has been referred to in this thread.
For instance, ISTR hearing about the "Star Trek" test in which the
questioner couldn't tell which was the fan and which the computer simply
by asking Trek-based questions, because the world-set was small enough
that the AI could deal with all of it (alright, most of it.) Indeed, a
keen enough fan might even come across more "computer-like" than the AI
which may not know some answers... :-)

--
David Brain
London, UK

ems...@mindspring.com

Sep 17, 2001, 12:49:47 PM
"Davey" <D...@G.com> wrote in message news:<Wmmp7.2333$f5.135885@news>...

> People diss the Turing Test all the time, even though it is a test that has
> yet to be passed!
>
> I find this odd. If it's so easy, why haven't we seen it yet?

I didn't diss the Turing Test. To quote, er, just the part of my
message that has bearing here:

> > Simply: the Turing test looks purely at output. If the output is
> > indistinguishable from human output, the AI passes.

What this says is not, "the Turing test is easy because it only looks
at output," but rather, "the Turing test is different from our other
formulations of the nature of AI because it does not investigate what
is going on under the hood."

The advantage of this definition is that it evades all the
philosophical conundra raised by discussing What Intelligence Really
Is. The disadvantage, at least from our point of view, is that it
doesn't provide many hints about how to solve the problem.

ES

Gadget

Sep 17, 2001, 1:01:41 PM
On Mon, 17 Sep 2001 15:01:52 GMT, russ...@wanda.vf.pond.com (Matthew
T. Russotto) wrote:

>In article <scr9qtogjqvbcnnni...@4ax.com>,
>Gadget <gad...@SPAMBLOCKhaha.demon.nl> wrote:
>}On 16 Sep 2001 00:02:37 -0700, ems...@mindspring.com
>}(ems...@mindspring.com) wrote:
>}
>}<AI article>
>}
>}I've been thinking about your article and it reminded me about
>}something I read about the design of an intelligent opponent in, for
>}example, a game of Tic Tac Toe. How do you let the computer make
>}decisions that are constructive towards a certain goal: in this case
>}winning the game.
>
>Tic Tac Toe is a poor example because there's a simple non-losing
>strategy that doesn't require searching the game tree.

For tic tac toe you can also read 'reversi' or 'connect four'. The
same system will work on those (but better, you are right of course
;-)

ems...@mindspring.com

Sep 17, 2001, 1:07:55 PM
"Jon Ingold" <j...@ingold.fsnet.co.uk> wrote in message news:<9o265d$ot0$1...@news6.svr.pol.co.uk>...


> However, to take a more technical approach to the topic:
>
> The thing that most AI gets hung up on is the idea of learning - you
> have a program designed to try things in a fairly random way...
<snip>
> Anyway. How is that relevant to IF? Quite simply, it isn't. I could
> program an AI with a hundred switch statements, which could negotiate
> its way across a map finding keys in odd places and unlocking doors,
> etc. etc. as it goes. Or I could set up a learning AI which, upon
> encountering something it doesn't know what to do with, categorises it
> based on what it can observe (if we're within Inform code, it could be
> "this has the door attribute, the heavy attribute, the lockable
> attribute", or indeed, "this has a before property, so it might be
> tricky") then tries things in an essentially random way until it gets
> through.
<snip>

> But the outcome from these approaches is entirely identical; well,
> nearly - one will run happily from scratch, and the other will monkey
> about for a few thousand turns before being able to do anything
> properly.

It seems to me that the advantage of a learning AI is that it is able
to make rules that we aren't able to deduce ourselves ex nihilo.
Physical world modeling, *especially* in the low-level simulation that
is commonly used in the average IF library, is not a good context for
this, since it is possible to write routines to teach the AI how to
handle all of the things it will come across-- just as you say.

On the other hand, it seems that this sort of thing *might* be useful
for things as nebulous as conversation writing. Here I'm thinking of
a situation in which I create a bunch of topics to talk about, and
then train up the computer to know which conversation squibs lead
logically from which other ones under what circumstances: my current
system requires me to do this work myself, in the sense that I play
the game until I find it doing something I don't like, then I go back
and fix the code until it no longer does this. Sometimes that
requires me to give conversation topics/squibs new attributes or
properties so that I can introduce a new rule to the system. A
learning AI might, hypothetically, make that process less difficult
for me. The *fact* that it is a learning AI would thus be useful from
a construction point of view, not so much during the game's playtime.
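
(Hypothetically -- this is not code from my actual system -- the
trainable part might be nothing more than a table of follow-on scores
between squibs, bumped whenever a test reader approves a transition:

! Hypothetical sketch: learn which squib follows well from which.
Constant NUM_SQUIBS 40;
Constant FOLLOW_SIZE 1600;       ! NUM_SQUIBS * NUM_SQUIBS
Array follow_score --> FOLLOW_SIZE;

[ Approve a b;       ! a tester liked squib b coming right after squib a
    follow_score-->(a*NUM_SQUIBS + b) = follow_score-->(a*NUM_SQUIBS + b) + 1;
];

[ NextSquib a best bestscore i;
    best = 0; bestscore = -1;
    for (i = 0 : i < NUM_SQUIBS : i++)
        if (follow_score-->(a*NUM_SQUIBS + i) > bestscore) {
            bestscore = follow_score-->(a*NUM_SQUIBS + i);
            best = i;
        }
    return best;
];

The interesting rules would still be the hand-coded exceptions on top;
the table only spares me some of the bookkeeping.)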

> And Galatea teased me with the promise of
> interesting conversation, but it seemed that the line of questioning I
> was intrigued by hadn't been programmed in. I would have been far
> happier with a directed conversation and only four topics, I think.

One of the things I kicked around in my head for a while was the idea
of an open-NPC project. I would put the NPC out there for people to
play with, and then (oh, I don't know, maybe by running this on the
web) it would record any topics that got asked about for which it
didn't have a response, and I would write up responses for those and
so on.

The problems are obvious (and I don't just mean problems like, "Huge
Time Sink"). Nonetheless, I think what you've put your finger on (at
least in the sense of Galatea's limitations; making her only talk
about four things would have been counter to the entire point, but I
think we've established already that you and I have vastly different
game design preferences) is integral to good NPC development: if there
is not to be AI, then it helps at least to have other humans than the
author prodding at the NPC to find its edges and expand upon them.



> Perhaps I'm an NPC luddite. But I can't help thinking that Tiffany --
> er, I mean Trent -- from Leather Goddesses of Phobos was very cool when
> he did stuff, and totally ignorable when he had nothing to say. The
> Demon from Curses was a very good NPC, because he was clever, and I read
> everything I got him to say with a knowing smile. One player said he
> spent a good while in The Mulldoon Legacy chatting up Officer Cat
> Dowobbly-Ftang, and I know how few things she has to say. But it doesn't
> matter, because it's not about the numbers. It's about the words.

Here we're back to where I agree. At bottom, NPCs will not be any
better than their writing, and until we can teach an AI to write with
grace and humor and personality, I think we need to continue to put
effort into their dialogue ourselves.

ES

ems...@mindspring.com

Sep 17, 2001, 1:20:22 PM
buz...@world.std.com (Sean T Barrett) wrote in message news:<GJsu7...@world.std.com>...

> I agree that AI for NPCs is of more questionable utility,
> since they could easily wreck stories or wreck puzzles due
> to their unpredictability.

Hmm. People seem to keep assuming that AI-driven NPCs == NPCs with
the ability to decide they want to stroll across town for a cup of
coffee, or burn a hole in the wall, or marry the PC's sister, leaving
the plot empty and formless. (I'm reminded a bit of "The Purple Rose
of Cairo" here.)

Not only are NPCs only as smart as we make them, but they are only
smart in the *ways* that we make them smart; by which I mean, it is
possible to give the NPC a fair amount of discretionary power over
methodology, but still control his end goals. If I want to write a
more-intelligent-seeming NPC than one who just steps through a list of
preprogrammed phrases in order, then I can, for instance, give him
some ways to react to the player to get the conversation back on
track, allow for some responses that reflect emotion, etc.; but if I
don't *want* to let him get angry enough to storm out of the room,
e.g., I don't have to. There can be a great deal of sophistication in
this methodology, including sophistication that allows him to select
some subgoals, but all his efforts would still ultimately be for the
purpose of achieving his plot-purpose.

If you let NPCs roam your physical environment moving things around
and manipulating objects significantly, then yes, it is possible that
this will result in him, for instance, unintentionally locking up the
player in the attic without a key, rendering the rest of the game
unplayable. Here again, though, it seems mostly that there's a
question of imposing additional constraints or making the AI even
smarter -- teaching the NPC to recognize when it is about to constrain
the player's movement, for instance, or to answer when the PC knocks
on the attic door.

My point remains something like this: blue-sky AI, HAL-inna-box, a
super-NPC who is for all intents and purposes a person -- that sort of
thing is indeed beyond our abilities (at the moment) and incompatible
with our goals (at least in the context of IF). But if we talk about
AI in relative terms, as dealing with a greater degree of abstraction,
flexibility, and dynamic decision-making, then some improvement in AI
does become useful to us.

ES

Davey

Sep 17, 2001, 1:25:40 PM
Certainly, a good example of a natural language interface where the software
actually knew what it was talking about was Terry Winograd's SHRDLU. Most,
if not all, others, (barring certain expensive expert systems) are just
keyword-matching routines, which as you know work very well for certain
domains but are exceedingly brittle.

"David Brain" <ne...@davidbrain.co.uk> wrote in message
news:memo.20010917...@atlan.cix.co.uk...

Gadget

Sep 17, 2001, 1:31:23 PM
On Mon, 17 Sep 2001 08:33:51 GMT, buz...@world.std.com (Sean T
Barrett) wrote:


>Whether you accept this as an emergent, player-driven
>small-scale "story" or not is a separate question.
>
>Sean

Now you turn the discussion towards a very valid question:

Do you create a world or do you write a story?

If you just create an interesting world for the player to interact
with in any way s/he pleases, each player will have (sort of) unique
stories. If you tell the story of Arthur Dent who escapes from the
doomed planet Earth to struggle with the vending machine, get thrown
out of the airlock and finally reach a mysterious destination, the
story is set in stone, as it were. The player can only follow the path
the designer has set out for him.

I don't believe it is possible to let the program create storylines
as tightly scripted as, say, Hitchhiker's Guide. Well, maybe one day,
but currently it is WAY beyond the capabilities of our programming
languages.

It would require a thorough understanding of what creativity is and
how you can abstract that into a computer model.

Gadget

unread,
Sep 17, 2001, 1:44:11 PM9/17/01
to
On 17 Sep 2001 09:49:47 -0700, ems...@mindspring.com
(ems...@mindspring.com) wrote:

For me intelligence means:

a) The ability to process information
b) Then learn from that information
c) Base future actions on what has been learned from that information
*and* on what has been learned from previous actions.
d) Solve problems based on c)
e) Understand how the solution in d) was reached (in short: why the
solution works).

These things together form, for me, intelligence.
A rat who pushes a lever to open a hatch to get to a piece of cheese
doesn't understand what he does. He just reacts to a pattern of
'stimulus-response'.

If, however, the lever is placed some distance from the hatch, and
that hatch closes if the lever is released, the rat would show
intelligence by putting something heavy on the lever to keep it down
to open the hatch to get the cheese.
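
A crude sketch of a) through c) -- and deliberately only a) through c),
since nothing in it understands anything (Python, everything invented):

  import random

  weights = {"press_lever": 0.0, "sniff_corner": 0.0}

  def act():
      if random.random() < 0.1:                 # sometimes explore
          return random.choice(list(weights))
      return max(weights, key=weights.get)      # mostly exploit

  def reward(action, cheese):
      # Learn from the outcome of the action just taken.
      weights[action] += 1.0 if cheese else -0.1

  for _ in range(100):
      a = act()
      reward(a, cheese=(a == "press_lever"))

  print(max(weights, key=weights.get))          # -> press_lever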

Come to think of it: when we play IF, we basically just chase pieces
of cheese in other people's mazes.

But if the rat would, for example,

Gabe McKean

unread,
Sep 17, 2001, 1:54:39 PM9/17/01
to
David Brain wrote in message ...

I hadn't heard about the "Star Trek" test. I do remember, though, that
Garry Kasparov was briefly convinced that Deep Blue was in fact being
controlled by a human chess-player after one of their matches. I guess that
qualifies as passing a limited form of the Turing test. Of course, the
possible actions in a chess game are fairly restricted, even for a
'micro-world.'


L. Ross Raszewski

unread,
Sep 17, 2001, 2:33:53 PM9/17/01
to
On Mon, 17 Sep 2001 19:01:41 +0200, Gadget
<gad...@SPAMBLOCKhaha.demon.nl> wrote:
>On Mon, 17 Sep 2001 15:01:52 GMT, russ...@wanda.vf.pond.com (Matthew
>T. Russotto) wrote:
>
>>In article <scr9qtogjqvbcnnni...@4ax.com>,
>>Gadget <gad...@SPAMBLOCKhaha.demon.nl> wrote:
>>}On 16 Sep 2001 00:02:37 -0700, ems...@mindspring.com
>>}(ems...@mindspring.com) wrote:
>>}
>>}<AI article>
>>}
>>}I've been thinking about your article and it reminded me about
>>}something I read bout the design of an intelligent opponent in, for
>>}example, a game of Tic Tac Toe. How do you let the computer make
>>}decisions that are constructive towards a certain goal: in this case
>>}winning the game.
>>
>>Tic Tac Toe is a poor example because there's a simple non-losing
>>strategy that doesn't require searching the game tree.
>
>For tic tac toe you can also read 'reversi' or 'connect four'. The
>same system will work on those (but better, you are right of course

Or, more tellingly, 'Global Thermonuclear War'

Gadget

unread,
Sep 17, 2001, 3:34:08 PM9/17/01
to

Wouldn't you prefer a good game of chess?

Sean T Barrett

unread,
Sep 17, 2001, 3:36:30 PM9/17/01
to
ems...@mindspring.com <ems...@mindspring.com> wrote:

>buz...@world.std.com (Sean T Barrett) wrote:
>> I agree that AI for NPCs is of more questionable utility,
>> since they could easily wreck stories or wreck puzzles due
>> to their unpredictability.
>
>Hmm. People seem to keep assuming that AI-driven NPCs == NPCs with
>the ability to decide they want to stroll across town for a cup of
>coffee, or burn a hole in the wall, or marry the PC's sister, leaving
>the plot empty and formless.

This wasn't really what I meant; in fact I nearly qualified the above
comment with "unless managed by some sort of gamemastering AI". But
I decided I didn't want to beat that drum too much, because I don't
want to give the impression that I think gamemastering AI is really
the best thing since sliced bread.

>If you let NPCs roam your physical environment moving things around
>and manipulating objects significantly, then yes, it is possible that
>this will result in an NPC, for instance, unintentionally locking up the
>player in the attic without a key, rendering the rest of the game
>unplayable. Here again, though, it seems mostly that there's a
>question of imposing additional constraints or making the AI even
>smarter -- teaching the NPC to recognize when it is about to constrain
>the player's movement, for instance, or to answer when the PC knocks
>on the attic door.

The moment the NPC AI starts coloring its actions based on the
potential player experience *as a game*--or, alternately,
the moment the NPC begins coloring its actions based on our
goals for the player instead of the NPCs goals--you are to
my mind writing game-mastering AI, not NPC AI. Hence what
I really meant but didn't say was "NPC AI (without gamemastering
AI) is more questionable."

SeanB

Jon Ingold

unread,
Sep 17, 2001, 3:34:07 PM9/17/01
to
> On the other hand, it seems that this sort of thing *might* be useful
> for things as nebulous as conversation writing. The *fact* that it
> is a learning AI would thus be useful from a construction point of
> view, not so much during the game's playtime.

Of course, if we allow it, we're back to definitions here. Would such an
NPC (or perhaps more accurately, such a game's ControlConversationFlow
daemon) be an AI? I think by my definitions, yes, it would be because it
learns in development, even if not in the final game release.

The possible drawback to such an idea would be the need to
pre-generalise your conversation system sufficiently to cover everything
before you know what it is. This sort of system would presumably not be
very favourable for hard-coded entry-points (or at any rate, they would
detract from the point). But once that work is done, it would exist and
not need to be done again.

Yes, I think I like this idea - then you play through your
conversation with your NPC in a limbo room, and have him spout
randomly, just telling him whether he's rambling or whether he's
generating the illusion of coherency. In the game context, it would
have weighted links between topics; by reading these weights you could
also add nicely to the conversation's "trappings"; so, if an option of
low weighting is selected by a random process, then the game chooses a
"Suddenly, Jim says..." message rather than a "Jim replies".

Have you done any work toward producing such an authoring/training
system, or is it still an idea-in-words-only? (And do your conversation
systems lend themselves toward nice generalisation?)

> making her only talk
> about four things would have been counter to the entire point, but I
> think we've established already that you and I have vastly different
> game design preferences)

[Certainly - my remark was more to make this clear than to attack the
game itself. No real contextual criticism/offence of it intended. After
all, if I'd have written Galatea, I suspect I would have buried the
location of the trapdoor key somewhere in her mind and forced the player
to chat her up sufficiently/convince her of his authority/bond with her
on the issue of the whales etc to obtain it. Counter, perhaps, to the
point.]

> is integral to good NPC development: if there
> is not to be AI, then it helps at least to have other humans than the
> author prodding at the NPC to find its edges and expand upon them.

I'm not sure this is a battle you can win, though. Once you remove the
[excuses and] context for limited-topic NPCs, the fact that they are
limited-topic becomes more telling. Having more people run through will
help, but may open more problems as new topics suggest new topics, and
so forth. The ability to direct conversations from your end of the keyboard
will aid this, certainly. The other external pressures of the plot and
scene (from, of course, the Game God AI) will help too. But in the end,
all you're doing here is covering holes, and the stuff for the player to
enjoy still needs to be done. As I think I have probably suggested,
personally I feel NPC conversation is more a means to an end (that of a
good game, or a satisfying puzzle, or a nicely structured story) than an
end in itself. There is little point in a clever, genius-bug character
with no opinions and nothing of import to impart; just as there would be
no point to me learning topological space axioms if there were no
theorems of interest, and (I would think) little point in learning
Ancient Greek if the only texts were a thousand laundry lists. Do we
converse with NPCs to watch them speak, or to hear them?

Finally: along with the issue of quality of writing, I suspect there's a
secondary one of consistency, within your game and with the expectations
the player develops of it. [AI by its probabilistic nature will lack
consistency, but then you have the consistency of slight
inconsistency...] If the consistency is wrong the game will feel rough,
unpolished, unfair. If the consistency is maintained, the players will
learn the space of the game and work within it: you achieve technical
context, and so under whatever model you reach a Nirvana game engine.

I wish you luck in your approach; it's a lot more complex and ambitious
than my own.

Sean T Barrett

unread,
Sep 17, 2001, 3:50:09 PM9/17/01
to
OKB -- not okblacke <bren...@aol.comRemove> wrote:
>Ooer. I hardly think that "altering the story" per se is the end-all and
>the be-all of interactivity.

Well. Abstracting for a moment, a system which is interactive but
for which the responses are not at all altered based on what the
user does can be said to be "interactive", but I think it pretty
clearly lies at the non-interactive end of the interactivity
continuum. (Example: press any key for next page.)

Since there are essentially no games that exhibit interactivity on
the scale I'm talking about (perhaps the C64 game "King of Chicago"),
we are forced to guess. My guess is that micro-gameplay which is
at the interactive end of the interactive spectrum PLUS story which
is at the interactive end of the interactive spectrum "leverages
interactivity the best", and I think it's painfully obvious why I
would suppose that. I'm really not clear from your reply exactly why
you're asserting (according to my terminology) that micro-gameplay
which is at the interactive end of the interactive spectrum PLUS
story which is at the non-interactive end of the interactive spectrum
is better, since you don't give any details.

Certainly as long as computers can't write good stories, the story
written by the computer will not be as effective as the story written
by the human. But appealing to that fact for this discussion is
begging the question. And again I refer you to the interactive
human storyteller scenario, who doesn't make use of "side extras"
or pull all the branches back to a common ending.

Note also that I'm not asserting that a human interactive storyteller's
story is a better story qua story than a traditional story--only that
it is a better interactive experience.

SeanB

OKB -- not okblacke

unread,
Sep 17, 2001, 5:25:41 PM9/17/01
to
ems...@mindspring.com (ems...@mindspring.com) wrote:
>Hmm. People seem to keep assuming that AI-driven NPCs == NPCs with
>the ability to decide they want to stroll across town for a cup of
>coffee, or burn a hole in the wall, or marry the PC's sister, leaving
>the plot empty and formless. (I'm reminded a bit of "The Purple Rose
>of Cairo" here.)

Well, either that or he just decides to wrench the conversation around to
12th century Moorish architecture.

>If I want to write a
>more-intelligent-seeming NPC than one who just steps through a list of
>preprogrammed phrases in order, then I can, for instance, give him
>some ways to react to the player to get the conversation back on
>track, allow for some responses that reflect emotion, etc.; but if I
>don't *want* to let him get angry enough to storm out of the room,
>e.g., I don't have to.

This is something I hadn't really thought about, but it seems like it
might be cool. Have a very intelligent but very ignorant NPC who doesn't even
know it's possible to do anything but talk to the player. By using AI-type
techniques to do just a small subset of the things we generally think of as
"what an AI does", we might be able to beef up NPCs just where they need it.

OKB -- not okblacke

unread,
Sep 17, 2001, 5:35:38 PM9/17/01
to
buz...@world.std.com (Sean T Barrett) wrote:
>OKB -- not okblacke <bren...@aol.comRemove> wrote:
>>Ooer. I hardly think that "altering the story" per se is the end-all and
>>the be-all of interactivity.
>
>Well. Abstracting for a moment, a system which is interactive but
>for which the responses are not at all altered based on what the
>user does can be said to be "interactive", but I think it pretty
>clearly lies at the non-interactive end of the interactivity
>continuum. (Example: press any key for next page.)

I don't dispute this. What I'm saying is that there's a difference
between "the responses are not at all altered" and "the STORY is not at all
altered". In other words, a game can be very interactive and do many different
things without the common thread of the story having to change.

>My guess is that micro-gameplay which is
>at the interactive end of the interactive spectrum PLUS story which
>is at the interactive end of the interactive spectrum "leverages
>interactivity the best", and I think it's painfully obvious why I
>would suppose that. I'm really not clear from your reply exactly why
>you're asserting (according to my terminology) that micro-gameplay
>which is at the interactive end of the interactive spectrum PLUS
>story which is at the non-interactive end of the interactive spectrum
>is better, since you don't give any details.

I don't really think that either is better, and that's somewhat the point.
A game with "many stories" (i.e., the actual story changes depending on how
the player interacts) is good, if that's what you want, but a game with one
story which is simply developed/described differently/to a greater or lesser
extent is good, if you want that. As I said, I have one WIP which is the
latter -- and, in fact, I also have one giant WNIP in some nebulous state of
"I'm thinking about it" which is the former.

Daniel Barkalow

unread,
Sep 17, 2001, 7:01:49 PM9/17/01
to
On 16 Sep 2001, ems...@mindspring.com wrote:

> Within this rubric (AI as abstraction), we see the following things
> postulated as constituting AI of a type useful for IF:
>
> 1) the ability to generate correct English sentences representing an
> idea (the idea somehow being stored in abstract form within the
> program.) I continue to think that this is one of the most difficult
> AI tasks imaginable, well beyond our current capacities. Partly, we
> just don't know enough about how human language works; the Chomskian
> concept that we start with a Deep Structure of a sentence and then
> work it out into a grammatical form is belied by the fact that I often
> begin a sentence without knowing precisely how I intend to finish it.

Chomsky's concept is actually not quite that, but rather that the results
are the same as if we had. How we (as speakers, rather than as
linguists) come up with sentences is not Chomsky's particular field. So
you're right, but Chomsky doesn't care.

> And then, of course, language abounds in nuance, irony, subtlety
> impossible to codify. If and when this kind of AI is developed, I
> think it will be by another method entirely than programmatic means
> [*]. So much for that.

If you are content to, as a programmer, specify the stylistic aspects of
the text, it is likely possible to generate sufficient natural language
using templates into which the NPC would put the nouns and verbs for the
situation under consideration. The standard library responses are one step
down this path; Platypus takes a second step down it; it becomes
convincing long before it needs to be fully general, especially if your
NPCs would not have read Orwell.
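
For instance (a bare-bones Python sketch; the templates and names are
invented, not anyone's existing library):

  TEMPLATES = {
      "refuse": '{actor} shakes {pos} head. "Not the {noun}."',
      "comply": "{actor} hands you the {noun}.",
  }

  def say(kind, actor, noun, female=True):
      # The program supplies the nouns; the author has fixed the style.
      return TEMPLATES[kind].format(
          actor=actor, pos="her" if female else "his", noun=noun)

  print(say("refuse", "Lisa", "vase"))
  # -> Lisa shakes her head. "Not the vase."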

> [*] I'm sure there is an official term to refer to what I mean, known
> to those who have had any instruction in computer science. [...] I have
> heard the phrase 'neural net,' for instance. Sounds like Magic to me.

I believe the term you're looking for is 'learning' (not to say that it is
really learning, but that, when used as a technical term, that's what it
means). There are more or less opaque ways to do it, but the essential
idea is that, if you consider the entire set of input (including the
program code) that makes the system work, not all of it was
originally intended as input to a computer.

> 2) the ability to replicate "natural" emotions, including some
> internal representation of emotional state and a set of responses
> which will convey this state back to the player. In a crude sense,
> this can be done easily already; it is a matter of creating a machine
> with a number of states and asking it to choose an action based on the
> combination of those states. The sentence-thrower proposed on this
> group some time ago was an example of such implementation.

Indeed; the real issue is not giving the AI emotions, but rather giving it
human emotions: a set of states which a human will recognize and identify
with from the resulting behavior.
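
Such a machine can be almost embarrassingly small -- a toy Python
sketch, thresholds invented:

  def react(anger, trust):
      # Two crude state variables; the action falls out of their
      # combination.
      if anger > 7:
          return "storms out"
      if anger > 4 and trust < 3:
          return "answers curtly"
      if trust > 6:
          return "confides in you"
      return "makes small talk"

  print(react(anger=5, trust=2))   # -> answers curtly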

> ***
>
> We have also seen it suggested that something is AI when it is able to
> produce results unanticipated by the programmer. By this definition,
> any of my buggy code is exquisite AI, but presumably we are also to
> understand, "results that are nonetheless consistent and desirable."
>
> I would argue that in fact we will never be able by programmatic means
> to make an AI which will produce behavior which is startling to a mind
> capable of understanding the system perfectly and completely; even an
> AI programmed with a full lexicon will only produce sentences from the
> words in the lexicon, and thus not expand essentially beyond the
> (enormous) scope laid out for it by its creation.

It is easy, of course, to make a system that is too complex to
understand; if you code for longer than you can stay focused, or have
multiple people, or, for example, include an encyclopedia in your input to
the system (having written a program that can read it, at least somewhat).

> The construction of a plausible AI for emotional and/or conversational
> output, however, requires a lot of fine-tuning. One of the things
> that makes NPC-writing such a towering pain in the ass from my point of
> view is precisely that I can't always tell what's going to happen, and
> the more sophisticated the system designed to control them the less
> certain it is; I keep having to go back to the system library and/or
> the object data and massage the constants, or add handling for some
> weird exception in the behavior that I want to achieve.

Ideally, you'd be able to specify what sort of thing you want to avoid,
not on the level of individual exceptional cases, but with broad
classifications: don't act uninterested in something you would care about,
even if you have nothing to say, etc. Fall back, in general, to safe
generic responses if a more particular response violates some rule. That
is, the programmer should not have to specify exactly what to do when a
possible response is judged unlikely; if all else fails, the NPC can just
behave in an emotion-based way unrelated to the situation. That is, after
all, what people tend to do when faced with something that perplexing. Of
course, this fallback position is unlikely to advance the story at all.
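
Roughly (Python sketch, every name invented): candidate responses pass
through broad author-written vetoes, and if nothing specific survives,
a safe generic line is used instead.

  VETOES = [
      # Broad classifications, not per-case exceptions:
      lambda npc, resp: resp["tone"] == "bored" and npc["cares"],
  ]

  def pick(npc, candidates):
      ok = [r for r in candidates
            if not any(veto(npc, r) for veto in VETOES)]
      if ok:
          return max(ok, key=lambda r: r["fit"])["text"]
      return "Eva frowns, lost in thought."   # safe generic fallback

  npc = {"cares": True}
  candidates = [{"tone": "bored", "fit": 0.9, "text": '"Mm. Whatever."'}]
  print(pick(npc, candidates))   # -> the generic fallback line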

I really think that getting the NPCs to behave believably when the player
does something totally unexpected by the author, and have it advance the
story as if that command had been expected, is the holy grail of NPCs (and
puzzles, for that matter, probably; the parser, for all its non-character
status, is no less a candidate for AI-like behavior). This reminds me of
those times on game shows when the player has said something, and the host
thinks about it for a bit, and decides whether it's sufficiently close to
the answer he has to count it as right.
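
That "sufficiently close" judgment is at least partly mechanizable with
stock string similarity (Python; the 0.8 threshold is a guess):

  from difflib import SequenceMatcher

  def close_enough(given, expected, threshold=0.8):
      # Accept an answer "sufficiently close" to the one on the card.
      ratio = SequenceMatcher(None, given.lower(),
                              expected.lower()).ratio()
      return ratio >= threshold

  print(close_enough("the maltese falcon", "Maltese Falcon"))   # -> True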

A point I think has been lurking around here somewhat, without necessarily
being said, so far as I can tell: an AI could be in the game as a
character, such that the NPC behaves as a real person would, unaware that
their whole world is limited to a directed graph liable to be saved to
disk at any point; an AI could, on the other hand, be in the world as an
actor playing a character. The AI would have information which would lead
it to behave in ways which are good for the game and for entertaining the
player, in addition to ways that advance the NPC's goals. I think that the
latter kind of AI would be far more manageable as far as getting the story
to work.

This also applies to multi-human games: there could be multiple players
who compete or cooperate with each other, and who have no more information
than their characters or at least pretend that they don't, or there could
be one person who is the player, whose entertainment is the point of the
game, and other people who interact with them, having a better idea of
what is really going on and working to make the experience enjoyable and
interesting.

-Iabervon
*This .sig unintentionally changed*


Richard Bos

unread,
Sep 18, 2001, 4:59:27 AM9/18/01
to
ne...@davidbrain.co.uk (David Brain) wrote:

> In article <Wmmp7.2333$f5.135885@news>, D...@G.com (Davey) wrote:
>
> > People diss the Turing Test all the time, even though it is a test that
> > has yet to be passed!
> >
> > I find this odd. If it's so easy, why haven't we seen it yet?
>
> But it seems that the Turing Test has been passed several times - *if* the
> test is limited to a "micro-world" as has been referred to in this thread.

The problem is that this breaks the premise of the Turing test. You're
supposed to find out who is the human and who the computer, and the
computer is to stop you finding out _by imitating a human_, not by
answering questions from a pop quiz. Undoubtedly a computer would pass
the Mastermind test, but that is not the same thing as the Turing test,
which was specifically designed _also_ to allow personality hints and
creativity and the like to help decide the question.

Richard

Davey

unread,
Sep 18, 2001, 9:20:43 AM9/18/01
to
I actually believe there was a chess grandmaster helping the Deep Blue
programmers tweak the algorithm in real-time. Which I can't confirm but I
remember reading it and becoming cynical about the whole affair.

"Gabe McKean" <gmc...@wsu.edu> wrote in message
news:9o5ddi$19a3$1...@murrow.murrow.it.wsu.edu...

David A. Cornelson

unread,
Sep 18, 2001, 9:46:27 AM9/18/01
to
"Davey" <D...@G.com> wrote in message news:LSHp7.2687$f5.171186@news...

> I actually believe there was a chess grandmaster helping the Deep Blue
> programmers tweak the algorithm in real-time. Which I can't confirm but I
> remember reading it and becoming cynical about the whole affair.
>

According to IBM
http://domino.research.ibm.com/comm/wwwr_thinkresearch.nsf/pages/deepblue296.html,
the humans only forced a draw when it was clear Deep Blue would lose
and intervened when Kasparov offered a draw and wanted to play it out. It
doesn't look like anyone made moves except for Deep Blue.

The disconcerting part is that it seems they used a champion chess player to
code for weaknesses within the system instead of coding for Deep Blue to
learn from its mistakes, which it seems they haven't figured out yet.

Jarb


Davey

unread,
Sep 18, 2001, 10:00:09 AM9/18/01
to
Hi Jarb,
Your link is of course to information regarding Kasparov's first encounter
with Deep Blue, not the second one where Deep Blue "won". Again, my
cynicism is obvious.

Here's a quote from one of the team members, again after the first match, so
I still cannot confirm if my suspicions are true about real-time human
intervention during the second Deep Blue/Kasparov match:

"It's also possible that we could change the program too [to alter its
strategy between games]. We were very conservative about changing the
program, because in such a short time when you make a change you can't
really test it thoroughly. But in theory, if we see the same thing happening
over and over, we can change it. We can patch holes. We didn't do that, but
we are allowed to."

http://www.sciam.com/explorations/042197chess/042197blueinter.html

Again, this was the first match. I think they chose to intervene in the
second match in order to secure a win.

If someone will prove me wrong I will enjoy publicly eating my words!
D

"David A. Cornelson" <dcorn...@placet.com> wrote in message
news:tqek1hr...@corp.supernews.com...

Martin Bays

unread,
Sep 18, 2001, 10:01:41 AM9/18/01
to
buz...@world.std.com (Sean T Barrett) wrote in message news:<GJstL...@world.std.com>...

> Jon Ingold <ji...@cam.ac.uk> wrote:
> >The thing that most AI gets hung up on is the idea of learning
>
> The belief that AIs should work by learning is not a universal
> AI belief; it is a premise that locks you into certain models
> of AI from the moment you make that assumption.
>
> >But then the question of the original thread was really "Is IF a good
> >medium in which to explore AI technology"? In which case, we're not
> >really talking about making good games at all. However, again, I'd still
> >be forced to argue that no, AI is not well-served by the IF medium and
> >the reason is this - paucity of data.
>
> Actually, there are some who might argue with you. Douglas Hofstadter's
> AI book "Fluid Concepts and Creative Analogies" (or something like that)
> is about an approach to AI that exhibits emergent behavior (you can't
> predict what it will come up with) and works best with limited
> "micro-worlds"--in fact his whole premise is that things like Cyc
> and the widespread bias against microworlds in the AI community
> is exactly what's preventing us from getting anywhere with AI, since
> it's forcing us to solve a much harder problem than we need to solve.
>
> Games offer a built-in microworld with simple discrete rules, no
> worries about people using "Big Mac" as a metaphor so the knowledge
> base needs to have 200 facts about McDonalds, and no worry about
> sense data processing. (However, I don't see any obvious IF use
> for the analogy-making DH's programs do, except possibly in the
> area of storytelling.)
>
> SeanB

The way I always read Hofstadter on this, though I don't think he ever
gave a proper explanation of quite how it works, is that though
explicit analogy making itself isn't much use as a tool to be brought
to bear on real world situations (how often do you actually need to
know what the London of California is?), equivalent processes are
fundamental to the formation and use of real concepts. Elsewhere in
the thread I've given some vague graspings at how this might actually
work in practice, and I still think that if it could be got to work,
this could actually have a use in IF (or in some IF offshoot - Jon
Ingold and others have given a pretty convincing argument to the
effect that real AI isn't helpful to IF as we know it).

Martin

Martin Bays

unread,
Sep 18, 2001, 10:41:53 AM9/18/01
to
ems...@mindspring.com (ems...@mindspring.com) wrote in message news:<a69830de.01091...@posting.google.com>...

> What this says is not, "the Turing test is easy because it only looks
> at output," but rather, "the Turing test is different from our other
> formulations of the nature of AI because it does not investigate what
> is going on under the hood."
>
> The advantage of this definition is that it evades all the
> philosophical conundra raised by discussing What Intelligence Really
> Is. The disadvantage, at least from our point of view, is that it
> doesn't provide many hints about how to solve the problem.
>
> ES

I think the point here is that while, sure, anything that passes the
fully fledged Turing Test must be considered as intelligent as you or
some other you (but not me - that's a bit different since I have
special knowledge of my own intelligence), we can't do that yet. And
so we look at micro-domains, and IF could be considered one of them,
and the point of 'looking under the bonnet' rather than just at what
the programme can do is to see whether or not the approach you're
trying can be scaled up to the full real-world domain of the Turing
Test. If yes, you have AI. If not, you've just got another Eliza, or
Deep Thought, or 'expert system' which might do cool things but
doesn't really shed any light on the nature of intelligence or how to
build one.

Martin

Gabe McKean

unread,
Sep 18, 2001, 12:01:25 PM9/18/01
to
Davey wrote in message ...

>Here's a quote from one of the team members, again after the first match, so
>I still cannot confirm if my suspicions are true about real-time human
>intervention during the second Deep Blue/Kasparov match:
>
>"It's also possible that we could change the program too [to alter its
>strategy between games]. We were very conservative about changing the
>program, because in such a short time when you make a change you can't
>really test it thoroughly. But in theory, if we see the same thing happening
>over and over, we can change it. We can patch holes. We didn't do that, but
>we are allowed to."
>
>http://www.sciam.com/explorations/042197chess/042197blueinter.html
>
>Again, this was the first match. I think they chose to intervene in the
>second match in order to secure a win.
>
>If someone will prove me wrong I will enjoy publicly eating my words!
>D

I don't have any proof, but I seriously doubt that they were tweaking Deep
Blue in 'real-time,' ie. in the middle of a chess match. They did do
extensive tweaking on its programming before the rematch, including input
from a grandmaster, but that's a far cry from what you're claiming. To
actually change its programming during a game, or even in between two games,
would be just as likely to screw something up as it would to make Deep Blue
better at beating Kasparov. Ask any experienced programmer. I suppose they
could have had their grandmaster actually inputting Deep Blue's moves at
critical moments, but that assumes such dishonesty on the part of the IBM
team that I'm loath to believe it without some evidence.


Davey

unread,
Sep 18, 2001, 1:25:41 PM9/18/01
to
Actually Gabe, I just found this bit of info over at
http://www8.zdnet.com/pcmag/news/trends/t970502b.htm

(this article is about the second match, the one which DB won)

Yet the machine's development over the past year was not solely physical.
With the help of Brooklyn-born grand master Joe Benjamin, IBM has refined
Deep Blue's basic chess knowledge and corrected its inability to handle
midgame strategy changes. Its software platform is also more flexible,
allowing team members to massage its core code on the fly. Like Kasparov, it
can alter its methods between games. Of course, it requires human help to do
so.

As I mentioned before, I never believed that the Deep Blue win was an honest
one. I know I had heard that algorithms were being tweaked on-the-fly by a
chess grandmaster.

I happen to be a programmer, too...

"Gabe McKean" <gmc...@wsu.edu> wrote in message

news:9o7r59$1hch$1...@murrow.murrow.it.wsu.edu...

Davey

unread,
Sep 18, 2001, 1:28:46 PM9/18/01
to
Here's something else I found:

Why had Deep Blue performed so differently the day before? Had the IBM
Research team modified the machine's software, as they can now do with
relative ease between games? "We had it throw back a few cocktails," said an
ecstatic Tan.--Cade Metz

http://www8.zdnet.com/pcmag/news/trends/t970505b.htm

"Davey" <D...@G.com> wrote in message news:psLp7.2774$f5.179139@news...

Dan Schmidt

unread,
Sep 18, 2001, 1:17:59 PM9/18/01
to
"Davey" <D...@G.com> writes:

| I actually believe there was a chess grandmaster helping the Deep
| Blue programmers tweak the algorithm in real-time. Which I can't
| confirm but I remember reading it and becoming cynical about the
| whole affair.

They changed parameters between games, is all.

--
http://www.dfan.org

Davey

unread,
Sep 18, 2001, 2:07:56 PM9/18/01
to
What parameters?

(Like your web site, by the way)

D

"Dan Schmidt" <df...@harmonixmusic.com> wrote in message
news:wky9nc2...@turangalila.harmonixmusic.com...

John W. Kennedy

unread,
Sep 18, 2001, 2:36:27 PM9/18/01
to
"ems...@mindspring.com" wrote:
> Hmm. People seem to keep assuming that AI-driven NPCs == NPCs with
> the ability to decide they want to stroll across town for a cup of
> coffee, or burn a hole in the wall, or marry the PC's sister, leaving
> the plot empty and formless. (I'm reminded a bit of "The Purple Rose
> of Cairo" here.)

Perhaps it is necessary to make a distinction between "artificial
intelligence" (1/2 of what Victor Frankenstein achieved) and "use of AI
technologies"?


--
John W. Kennedy
(Working from my laptop)

John W. Kennedy

unread,
Sep 18, 2001, 2:43:31 PM9/18/01
to
Gadget wrote:
> The program looks at the situation on the gameboard and compiles a
> list of possible moves for itself. Then each move in the list is
> awarded a score: high if it will advance the situation in favour of
> the program, low if it will have a negative effect on the state of the
> game. Then the same is done with the possible moves of the player.
> Finally, the move is chosen that has the maximum positive effect for
> the computer with the minimum advantage for the player.

This is a classic example of what _isn't_ Artificial Intelligence. A
true artificial intelligence will judge the intelligence of the opponent
and move accordingly, employing a minimax technique only against an
opponent who is intelligent enough to do the same. A mid-level player
is likely to be defeated by moves that will lose to an expert, whereas
minimax moves against anyone but an utter novice will almost always
produce a tie.

John W. Kennedy

unread,
Sep 18, 2001, 2:45:51 PM9/18/01
to
Gadget wrote:
> Wouldn't you prefer a good game of chess?

There's a minimax solution to chess (although no-one knows what it is).

John W. Kennedy

unread,
Sep 18, 2001, 2:49:19 PM9/18/01
to
"David A. Cornelson" wrote:
> The disconcerting part is that it seems they used a champion chess player to
> code for weaknesses within the system instead of coding for Deep Blue to
> learn from its mistakes, which it seems they haven't figured out yet.

Deep Blue mostly did its job by massive searches of the game tree, not
by techniques generally recognized as AI.

To be fair, that was what the IBM engineers were working to achieve --
raw hardware speed.

Matthew Russotto

unread,
Sep 18, 2001, 3:41:15 PM9/18/01
to
In article <3BA796B6...@attglobal.net>,

John W. Kennedy <jwk...@attglobal.net> wrote:
>Gadget wrote:
>> Wouldn't you prefer a good game of chess?
>
>There's a minimax solution to chess (although no-one knows what it is).

Forced win for white, forced win for black, or only nonlosing
strategies?


--
Matthew T. Russotto russ...@pond.com
=====
Get Caught Reading, Go To Jail!
A message from the Association of American Publishers
Free Dmitry Sklyarov! DMCA delenda est!
http://www.freedmitry.org

Gadget

unread,
Sep 18, 2001, 4:32:09 PM9/18/01
to
On Tue, 18 Sep 2001 18:43:31 GMT, "John W. Kennedy"
<jwk...@attglobal.net> wrote:

>Gadget wrote:
>> The program looks at the situation on the gameboard and compiles a
>> list of possible moves for itself. Then each move in the list is
>> awarded a score: high if it will advance the situation in favour of
>> the program, low if it will have a negative effect on the state of the
>> game. Then the same is done with the possible moves of the player.
>> Finally, the move is chosen that has the maximum positive effect for
>> the computer with the minimum advantage for the player.
>
>This is a classic example of what _isn't_ Artificial Intelligence.

Why? If you add learning to the loop it is intelligence. Learning in
the sense of: after several games the computer takes previous matches
into account in judging the score for the current possibilities.
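
One crude way to add that learning to the loop (a Python sketch of an
invented scheme): keep per-feature weights inside the scoring function
and nudge them after each match.

  weights = {"centre_control": 1.0, "mobility": 1.0}

  def learn(features_used, won):
      # Promote the features of winning play, demote the rest.
      for f in features_used:
          weights[f] += 0.1 if won else -0.1

  learn(["centre_control"], won=False)
  print(weights)   # centre_control now counts for less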

> A
>true artificial intelligence will judge the intelligence of the opponent

How? When I play a game, I use a form of the minimax method. I try to
find an advantageous move for myself that will not help my opponent in
the long run. That, and I have a certain experience with previous games
which tells me about the style of the opponent. Style can also be
abstracted in the way I mentioned above.

>and move accordingly, employing a minimax technique only against an
>opponent who is intelligent enough to do the same. A mid-level player
>is likely to be defeated by moves that will lose to an expert, whereas
>minimax moves against anyone but an utter novice will almost always
>produce a tie.

Only if the program just looks ahead one turn. The more turns the
program looks ahead (by using a recursive loop, the number of turns
ahead is only limited by what is an acceptable waiting time for the
human player) the better the program will devise a long term strategy
for winning, rather than trying to keep a balance on the board.
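
That recursive loop, on the thread's own example -- tic-tac-toe is
small enough to search all the way to the end (runnable Python):

  LINES = [(0,1,2), (3,4,5), (6,7,8),
           (0,3,6), (1,4,7), (2,5,8),
           (0,4,8), (2,4,6)]

  def winner(b):
      for i, j, k in LINES:
          if b[i] != ' ' and b[i] == b[j] == b[k]:
              return b[i]
      return None

  def minimax(b, player):
      # Score from X's point of view; O is assumed to play its best,
      # i.e. the move that is worst for X.
      w = winner(b)
      if w:
          return 1 if w == 'X' else -1
      if ' ' not in b:
          return 0
      other = 'O' if player == 'X' else 'X'
      scores = []
      for i in range(9):
          if b[i] == ' ':
              b[i] = player
              scores.append(minimax(b, other))
              b[i] = ' '
      return max(scores) if player == 'X' else min(scores)

  def best_move(b, player='X'):
      other = 'O' if player == 'X' else 'X'
      best, move = None, None
      for i in range(9):
          if b[i] == ' ':
              b[i] = player
              s = minimax(b, other)
              b[i] = ' '
              if best is None or (s > best if player == 'X' else s < best):
                  best, move = s, i
      return move

  print(best_move(list("XX OO    ")))   # -> 2: X completes the top row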

Andrew Plotkin

unread,
Sep 18, 2001, 4:46:21 PM9/18/01
to
Matthew Russotto <russ...@wanda.pond.com> wrote:
> In article <3BA796B6...@attglobal.net>,
> John W. Kennedy <jwk...@attglobal.net> wrote:
>>Gadget wrote:
>>> Wouldn't you prefer a good game of chess?
>>
>>There's a minimax solution to chess (although no-one knows what it is).

> Forced win for white, forced win for black, or only nonlosing
> strategies?

One of those, yes. :) Lots of people would love to know.

Although I think chess experts believe that "both sides can force a
draw" is the way to bet.

--Z

"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*
* Make your vote count. Get your vote counted.

Matthew Russotto

unread,
Sep 18, 2001, 5:05:44 PM9/18/01
to
In article <9o8bqt$k8n$1...@news.panix.com>,

Andrew Plotkin <erky...@eblong.com> wrote:
>Matthew Russotto <russ...@wanda.pond.com> wrote:
>> In article <3BA796B6...@attglobal.net>,
>> John W. Kennedy <jwk...@attglobal.net> wrote:
>>>Gadget wrote:
>>>> Wouldn't you prefer a good game of chess?
>>>
>>>There's a minimax solution to chess (although no-one knows what it is).
>
>> Forced win for white, forced win for black, or only nonlosing
>> strategies?
>
>One of those, yes. :) Lots of people would love to know.

Drat. I was hoping that after such an authoritative pronouncement,
I'd get a real answer :-).

David Thornley

unread,
Sep 18, 2001, 7:33:58 PM9/18/01
to
In article <9o8bqt$k8n$1...@news.panix.com>,
Andrew Plotkin <erky...@eblong.com> wrote:
>Matthew Russotto <russ...@wanda.pond.com> wrote:
>> In article <3BA796B6...@attglobal.net>,
>> John W. Kennedy <jwk...@attglobal.net> wrote:
>>>
>>>There's a minimax solution to chess (although no-one knows what it is).
>
>> Forced win for white, forced win for black, or only nonlosing
>> strategies?
>
>One of those, yes. :) Lots of people would love to know.
>
>Although I think chess experts believe that "both sides can force a
>draw" is the way to bet.
>
Strictly speaking, a minimax solution is one where you get the
best result you can force against optimum play, so chess has
numerous minimax solutions, many of no practical importance.
(For example, it's quite possible that a4/P-QR4 is the first move
of a minimax solution just like d4/P-Q4, although there are very
good reasons why many more people start a game by pushing the
Queen pawn than the Queen Rook pawn.)

All of them, of course, have the same result: White win, Black
win, or draw.

FWIW, most people tend not to think of minimax planning like this
to be true intelligence, although it does produce chess-playing
computers that can compete on a very high level.

--
David H. Thornley | If you want my opinion, ask.
da...@thornley.net | If you don't, flee.
http://www.thornley.net/~thornley/david/ | O-

Bennett Standeven

unread,
Sep 18, 2001, 9:18:56 PM9/18/01
to
"Davey" <D...@G.com> wrote in message news:<psLp7.2774$f5.179139@news>...
>
[...]

> As I mentioned before, I never believed that the Deep Blue win was an honest
> one. I know I had heard that algorithms were being tweaked on-the-fly by a
> chess grandmaster.
>


What's "dishonest" about that? Kasparov's "algorithms" were being
tweaked on-the-fly by a chess grandmaster, too...

Damien Neil

unread,
Sep 19, 2001, 2:45:28 AM9/19/01
to
On Tue, 18 Sep 2001 18:43:31 GMT,
John W. Kennedy <jwk...@attglobal.net> wrote:
> This is a classic example of what _isn't_ Artificial Intelligence. A
> true artificial intelligence will judge the intelligence of the opponent
> and move accordingly, employing a minimax technique only against an
> opponent who is intelligent enough to do the same. A mid-level player
> is likely to be defeated by moves that will lose to an expert, whereas
> minimax moves against anyone but an utter novice will almost always
> produce a tie.

Actually, minimax search with alpha-beta pruning is a classic AI
algorithm. It works, which is why people tend to not think of it as
AI these days. ("AI" is a fuzzy term which often is taken to mean
"these really cool things we're about to get working real soon now".)

- Damien

Jason Melancon

unread,
Sep 19, 2001, 7:13:34 AM9/19/01
to
On Mon, 17 Sep 2001 19:36:30 GMT, buz...@world.std.com (Sean T
Barrett) wrote:

> ems...@mindspring.com <ems...@mindspring.com> wrote:

> >Here again, though, it seems mostly that there's a
> >question of imposing additional constraints or making the AI even
> >smarter -- teaching the NPC to recognize when it is about to constrain
> >the player's movement, for instance, or to answer when the PC knocks
> >on the attic door.
>
> The moment the NPC AI starts coloring its actions based on the
> potential player experience *as a game*--or, alternately,
> the moment the NPC begins coloring its actions based on our
> goals for the player instead of the NPCs goals--you are to
> my mind writing game-mastering AI, not NPC AI. Hence what
> I really meant but didn't say was "NPC AI (without gamemastering
> AI) is more questionnable."

If you were an NPC in my game, wouldn't you care whether you
accidentally locked me in the attic? I would hope you would let me
out when I started making noise. My point, of course, is that being
humane and not locking people in attics is a perfectly desirable,
albeit sophisticated, goal.

Also, Emily said:

> My point remains something like this: blue-sky AI, HAL-inna-box, a
> super-NPC who is for all intents and purposes a person -- that sort of
> thing is indeed beyond our abilities (at the moment) and incompatible
> with our goals (at least in the context of IF). But if we talk about
> AI in relative terms, as dealing with a greater degree of abstraction,
> flexibility, and dynamic decision-making, then some improvement in AI
> does become useful to us.

I doubt this is your main point, but couldn't we make a distinction
between intelligence and free will, two orthogonal properties of an
NPC? It seems to me you could have an NPC with either, neither, or
both. That way, (once it's within our abilities) we can have real
blue-sky AI (?) NPC intelligence *without* free will, which won't mess
up your plot even if it gets a hankering for coffee. In that sense, I
don't see why real AI is incompatible with IF.

--
Jason Melancon

Davey

unread,
Sep 19, 2001, 11:22:36 AM9/19/01
to
Yes, you could look at it that way....

But then he really wasn't playing a computer, was he?

"Bennett Standeven" <be...@pop.networkusa.net> wrote in message
news:24c3076b.0109...@posting.google.com...

Davey

unread,
Sep 19, 2001, 11:22:38 AM9/19/01
to
The latest Wired magazine, I just found out, has a write-up on the
affair....stating the same things I was. There is a planned match between
the new world champion (his name escapes me) and a new computer program,
which interestingly is specifically not to be changed during the tournament.

"Davey" <D...@G.com> wrote in message news:ivLp7.2775$f5.179072@news...

Gabe McKean

unread,
Sep 19, 2001, 12:59:04 PM9/19/01
to

Davey wrote in message ...
>Actually Gabe, I just found this bit of info over at
>http://www8.zdnet.com/pcmag/news/trends/t970502b.htm
>
>(this article is about the second match, the one which DB won)

[snip]

Ok, I stand corrected. Still, tweaking the algorithm between games and
intervening in 'real time' during a game are two very different things. Of
course, as someone else has already commented, Deep Blue's victory would
have been much more impressive if it had been able to adjust its own algorithm
based on its previous games with Kasparov.


Sean T Barrett

unread,
Sep 19, 2001, 2:19:48 PM9/19/01
to
Jason Melancon <jaso...@afn.org> wrote:
>buz...@world.std.com (Sean T Barrett) wrote:
>> The moment the NPC AI starts coloring its actions based on the
>> potential player experience *as a game*--or, alternately,
>> the moment the NPC begins coloring its actions based on our
>> goals for the player instead of the NPCs goals--you are to
>> my mind writing game-mastering AI, not NPC AI.
>
>If you were an NPC in my game, wouldn't you care whether you
>accidentally locked me in the attic? I would hope you would let me
>out when I started making noise.

It depends on the NPC, doesn't it? On the NPC's goals. Like I said.

>My point, of course, is that being
>humane and not locking people in attics is a perfectly desirable,
>albeit sophisticated, goal.

The attic is merely a particular example, and not the best one
for my side of this issue. IF tends to involve a very fragile
simulation; in the real world, we rarely get so stuck that a
goal is totally unachievable, but it's easy in an IF to, e.g.,
consume a non-renewable resource that the player needs to win
the game. I'm all for flagging such resources and having NPCs
not consume them in deference to the author's goals for the
player, even if the NPCs goals are in conflict with the
protagonist's; but to me, that's not NPC AI, that's game "AI".

This sort of thing happens automagically in pen&paper RPG's;
the gamemaster takes on the persona of an NPC and provides the
"AI" for that NPC, but at the same time the GM leavens the NPC
actions with the GM's goals for the entire game/session.

[re: Emily's post]


>I doubt this is your main point, but couldn't we make a distinction
>between intelligence and free will, two orthogonal properties of an
>NPC?

"Free-will" is nearly as loaded a term as "intelligence". I think
people generally use it to mean "having a choice over what action
to take", but you seem to be using it here to mean "having a choice
over what goal to achieve", since an AI which is pre-programmed
with only one choice per scenario can hardly be an AI. (And of course,
save random numbers, any AI will actually only exhibit one behavior
in any exactly identical situation, so it's hard to argue that any
AI of any size really exhibits free-will, hence the problem with
the term. Some people argue that free-will is as much an illusion
as recent posters have suggested consciousness is.)

Even limiting the goals doesn't really address the problem of
an AI which makes plans and achieves them; to achieve a plan one
often creates side-effects on the world, and producing the list
of "allowable" side-effects is complicated, especially if the AI
doesn't know what the consequences of all actions are in advance.

SeanB

Davey

unread,
Sep 19, 2001, 2:30:51 PM9/19/01
to
My only question would be (not to you, but the IBM team): Who/what was
Kasparov playing? A computer or not?
If not, the whole "man vs machine" thing should be rightfully put to bed
regarding those matches.

Not that someday soon a chess-playing computer won't assume the status of
"best chess player on the planet" anyway....

D

"Gabe McKean" <gmc...@wsu.edu> wrote in message

news:9oaitd$1qdo$1...@murrow.murrow.it.wsu.edu...

John W. Kennedy

unread,
Sep 19, 2001, 3:01:47 PM9/19/01
to
Matthew Russotto wrote:
>
> In article <3BA796B6...@attglobal.net>,
> John W. Kennedy <jwk...@attglobal.net> wrote:
> >Gadget wrote:
> >> Wouldn't you prefer a good game of chess?
> >
> >There's a minimax solution to chess (although no-one knows what it is).
>
> Forced win for white, forced win for black, or only nonlosing
> strategies?

Almost certainly not a forced win for black, since the game is
symmetrical.

John W. Kennedy

unread,
Sep 19, 2001, 3:06:42 PM9/19/01
to
Matthew Russotto wrote:
>
> In article <9o8bqt$k8n$1...@news.panix.com>,
> Andrew Plotkin <erky...@eblong.com> wrote:
> >Matthew Russotto <russ...@wanda.pond.com> wrote:
> >> In article <3BA796B6...@attglobal.net>,
> >> John W. Kennedy <jwk...@attglobal.net> wrote:
> >>>Gadget wrote:
> >>>> Wouldn't you prefer a good game of chess?
> >>>
> >>>There's a minimax solution to chess (although no-one knows what it is).
> >
> >> Forced win for white, forced win for black, or only nonlosing
> >> strategies?
> >
> >One of those, yes. :) Lots of people would love to know.
>
> Drat. I was hoping that after such an authoritative pronouncement,
> I'd get a real answer :-).

There exists a proof that there is a minimax strategy for all two-player
zero-sum games with no secrets. Chess is a two-player zero-sum game
with no secrets. (i.e., there is no way for both players to "win" by
cooperating, and there are no invisible chessmen to be used in an
ambush). Therefore, there is a minimax strategy for chess.
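
In symbols (LaTeX notation; u is White's payoff with win/draw/loss as
1/0/-1, and the finiteness the theorem needs is supplied by chess's
repetition and fifty-move rules):

  \max_{\sigma_W} \min_{\sigma_B} u(\sigma_W, \sigma_B)
    \;=\; \min_{\sigma_B} \max_{\sigma_W} u(\sigma_W, \sigma_B)
    \;=\; v

where \sigma_W, \sigma_B range over the players' strategies and
v \in \{1, 0, -1\} is the (unknown) value of the game.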

Dan Schmidt

unread,
Sep 20, 2001, 8:19:47 AM9/20/01
to

Although I agree that it would be awfully strange if chess turns out to
be a forced win for black, there are plenty of symmetrical two-player
games that are a forced win for the second player.

--
http://www.dfan.org

Daniel Barkalow

unread,
Sep 20, 2001, 4:48:15 PM9/20/01
to
On Wed, 19 Sep 2001, Davey wrote:

> My only question would be (not to you, but the IBM team): Who/what was
> Kasparov playing? A computer or not?
> If not, the whole "man vs machine" thing should be rightfully put to bed
> regarding those matches.

Deep Blue didn't learn to play chess; it was, rather, taught in a much
more invasive and direct way, sort of like Neo learning martial arts in
the Matrix. But the humans were acting entirely in the roles of coaches
and teachers in this odd way. The chess skill exhibited on that side
certainly couldn't have been done by the IBM team without Deep Blue, being
far too computation-heavy. In the case of Deep Blue, the computer couldn't
necessarily have won if nothing had been learned between games, and it
didn't know how to learn anything itself; it only knew how to play chess.

> Not that someday soon a chess-playing computer won't assume the status of
> "best chess player on the planet" anyway....

It'll be a while yet before a computer is the best, having learned chess
itself (i.e., going from novice to master based on analyzing the games it
plays). Also a while before a computer shows any signs of enjoying the
game...

-Iabervon
*This .sig unintentionally changed*
