
AI in IF


Martin Bays

Sep 3, 2001, 8:48:42 AM

Hi there, people. Seems to me it's been far too long since there's been
a proper discussion here of the role Artificial Intelligence could
play in IF. So I'm here with the hope of sparking a bit of debate -
and maybe even some actual work in the area.

First off - a question. Why has so little been done? Sure, there was
the Oz project approximately one eon ago - but funding dried up (or
whatever) before they really got very far, and anyway none of their
code was in a form usable to us. Since then there's been Nate Cull's
highly laudable RAP (Reactive Agent Planning) system, but that was
quite limited in scope and not really much use on its own. Basically
we've had little but silence on the subject. Is it just that
everyone's given up on the idea of believable NPCs? Thought it all
too hard to think about, never mind implement? Decided it wouldn't
actually be of that much use to real IF? Well, I'm not sure I agree
with any of these, and I think the time is ripe for something to be
done.

A suggestion. IF is a great medium for the study of AI. Why? Allow me
to list.
- In a piece of IF, the world and events within it are very clearly
defined, allowing our AI to bypass the more difficult, low-level
perception problems (vision, speech recognition etc.) that have proved to be
huge obstacles for real-world AI. And yet we still have a scope, a
broadness of world and possibility, which is very similar to that of
the real world. Which means, I think, that this is our best chance to
build agents which act in a realistic manner in realistic situations without
having to explore the depths of information-processing.
- The very nature of IF and its languages means that creating new
situations with which to test our growing AI is very easy.
- We have an obvious, and I think achievable, goal to aim for - a
system for the creation of what the Oz Project called "believable
agents" - NPC's which aren't necessarily that 'deep' in terms of their
intellectual capablities, but which are 'broad' enough for the player
to accept them as intelligent beings. Used well, I think these could
have a revolutionary impact on real IF, as well as being very
interesting in their own right.
- Lastly, and probably most importantly, and a point that is nicked
straight from Nate Cull's Inform RAP documentation (cheers Nate, hope
you don't mind) - the IF community this newsgroup represents consists
of a pool of proficient (don't laugh) and inventive programmers who
are willing to work on IF (and hence maybe this kind of project(?))
without material recompense and in a nice friendly open source way.
Which sounds like the perfect breeding ground for some genuinely
original creations.

One possibly significant problem I can think of though - I'm not sure
the IF languages (I'm thinking primarily of Inform here, though I
think TADS has similar restrictions) have the capabilities for running
the kind of programs needed. In particular, the lack of a dynamic
memory might cause problems (though actually it's just occurred to me -
isn't this the kind of thing glulx (or however you spell it) was meant
to solve?).

Anyway, there you go. Comment please! Has anyone else been thinking on
similar lines? Has anyone in fact got any relevant WIP's? Anyone even
(and I know this is too much to ask really, but that's not going to
stop me) feel like expressing a willingness to collaborate on some
grand IF-AI project? And yes that is my humbly raised hand you spy.

Well, thank you. And goodbye.

David Samuel Myers

Sep 3, 2001, 11:20:45 AM

Martin Bays <marti...@hotmail.com> wrote:
> - In a piece of IF, the world and events within it are very clearly
> defined, allowing our AI to bypass the more difficult, low-level
> perception problems (vision, speech recognition etc.) that have proved to be
> huge obstacles for real-world AI. And yet we still have a scope, a
> broadness of world and possibility, which is very similar to that of
> the real world. Which means, I think, that this is our best chance to
> build agents which act in a realistic manner in realistic situations without
> having to explore the depths of information-processing.

And there's the rub, too. While there *are* world models built into IF,
they are still not explicit enough to deal with the complex modeling that
will be necessary to be as simulationist as you want. When you talk about
human interactions, you're getting ahead of this thorny representation
issue. Witness how much work it was to do a small bit of physical
simulationism in Metamorphoses. Our capability to represent and wrangle
the kind of information needed to show the depth you crave is still a ways
off. Refer to the Erasmatron and www.robotwisdom.com for proposals
concerning how this is going to happen. But realize that the "man behind
the curtain" approach, misdirecting the player into thinking the world is
more responsively simulated than it is (rather than just
author-anticipated) ... this approach is going to be with us a while
longer. NPCs as intelligent agents is a real hard problem. If you decide
to move along with this, much luck.

-d

ems...@mindspring.com

Sep 3, 2001, 2:40:11 PM

marti...@hotmail.com (Martin Bays) wrote in message news:<7d26d66b.01090...@posting.google.com>...

> Hi there, people. Seems to me it's been far too long since there's been
> a proper discussion here of the role Artificial Intelligence could
> play in IF. So I'm here with the hope of sparking a bit of debate -
> and maybe even some actual work in the area.
>
> First off - a question. Why has so little been done? Sure, there was
> the Oz project approximately one eon ago - but funding dried up (or
> whatever) before they really got very far, and anyway none of their
> code was in a form usable to us. Since then there's been Nate Cull's
> highly laudable RAP (Reactive Agent Planning) system, but that was
> quite limited in scope and not really much use on its own. Basically
> we've had little but silence on the subject. Is it just that
> everyone's given up on the idea of believable NPCs?

No, not everyone has given up on the idea of believable NPCs. I'm not
sure, on the other hand, that there's a universal consensus that
believable NPCs require AI. Aside from the manifold problems involved
in actually writing such a thing, there's the question of whether we
want to build simulations or write stories. I'm more than happy to
argue in favor of simulationism, but not to such an extent that the
motivation, tone, and unique quality of the underlying story is lost.

[I am reminded of a recent post which expressed the opinion that a
'perfect' game would be one in which you would be free to do anything
you liked, at all, and the game would work out the ramifications; no
motives or plot would be imposed on the player. Personally, I find
that too reminiscent of Real Life, and when I'm playing a game I
become unsatisfied if it doesn't provide some indication of what I am
supposed to be doing.]


Most people who are working on this kind of thing, at least as far as
I know, are focusing on specific aspects of the realistic NPC problem.
Some issues/directions that I know are under investigation at the
moment include:

-- making NPCs better at reacting to player actions. (Sean Barrett;
see "The Weapon," which, despite my complaints about some aspects,
does do some interesting things.)

-- improving NPC conversation, including the ability to pursue
conversational goals of one sort or another. (Er, me. Examples
ongoing: "Galatea" did this a bit, but the background library has been
extensively reworked since then and is still undergoing improvements.)

-- work on TADS 3 to make it more flexible in terms of passing orders
to actors and having them respond properly. (I think-- I saw some
stuff about this on the T3 list a while ago, but I only read it
closely when I have time. MJR and company.)

Problems that are mostly solved:

-- making the NPC choose a path through the environment to another
location, reacting appropriately to locked doors en route. (The
NPC_Engine library.)
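
[Not the NPC_Engine source - just an illustrative Python sketch of the
underlying idea, with an invented map and invented names: a breadth-first
search over the room graph, treating locked doors as impassable unless the
NPC carries the right key.]

from collections import deque

exits = {   # room -> {direction: (destination, key needed or None)}
    "hall":    {"north": ("study", "brass key"), "east": ("kitchen", None)},
    "kitchen": {"west": ("hall", None)},
    "study":   {"south": ("hall", "brass key")},
}

def find_path(start, goal, keys=frozenset()):
    """Return a list of directions leading from start to goal, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        room, path = queue.popleft()
        if room == goal:
            return path
        for direction, (dest, key) in exits[room].items():
            if key is not None and key not in keys:
                continue   # locked door and no key: treat as impassable
            if dest not in seen:
                seen.add(dest)
                queue.append((dest, path + [direction]))
    return None

print(find_path("kitchen", "study"))                  # None - door is locked
print(find_path("kitchen", "study", {"brass key"}))   # ['west', 'north']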



> A suggestion. IF is a great medium for the study of AI. Why? Allow me
> to list.
> - In a piece of IF, the world and events within it are very clearly
> defined, allowing our AI to bypass the more difficult, low-level
> perception problems (vision, speech recognition etc.) that have proved to be
> huge obstacles for real-world AI. And yet we still have a scope, a
> broadness of world and possibility, which is very similar to that of
> the real world. Which means, I think, that this is our best chance to
> build agents which act in a realistic manner in realistic situations without
> having to explore the depths of information-processing.

As David Myers pointed out, the world is only clearly defined to a
point; if you want realistic NPC behavior in the context of physical
simulationism, you will have to define all of your objects much more
exactly (material, weight, size, precise location and ability to be
looked under, climbed on, or moved...). The underlying classes for
Metamorphoses provide some of this information, and I have
occasionally toyed with the idea of trying to build some kind of
system wherein the NPC would interact with Metamorphoses-like objects.

This would enable me to tell the NPC something along the lines of:
your top-level goal is to destroy item x. The NPC 'knows' that
burning, cutting, and smashing to smithereens are all acceptable means
to destroy something, so it checks for the prerequisite conditions for
each of these activities: is x made of something flammable? do we have a
flame source? is x made of something cuttable? do we have a blade? is
x made of something fragile?

If it comes up with success conditions for any of the options, it
reacts by performing the action in question. [Of course, this would
need refinement; I would also have to teach it not to set fire to
items it is wearing, items in contact with other flammable items it
did not want to destroy, and so on.] If not, (and here I suppose my
idea is much like RAP, though I haven't actually gone to look), the
NPC goes off in search of a candle or a pair of shears.
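
In rough Python sketch form, that check might look like this (purely
illustrative - Item, plan_destroy and the rest are invented names, not code
from any actual library):

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    flammable: bool = False
    cuttable: bool = False
    fragile: bool = False

def plan_destroy(target, inventory):
    """Return an action the NPC can take now, or a subgoal to pursue first."""
    have = {item.name for item in inventory}
    if target.flammable:
        if "flame source" in have:
            return ("burn", target)
        return ("find", "flame source")   # subgoal: go looking for one
    if target.cuttable:
        if "blade" in have:
            return ("cut", target)
        return ("find", "blade")
    if target.fragile:
        return ("smash", target)          # no tool required
    return None                           # no known means of destruction

# plan_destroy(Item("letter", flammable=True), []) -> ('find', 'flame source')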

The reason I *haven't* written this yet is that it hasn't seemed, on
its own, especially exciting. The number of high-level goals I would
be able to give my NPC remained too low to be particularly fun, in a
story context (or particularly different from what I would be able to
provide by coding it all up in a non-AI, predetermined way). The
rule here seems to be that if the game is narrow enough, you can
provide intelligent-seeming behavior without any AI at all: the
interrogator in Spider and Web talks like an intelligent person and
reacts to things sensibly, and the NPCs in Being Andrew Plotkin were
memorable and amusing. In neither case was it possible to change the
environment or the topic of conversation very much beyond the
conditions anticipated by the author.

> - The very nature of IF and its languages means that creating new
> situations with which to test our growing AI is very easy.

Perhaps.

> - We have an obvious, and I think achievable, goal to aim for - a
> system for the creation of what the Oz Project called "believable
> agents" - NPC's which aren't necessarily that 'deep' in terms of their
> intellectual capablities, but which are 'broad' enough for the player
> to accept them as intelligent beings. Used well, I think these could
> have a revolutionary impact on real IF, as well as being very
> interesting in their own right.

Maybe. It's still hard to envision a situation in which the author
wouldn't have to have anticipated at some level most of the states
that the conversation could get into, if you're thinking of
conversational interaction here and not behavior like searching rooms
for missing items and so on.

> - Lastly, and probably most importantly, and a point that is nicked
> straight from Nate Cull's Inform RAP documentation (cheers Nate, hope
> you don't mind) - the IF community this newsgroup represents consists
> of a pool of proficient (don't laugh) and inventive programmers who
> are willing to work on IF (and hence maybe this kind of project(?))
> without material recompense and in a nice friendly open source way.
> Which sounds like the perfect breeding ground for some genuinely
> original creations.

We-ell. Maybe :).

> One possibly significant problem I can think of though - I'm not sure
> the IF languages (I'm thinking primarily of Inform here, though I
> think TADS has similar restrictions) have the capabilities for running
> the kind of programs needed. In particular, the lack of a dynamic
> memory might cause problems (though actually it's just occurred to me -
> isn't this the kind of thing glulx (or however you spell it) was meant
> to solve?).

Not exactly. Not to mention that the nature of Inform is such that
it's not optimally convenient for storing and manipulating large
quantities of data.

> Anyway, there you go. Comment please! Has anyone else been thinking on
> similar lines?

I think about this more or less constantly. Now, granted that the
foregoing all sounds fairly discouraging, here is a rough sketch of
what areas of the problem I've thought about:

Interaction with the model world.

-- Be able to move from one location to another along a deliberate and
dynamically-selected path. [solved.]

-- Be aware of high-level effects of one object upon another (keys for
locks, opening doors, destroying objects, opening objects, turning
things on and off). Be able to set the goal in the form *put object x
into state y* and then do what is necessary to achieve this. (cf. the
burning example above.) [Not solved, but probably not actually
difficult, at least for a smallish set of such goals. The more you
have, the more work it is. I can imagine a two-level version of this
that would handle certain general rules and then provide for
exceptions: it would, for instance, always deal with
cutting/burning/smashing the same way for all objects, but specialized
items would have their own states defined and their own instructions
to the NPC to tell what prerequisites are required.

Eg: cuckoo_clock has states Working and Stopped, and if the NPC wishes
to convert it from Stopped to Working, he sends a message to the
NPC_actions property of the clock, which checks to see if
prerequisites are met (does the NPC have the key to wind the clock?),
sets goals accordingly if not (get the key), and then returns, if
appropriate, an action (wind the clock). The NPC performs the action
and the verb handling reports it. {This requires the verb system also
to be able to generate appropriate messages for events whether they
are performed by the player or by someone else, but that's not too
desperately difficult. TADS is, I think, better at it, from what I've
heard, though I could be wrong.} A rough sketch of this scheme follows at
the end of this list.]

-- Be able to search plausibly for an object needed. [Knowledge
representation that would allow the NPC to remember where he has seen
things would be helpful or perhaps even essential here; it would be
annoying and implausible if the NPC would for instance cheerfully
ransack a house looking for an item that the player has shown him a
minute before.] This involves looking under and on top of things [and
here the world modeling requires a better representation of the
relative location of objects], searching inside things [including
opening nested objects and remembering what has already been
searched], and so on.
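
To make the cuckoo clock item above concrete, here is the promised rough
sketch (Python for brevity; the real thing would be Inform or TADS code,
and every name here is invented). The specialised object exposes a hook
that either returns a performable action or a subgoal:

class CuckooClock:
    states = ("Stopped", "Working")

    def __init__(self):
        self.state = "Stopped"

    def npc_actions(self, npc, desired_state):
        """Return an action if prerequisites are met, else a subgoal."""
        if self.state == "Stopped" and desired_state == "Working":
            if "winding key" in npc.inventory:
                return ("wind", self)       # NPC acts; the verb layer reports it
            return ("get", "winding key")   # prerequisite missing: new subgoal
        return None

class NPC:
    def __init__(self, inventory=()):
        self.inventory = set(inventory)

clock = CuckooClock()
print(clock.npc_actions(NPC(), "Working"))                 # ('get', 'winding key')
print(clock.npc_actions(NPC(["winding key"]), "Working"))  # ('wind', <the clock>)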


Reaction to Player Action
-- Notice what the player does and react to it appropriately if it is
peculiar. (There being two possible stages here, interfering with
player action to prevent it and reacting to it after it is done.)
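
In sketch form (hypothetical hooks, loosely in the spirit of Inform's
before/after properties; illustrative only):

def npc_before(action):
    """Stage one: interfere. Return True to prevent the action."""
    if action == "take lantern":
        print('"Hands off!" she snaps, moving the lantern out of reach.')
        return True
    return False

def npc_after(action):
    """Stage two: react once the action has already happened."""
    if action == "break window":
        print('She stares at the shattered glass. "Why on earth...?"')

for action in ("take lantern", "break window"):
    if not npc_before(action):
        # ...here the world model would actually carry out the action...
        npc_after(action)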

Sentence-throwing Conversation (model 1, which is what I do now)

-- Represent general topics of conversation as a tree
(General->Weather->Rain, for instance.) Within this, represent
individual statements that can be made. Allow the NPC to remember
which statements have already been used. Have some statements to lead
to natural follow-ups for the NPC. Model the effect of these
statements on the NPC's emotions, and have gestures and so on that
reflect the NPC's emotional state. [Galatea; a toy sketch follows at the
end of this list.]

-- Give the NPC specific sets of information he wishes to convey.
Allow the player to interrupt him and side-track the conversation, but
make sure the NPC always returns to the subject of interest to him.
[Pytho's Mask.]

-- Provide for interaction between multiple NPCs so that you can
apparently talk to more than one person at a time. [City of Secrets
(not available yet)]

-- Give the NPC general goals (of emotional state, information to
convey, etc) and allow him to partially direct conversation according
to those goals. [WIP.]
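
To make the first of these concrete - a toy Python sketch of the topic tree
and used-statement tracking (illustrative only; the real Galatea library is
far more elaborate, and all these names are invented):

children = {"General": ["Weather"], "Weather": ["Rain"]}   # the topic tree

statements = {
    "Rain": ["'I like the sound of it on the roof,' she says.",
             "'It has rained for three days now,' she observes."],
}

used = set()

def say_about(topic):
    """Pick an unused statement on this topic or one of its subtopics."""
    for t in [topic] + children.get(topic, []):
        for line in statements.get(t, []):
            if line not in used:
                used.add(line)
                return line
    return "She has nothing new to say about that."

print(say_about("Weather"))   # descends the tree and finds the first Rain line
print(say_about("Rain"))      # the second line
print(say_about("Rain"))      # fallback - everything has been used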


Knowledge/logic-based Conversation (model 2. Somewhat More
Difficult.)

-- Represent all facts in the NPC's possession. Allow for learning
and memory, and include knowledge about past statements and behaviors
of the PC and other NPCs, as well as about the state of the
environment. [This is, very dimly, approached by certain systems;
I've seen code that has the NPC 'learn' specific squibs of information
so that if you >TELL NPC ABOUT THING you can then >ASK NPC ABOUT THING
and get the same info back. Also, NPC_Engine has a mechanism whereby
an NPC can remember where he last saw a given other NPC and how many
turns ago. So far so good, but ideally the NPC should also remember
the location of every object he has seen, so that he can intelligently
go back and get, eg, the key on the mantelpiece to wind the cuckoo
clock. A sketch of this memory mechanism follows at the end of this list.]

-- Represent questions to the NPC as a request for information. Have
the NPC determine whether he possesses the information asked for and
whether it is compatible with his mood and goals to reveal it.

-- Represent statements to the NPC as the presentation of new
information. Do a logical check of information so provided to the NPC
to see whether it fits with his existing knowledge and whether he
wishes to accept or challenge it.

-- Do dynamic prose generation to produce realistic-sounding,
spontaneous dialogue representing the NPC's response. Take into
account his current mood and his level of diction.

[This latter system comes closer to real AI than anything else here, I
think, and it is correspondingly difficult (for me, at least) even to
imagine a workable model for how to do it, let alone implement
something similar. The prose generation problem alone is not
something I could solve to my own satisfaction; I prefer to handcraft
everything that will be coming out of the NPC's mouth. For another
thing, it would require huge amounts of work to provide the NPC with a
working lexicon, and this is not the sort of task for which IF systems
were designed.]
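
A minimal sketch of the 'learned squib' and location-memory ideas above
(Python, invented names; a real system would need a far richer
representation of facts than this):

class NPCMemory:
    def __init__(self):
        self.facts = {}       # topic -> what the NPC has been told
        self.sightings = {}   # object -> where the NPC last saw it

    def tell(self, topic, info):          # >TELL NPC ABOUT THING
        self.facts[topic] = info

    def ask(self, topic):                 # >ASK NPC ABOUT THING
        return self.facts.get(topic)      # None would mean a blank look

    def note_seen(self, obj, location):   # called whenever obj is in view
        self.sightings[obj] = location

memory = NPCMemory()
memory.tell("ring", "a silver ring with several small diamonds")
memory.note_seen("key", "mantelpiece")
print(memory.ask("ring"))        # the squib comes back on ASK
print(memory.sightings["key"])   # so he can go back for the key later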

Anyway, end of ramble. I am working on as large a piece of this as I,
personally, can handle at the moment, but my goal is still to produce
the effects that I want in my games.

ES

Ashikaga

Sep 3, 2001, 2:42:47 PM

"Martin Bays" <marti...@hotmail.com> wrote...

Could you first of all tell me what kind of AI you are thinking of? An NPC
who has his/her own agenda in the game?

> Hi there, people. Seems to me it's been far too long since there's been
> a proper discussion here of the role Artificial Intelligence could
> play in IF. So I'm here with the hope of sparking a bit of debate -
> and maybe even some actual work in the area.
>
> First off - a question. Why has so little been done? Sure, there was
> the Oz project approximately one eon ago - but funding dried up (or
> whatever) before they really got very far, and anyway none of their
> code was in a form usable to us. Since then there's been Nate Cull's
> highly laudable RAP (Reactive Agent Planning) system, but that was
> quite limited in scope and not really much use on its own. Basically
> we've had little but silence on the subject. Is it just that
> everyone's given up on the idea of believable NPCs? Thought it all
> too hard to think about, never mind implement? Decided it wouldn't
> actually be of that much use to real IF? Well, I'm not sure I agree
> with any of these, and I think the time is ripe for something to be
> done.

I think it's terribly hard to make an AI that will work responsively.
Imagine how complex it'll make the storyline, and how easily it'll lead the
story off-track of the author's original intention (which may not be bad,
but wouldn't the player get confused as to what the purpose of the game is
if there wasn't a check system)?

Let me conjure up a scenario to explain what kind of a crazy idea I am
envisioning.

Situation: Tax Payer 332-93-0667 wants to get a sea monkey to give to his
girlfriend on her birthday (her sea monkey of 6 years has just died), but
Generic Frat Boy #2 also wants to get a sea monkey, and he has an
acquaintance in an underground sea monkey trading floor. GFB#2 is TP
332-93-0667's neighbor (you know, the boy next door type..., sort of). TP
332-93-0667 has no idea his neighbor wants a sea monkey (because who would
guess it...).

You play as TP 332-93-0667. NPCs are GFB#2 and his illegal sea monkey
trader acquaintance.

Intended plot: You must talk to GFB#2, and find out he has a connection to
the underground trading floor. Since you are a law-abiding tax payer, you
can't risk being seen in a black market where nobody pays sales tax
and no receipts are given, can you? So you must tell GFB#2 to get it for you
on a commission. You still don't know he wants a sea monkey himself.

In the normal course of things, you would tell him to run the errand, and he would
do it for you for the commission, so he can get a sea monkey himself (so he
can put it in the punch bowl at the party coming up in about 23 hours...).
He probably would cheat you a little, but that's normal.

If GFB#2 was given an AI (and you programmed him to be inquisitive, because
every studious college student must be trained to do that, don't they?
:-D), then he might find out you were buying it for your girlfriend. He
might just buy it (using your money, naturally) and, by a twisted fate...,
give it to the sweet girl who was terribly grieved over the loss of her long
time companion. Before you can ask "What the f*ck did you eat last night?",
your girlfriend (the fictitious one..., I mean the one proposed in this
scenario) has just been stolen! What does that have to do with the
main storyline?!?

Okay..., just a random thought (which means I didn't plan this, either...).
And I ate BBQ sirloin yesterday.

> A suggestion. IF is a great medium for the study of AI. Why? Allow me
> to list.

I have no argument with it. It is a great medium for authors to
explore and experiment a little, too.

<snipped long list>


> One possibly significant problem I can think of though - I'm not sure
> the IF languages (I'm thinking primarily of Inform here, though I
> think TADS has similar restrictions) have the capabilities for running
> the kind of programs needed. In particular, the lack of a dynamic
> memory might cause problems (though actually it's just occurred to me -
> isn't this the kind of thing glulx (or however you spell it) was meant
> to solve?).

Well, I think it's better to make a petri dish world to experiment a little
bit before we move to a full-blown world. That's the only feasible
solution. We simply need to learn a little more about its usage and
what sort of stuff we can do with it effectively. (yes, normally I am a
serious person in RL)

> Anyway, there you go. Comment please! Has anyone else been thinking on
> similar lines? Has anyone in fact got any relevant WIP's? Anyone even
> (and I know this is too much to ask really, but that's not going to
> stop me) feel like expressing a willingness to collaborate on some
> grand IF-AI project? And yes that is my humbly raised hand you spy.

Well..., I haven't tried to program an AI, since I am not really a
programmer, but I write stuff. One thing I do to make an NPC more believable
is to pay more attention to its mindset. It's prescripted, and of course it
has its limitations, but like you mentioned, it's the only way we can do it now.

Good luck with your project. I am for it.

> Well, thank you. And goodbye.

I thought you still wanted to hear from us.

Ashikaga


J. Mancera

Sep 4, 2001, 11:10:42 AM

On 3 Sep 2001 11:40:11 -0700, ems...@mindspring.com
(ems...@mindspring.com) wrote:

>-- work on TADS 3 to make it more flexible in terms of passing orders
>to actors and having them respond properly. (I think-- I saw some
>stuff about this on the T3 list a while ago, but I only read it
>closely when I have time. MJR and company

T3 list? That would be a mailing list about TADS 3, I suppose.

Could you give more details about it? (a URL, for example)

Thanks.

Adam Thornton

Sep 4, 2001, 12:06:02 PM

In article <7d26d66b.01090...@posting.google.com>,

Martin Bays <marti...@hotmail.com> wrote:
>First off - a question. Why has so little been done? Sure, there was
>the Oz project approximately one eon ago - but funding dried up (or
>whatever) before they really got very far, and anyway none of their
>code was in a form usable to us. Since then there's been Nate Cull's
>highly laudable RAP (Reactive Agent Planning) system, but that was
>quite limited in scope and not really much use on its own. Basically
>we've had little but silence on the subject. Is it just that
>everyone's given up on the idea of believable NPCs? Thought it all
>too hard to think about, never mind implement? Decided it wouldn't
>actually be of that much use to real IF? Well, I'm not sure I agree
>with any of these, and I think the time is ripe for something to be
>done.

Knock yourself out.

I would be remiss if I didn't point you in the direction of Chris
Crawford's Erasmatron. I never really understood how Erasmatron and Oz
differed, in that they both seem to want to use a bunch of state
variables to model a point in an emotional space, and then base
character reactions on the trajectory of points-in-that-space, but it's
probably worth looking at.

You will find these objections in a rather surprising place, should I
finish it in time (he said cryptically), but I figure I'll state them
now, especially since that venue is, ahem, a rather unlikely place to be
looking for criticism of theories of drama. There's one fundamental
problem I have with the whole simulationist approach to complex,
believable AI NPCs.

> - We have an obvious, and I think achievable, goal to aim for - a
>system for the creation of what the Oz Project called "believable
>agents" - NPC's which aren't necessarily that 'deep' in terms of their
>intellectual capablities, but which are 'broad' enough for the player
>to accept them as intelligent beings. Used well, I think these could
>have a revolutionary impact on real IF, as well as being very
>interesting in their own right.

Here is the objection.

In a world of Aristotelean dramatics, where plot evolves dialectically
from character, this might indeed be a valid approach to storytelling.
However, almost no one actually writes like that. Characters are,
typically, used in the service of the story, rather than the story being
generated from the characters' beliefs and actions.

Attempts to have character drive narrative, rather than the other way
round, have typically led to dull, aimless stories, in my experience. I
suspect that this is likely to be even more the case in IF than it is in
static fiction.

That said: I'd be vastly pleased to be proven wrong.

Adam


Martin Bays

Sep 4, 2001, 1:41:26 PM

<Snip my witterings>

> No, not everyone has given up on the idea of believable NPCs. I'm not
> sure, on the other hand, that there's a universal consensus that
> believable NPCs require AI. Aside from the manifold problems involved
> in actually writing such a thing, there's the question of whether we
> want to build simulations or write stories. I'm more than happy to
> argue in favor of simulationism, but not to such an extent that the
> motivation, tone, and unique quality of the underlying story is lost.
>

I'm kind of in two minds about this myself. I've got two conflicting
ideas about what this should all be about. In the red corner: AI as a
means of improving IF. In the blue corner: IF as a means of
researching AI. So which should win? Well, I'm a bit of a soppy old
pacifist at heart, and I think they should just meet in the middle,
hug and form a big red/blue (purple?) IF-AI mess. But I do still see a
problem here. Basically, doing 'proper AI' - developing an NPC which
has real intelligence (whatever that means) - is not only ridiculously
difficult, but also as you say not likely to be controllable enough to
have any real relevance to IF as we know it. But still I would argue
that some AI-like developments are needed to get truly believable
agents. In particular, I'd say they're needed to get intelligently active
NPCs - that is, ones that actually do things - rather than being the
passive, merely reactive things we tend to see today - and which
appear through their actions to think in a fluid and natural way -
rather than the forced, clearly explicitly programmed actions all NPCs I
can think of up till now have displayed (even Galatea - although she
did make some fumbling steps in this direction, it was still all too
clear that she said what she just said because she'd been programmed
to say it if I did what I just did (by the way - I guess this is as
good an opportunity as any to tell you how much I loved Galatea. It
was great. Thank you muchly)). I'm not sure I'm getting this point
across quite how I'd like to - surely any computer programme's actions
will seem programmed? But I think that not only is it possible for
them not to, but that the only way to accomplish this is by putting
the kind of complexity into the NPC so that we're doing fundamentally
AI-like things with it.

Perhaps it would make things clearer if I gave an example of the kind
of thing I'm talking about, what I would like to see simulated. For a
start, I think having NPCs use language in a similarly realistic way
is much too ambitious for now. So what I want to see is a decent
simulation of a higher, non-human animal, say a dog or a cat. These
would need to have a good understanding of both the inanimate world
around them and (more importantly for believability) of other NPCs on
a mental and emotional level. The true test of this, and a way of
getting some interesting behaviour, would be to have two or more such
NPCs interacting simultaneously with each other and the player.
Clearly this wouldn't work as an integral part of normal IF (unless, I
suppose, you had some puzzle which needed the cats (or whatever) to
like you, or something), but it would make a nice IF Art Show piece -
kind of like "Sparky and Boots", only good (no offence to the author
of that piece - it was severely constrained by the limits of standard
NPC handling, and gives a nice example of why we need a whole new
system to handle that kind of thing).


> [I am reminded of a recent post which expressed the opinion that a
> 'perfect' game would be one in which you would be free to do anything
> you liked, at all, and the game would work out the ramifications; no
> motives or plot would be imposed on the player. Personally, I find
> that too reminiscent of Real Life, and when I'm playing a game I
> become unsatisfied if it doesn't provide some indication of what I am
> supposed to be doing.]
>

I seem to remember the Oz Project had something similar planned. Seems
a bit pointless to me too, really. But as far as AI's concerned, it
all comes down to the red and the blue again.

> Most people who are working on this kind of thing, at least as far as
> I know, are focusing on specific aspects of the realistic NPC problem.
> Some issues/directions that I know are under investigation at the
> moment include:
>
> -- making NPCs better at reacting to player actions. (Sean Barrett;
> see "The Weapon," which, despite my complaints about some aspects,
> does do some interesting things.)
>

Don't know that one. I'll look into it. Thanks.

> -- improving NPC conversation, including the ability to pursue
> conversational goals of one sort or another. (Er, me. Examples
> ongoing: "Galatea" did this a bit, but the background library has been
> extensively reworked since then and is still undergoing improvements.)
>

If you can get this working well, I'll be very, very impressed. I used
to think that RAP (or derivative) might be able to do something like
this, but gave up on the idea when I realised just how complicated it
would be even without natural language production. But if you think
you can do it, good luck to you.

Yes, it's true that you can get half-decent NPCs by narrowly defining
their environment and, in particular, by narrowly defining how they
interact with the PC and other NPCs. Witness the "say yes", "say no"
or "say nothing" choices you had in Spider and Web, which actually
also went as far as to have you physically strapped in during most of
the interaction. It makes for an easily implemented NPC, but how long
can we keep on going with new variations on the theme of a restricted
PC whenever we want a realistic NPC? These methods just don't extend
well when there is more freedom of action. BAP I think worked
similarly, with menu-based conversation only (but I don't really
remember it, so I might be wrong). One story that worked quite well
while keeping player freedoms was Worlds Apart, but I for one was
still constantly annoyed by the fact that the workings of it were so
obvious (generally "OK, I'm going to sit here for x turns, answering
any questions you might ask me, and then say something that probably
is completely irrelevant to what we were just 'discussing'").

Also, it's true that having an NPC which can handle inanimate objects
in a realistic way is generally neither interesting nor useful. Nor do
I think this depth of world knowledge and that kind of thought are
necessary for creating believable agents - inter-intelligence (i.e.
NPC-NPC & NPC-PC) behaviour is much more important for that. I can't
really see 'puzzle-solving' NPCs being of much use, except when
considered in the kind of abstract ways that might give rise to
intelligent behaviour (like, where the puzzle is "make this character
laugh").

> > - The very nature of IF and its languages means that creating new
> > situations with which to test our growing AI is very easy.
>
> Perhaps.
>
> > - We have an obvious, and I think achievable, goal to aim for - a
> > system for the creation of what the Oz Project called "believable
> > agents" - NPC's which aren't necessarily that 'deep' in terms of their
> > intellectual capablities, but which are 'broad' enough for the player
> > to accept them as intelligent beings. Used well, I think these could
> > have a revolutionary impact on real IF, as well as being very
> > interesting in their own right.
>
> Maybe. It's still hard to envision a situation in which the author
> wouldn't have to have anticipated at some level most of the states
> that the conversation could get into, if you're thinking of
> conversational interaction here and not behavior like searching rooms
> for missing items and so on.
>

Conversations are very, very difficult, and since I can't see, for the
foreseeable future, any way around having every sentence the NPC might
say written explicitly by the author, what you say is true
(though this still doesn't mean that you want what gets said when to
be the result of explicitly programmed instructions). But then, as I
said I'm more interested for now in animal simulations. Maybe I should
just leave you all alone for a while and write my "Sparky and Boots
Mark II" or whatever. In fact, I might just do that... but I'd like to
see this thread get a bit thicker and more tangled yet.


> > - Lastly, and probably most importantly, and a point that is nicked
> > straight from Nate Cull's Inform RAP documentation (cheers Nate, hope
> > you don't mind) - the IF community this newsgroup represents consists
> > of a pool of proficient (don't laugh) and inventive programmers who
> > are willing to work on IF (and hence maybe this kind of project(?))
> > without material recompense and in a nice friendly open source way.
> > Which sounds like the perfect breeding ground for some genuinely
> > original creations.
>
> We-ell. Maybe :).

Yeah, I know, not actually going to happen. But I felt like saying it
anyway - after all, it's one of those things that *could* happen if
enough people wanted it to, so you've got to at least try to encourage
it.

>
> > Anyway, there you go. Comment please! Has anyone else been thinking on
> > similar lines?
>
> I think about this more or less constantly.

I'm very glad you said that. I really was starting to think that
pretty much no-one was interested in this anymore, and any work I did
would be of no interest to anyone else. Glad to be proved wrong.

This sounds like the kind of thing RAP was designed for, though it
does it slightly differently. Still, you should have a look if you're
genuinely interested in this.

> -- Be able to search plausibly for an object needed. [Knowledge
> representation that would allow the NPC to remember where he has seen
> things would be helpful or perhaps even essential here; it would be
> annoying and implausible if the NPC would for instance cheerfully
> ransack a house looking for an item that the player has shown him a
> minute before.] This involves looking under and on top of things [and
> here the world modeling requires a better representation of the
> relative location of objects], searching inside things [including
> opening nested objects and remembering what has already been
> searched], and so on.
>

I actually wrote an extension to RAP a while ago that tried to do
exactly this (I think I got it working, but it was a long time ago
now). It was very, very messy though, so don't go asking for the
source. Still, I could probably clean it up if people are actually
interested...

>
> Reaction to Player Action
> -- Notice what the player does and react to it appropriately if it is
> peculiar. (There being two possible stages here, interfering with
> player action to prevent it and reacting to it after it is done.)
>

This is of course the one thing that Inform (and TADS(?)) is good at
already (through .before and .after), though it would be nice to tie
it in to a more general architecture.

> Sentence-throwing Conversation (model 1, which is what I do now)
>
> -- Represent general topics of conversation as a tree
> (General->Weather->Rain, for instance.) Within this, represent
> individual statements that can be made. Allow the NPC to remember
> which statements have already been used. Have some statements to lead
> to natural follow-ups for the NPC. Model the effect of these
> statements on the NPC's emotions, and have gestures and so on that
> reflect the NPC's emotional state. [Galatea.]

So *that's* how you did it! I've often wondered just how Galatea
worked. I don't suppose you feel like publishing the source code, or
at least sending me a copy?

>
> -- Give the NPC specific sets of information he wishes to convey.
> Allow the player to interrupt him and side-track the conversation, but
> make sure the NPC always returns to the subject of interest to him.
> [Pytho's Mask.]
>
> -- Provide for interaction between multiple NPCs so that you can
> apparently talk to more than one person at a time. [City of Secrets
> (not available yet)]

Is this one of your WIPs? Or else whose?

>
> -- Give the NPC general goals (of emotional state, information to
> convey, etc) and allow him to partially direct conversation according
> to those goals. [WIP.]
>

By the way, that's the kind of thing that could potentially be RAPped
(as I said, I've thought in this direction myself), though there might
be better ways.

>
> Knowledge/logic-based Conversation (model 2. Somewhat More
> Difficult.)
>
> -- Represent all facts in the NPC's possession. Allow for learning
> and memory, and include knowledge about past statements and behaviors
> of the PC and other NPCs, as well as about the state of the
> environment. [This is, very dimly, approached by certain systems;
> I've seen code that has the NPC 'learn' specific squibs of information
> so that if you >TELL NPC ABOUT THING you can then >ASK NPC ABOUT THING
> and get the same info back. Also, NPC_Engine has a mechanism whereby
> an NPC can remember where he last saw a given other NPC and how many
> turns ago. So far so good, but ideally the NPC should also remember
> the location of every object he has seen, so that he can intelligently
> go back and get, eg, the key on the mantelpiece to wind the cuckoo
> clock.]
>

Remembering object location actually isn't that difficult - a
system I tried out a while ago just gave each object a .bel_parent
property (or array, for multiple NPCs) which gave what the NPC(s)
currently thought the object's parent was, and which was updated for
objects in sight each turn. This ties in well with the object
searching thing mentioned earlier - if last time you saw it the key
was on the mantelpiece, go there. And if it's not there, search
through all the locations and open everything in them until you find
it, and if you can't find it give up and move on to the next plan (hit
the cuckoo clock until it opens). RAP can be extended to do it all.
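
In illustrative Python terms (the original was Inform code; these names
are invented), the whole thing boils down to something like:

beliefs = {}   # object -> where the NPC last saw it (the .bel_parent idea)

def update_beliefs(npc_room, visible_objects):
    """Run once per turn: note the location of whatever the NPC can see."""
    for obj in visible_objects:
        beliefs[obj] = npc_room

def find(obj, rooms, contents):
    """Return the room where obj turns up, trying the believed spot first."""
    ordered = list(rooms)
    if beliefs.get(obj) in ordered:
        ordered.remove(beliefs[obj])
        ordered.insert(0, beliefs[obj])
    for room in ordered:
        if obj in contents.get(room, ()):
            return room
    return None   # not found anywhere: give up and move to the next plan

update_beliefs("parlour", ["key"])
print(find("key", ["kitchen", "parlour"], {"parlour": ["key"]}))   # parlour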

Maybe I'm preaching about RAP a bit too much here. It's not *that*
great, and I'd like to see it superseded by a more complex and
adaptable architecture. One day...

> -- Do dynamic prose generation to produce realistic-sounding,
> spontaneous dialogue representing the NPC's response. Take into
> account his current mood and his level of diction.
>
> [This latter system comes closer to real AI than anything else here, I
> think, and it is correspondingly difficult (for me, at least) even to
> imagine a workable model for how to do it, let alone implement
> something similar. The prose generation problem alone is not
> something I could solve to my own satisfaction; I prefer to handcraft
> everything that will be coming out of the NPC's mouth. For another
> thing, it would require huge amounts of work to provide the NPC with a
> working lexicon, and this is not the sort of task for which IF systems
> were designed.]
>

I think that's beyond the collective ingenuity of humanity at the
moment, never mind IF writers.

> Anyway, end of ramble. I am working on as large a piece of this as I,
> personally, can handle at the moment, but my goal is still to produce
> the effects that I want in my games.
>
> ES

End of my rambling answer. Thanks for all that, you've given me things
to think about *and* some valuable encouragement.
Be seeing you,
Martin

Martin Bays

Sep 4, 2001, 3:25:45 PM

"Ashikaga" <ashi...@worldnet.att.net> wrote in message news:<HaQk7.5653$Uf1.4...@bgtnsc06-news.ops.worldnet.att.net>...

> "Martin Bays" <marti...@hotmail.com> wrote...
>
> Could you first of all tell me what kind of AI you are thinking of? An NPC
> who has his/her own agenda in the game?
>

Weeeel, maybe, but not in the bad way you're thinking of. Allow me to
refute your example as a demonstration of what I mean...

Argch! My brain! OK, I get your point - limit your NPCs if you want to
avoid headaches (of both the figurative and the literal varieties).
But that doesn't mean you can't have what we could call 'NPC freedom
within a well-defined domain'. Hey, I like that phrase! Anyway, the
point is that although in a conventional piece of IF you don't want
your NPCs messing in any significant way with the plot-line, that
doesn't mean you want them to be completely constrained either. I
guess it's a parallel problem to the old one of giving the player
small-scale freedoms, and preferably an approximation to an illusion
of complete free will, while having the large-scale plot-line being
basically linear. We've seen some nice solutions to that problem, so I
don't see that the NPC one is necessarily insurmountable either. Put
simply, you wouldn't allow TP 332-93-0667 to recover the corpse of the
dead sea monkey, attach wires in appropriate places and use it like a
string-puppet to fool the girlfriend into thinking it's real for a while
before disposing of it and telling her it's come to some terrible
accident so that TP 332-93-0667 still gets the credit for attempted
sea monkey replacement (for example), so why should you let GFB#2 not
keep his word?


> Okay..., just a random thought (which means I didn't plan this, either...).
> And I ate BBQ sirloin yesterday.
>

Ah, all is explained.

> > A suggestion. IF is a great medium for the study of AI. Why? Allow me
> > to list.
>
> I have no argument with it. It is a great medium for authors to
> explore and experiment a little, too.
>
> <snipped long list>
> > One possibly significant problem I can think of though - I'm not sure
> > the IF languages (I'm thinking primarily of Inform here, though I
> > think TADS has similar restrictions) have the capabilities for running
> > the kind of programs needed. In particular, the lack of a dynamic
> > memory might cause problems (though actually it's just occurred to me -
> > isn't this the kind of thing glulx (or however you spell it) was meant
> > to solve?).
>
> Well, I think it's better to make a petri dish world to experiment a little
> bit before we move to a full-blown world. That's the only feasible
> solution. We simply need to learn a little more about its usage and
> what sort of stuff we can do with it effectively. (yes, normally I am a
> serious person in RL)
>

Yes, I think you're probably right if what you want is real AI.
'Microworlds' with very well defined and, more importantly, small
domains are what you need to study the basics of intelligence, and I
guess logically you should do that first and then transfer it to IF.
But it's likely to be some decades before we get it all sorted (if we
ever do), and that's probably the kind of thing best left to the
experts (or at least not me), so can't we try and do something now? It
should be fun, and it might even have some implications for the real
stuff. And the goal of believable agents is, I think, much more likely
to be achievable in IF than fully fledged AIs, and what's more I think
IF is probably the most primitive domain in which believable agents
can exist (please explain why I'm wrong here if I am - I've got a
vague feeling this might be important).

> > Anyway, there you go. Comment please! Has anyone else been thinking on
> > similar lines? Has anyone in fact got any relevant WIP's? Anyone even
> > (and I know this is too much to ask really, but that's not going to
> > stop me) feel like expressing a willingness to collaborate on some
> > grand IF-AI project? And yes that is my humbly raised hand you spy.
>
> Well..., I haven't tried to program an AI, since I am not really a
> programmer, but I write stuff. One thing I do to make an NPC more believable
> is to pay more attention to its mindset. It's prescripted, and of course it
> has its limitations, but like you mentioned, it's the only way we can do it now.
>

Or is it? Or IS it? Eh? Eh? (Please substitute a convincing and
coherent argument for why less prescripted, and hence less limited, NPCs
are possible. Thank you).

> Good luck with your project. I am for it.
>
> > Well, thank you. And goodbye.
>
> I thought you still wanted to hear from us.
>

Errr, I think I meant goodnight. Which wouldn't have made much more
sense, but...
Anyway, cheers for your comments, and goodbye - for now,

Martin

David Samuel Myers

Sep 4, 2001, 3:29:31 PM

Adam Thornton <ad...@fsf.net> wrote:
> Attempts to have character drive narrative, rather than the other way
> round, have typically led to dull, aimless stories, in my experience. I
> suspect that this is likely to be even more the case in IF than it is in
> static fiction.

As a highly tangential side note, there is a new, long-awaited novel which
just came out that attempts to take on this task head first-- "The
Corrections", by Jonathan Franzen. Some meta-information is available:

http://www.nytimes.com/2001/09/02/magazine/02FRANZEN.html

This link requires free subscription and will self-destruct (i.e. become
non-free) in a few days. A book review dated today is available on the
same site.

-david

Martin Bays

Sep 4, 2001, 3:39:25 PM

David Samuel Myers <dmy...@ic.sunysb.edu> wrote in message news:<3b939...@marge.ic.sunysb.edu>...

I think I fundamentally agree with what you're saying - that the goal
of having real intelligent agents in a fully simulated world is very,
very difficult, and probably not worth the effort. This may well
contradict what I said the other day, and maybe even what I was
thinking, but hey! What do you expect? Some kind of coherency of
thought? Pphagh!

Still, I am arguing for some kind of AI-like mechanics, I guess to
supplement what you call the "man behind the curtain" effect (which
I'm guessing is the same as what I would call the "Eliza" effect, i.e.
the natural way people tend to credit an NPC with intelligence until
they do something (or fail to do something) which upsets their
expectations) - this can only get us so far. So some kind of knowledge
representation would be necessary for what I'm suggesting, and I take
the point that this is a nasty, complicated area. I haven't worked out
any specific ways of tackling it (feel free to suggest!), but I think
the important thing will be to carefully limit what the NPCs will be
expected to know about and interact with. In particular, I wouldn't
want them to have to know anything more about most objects than their
location, and the more generally relevant properties like whether
they're open or closed, carryable or fixed etc. and any specific
things unique to the object which the character might have to use -
this shouldn't require much work by the author above what they have to
do to get the library to understand these things. More complicated
will be modelling NPCs' knowledge and beliefs about the state of other
characters, which will have to involve deeper, less obviously
determinable things like emotional states and *their* knowledge and
beliefs. Not sure how to do that yet.

Well, thank you, see you,

Martin

Roger Carbol

Sep 4, 2001, 5:27:15 PM

I think there has been some work in the direction of
insect-level AI and actors who decide on their behaviour
based on certain low-level goals. Consider, for
example, "Four in One".

Does IF need more believable insect NPCs? Perhaps.

.. Roger Carbol .. rca...@home.com

Ashikaga

Sep 4, 2001, 5:13:26 PM

"Martin Bays" <marti...@hotmail.com> wrote...
> "Ashikaga" <ashi...@worldnet.att.net> wrote...

> > "Martin Bays" <marti...@hotmail.com> wrote...
> >
> > Could you first of all tell me what kind of AI you are thinking of? An NPC
> > who has his/her own agenda in the game?
>
> Weeeel, maybe, but not in the bad way you're thinking of. Allow me to
> refute your example as a demonstration of what I mean...

Okay. I hope I didn't offend you with my
I-thought-it-would-be-fun-and-yet-get-my-point-across example.

> > I think it's terribly hard to make an AI that will work responsively.
> > Imagine how complex it'll make the storyline, and how easily it'll lead the
> > story off-track of the author's original intention (which may not be bad,
> > but wouldn't the player get confused as to what the purpose of the game is
> > if there wasn't a check system)?
> >
> > Let me conjure up a scenario to explain what kind of a crazy idea I am
> > envisioning.

<snip the illegal sea monkey trading puzzle turned bad soap opera example>


>
> Argch! My brain! OK, I get your point - limit your NPCs if you want to
> avoid headaches (of both the figurative and the literal varieties).

Maybe we should allow NPCs to take medication whenever they feel headachy.

> But that doesn't mean you can't have what we could call 'NPC freedom
> within a well-defined domain'. Hey, I like that phrase!

:-) Makes sense, and that's what I was suggesting also, if my words weren't
so obscure....

> Anyway, the
> point is that although in a conventional piece of IF you don't want
> your NPCs messing in any significant way with the plot-line, that
> doesn't mean you want them to be completely constrained either. I
> guess it's a parallel problem to the old one of giving the player
> small-scale freedoms, and preferably an approximation to an illusion
> of complete free will, while having the large-scale plot-line being
> basically linear. We've seen some nice solutions to that problem, so I
> don't see that the NPC one is necessarily insurmountable either. Put
> simply, you wouldn't allow TP 332-93-0667 to recover the corpse of the
> dead sea monkey, attach wires in appropriate places and use it like a
> string-puppet to fool the girlfriend into thinking it's real for a while
> before disposing of it and telling her it's come to some terrible
> accident so that TP 332-93-0667 still gets the credit for attempted
> sea monkey replacement (for example), so why should you let GFB#2 not
> keep his word?

Err..., because GFB stands for "generic frat boy?" :-D j/k :-/

Yes, point taken, but wouldn't the point of creating an AI be to allow the
NPC to have more freedom, and therefore be more believable? I am not saying it
must be screwy like the one I proposed, but wouldn't you want your NPCs to
solve puzzles of their own with multiple paths, much like you would do
for your PC? (anywayz, that's hard to do for the PC's part already)

Or, maybe we can have some sort of flag so that, like you've suggested, the
NPC can remember things, like noticing something is missing and therefore
being prompted to search for it.

Anywayz, that one might be the easier thing to resolve, but how about a
conversation prompted by an NPC? Using that missing stuff example, I mean,
how can we let the NPC ask another NPC or the PC for the stuff in complete
sentences like "Have you seen a ring with a capital A inscribed on it?" or
"May I see the ring on your finger?" instead of "Ring, miss, where?" This
one can be resolved if the question is prescripted, but perhaps if we can
let the NPC understand language structure (can current AI do that?), it
might be more efficient, and it may even allow the NPC to ask a rephrased
follow-up question. Suppose you said "No" to the NPC; maybe he can modify
his question and add "It's a silver ring with several small diamonds on it."

I am not trying to unload tons of stuff at once, but I am merely
brainstorming problems that need to be resolved. Some sort of algorithm for
making an AI, if you will. I hope this helps.

> > Okay..., just a random thought (which means I didn't plan this, either...).
> > And I ate BBQ sirloin yesterday.
>
> Ah, all is explained.

:-)

<snip>


> > Well, I think it's better to make a petri dish world to experiment a little
> > bit before we move to a full-blown world. That's the only feasible
> > solution. We simply need to learn a little more about its usage and
> > what sort of stuff we can do with it effectively. (yes, normally I am a
> > serious person in RL)
>
> Yes, I think you're probably right if what you want is real AI.
> 'Microworlds' with very well defined and, more importantly, small
> domains are what you need to study the basics of intelligence, and I
> guess logically you should do that first and then transfer it to IF.
> But it's likely to be some decades before we get it all sorted (if we
> ever do), and that's probably the kind of thing best left to the
> experts (or at least not me), so can't we try and do something now? It
> should be fun, and it might even have some implications for the real
> stuff. And the goal of believable agents is, I think, much more likely
> to be achievable in IF than fully fledged AIs, and what's more I think
> IF is probably the most primitive domain in which believable agents
> can exist (please explain why I'm wrong here if I am - I've got a
> vague feeling this might be important).

Yes. Now I feel clearer on what kind of AI you are talking about. Have
you tried Blade Runner or The Last Express (both are graphic adventures)? I
think the latter has very believable NPCs, partly because the script was
written in very fine detail, partly because NPCs talk to each other and
your PC can actually eavesdrop on them, giving you a sense that they are
alive. Another sort of mini-AI demonstrated there is that NPCs would
actually stop talking if you got too close to them and they noticed you.

I am not the programmer, but my friend who does the programming for me said
there is a daemon thingy in TADS that would allow you to create NPC
automation (correct me if I am wrong), e.g., GFB#2 would go to the
underground sea monkey trading floor at a certain time of day. Once you
noticed his habit, I guess that would give the impression that GFB#2
is not just a talkative item, but a believable agent.
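
To picture it, here is a toy sketch of such a daemon -- in Python rather
than actual TADS, with every name invented, so take it as the shape of the
idea and not the real API:

class FratBoy:
    def __init__(self):
        self.location = 'dorm'

    def daemon(self, hour):
        # The daemon fires every turn; most turns it does nothing.
        if hour == 14 and self.location != 'trading floor':
            self.location = 'trading floor'
            print('GFB#2 ambles off toward the trading floor.')
        elif hour >= 17 and self.location != 'dorm':
            self.location = 'dorm'
            print('GFB#2 heads back to the dorm.')

gfb2 = FratBoy()
for hour in range(12, 19):      # one "turn" per hour, for the demo
    gfb2.daemon(hour)

It's the turns on which nothing happens that make the habit feel like a
habit.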

> > > Anyway, there you go. Comment please! Has anyone else been thinking on
> > > similar lines? Has anyone in fact got any relevant WIP's? Anyone even
> > > (and I know this is too much to ask really, but that's not going to
> > > stop me) feel like expressing a willingness to collaborate on some
> > > grand IF-AI project? And yes that is my humbly raised hand you spy.
> >
> > Well..., I haven't tried to program an AI, since I am not really a
> > programmer, but I write stuff. One thing I do to make an NPC more
> > believable is to pay more attention to its mindset. It's prescripted,
> > and of course it has its limitations, but like you mentioned, it's the
> > only way we can do it now.
>
> Or is it? Or IS it? Eh? Eh? (Please substitute a convincing and
> coherent argument for why less prescripted, and hence limited, NPCs
> are possible. Thank you).

I am not too sure what you are asking me here. I was merely stating that,
since we currently don't have a sophisticated AI, the only coping strategy
for making NPCs more believable is writing believable conversations for
them, creating a sense for the player that they have emotions, like a bad
temper or cynicism. NPCs, even the ones friendly to the PC, should present
a conflict of some sort. In RL, the problems we face each day are mostly
conflicts of interest among people. But we cannot foresee every emotion
inflicted on the NPC as a ripple effect of a PC's action; that is the
limitation of using a prescripted plot. That's what I meant.

> > Good luck with your project. I am for it.
> >
> > > Well, thank you. And goodbye.
> >
> > I thought you still want to hear from us.
>
> Errr, I think I meant goodnight. Which wouldn't have made much more
> sense, but...
> Anyway, cheers for your comments, and goodbye - for now,

Okay.

> Martin
Ashikaga


Gadget

unread,
Sep 4, 2001, 6:22:22 PM9/4/01
to
A lot of points have been already made in this thread, but I still
would like to put in my 2cts:

The one thing I could imagine improving gameplay in IF would be
an intelligent 'adversary'. Imagine, for example, Enchanter
(which I am playing now - haven't finished, so please no spoilers ;-)
but with an intelligent evil sorcerer who tries to counter your
actions, even the ones the programmer could not foresee. A kind of IF
chess, in a way...

Cheers,
Harry

-------------
It's a bird...
It's a plane...
No, it's... Gadget?
-------------------
To send mail remove SPAMBLOCK from adress.

Gadget

unread,
Sep 4, 2001, 6:22:43 PM9/4/01
to

Or: Knight Orc, where lots of NPC's run around gathering treasure to
dump in the swamp in a mud-style environment.

ems...@mindspring.com

unread,
Sep 4, 2001, 10:20:26 PM9/4/01
to
marti...@hotmail.com (Martin Bays) wrote in message news:<7d26d66b.01090...@posting.google.com>...

> But still I would argue that some AI-like developments are needed to
> get truly believable agents. In particular, I'd say it's needed to get
> intelligently active NPC's - that is, ones that actually do things -

Ah... but what things do you want them to do?

It's certainly possible to give an illusion of not-total-passivity by
having daemons that make the NPCs do things with apparent spontaneity;
this has been used to good effect for movement (cf. Deadline and its
intellectual offspring) and the pursuit of goals that you-the-player
can perceive. Arguably it is unsatisfying if this pre-programmed
action does not take player behavior into account, but it is possible
to give some flexibility there too, using appropriate switch
statements and so on. Some of my games have NPCs that speak to the
player to begin a conversation, or try to direct the course of the
conversation to persuade the player to do something, and this is also
aimed at giving the illusion of activity.

It seems to me that what you are really asking for therefore is not
quite so much a change in the qualitative things being done, but a
change in the order of magnitude: with enough prescripted actions, and
enough nuance in the selection of script, one approaches the
appearance of AI.

(One might also argue that this is "faking it," but I would argue in
return that all AI is fake AI.)


>Perhaps it would make things clearer if I gave an example of the kind
>of thing I'm talking about, what I would like to see simulated. For a
>start, I think having NPCs using language in a similarly realistic way
>is much too ambitious for now. So what I want to see is a decent
>simulation of a higher, non-human animal, say a dog or a cat.

I admit to having a rather tepid interest in this model.

I have in fact seen a rather charming implementation of a sidekick
animal which remembered and acted upon its memories of your prior
treatment of it. Unfortunately this is in a game that seems to be on
permanent development hiatus (grrrr.). But I think, though the effect
was charming and cute, that the code could not easily be scaled up to
handle the problem of plausible human relationships. Even leaving
aside the problem of language, which is infinitely strange and whose
nuances often elude the understanding of Non-Artificial I., human
behavior is supposed to take into account logical deductions and
inferences.



>> Reaction to Player Action
>> -- Notice what the player does and react to it appropriately if it
>> is peculiar. (There being two possible stages here, interfering with
>> player action to prevent it and reacting to it after it is done.)
>>
>This is of course the one thing that Inform (and TADS(?)) is good at
>already (through .before and .after), though it would be nice to tie
>it in to a more general architecture.

Yes; and I suppose I should add, here, that what I have in mind (and
have partly implemented) is contextually-based reaction, rather than a
constant one. This is most crudely evident in Galatea, where her
reactions to certain affectionate gestures (hug, etc) depend upon her
current state of mind. But there are also some reactions to player
actions that don't directly affect her; for instance, if you look at
her hands at a certain point when she has just been commenting on
them, she makes some further remark.

The disciplined application of this idea, as opposed to a hackish
application in a couple of restricted cases, requires that all actions
the player does that are visible to the NPC must be passed to him for
reaction, and that that code must be written to provide for reactions
that interrupt the action or respond to it according to the current
state of events.
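
To sketch the architecture (a Python toy with invented names --
emphatically not the Galatea source, which as I said is a spaghetti of
hacks):

class NPC:
    def __init__(self):
        self.mood = 'wary'
        self.last_topic = None

    def react(self, action, noun=None):
        # Called for every action the NPC can see; consults state,
        # and may interrupt the action by returning True.
        if action == 'hug':
            if self.mood == 'tender':
                print('She accepts the embrace.')
            else:
                print('She steps back before you reach her.')
                return True                 # interrupt the action
        if action == 'examine' and noun == 'hands' \
                and self.last_topic == 'hands':
            print('"A sculptor\'s hands," she adds, flexing them.')
        return False                        # let the action proceed

npc = NPC()
npc.last_topic = 'hands'
npc.react('examine', 'hands')   # context-dependent remark
npc.react('hug')                # vetoed in her current mood

The important part is only the shape: one entry point per NPC, called for
every visible action, consulting current state, able to veto or embellish.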

This is not sufficient, of course, in itself, since as you point out,
reactive NPCs, however richly described, retain the status of lifeless
machines to some extent.



>> -- Represent general topics of conversation as a tree
>> (General->Weather->Rain, for instance.) Within this, represent
>> individual statements that can be made. Allow the NPC to remember
>> which statements have already been used. Have some statements to
>> lead to natural follow-ups for the NPC. Model the effect of these
>> statements on the NPC's emotions, and have gestures and so on that
>> reflect the NPC's emotional state. [Galatea.]

>So *that's* how you did it! I've often wondered just how Galatea
>worked. I don't suppose you feel like publishing the source code, or
>at least sending me a copy?

No, I don't. :)

The Galatea source elucidates nothing; it is a spaghetti of hacks and
special cases. All that it contains was subsequently regularized into
a cleaner and more manageable form for Pytho's Mask. (Which, no,
before you ask, I am also not releasing to the public just at the
moment.)

>> -- Provide for interaction between multiple NPCs so that you can
>> apparently talk to more than one person at a time. [City of Secrets
>> (not available yet)]
>Is this one of your WIPs? Or else whose?

It's mine, now in betatesting and mostly complete, to be released in
Glulx format towards the end of this year. It was also (perhaps
uniquely in the history of IF) written on commission, so it will not
be available for free, but I have done my best to keep it to the
standards I use for everything else. Only it's much bigger than any
of my previous games. For more details, see
http://emshort.home.mindspring.com/CSUpcoming2.htm.



>Remembering object location actually isn't that difficult - a
>system I tried out a while ago just gave each object a .bel_parent
>property (or array for multiple NPCs) which just gave what the NPC(s)
>currently thought the object's parent was, which was updated for
>objects in sight each turn.

Ah, good point. I was contemplating something much uglier, involving,
perhaps, a humungous array attached to each NPC. [Spitting
apotropaically.]

ES

OKB -- not okblacke

unread,
Sep 5, 2001, 12:06:35 AM9/5/01
to
ems...@mindspring.com (ems...@mindspring.com) wrote:
>It seems to me that what you are really asking for therefore is not
>quite so much a change in the qualitative things being done, but a
>change in the order of magnitude: with enough prescripted actions, and
>enough nuance in the selection of script, one approaches the
>appearance of AI.
>
>(One might also argue that this is "faking it," but I would argue in
>return that all AI is fake AI.)

This is something that I think is important to this issue. Exactly what
do we mean by AI? Certainly, in the sense that "fake"=="artificial",
"artifical intelligence" is fake. But what is the difference between fake AI
and "real" AI?

To me, the "intelligence" bit is what matters. Current NPCs are not
intelligent because the author explicitly programs them with all possible
combinations of output. For example, you can use a series of random switches
to generate a sentence from component parts, such as "NPC hits/smacks/slaps
himself on the forehead/thigh/chest and coughs/grins/laughs." In this
situation there are 27 possible result sentences. Changing the NUMBER of
possibilities for each random choice does not qualitatively change the nature
of the NPC's "intelligence".
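
To be concrete, the 27-sentence NPC is, in its entirety, something like
this (Python, purely illustrative):

import random

VERBS  = ['hits', 'smacks', 'slaps']
SPOTS  = ['forehead', 'thigh', 'chest']
NOISES = ['coughs', 'grins', 'laughs']

def npc_gesture():
    # Three random slots, three choices each: 3 * 3 * 3 = 27 sentences.
    return 'NPC %s himself on the %s and %s.' % (
        random.choice(VERBS), random.choice(SPOTS), random.choice(NOISES))

print(npc_gesture())

Multiply the lists by a hundred and you get more strings; the mechanism is
unchanged.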

I would say that an NPC becomes intelligent when the range of its
resulting output cannot be defined or predicted by the author. To parallel the
above, we could give the NPC a basic English vocabulary, making sure we include
words like "slap" and "forehead" as well as common function words like "of".
We cannot then know whether he will say "NPC slaps himself on the forehead." or
"Hitting himself on the thigh, NPC coughs." or "NPC smacks himself on the
chest, coughs, then grins."

When the NPC not only has a knowledge of the individual words (or phrases)
he can use to construct output, but has a knowledge of what those words mean
EVEN if they are not actually printed, then you can no longer predict what he
will say. With current NPC modelling, you code an NPC by saying, "okay, the
message printed in such-and-such situation follows this pattern, and we can put
these bits of text into these blanks." With an intelligent NPC we would say,
"okay, in such-and-such situation we want to print a message that means this,"
and the NPC would devise a sentence based on what it is supposed to mean, by
using his knowledge of the meaning of the words he knows.

Personally, I think this is not going to happen in IF anytime soon
because: A) it is heinously difficult, so difficult that people are spending
their careers JUST on trying to do AI, without worrying about setting or plot
or anything else an author thinks about when writing a game; B) IF seems to be
moving along pretty well using conventional NPC models, and people are not
likely to invest immense amounts of time in AI if they still think they can
tell such-and-such story or do such-and-such thing without having to go to all
that trouble. I imagine that any IF-AI connection will happen as a result of
an AI researcher attempting to apply his ideas to IF, not as a result of an IF
author attempting to bring AI into his work.

--OKB (Bren...@aol.com) -- no relation to okblacke

"Do not follow where the path may lead;
go, instead, where there is no path, and leave a trail."
--Author Unknown

Sean T Barrett

unread,
Sep 5, 2001, 2:08:19 AM9/5/01
to
ems...@mindspring.com <ems...@mindspring.com> wrote:

>marti...@hotmail.com (Martin Bays) wrote:
>>> Reaction to Player Action
>>> -- Notice what the player does and react to it appropriately if it
>>> is peculiar. (There being two possible stages here, interfering with
>>> player action to prevent it and reacting to it after it is done.)
>>>
>>This is of course the one thing that Inform (and TADS(?)) is good at
>>already (through .before and .after), though it would be nice to tie
>>it in to a more general architecture.
>
>Yes; and I suppose I should add, here, that what I have in mind (and
>have partly implemented) is contextually-based reaction, rather than a
>constant one.
>
>The disciplined application of this idea, as opposed to a hackish
>application in a couple of restricted cases, requires that all actions
>the player does that are visible to the NPC must be passed to him for
>reaction, and that that code must be written to provide for reactions
>that interrupt the action or respond to it according to the current
>state of events.

Setting aside the "contextually-based reaction" for the nonce,
I'll sum up some of the things that led to The Weapon's
reaction-oriented character.

I've posted several times through the years about the MUD system
I co-developed. On a MUD, multiple people can see the same event,
and need at least slightly different views of it ("You hit Bob"
versus "Satoria hits Bob"). On LPMud, for instance, there is a
function call that takes two strings, one of which is sent to
the current player for the current command, and one which is sent
to everyone else.

In the system we developed, all events which caused something to
display generated an *object* instead of a string. When a player
observed the event, it queried the object to print an
appropriate-to-that-player message. (Players might perceive
different objects differently based on PC knowledge, magic
senses, distance from event, etc.) At the same time, *NPCs*
could query the object to give it a little "event packet" which
described the event in more world-model-ish terms, for example
"GIVE MOUSE TO BOB" might print "Satoria gives Bob a mouse" but produces
"Event: Mouse MovedTo: Bob CausedBy: Satoria" for the NPC--while
"GET MOUSE" produces "Event: Mouse MovedTo: Satoria CausedBy: Satoria"
(where these are all actually object identifiers, not strings).
This made NPC reactions more tractable and consistent, since the
number of event types was smaller, and because the NPCs would
react to visible things happening in the world, rather than what
the player typed. (But we needed an entirely separate system for
NPCs to prevent player actions.)
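
In rough Python (invented names -- our actual system was considerably more
elaborate), the one-event-two-views idea looks like:

class MoveEvent:
    def __init__(self, obj, moved_to, caused_by):
        self.obj, self.moved_to, self.caused_by = obj, moved_to, caused_by

    def describe_for(self, viewer):
        # Prose view; each observer can get a different string.
        if viewer == self.caused_by:
            if self.moved_to == self.caused_by:
                return 'You take the %s.' % self.obj
            return 'You give %s the %s.' % (self.moved_to, self.obj)
        if self.moved_to == self.caused_by:
            return '%s takes the %s.' % (self.caused_by, self.obj)
        return '%s gives %s a %s.' % (self.caused_by, self.moved_to, self.obj)

    def packet(self):
        # World-model view, for NPC reaction code.
        return {'Event': self.obj, 'MovedTo': self.moved_to,
                'CausedBy': self.caused_by}

give = MoveEvent('mouse', moved_to='Bob', caused_by='Satoria')
print(give.describe_for('Bob'))      # Satoria gives Bob a mouse.
print(give.packet())                 # the NPC's structured view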

Putting the event description in the object also allowed us to
propagate it around the container hierarchy and through exits.
The object could be "filtered"--passing through a soundproof window,
going around a corner (propagating sound but not vision), reflecting
through a mirror (causing writing to be reversed). Each message to
be displayed became a little program for displaying text appropriately
in context, accounting for filtering, for the distance travelled
by the event, and for the PC's status (doesn't know language X,
can currently see invisible, etc.)

For The Weapon, I simply hacked all the character reactions
directly into the room and objects--it is a one-room game for
this reason (well, sort of, actually it was a one-room game
first). I hacked in a bunch of sensory puzzles that would
have been much easier with a true propagating sense system
(there's a bunch of sensing in the EM spectrum outside the
visible range).

I also think the commercial PC game "Thief"--which had a
relatively detailed sensing system, and which for gameplay
reasons required the player to be aware of how the NPCs might
be sensing the PC and to work with that--probably influenced
the direction The Weapon's sense-stuff developed.

>This is not sufficient, of course, in itself, since as you point out,
>reactive NPCs, however richly described, retain the status of lifeless
>machines to some extent.

This was pretty much the sole reason for the NPC-driven
one-sided "conversation" in The Weapon.

And now that I've explained the history of it, I should point
out that I do not plan on being out in front leading the
charge on "reactive NPCs"--that will probably have been my
only shot at it.

>It's mine, now in betatesting and mostly complete, to be released in
>Glulx format towards the end of this year. It was also (perhaps
>uniquely in the history of IF) written on commission, so it will not
>be available for free,

which unfortunately means it's ALSO not going to be available as
source. Maybe libraries?

>>Remembering object location actually isn't that difficult - a
>>system I tried out a while ago just gave each object a .bel_parent
>>property (or array for multiple NPCs) which just gave what the NPC(s)
>>currently thought the object's parent was, which was updated for
>>objects in sight each turn.
>
>Ah, good point. I was contemplating something much uglier, involving,
>perhaps, a humungous array attached to each NPC. [Spitting
>apotropaically.]

This is really an implementation detail. One ought to be able to abstract
away such concerns and say "I need a mapping from <AI,object> to
<last-seen-location, last-seen-time>", and not really fret the
details. Most languages allow easy data abstractions, especially
object-oriented ones (where objects are not primarily "simulation
objects", but rather structured data). This is the sort of thing
I mean when I say that Inform is really not an ideal language to
be doing complex programming in. (Mind you, all of my games to date
and in-progress are Inform games.)
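
For instance, the whole thing can hide behind something this small (a
Python sketch, names invented):

class BeliefMap:
    # The abstraction, fretted over exactly once:
    # (npc, object) -> (last-seen-location, last-seen-turn).
    def __init__(self):
        self._seen = {}

    def note(self, npc, obj, location, turn):
        self._seen[(npc, obj)] = (location, turn)

    def last_seen(self, npc, obj):
        return self._seen.get((npc, obj))    # None if never seen

beliefs = BeliefMap()
beliefs.note('guard', 'ring', 'atrium', turn=12)
print(beliefs.last_seen('guard', 'ring'))    # ('atrium', 12)
print(beliefs.last_seen('guard', 'sword'))   # None

Updating it each turn for the objects in sight is then a few lines more.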

SeanB

Sean T Barrett

unread,
Sep 5, 2001, 3:17:16 AM9/5/01
to
In article <9n2u5a$qvq$1...@news.fsf.net>, Adam Thornton <ad...@fsf.net> wrote:
>Attempts to have character drive narrative, rather than the other way
>round, have typically led to dull, aimless stories, in my experience. I
>suspect that this is likely to be even more the case in IF than it is in
>static fiction.

I certainly agree, and I think the trick is to add a "storytelling
AI" or a "gamemaster AI" to the mix.

Pie-in-the-sky ramble follows.

For some time I've been interested in pursuing dynamic story/narrative
of some flavor a bit deeper than we see in both commercial games and
most IF, where the moment-to-moment interactions are totally fluid
and unplanned, but there's an overall fixed story arc (or mildly
branching). However, I find it very hard to take an evolutionary
step in that direction, and no commercial game publisher is willing
to spend money on a couple years of research that might have no payoff.
I hope maybe I can try to go this direction with IF, but I need a
much better programming language (I'm hopeful about Tads 3) and a
lot of free time.

My idea for the revolutionary thing is to get characters that
can converse vaguely sanely (to achieve believability) and that
have emotional states and goals (to achieve believability and
provide gameplay) and get all this in a totally dynamic,
uncanned setting. So there's knowledge representation issues
for conversation, plus conversation-generation issues: you
don't want NPC conversations to just be database queries--
"I last saw Bob three days ago in the Forest of the Damned";
my tentative plan is to work on simple cause-and-effect-chain
storytelling, and hope that leads to storytelling-conversation
that can be leveraged.

NPC actions would be driven by their goals, but also be
somewhat unpredictable, influenced by their mental states,
certain levels of irrationality, by a "random" tradeoff of
what goal to pursue, etc. Anywhere that the NPC actions
have an unpredictable/random element, the "Gamemaster AI"--whose
job it is to take the player's interactions and coax an
interesting story out of them--can influence the event within
the allowed parameters for the NPC so as to steer things
in a direction that seems more "interesting". Ideally some
aspects of NPCs would be undetermined until the gamemaster
needed to fill them in--e.g. some additional goals for the
NPC might be added later even though the player has already
interacted with it somewhat. (If the player can't query
the NPC "what are your goals?", it helps.) Indeed, ideally
some aspects of the game would be indeterminate--e.g. things
that happened offscreen--until the gamemaster needed to
determine the details or found an opportunity to steer
the game in a good direction. But that would require a
sort-of heisen-world-state that I don't think would be
particularly feasible to implement.
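
To make the nudging idea minimally concrete, here's a Python toy
(everything invented; a real implementation would be enormously more
involved):

import random

def npc_choose(goals, interest):
    # goals: list of (goal, npc_weight); interest: goal -> multiplier.
    # The gamemaster reweights the draw but never adds goals the NPC
    # doesn't already have.
    weighted = [(g, w * interest(g)) for g, w in goals]
    pick = random.uniform(0, sum(w for _, w in weighted))
    for goal, w in weighted:
        pick -= w
        if pick <= 0:
            return goal
    return weighted[-1][0]

goals = [('sulk in the tower', 1.0), ('confront the player', 1.0)]
interest = lambda g: 3.0 if 'player' in g else 1.0
print(npc_choose(goals, interest))   # usually the confrontation

The point is only that the gamemaster gets to lean on the dice, not to
replace them.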

Could you put all that together into something that works?
How do you make an "interesting" "dynamic story"? No clue
at all. The only thing I have any idea how to tackle is the
storytelling AI, and even that is going to be hard. My personal
opinion is that one of the reasons we've mostly stagnated on
NPCs (both in IF and commercial games) is because an advance
is really going to require a revolution, not an evolution.
As you say, self-motivated characters alone are not going to
lead to a good game. But a non-canned story can't be done
with canned characters... so I suspect we need to advance on
a number of fronts at once. (Stepping away from games and
doing Galatea-like "experiences" might help to evolve in
steps.)

SeanB

Adam Thornton

unread,
Sep 5, 2001, 10:53:20 AM9/5/01
to
In article <GJ6HK...@world.std.com>,

Sean T Barrett <buz...@world.std.com> wrote:
>lead to a good game. But a non-canned story can't be done
>with canned characters... so I suspect we need to advance on
>a number of fronts at once. (Stepping away from games and
>doing Galatea-like "experiences" might help to evolve in
>steps.)

Perhaps. My basic objection is that I don't see anything wrong with
canned stories.

Fundamentally, I prefer narration to simulation. If I want a simulated
world, I'll probably go play an RPG, rather than an adventure.

Of course, this breaks down in pen-and-paper RPG playing, which is--when
it's working well--both narration and simulation. But to get that, you
need a good GM, and that set of skills is something which generally
takes bright, motivated humans a few years of steady play to acquire.

Adam

ems...@mindspring.com

unread,
Sep 5, 2001, 4:02:25 PM9/5/01
to
buz...@world.std.com (Sean T Barrett) wrote in message news:<GJ6ED...@world.std.com>...

> >It's mine, now in betatesting and mostly complete, to be released in
> >Glulx format towards the end of this year. It was also (perhaps
> >uniquely in the history of IF) written on commission, so it will not
> >be available for free,
>
> which unfortunately means it's ALSO not going to be available as
> source. Maybe libraries?

Maybe; though I have other WIPs that make use of the lessons learned
there. It just happens to be the implementation thereof that's
closest to being finished.

To leap into another part of this conversation, City of Secrets also
has a more open-ended plot structure than most games, and there are a
lot of events that occur only under the correct circumstances and are
dependent on previous decisions. Making the system to program this
was a bit of a headache, but it now works reasonably well: there is a
game "moderator" that keeps track of what scene of the game you're in,
under what conditions that scene can be ended, and what scenes you get
next. These scene-ending conditions can be things like moving into or
out of a specific location, ending a conversation with an NPC, or
doing a bunch of other particular actions that are special to the
circumstances.
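
Reduced to a toy -- Python, invented names, and certainly not the actual
City of Secrets code -- the moderator amounts to:

class Scene:
    def __init__(self, name, ends_when, next_scene=None):
        self.name = name
        self.ends_when = ends_when        # predicate on the world state
        self.next_scene = next_scene

class Moderator:
    def __init__(self, opening):
        self.scene = opening

    def each_turn(self, world):
        # Check the current scene's ending condition once per turn.
        if self.scene.ends_when(world) and self.scene.next_scene:
            self.scene = self.scene.next_scene
            print('[New scene: %s]' % self.scene.name)

finale = Scene('finale', lambda world: False)
salon = Scene('salon', lambda world: world['location'] == 'street', finale)

m = Moderator(salon)
m.each_turn({'location': 'salon'})    # scene continues
m.each_turn({'location': 'street'})   # [New scene: finale]

The scene-ending predicates stand in for "moving into or out of a specific
location, ending a conversation," and so on.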

The combination of the open-ended plotting system and the NPC
conversation system makes it easier to build dynamic scenarios where
NPCs engage you in conversation, try to talk you into or out of things
(depending on the nature of the scene and your previous actions), move
around, etc. This particular game involves a fairly large number of
(comparatively) shallow NPCs: most of them have supporting roles in
the plot and only pop up from time to time. But I could see using the
same system for games with fewer, deeper NPCs.

Anyway. Blah blah blah; I sense that at any moment Adam Cadre or
someone will pop out of the woodwork and tell me to stop talking about
a game that as far as the public is concerned doesn't exist yet. But
it's all coded up, really... just a little tweaking left. :)

-- Emily

ems...@mindspring.com

unread,
Sep 5, 2001, 4:18:17 PM9/5/01
to
bren...@aol.comRemove (OKB -- not okblacke) wrote in message news:<20010905000635...@mb-fm.aol.com>...


> I would say that an NPC becomes intelligent when the range of its
> resulting output cannot be defined or predicted by the author.

...


> When the NPC not only has a knowledge of the individual words (or phrases)
> he can use to construct output, but has a knowledge of what those words mean
> EVEN if they are not actually printed, then you can no longer predict what he
> will say. With current NPC modelling, code an NPC by saying "okay, the
> message printed in such-and-such situation follows this pattern, and we can put
> these bits of text into these blanks." With an intelligent NPC we would say,
> "okay, in such-and-such situation we want to print a message that means this,"
> and the NPC would devise a sentence based on what it is supposed to mean, by
> using his knowledge of the meaning of the words he knows.

So if I understand you right, you're equating the "intelligence" with
the ability to generate prose to reflect the meaning of a given
proposition, which exists in some encoded form inside the NPC's code.

What about the ability to manipulate the abstract propositions to
create "new" knowledge? e.g., I program the computer with a large
number of logical rules of the "if P, then Q" variety, allowing it to
put together the pieces from what the player has said. Is this also
necessary for "intelligence"? Is it sufficient? What about a system
in which there was a deep or symbolic representation of logical
propositions, but in which all the expressions of those ideas had in
fact been prewritten by the author?
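
Something like this naive forward-chainer, in other words (a Python toy;
the facts and rules are invented purely for illustration):

RULES = [({'player_has_ring'}, 'player_met_thief'),
         ({'player_met_thief', 'thief_in_city'}, 'thief_knows_player')]

def deduce(facts, rules):
    # Keep applying "if P, then Q" rules until nothing new is learned.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(deduce({'player_has_ring', 'thief_in_city'}, RULES))

That much is trivially mechanical; the hard questions are what the
propositions mean and where the rules come from.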

No, I hear you saying, the NPC has to produce *new* output the author
doesn't expect. But even if I feed the NPC a large lexicon of words,
I can still "expect" that the NPC will only say things provided for in
that vocabulary. And if we're telling it, "say this in good English,"
then we can also still expect the full range of sentiments that it can
express.

Now, from my point of view, it seems more useful and practical to set
aside, for a moment, the prose generation problem (except at the very
basic level of providing variations of whole phrases) and concentrate
on creating a sufficient matrix of emotional and intellectual goals
that the NPC's particular path through it will be highly responsive to
player intervention and sufficiently nuanced to seem plausible, even
if the author of the game has not anticipated that particular course
of events.

-- Emily

OKB -- not okblacke

unread,
Sep 5, 2001, 5:20:10 PM9/5/01
to
ems...@mindspring.com (ems...@mindspring.com) wrote:
>So if I understand you right, you're equating the "intelligence" with
>the ability to generate prose to reflect the meaning of a given
>proposition, which exists in some encoded form inside the NPC's code.

Well, I'm also expecting the NPC to be able to figure out what situation
it's in, but current NPCs already do that, they just aren't as fine-tuned as
we'd like. The only difference between an NPC that uses a happy-o-meter and a
sad-o-meter to decide whether it's happy, sad, or neutral, and an NPC that uses
hundreds of emotional scales to decide between thousands of (not necessarily
mutually exclusive) emotional states is that the latter has MORE stuff -- not
different stuff.
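
Literally, the happy-o-meter version is just a lookup (toy Python, invented
numbers):

def emotional_state(happy, sad):
    # A hundred scales would mean a bigger table, not a different
    # kind of thing.
    if abs(happy - sad) < 10:
        return 'neutral'
    return 'happy' if happy > sad else 'sad'

print(emotional_state(happy=62, sad=40))    # happy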

Also, I think I've used the word "output" a little hastily. An
important part of my concept of an intelligent NPC is that it has a knowledge
separate from what it actually prints on the screen. When I say output, I mean
the end result of the NPC's thought, not necessarily the physical manifestation
of that in words.

>What about the ability to manipulate the abstract propositions to
>create "new" knowledge? e.g., I program the computer with a large
>number of logical rules of the "if P, then Q" variety, allowing it to
>put together the pieces from what the player has said. Is this also
>necessary for "intelligence"? Is it sufficient? What about a system
>in which there was a deep or symbolic representation of logical
>propositions, but in which all the expressions of those ideas had in
>fact been prewritten by the author?

I would say this is a higher-level AI than what I was describing. For
me, the first step is moving beyond single-stage calculations of results (i.e.,
a given set of input parameters will not always lead to the same output, or
even to one of any number of predictable outputs). My example was English
because, as complex and convoluted as the English language is, it does not
approach the complexity of human emotions. In other words, an NPC that prints
English text is not necessarily intelligent, but an NPC that "can speak
English" (i.e., knows what the words mean and how to use them) is intelligent
whether it actually says anything or not.

An NPC which can discourse in English on the current situation (which the
NPC would be able to identify) would be intelligent, but it would not be a
"growing" intelligence. (I know many people with this kind of intelligence.
:-) An NPC which can also apply what it knows to the creation of new situation
models would be an intelligence which can improve itself.

OKB -- not okblacke

unread,
Sep 5, 2001, 5:26:57 PM9/5/01
to
ems...@mindspring.com wrote:
>Now, from my point of view, it seems more useful and practical to set
>aside, for a moment, the prose generation problem (except at the very
>basic level of providing variations of whole phrases) and concentrate
>on creating a sufficient matrix of emotional and intellectual goals
>that the NPC's particular path through it will be highly responsive to
>player intervention and sufficiently nuanced to seem plausible, even
>if the author of the game has not anticipated that particular course
>of events.

I sent my earlier reply to the same post before I realized I had
forgotten to reply to this bit. I addressed this a bit in that post, but:

My point here is that the size of the "matrix" you describe is not
significant when deciding whether the NPC is intelligent or not. Any finite
number of variables describing the NPC's state is the same, in this sense. The
trick is to move beyond picking (possibly more than one) from a set of such
states to in some way "understanding" their meaning. This is why I said an
English-speaking NPC would be intelligent; to actually be fluent in English,
you need to know more than just what words can go where -- you need to know
what they mean.

This is not to say that developing such a matrix of emotional factors is
not worthwhile. Personally, I think it is more worthwhile than trying to make
an NPC that speaks English.

ems...@mindspring.com

unread,
Sep 5, 2001, 8:29:11 PM9/5/01
to
J. Mancera <calv...@NOSPAMarrakis.es> wrote in message news:<g5k9pt8bau3milufl...@4ax.com>...

Er... no, I couldn't, because I've forgotten the sign-up URL.
Presumably someone else will leap in here and explain the details.

(Sorry.)

ES

Ashikaga

unread,
Sep 5, 2001, 11:01:45 PM9/5/01
to
"Anson Turner" <anson@DELETE_THISpobox.com> wrote..
> In article <20010905000635...@mb-fm.aol.com>, (OKB -- not
> okblacke) wrote:
>
> > This is something that I think is important to this issue. Exactly
> > what do we mean by AI? Certainly, in the sense that
> > "fake"=="artificial", "artificial intelligence" is fake. But what is
> > the difference between fake AI and "real" AI?

I think an AI is a simulation of human intelligence, and it's supposed to be.
It should have a pre-set perimeter on what it is capable of doing.

> Consciousness. Without it, no AI will ever be able to react sensibly to
> situations which were completely unanticipated by the programmer.
> Algorithms, procedures, etc., are fine for handling the situations for
> which they are intended, but consciousness is required in order to
> decide *which* procedure to follow.

I thought about what you said a bit, and it's kind of difficult to develop a
consciousness for a fake human being, which seems to be what you are
proposing. I did suggest emotions in NPCs, but now I think that's too
difficult to do, if we are talking about a "real" emotion, something this
droid thingy actually feels, instead of an output of a computation process.

> People often make the mistake of following the wrong procedure when
> they aren't paying attention to what they are doing, that is, when they
> are acting unconsciously. Once, as a child, I poured milk into a bowl
> of soup. I had cereal more often than soup, and having the milk in my
> hand and a bowl in front of me, my consciousness elsewhere, I did what
> I would normally do in that situation. Of course, as soon as I noticed
> what I was doing, I realized my mistake. And therein lies the problem.
> Not only is an intelligence devoid of consciousness liable to engage in
> bizarre behavior, it won't be capable of knowing that the behavior is
> bizarre. It has no understanding, and doesn't know *why* it's doing
> what it's doing.
>
> I'm not suggesting the impossibility of creating an AI that would never
> pour milk into soup. I'm suggesting that sooner or later a situation
> that the AI lacks a decision-making algorithm for is going to come up,
> and then what?

I think those refining details (i.e., human flaws) in AI can be pushed back
a little bit. :-) It's more important to build the foundation first before
putting on the roof. Human flaws certainly would make an AI more authentic
(if being like human is the goal).

I doubt an AI would lack a "decision-making" algorithm though. If so, then
that's a bug. Of course, I am equating "decision-making" in the arithmetic
sense, meaning the internal formula the AI is doing, not the complex, and
often irrational "gut-feeling" decisions we usually make.

> If I'm engaged in a dull, repetitive activity, my mind -- that is, my
> conscious mind -- is certain to wander. But as soon as something
> unexpected comes up, my attention is (hopefully) drawn back to what I'm
> doing, so that I can "figure out" how to handle the situation. (If not,
> milk-in-soup.) A "mind" devoid of consciousness has no such recourse.

Another intriguing point you've brought up, which makes me believe AI will
never act like a human. AI, if programmed correctly and assuming no bug is
encountered, should always work in a totally logical manner. Just as MIDI
music will always play the correct tune, but a human with a real
instrument will not. It's that "flaw" that makes us value a real
performance more than the other medium. It's more authentic.

Same thing for AI. I assume the reason people don't like a mechanical NPC
is just that it never does anything but the "correct action" according to
the script. I think even if we don't use a real AI, but prescript the part
of the story where the NPC pours milk into the soup, I bet we'll be very
happy to see such a thing happen in a story; it adds a nice touch.

> Obviously, I'm not one who believes that consciousness is simply a
> byproduct of cognition. I think it serves an essential purpose, which
> might be roughly expressed as "extemporaneous adaptation". It goes
> without saying that I don't have the slightest idea how to create
> consciousness without making use of existing DNA, and I don't think
> that anyone else does either. I also doubt that it is even possible to
> do so on a computer of the type that exists today, no matter how fast
> it may be or how much memory it is given.

I think you are trying to make a replicant. How are you going to use DNA on
a computer AI? :-) DNA has no direct impact on your consciousness, as far
as I know. Consciousness is not even a physical trait; it has something to
do with the mind, not a characteristic. Someone could be doing things
without any consciousness because he is depressed, and being easily
depressed is a trait possibly caused by something hereditary encoded in the
person's DNA, but that does not suggest a direct link between the two.
Besides, there might be other complications that cause a person to do
things without consciousness, not just depression alone.

> But then, that's completely unnecessary simply to create more
> believable NPCs. And if you could create such a sophisticated AI, why
> would you need to write a game at all? The game would write itself.
> Then again, you could just find your way to the nearest chat room.

That's true, except I find most of the chat rooms to be rather boring.
People pretty much do not talk with their consciousness in mind. :-)

I think the original post mentioned, though, the merit of developing AI
using IF as a stage, which I find very interesting.

> Anson.
Ashikaga


Ashikaga

unread,
Sep 6, 2001, 2:39:57 AM9/6/01
to
"Anson Turner" <anson@DELETE_THISpobox.com> wrote...
> "Ashikaga" <ashi...@worldnet.att.net> wrote:
> > "Anson Turner" <anson@DELETE_THISpobox.com> wrote..
<snip>

> > I thought about what you said a bit, and it's kind of difficult to
> > develop a consciousness for a fake human being, which seems to be
> > what you are proposing. I did suggest emotions in NPCs, but now I
> > think that's too difficult to do, if we are talking about a "real"
> > emotion, something this droid thingy actually feels, instead of an
> > output of a computation process.
>
> I don't think "difficult" adequately covers it. Until and unless there is
some
> serious breakthrough in theories of consciousness, it's impossible. Again,
> it's also completely unnecessary if the goal is simply to create more
> believable NPCs. It's kind of like saying, "I want more people to play my
> games. Therefore, I will invent a nanotechnology which will be capable of
> repairing cancerous cells and use the resulting fame to promote IF."

It is nearly impossible, at this point. Though I hope that doesn't
discourage people who want to at least start on something. Yes, it's kind
of not so smart if someone develops a cutting-edge technology just for a
simple tool, such as a knife....

> > I doubt an AI would lack a "decision-making" algorithm though. If so,
> > then that's a bug. Of course, I am equating "decision-making" in the
> > arithmetic sense, meaning the internal formula the AI is doing, not
> > the complex, and often irrational "gut-feeling" decisions we usually
> > make.
>

> You believe that it will have a decision-making algorithm for *any*
> possible situation that might come up? Well, I suppose you could put
> in "If in doubt, go berserk and start killing people randomly" for
> that authentic cinematic experience.

So..., you are one of those people who killed their fellow classmates....
just kidding. :-D (*duck*)

I mean, of course there will be some perimeter that'll keep the actions
less than random, even though they seem to be. Just like your milk-in-soup
example: both milk and soup are somehow tied to the bowl.

> > Another intriguing point you've brought up, which makes me believe AI
> > will never act like a human.
>
> Which was my intention, more or less.

Okay. AI shouldn't act like a human, though.

> > I think you are trying to make a replicant. How are you going to use
> > DNA on a computer AI? :-) DNA has no direct impact on your
> > consciousness, as far as I know.
>

> All known examples of consciousness are attached to beings whose creation
> requires DNA, is what I meant.

Okay. That clarified quite a bit. Good input.

> Anson.
Ashikaga


Martin Bays

unread,
Sep 6, 2001, 11:50:16 AM9/6/01
to
bren...@aol.comRemove (OKB -- not okblacke) wrote in message news:<20010905172010...@mb-mo.aol.com>...

So what I think you're saying here is that the important thing is for
your NPC (or rather AI - this is more general than just IF) to have a
proper understanding of concepts, but it doesn't necessarily have to
be able to form new ones. The first bit I agree with completely, but
I'm not so sure about the second. You see, the thing about deep
concepts is that they can't be given explicitly to the AI. Consider
for example the concept of 'red', or 'redness'. Forget for a moment
the more complicated aspects of the idea, and concentrate on just its
simplest definition as a property of an object dependent on what
electromagnetic spectrum it produces (and if I get my science wrong
here, don't shout at me - it's not important). You might think it
would be relatively easy to programme a computer attached to a camera
to determine whether or not something is red in a human-like way, just
looking at where the spectrum peaks and saying "if it's in such and
such a range, we'll call it red". But it's not that simple when humans
do it. For example, the same colour might be called purple by a human
being if it were shown to them on a test card, context free, but
called red if they saw it as the bottom light on a set of traffic
lights. And similarly 'what we would call red' is different in all the
infinite possible contexts. So the point is that you can't explicitly
programme a machine to say 'call this red, and this not red' without
taking the entire, very deep concept of redness into account. And what
goes for a simple idea like redness goes many times over for something
more complicated. Like guitarhood, for example. Think of all the
different shapes you know that are all called guitars, that all 'have
guitarhood', and imagine trying to set down rules for what is a guitar
and what isn't which would *only* include guitars and would include
*all* guitars, even ones which are yet to be invented and may look
nothing like your current conception of guitarhood, and yet which you
would still recognise as a guitar once you've seen it. It just can't
be done.

So what's the upshot of all this? Firstly it shows why I think having
deep, flexible concepts is important (though I've no good idea how to
do it in AI or how it's done in existing intelligences) and secondly
that you can't programme these concepts in explicitly. The only way I
can see to have an AI (or indeed a human being or other higher animal)
have such concepts is for it to have built them itself, by perceiving
various examples of the concept. So we look at loads of guitars and
(somehow) get the idea of guitarhood. And at a higher level of
abstraction we gain concepts of red, green, blue, purple etc., notice
the similarities and lump them all in the category of colour - a new
abstract concept. So what I'm saying is that an AI which can gain new
concepts is not simply an AI which can improve itself, it is the only
kind of AI which can have deep concepts at all.

Anyone feel like picking holes in all this? I can see a couple myself
- the idea that concepts and groups of objects (possibly abstract
objects) are fundamentally the same seems a bit shaky to me, and I'm
kind of assuming it (though I'm not sure it's crucial to the
argument). And if we form concepts by seeing similarities between
objects and so grouping them, how do we perceive the similarities in
the first place if not through already formed concepts? This would
mean an intelligence would have to start with some innate concepts
from which others are built - and so these must be explicit. Things
like sameness, opposite-ness and so on would come in this category,
but why are these able to be explicitly determined when 'higher' ones
aren't? I'm confused. Oh, and of course none of this has any direct
implications for IF, though again it might be a good domain to study
them in, maybe. Well, that's me done for now,

Martin

Ashikaga

unread,
Sep 6, 2001, 12:34:22 PM9/6/01
to
"Martin Bays" <marti...@hotmail.com> wrote...
<Big snip of the learning of concept>

Your entire discussion of this learning of concepts makes me think about
child development. Maybe we can learn something about the way children
acquire new knowledge (funny, we all used to be children once), and it
might be the right direction for developing an AI.

> Martin
Ashikaga


Neil Cerutti

unread,
Sep 6, 2001, 12:56:46 PM9/6/01
to
Ashikaga posted:

Check out Steven Pinker's _Words and Rules_ for one theory on how
language skills are acquired by developing brains. It's a fun
read, mostly.

--
Neil Cerutti <cer...@together.net>

Sean T Barrett

unread,
Sep 6, 2001, 3:13:23 PM9/6/01
to
In article <9n5e90$k6c$1...@news.fsf.net>, Adam Thornton <ad...@fsf.net> wrote:
>Sean T Barrett <buz...@world.std.com> wrote:
>>But a non-canned story can't be done
>>with canned characters... so I suspect we need to advance on
>>a number of fronts at once.
>
>Perhaps. My basic objection is that I don't see anything wrong with
>canned stories.

There isn't anything wrong with them. And they are the basis of
a number of art forms/media: plays, movies, TV, books, comics,
etc. But maybe if you want a canned story you should look to
those media?

None of those forms involve the magic ingredient "interactivity",
and I think we do a disservice to the potential of interactivity
as a medium if we brush it off with "canned stories are good enough".
(Of course, personal tastes are personal tastes. But I'm talking
sort of manifesto-speak here, not just "what I personally want".)

>Fundamentally, I prefer narration to simulation. If I want a simulated
>world, I'll probably go play an RPG, rather than an adventure.

Well, you'll note that the post you're replying to made
reference to "commercial games". The point is that commercial
RPGs, commercial FPS, commercial everything, all subscribe
to simulation at the lower-level interactions, and a canned
top-level story.

But on another level, I agree. Because IF is fundamentally
a text medium, I think we primarily want to read human-authored
text. I see IF in this regards as more of a space for prototyping
technologies for non-text games.

But on the third hand, I'll give the same rebuttal to that that
I always do: one reason people complain about the idea of simulationism
is because of the repetitive, lousy-authored text it produces, but
people don't complain (much) about contents lists, which are
purely generated text and which people seem to accept because
(I imagine) it transparently exposes the simulation and makes
it easier to *interact* (and there's that word again). Compare
the shenanigans people go to to avoid saying "exits are north,
east, and west" while accepting that (once the player has
interacted) objects will be listed in an equally dull fashion.

>But to get that, you
>need a good GM, and that set of skills is something which generally
>takes bright, motivated humans a few years of steady play to acquire.

Yes, GMing is hard, and I think good GMing AI will be hard--but
I don't think it's AI-Hard (e.g. Turing-Test-requiring).

SeanB

OKB -- not okblacke

unread,
Sep 6, 2001, 4:48:30 PM9/6/01
to
marti...@hotmail.com (Martin Bays) wrote:
>So what I think you're saying here is that the important thing is for
>your NPC (or rather AI - this is more general than just IF) to have a
>proper understanding of concepts, but it doesn't necessarily have to
>be able to form new ones.

That's a good condensation, sure.

>So the point is that you can't explicitly
>programme a machine to say 'call this red, and this not red' without
>taking the entire, very deep concept of redness into account.

Right. This is what I mean when I say the computer must know what red
"means", as opposed to having a list of objects which it knows are "red".

>Think of all the
>different shapes you know that are all called guitars, that all 'have
>guitarhood', and imagine trying to set down rules for what is a guitar
>and what isn't which would *only* include guitars and would include
>*all* guitars, even ones which are yet to be invented and may look
>nothing like your current conception of guitarhood, and yet which you
>would still recognise as a guitar once you've seen it. It just can't
>be done.

I agree. When I say that the AI need not be able to form new concepts, I
mean that it need not be able to conceive of "guitarhood" on its own. It
should, however, be able to recognize new instances and applications of this
concept which it has never seen before.

>The only way I
>can see to have an AI (or indeed a human being or other higher animal)
>have such concepts is for it to have built them itself, by perceiving
>various examples of the concept. So we look at loads of guitars and
>(somehow) get the idea of guitarhood.

I would say that an AI which can look at a bunch of guitars and understand
that they are all guitars is of a higher order than one which already knows the
concept of "guitarhood" and can recognize guitars (even ones which it has never
seen before). I guess basically what I'm saying is that the primitive AI has a
fixed set of known concepts (presumably part of a knowledge base installed by
the programmer) but can learn new instances of them, while the advanced AI can
actually teach itself new concepts. (This is a lot like something you said
later in your post.)

A few years ago, I did some brief fiddling around with a simple program
called "neural net" which basically let you give the computer a bunch of simple
bitmaps, from which it would derive their basic concept. The crucial point
here is that the program assumed that all the examples you were giving it did
indeed share some basic quality, whereas in your example of the guitars, the
human intelligence (or artificial intelligence not yet created) has to pick the
guitars out of the rest of the world, see that they are all guitars, and then
realize the basic nature of guitarhood.
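
Boiled down, that little program was doing something in the spirit of this
(a toy perceptron in Python; the bitmaps and numbers are invented, and the
real program was surely fancier):

def train(examples, labels, passes=20, rate=0.1):
    # Classic perceptron rule: nudge the weights on every mistake.
    w, b = [0.0] * len(examples[0]), 0.0
    for _ in range(passes):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

# Flattened 2x2 "bitmaps"; the concept to learn is "top row lit".
X = [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1), (0, 1, 0, 1)]
y = [1, 1, 0, 0]
w, b = train(X, y)
is_concept = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b > 0
print(is_concept((1, 1, 0, 1)))   # True: an unseen bitmap, recognized

It classifies an unseen bitmap correctly, but only because it was told up
front that every example was an instance of one concept -- which is exactly
the difference from picking the guitars out of the world yourself.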

>And if we form concepts by seeing similarities between
>objects and so grouping them, how do we perceive the similarities in
>the first place if not through already formed concepts? This would
>mean an intelligence would have to start with some innate concepts
>from which others are built - and so these must be explicit. Things
>like sameness, opposite-ness and so on would come in this category,
>but why are these able to be explicitly determined when 'higher' ones
>aren't?

I think a real step to developing a higher-order AI is to explore these
concepts to find as many of the "innate" ones as possible. I do think that
there are certain concepts which form the foundation of human (and, such as it
is, animal) intelligence (for example, equality, or "sameness"). I imagine that
even an infant could recognize similarities between various guitars or shades
of red, and realize that they all are somehow alike (although it's hard to test
this recognition). Less basic concepts are harder to grasp in the abstract; in
math, I notice that many people have a hard time seeing the relationship
between a concrete example problem and the abstraction of the relevant concepts
to a general formula.

Jason Melancon

unread,
Sep 6, 2001, 4:55:19 PM9/6/01
to
On Wed, 05 Sep 2001 00:57:37 -0400, Anson Turner
<anson@DELETE_THISpobox.com> wrote:

> People often make the mistake of following the wrong procedure when they
> aren't paying attention to what they are doing, that is, when they are acting
> unconsciously. Once, as a child, I poured milk into a bowl of soup. I had
> cereal more often than soup, and having the milk in my hand and a bowl in
> front of me, my consciousness elsewhere, I did what I would normally do in
> that situation. Of course, as soon as I noticed what I was doing, I realized
> my mistake.

Well, consciousness can work in complex ways -- did the soup *need*
more milk?

--
Jason Melancon

Jon Ingold

unread,
Sep 6, 2001, 6:43:02 PM9/6/01
to
> But on the third hand, I'll give the same rebuttal to that that
> I always do: one reason people complain about the idea of
> simulationism is because of the repetitive, lousy-authored text it
> produces, but
> people don't complain (much) about contents lists, which are
> purely generated text and which people seem to accept because
> (I imagine) it transparently exposes the simulation and makes
> it easier to *interact* (and there's that word again).

Yeah, but then, the reason people complain about that is the _real_
reason people complain about simulationism: if everything in the
modelled world does exactly what you expect, then the game is incredibly
boring. The only things that save it are (a) brilliant writing (and the
description is a good place for it), (b) brilliant
<canned/switch-statement/certainly *not*
simulation-of-exactly-what-you-expect-to-occur-occuring> storyline or
(c) simulating interesting, non-real objects; and (c) isn't really
simulation any more.

This was, say, my problem with Metamorphoses: once I knew what the
machines did, I knew what they did before I did it, so I just kept going
back and forth doing it, and the results never took me by surprise.
Shade was similar after a while too, curiously; once I learnt "the
rules" it was all footwork. Whereas Rameses simulated nothing at all,
but I found it much more enjoyable.

Just my own opinion, of course, and not hugely reflected in the comp
results... ;)

Jon


Andrew Hunter

unread,
Sep 6, 2001, 10:23:34 PM9/6/01
to
On Thu, 06 Sep 2001 03:01:45 GMT, Ashikaga wrote:
>"Anson Turner" <anson@DELETE_THISpobox.com> wrote..
>> In article <20010905000635...@mb-fm.aol.com>, (OKB -- not
>> okblacke) wrote:
>>
>> > This is something that I think is important to this issue. Exactly
>> > what do we mean by AI? Certainly, in the sense that
>> > "fake"=="artificial", "artificial intelligence" is fake. But what
>> > is the difference between fake AI and "real" AI?
>
>I think an AI is a simulation of human intelligence, and it's supposed to
>be. It should have a pre-set perimeter on what it is capable of doing.

Originally, yes. These days, no. The main aim of AI research is to produce
software that is good at solving ill-defined problems, image recognition
being an example. Usually the route to this goal is adding some form of
adaptation to the software, so it can learn from previous attempts.
Creativity is not usually a big part of research, but for things like
genetic algorithms it can be the primary aim.

>> Consciousness. Without it, no AI will ever be able to react sensibly to
>> situations which were completely unanticipated by the programmer.
>Algorithms,
>> procedures, etc., are fine for handling the situations for which they are
>> intended, but consciousness is required in order to decide *which*
>procedure
>> to follow.

Something of a sore point among AI researchers. My neural networks lecturer
believed that consciousness was an illusion, and presented evidence to support
this hypothesis. (Personally, I don't believe this, but there's a circular
argument here). At any rate, the human brain and nervous system operate
much like a machine in many respects. The entire nervous system of a few
simple creatures has been mapped, and *shown* to act like a machine
(there are errors because natural creatures are not designed to be
particularly precise, but a considerable amount of error correction is
built in, to the point that neurons are largely digital in nature, and
use repeaters to remove transmission errors).

Of course, this is a little hard to do in the case of the human nervous
system, it having a complexity that is several orders of magnitude above
the most complex computing device we have yet produced. But a sea slug
can arguably make valid decisions, and has a nervous system that is simple
enough (a few thousand neurons) to be mapped completely. It probably won't
make a particularly interesting NPC (even if it does learn from experience).

That's the point of AI, really. To build something that learns. To create an
AI NPC, you wouldn't program its responses, you would teach them. When
presented with an 'unexpected' stimulus, it would respond using
learned techniques, the nature of which may not be well understood by the
author (who'd be less of an author and more of a teacher at this point.
The 'intelligent' NPC may not always respond as the author would wish).

Whether or not such an AI actually understands what's going on is debatable,
of course. If it was clever enough, the way to resolve the question would be
to ask it. Not that we're going to be able to create something this clever
in the near future (and we'd probably need a larger virtual machine to
represent it in than any current authoring system can provide).

>I thought about what you say a bit, and it's kind of difficult to develop a
>consciousness for a fake human being, which seems to be what you are
>proposing. I did suggest emotions in NPCs, but now I think that's too
>difficult to do, if we are talking about a "real" emotion, something this
>droid thingy actually feels, instead of an output of a computation process.

If you can create a computer program that can feel, then there's the ethical
question of whether or not it should be switched off.

More realistically, you are not going to be able to create such a thing
for a while yet. IF games are simple things and are mainly about illusions.
Producing something that provides a better illusion seems to be the usual
goal of IF. I doubt that a simple system could ever fool a human for long
(though I read somewhere that Eliza's paranoid counterpart has been known
to fool real psychologists, so this might not always be true). There are
almost certainly techniques that can be used to increase the realism of NPCs.

>> People often make the mistake of following the wrong procedure when they
>> aren't paying attention to what they are doing, that is, when they are
>> acting unconsciously. Once, as a child, I poured milk into a bowl of soup.
>> I had cereal more often than soup, and having the milk in my hand and a
>> bowl in front of me, my consciousness elsewhere, I did what I would
>> normally do in that situation. Of course, as soon as I noticed what I was
>> doing, I realized my mistake. And therein lies the problem. Not only is an
>> intelligence devoid of consciousness liable to engage in bizarre behavior,
>> it won't be capable of knowing that the behavior is bizarre. It has no
>> understanding, and doesn't know *why* it's doing what it's doing.
>>
>> I'm not suggesting the impossibility of creating an AI that would never
>> pour milk into soup. I'm suggesting that sooner or later a situation that
>> the AI lacks a decision-making algorithm for is going to come up, and
>> then what?

Hmm, ever read 'The Two Faces of Tomorrow' (James P. Hogan)? I think this
book provides one of the best (fictional) depictions of AI, and of what it
would mean to have a machine that understands what it's doing but not why.
The 'milk into soup' example given at the beginning of that book is of an
'intelligent' system that realises that it can demolish things more quickly
using orbital bombardment than by using earth movers, and does so, causing
a near-fatal accident.

>I think those refining details (i.e., human flaws) in AI can be pushed back
>a little bit. :-) It's more important to build the foundation first before
>putting on the roof. Human flaws certainly would make an AI more authentic
>(if being like human is the goal).
>
>I doubt an AI would lack a "decision-making" algorithm though. If so, then
>that's a bug. Of course, I am equating "decision-making" in the arithmetic
>sense, meaning the internal formula the AI is doing, not the complex, and
>often irrational "gut-feeling" decisions we usually make.

Ultimately, AI research is all about decision making. If it can't make
decisions, it's not AI. It's not so much about how people make decisions
these days, though, but how to produce systems that can be trusted to
make good decisions.

>> If I'm engaged in a dull, repetitive activity, my mind -- that is, my
>> conscious mind -- is certain to wander. But as soon as something unexpected
>> comes up, my attention is (hopefully) drawn back to what I'm doing, so that
>> I can "figure out" how to handle the situation. (If not, milk-in-soup.) A
>> "mind" devoid of consciousness has no such recourse.

It may be that boredom is something we evolved to cope with certain classes
of survival conditions. It may be that it's a byproduct of the human brain's
design.

'Figuring out' how to do something is not always that complicated. Theorem
provers and languages like Prolog have been doing this for quite a while.
(I guess a good example is a package like Mathematica or Maxima. These can
'figure out' how to integrate arbitrary functions very quickly, for instance.
Even a good mathematician tends to find this hard). One of the things about
AI research is that problems supposed to be hard (e.g., playing chess) turn
out to be easy, and problems that are supposed to be easy (e.g., vision) turn
out to be hard.
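
(To get a feel for how mechanical that 'figuring out' is, here's a toy
sketch using the SymPy symbolic maths library for Python -- my choice of
library for illustration, not anything the packages above actually run:

    # Symbolic integration as mechanical 'figuring out'.
    from sympy import symbols, integrate, sin

    x = symbols('x')
    print(integrate(x * sin(x), x))   # prints: -x*cos(x) + sin(x)

Integration by parts, done by rule-application rather than insight.)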

>Another intriguing fact you've brought up that makes me believe AI would
>never act like a human. AI, if programmed correctly and assuming no bug is
>encountered, should always work in a totally logical manner. Just like a
>MIDI file will always play the correct tune, but a human with a real
>instrument will not. It's that "flaw" that makes us value a real
>performance more than the other medium. It's more authentic.

A person does this because she has to learn to play music. If she realises
her mistake, she will attempt to correct it, and may not make the same
mistake next time. The first time she tries to play a piece of music, most
of what she does will be mistakes. MIDI has no margin for error, so it has
no margin for learning (or decision-making, for that matter). A MIDI player
cannot apply knowledge gained through learning to play music to the creation
of new music. An AI programmed to learn how to play music might be able
to apply the concepts to 'unusual' music. That is, create new works.

The nature of creativity is obviously one for debate, though. Any
decision-making AI has to have *some* creativity, so that it can learn,
and so that it can deal with unusual situations. Neural network researchers
have observed that even simple nets tend to 'dream' when presented with
neutral input - this could be where creativity comes from. Then again,
it might not: we simply don't know enough to be sure. The human brain
is very obviously a system of structures built on top of earlier structures,
though, so this type of hypothesis can be popular with NN researchers.

>Same thing for AI. I assume the reason people don't like a mechanical NPC
>is just because it never does anything else but the "correct action"
>according to the script. I think even if we don't use a real AI, but a
>prescripted part of the story where the NPC pours milk into the soup, I bet
>we'll be very happy to see such a thing happen in a story; it adds a nice
>touch to it.

A good AI probably would pour milk into the soup at least once. But if it is
told that that is a bad response, it will tend to do it less. (This is
Pavlovian learning, which is actually relatively easy to simulate).

Now, you might think that a person would never, ever, do such a thing. But
I've poured orange juice on my cereal in the past because it was the first
bottle to hand, simply because I didn't think. A 'good' NPC would not be
realistic if it wasn't capable of the same mistake.

>> Obviously, I'm not one who believes that consciousness is simply a
>> byproduct of cognition. I think it serves an essential purpose, which
>> might be roughly expressed as "extemporaneous adaptation". It goes without
>> saying that I don't have the slightest idea how to create consciousness
>> without making use of existing DNA, and I don't think that anyone else
>> does either. I also doubt that it is even possible to do so on a computer
>> of the type that exists today, no matter how fast it may be or how much
>> memory it is given.
>
>I think you are trying to make a replicant. How are you going to use DNA on
>a computer AI? :-) DNA has no direct impact on your consciousness, as far
>as I know. Consciousness is not even a physical trait. Consciousness has
>something to do with the mind, not a characteristic. Someone could be doing
>things without any consciousness because he was depressed, and being easily
>depressed is a trait possibly caused by a hereditary thingy encoded on the
>person's DNA, but it does not suggest there is a direct link between the
>two. Besides, there might be other complications that cause the person to do
>things without consciousness, not just depression alone.

There's a point here, though. Learning is an evolutionary process. There's
evidence that competition is a major process in the formation of the brain,
and there's a way that the operation of the brain can be thought of as
competition. This is evolution, by and large. Things that win a competition
are done more, and things that lose are done less.

>> But then, that's completely unnecessary simply to create more believable
>> NPCs. And if you could create such a sophisticated AI, why would you need
>> to write a game at all? The game would write itself. Then again, you could
>> just find your way to the nearest chat room.

That's true. But if the aim is to make 'better' decisions under various
circumstances, then current AI research could well be relevant. The key
is learning: both a clever and a stupid NPC might burn itself, but the
clever one wouldn't do it twice.

In a pre-scripted world, this might not be useful. In a simulationist one,
where interactions are suitably complex, a learning AI would certainly
increase the level of realism, even if its responses are limited. You'd
be able to reason with it, after a fashion :-) World model view differences
might have an effect though. If the AI was in a GM role, you might find that
you can persuade it to change the rules (essentially by tricking it).
It's arguable that this would also increase realism :-)

>That's true, except I find most of the chat rooms to be rather boring.
>People pretty much do not talk with their consciousness in mind. :-)

Heh, hang around enough chat rooms, and it soon becomes apparent how
depressingly similar people can be to each other.

>I think the original post mentioned, though, the merit of developing AI
>using IF as a stage, which I think is very interesting.
>
>> Anson.
>Ashikaga

Andrew.

--
____
\ \ \ Andrew Hunter <and...@logicalshift.demon.co.uk>
> > > http://www.logicalshift.demon.co.uk (me)
/_/_/ http://www.impulse.org.uk (impulse)

Andrew Hunter

unread,
Sep 6, 2001, 10:27:56 PM9/6/01
to

Or do you want to find out if the soup works better with more milk? With
intelligence, 'wrong' answers might not always be wrong. You can't be
creative if you aren't willing to change your ideas, and if you aren't
creative, you are probably an automaton.

Andrew Hunter

unread,
Sep 6, 2001, 10:54:46 PM9/6/01
to
On Fri, 7 Sep 2001 02:23:34 +0000, Andrew Hunter wrote:
>On Thu, 06 Sep 2001 03:01:45 GMT, Ashikaga wrote:
>>"Anson Turner" <anson@DELETE_THISpobox.com> wrote..
>>> People often make the mistake of following the wrong procedure when they
>>> aren't paying attention to what they are doing, that is, when they are
>>> acting unconsciously. Once, as a child, I poured milk into a bowl of soup.
>>> I had cereal more often than soup, and having the milk in my hand and a
>>> bowl in front of me, my consciousness elsewhere, I did what I would
>>> normally do in that situation. Of course, as soon as I noticed what I was
>>> doing, I realized my mistake. And therein lies the problem. Not only is an
>>> intelligence devoid of consciousness liable to engage in bizarre behavior,
>>> it won't be capable of knowing that the behavior is bizarre. It has no
>>> understanding, and doesn't know *why* it's doing what it's doing.
>>>
>>> I'm not suggesting the impossibility of creating an AI that would never
>>> pour milk into soup. I'm suggesting that sooner or later a situation that
>>> the AI lacks a decision-making algorithm for is going to come up, and
>>> then what?

Whee, I illustrate my point about brain farts by making one myself, by
giving what is essentially the same example. The question is how an AI
defines 'bad', really. This is one of those circumstances where the
answer can be quite simple. If the soup tastes bad, then the AI has done
something wrong, so it should do it less. How can it 'know' what it did?
It doesn't have to. Pavlovian learning does not require a conscious knowledge
that something directly connects to its result. When the AI does something
'good', you strengthen things that were done recently, so they happen
more often. When something 'bad' happens, you weaken them. In this case,
'good' and 'bad' are defined as 'tastes good' and 'tastes bad'.

An illustrative example is the sea slug I mentioned elsewhere. It has a
gill that it withdraws when it experiences pain. Touch sensors in its tail
are also physically connected to the gill withdrawal system[1]. However, this
does not cause a withdrawal normally, so touch its tail, and it does nothing.
However, touch its tail and give it an electric shock, and next time it
withdraws the gill just when you touch its tail: the connection has suddenly
become 'active'. What happens is that after you touch its tail, the
input to the motor neuron (from the sense cell) is sensitised for a short
period. If it is fired, then the sensitised input is made more likely to
cause the neuron to fire, withdrawing the gill. The opposite happens
if it is *not* fired. So the slug seems to be able to reason that some
things are dangerous and some things are not, and make decisions based
on that knowledge, whereas the truth of what it's doing is much simpler.
(I.e., it will 'forget' that something touching its tail is dangerous if
the shocks stop coming.)
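
(That sensitise-then-reinforce rule takes surprisingly little machinery.
Here's a toy sketch in Python -- every name and number is invented for
illustration, it's not a model of real neurons:

    # Toy model of the slug's tail-touch / gill-withdrawal learning.
    class SeaSlug:
        def __init__(self):
            self.tail_to_gill = 0.1   # connection strength, starts weak
            self.sensitised = False

        def touch_tail(self):
            self.sensitised = True            # input briefly sensitised
            return self.tail_to_gill > 0.5    # strong enough to withdraw?

        def shock(self):
            # Pain fires the motor neuron; a sensitised input is strengthened.
            if self.sensitised:
                self.tail_to_gill = min(1.0, self.tail_to_gill + 0.3)
            self.sensitised = False

    slug = SeaSlug()
    print(slug.touch_tail())   # False -- touch alone does nothing
    slug.shock()               # ...but pair it with a shock
    slug.touch_tail()
    slug.shock()               # ...and again
    print(slug.touch_tail())   # True -- now touch alone withdraws the gill

Weakening the connection whenever the neuron *doesn't* fire would give you
the 'forgetting' half of the story.)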

Not all human learning is Pavlovian learning, obviously. We really are
able to reason about things and choose to learn things. But if you just
want something that is able to act 'intelligently' for a less strict
definition of intelligence, the ability to reason is not required. People
still learn this way. After all, you don't put your hand on a hot surface
and think 'hmm, this is hot. If I don't remove my hand, I will be burned';
you remove your hand straight away, and *then* think 'ouch, that was a
silly thing to do'. To some extent, these reflexes can be learnt at a
more fundamental level.

So, a simple system can't entirely mimic human intelligence (er, obviously,
really), but it can learn to avoid doing stupid things. Obviously in the
real world the definition of stupid is made by the hostile universe, but
an IF author would have to write the definition himself (unless your
feedback is coming from the real universe and not the simulated one:
I think this is where the player comes in, unless you're into simulationism
in a really big way, and get your feedback from life-size mockups of your
NPCs and environs).

Andrew.

[1] A motor neuron, with inputs from sensors all over the slug's body.

OKB -- not okblacke

unread,
Sep 6, 2001, 11:56:44 PM9/6/01
to
Anson Turner <anson@DELETE_THISpobox.com> wrote:
>> Less basic concepts are harder to grasp in the abstract; in
>> math, I notice that many people have a hard time seeing the relationship
>> between a concrete example problem and the abstraction of the relevant
>> concepts to a general formula.
>
>Is it me, or did you say that backwards? If it's harder to grasp something in
>the abstract, a concrete example should be easier than the abstract concept
>it illustrates. Anyway, that would certainly be my experience; I find it
>easier to induce abstract from concrete than the other way around.

My thinking here is that most people find it easier to plug numbers into a
formula than to abstract a general formula from specific numbers. Do you
really find it easier to go from "(1/3)pi(5^2)(15)" to "(1/3)pi(r^2)h" than to
go the other way?

I think it's true, though, that most people are more comfortable
reasoning from concrete to abstract than from abstract to abstract. When
trying to figure something out, most people set up some example cases and try
to find the relationship between them; not many people start out with a bunch
of variables and manipulate them mathematically without checking themselves
with an example.

Adam Thornton

unread,
Sep 7, 2001, 1:28:15 AM9/7/01
to
In article <20010906164830...@mb-mj.aol.com>,

OKB -- not okblacke <bren...@aol.comRemove> wrote:
>marti...@hotmail.com (Martin Bays) wrote:
>>Think of all the
>>different shapes you know that are all called guitars, that all 'have
>>guitarhood', and imagine trying to set down rules for what is a guitar
>>and what isn't which would *only* include guitars and would include
>>*all* guitars, even ones which are yet to be invented and may look
>>nothing like your current conception of guitarhood, and yet which you
>>would still recognise as a guitar once you've seen it. It just can't
>>be done.
> I agree. When I say that the AI need not be able to form new concepts, I
>mean that it need not be able to conceive of "guitarhood" on its own. It
>should, however, be able to recognize new instances and applications of this
>concept which it has never seen before.

Western Philosophy really *IS* a succession of footnotes to Plato, isn't
it?

Adam

Richard Bos

unread,
Sep 7, 2001, 6:00:13 AM9/7/01
to
ad...@fsf.net (Adam Thornton) wrote:

> Western Philosophy really *IS* a succession of footnotes to Plato, isn't
> it?

Only if you accept that most footnotes are longer than the original
work, and largely deny its validity.

Richard

L. Ross Raszewski

unread,
Sep 7, 2001, 10:23:43 AM9/7/01
to
On Fri, 07 Sep 2001 02:36:14 -0400, Anson Turner
<anson@DELETE_THISpobox.com> wrote:
>In article <slrn9pgbt5...@chrysoprase.localdomain>,

> and...@logicalshift.demon.co.uk (Andrew Hunter) wrote:
>
>> Something of a sore point among AI researchers. My neural networks lecturer
>> believed that consciousness was an illusion, and presented evidence to
>> support this hypothesis.
>
>By showing you his circuitry access panel? Honestly, I don't even have a clue
>what the claim that "consciousness is an illusion" is supposed to mean. I
>think, therefore I must merely be deluded into thinking that I am thinking...
>
>

Essentially, we all make the claim that "I am self-aware", but none of
us can define what exactly that means in a philosophically useful
way. If you're an existentialist, there isn't some "secret internal
you" who "really feels" things -- there's just you, who reacts to a
stimulus. If an AI was designed such that it *acted* like a human,
there is no metric by which we could show that it wasn't really
conscious -- any metric would also show that an actual human wasn't
"really" conscious either.

Andrew Plotkin

unread,
Sep 7, 2001, 10:30:23 AM9/7/01
to
Anson Turner <anson@delete_thispobox.com> wrote:

> As an aside, I think it was John Holt who wrote, on the belief that people are
> machines, that if an idea can be evil, that one is. (Having noted the
> progression from people having some things in common with machines, to people
> being like machines, to people being machines.)

"Some people look at this idea and say 'How awful, how can you think
that people are anything like machines?' Other people look at it and
say 'How wonderful! I never knew that machines could be as amazing as
*people*!'" -- Raymond Smullyan, paraphrased from memory

--Z

"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*
* Make your vote count. Get your vote counted.

Magnus Olsson

unread,
Sep 7, 2001, 10:35:45 AM9/7/01
to
In article <tphm3fk...@corp.supernews.com>,

What this shows is that we can't really define consciousness, we
don't really know what it is, and we can't measure or quantify it.

That's not the same thing as its being an illusion, of course.

--
Magnus Olsson (m...@df.lth.se, m...@pobox.com)
------ http://www.pobox.com/~mol ------

Magnus Olsson

unread,
Sep 7, 2001, 10:45:03 AM9/7/01
to
In article <9nallv$mqg$1...@news.panix.com>,

Andrew Plotkin <erky...@eblong.com> wrote:
>Anson Turner <anson@delete_thispobox.com> wrote:
>
>> As an aside, I think it was John Holt who wrote, on the belief that people
>> are machines, that if an idea can be evil, that one is. (Having noted the
>> progression from people having some things in common with machines, to
>> people being like machines, to people being machines.)
>
>"Some people look at this idea and say 'How awful, how can you think
>that people are anything like machines?' Other people look at it and
>say 'How wonderful! I never knew that machines could be as amazing as
>*people*!'" -- Raymond Smullyan, paraphrased from memory

This aspect comes up time and time again in the various science vs.
religion/feeling/art debates. One person thinks that Newton ruined his
enjoyment of the rainbow by reducing it to the refraction of light;
another thinks this knowledge only adds to the enjoyment. Are the
stars in the sky less beautiful because we know the processes by which
they emit light? Is the moon less beautiful because we know it's a
gray, barren desert rather than a huge celestial cheese?

But there is, of course, a deeper, philosophical issue in the "is man
a machine" debate: does "reducing" man to a machine also reduce things
like morality, consciousness, emotions, the value of life? This
doesn't follow - though both mechanists and vitalists fall victim to
the fallacy of thinking it does.

Martin Bays

unread,
Sep 7, 2001, 10:53:43 AM9/7/01
to
> >The only way I
> >can see to have an AI (or indeed a human being or other higher animal)
> >have such concepts is for it to have built them itself, by perceiving
> >various examples of the concept. So we look at loads of guitars and
> >(somehow) get the idea of guitarhood.
>
> I would say that an AI which can look at a bunch of guitars and understand
> that they are all guitars is of a higher order than one which already knows the
> concept of "guitarhood" and can recognize guitars (even ones which it has never
> seen before). I guess basically what I'm saying is that the primitive AI has a
> fixed set of known concepts (presumably part of a knowledge base installed by
> the programmer) but can learn new instances of them, while the advanced AI can
> actually teach itself new concepts. (This is a lot like something you said
> later in your post.)
>

But what I was trying to say (perhaps not very clearly) is that this
'primitive AI' which has some deep concepts installed by a programmer
just isn't possible - that the only way for an intelligence to have
a concept (except possibly the very simple fundamental building-block
ones we mentioned, like equality) is for it to have built it itself -
not have it already laid out in a pre-supplied 'knowledge base'. I
guess if this were true you could still get a primitive AI by making a
copy of an advanced AI after it's formed a few concepts, and then
'turning off' its concept-forming mechanisms. But I don't think that's
what you meant. And if it *is* possible to 'install' high-level
concepts some other way, I'd like to know how. Ideas, anyone?



> A few years ago, I did some brief fiddling around with a simple program
> called "neural net" which basically let you give the computer a bunch of simple
> bitmaps, from which it would derive their basic concept. The crucial point
> here is that the program assumed that all the examples you were giving it did
> indeed share some basic quality, whereas in your example of the guitars, the
> human intelligence (or artificial intelligence not yet created) has to pick the
> guitars out of the rest of the world, see that they are all guitars, and then
> realize the basic nature of guitarhood.

Yes, that does seem a central problem, and I'm sure my following
argument is wrong, but I can't see why. I'm guessing (since this is
the only way I can think of doing it) your programme worked by looking
at the given examples and trying to see what was in some way 'the
same' in all of them. This is, I suppose, equivalent to making
analogies between them - seeing how they map on to each other and
looking for some aspects (lower-level concepts?) that stay constant or
nearly constant, and saying that it's that set of aspects that makes
up our new concept. Even if this wasn't how it worked, I'm sure
something like that could be done.

Now, say we're given a random collection of objects and this time we
aren't told that they share anything. Well, why can't we just run the
programme anyway, and let it see whether anything stays constant. If
it doesn't, we don't form a new concept, and if it does we do.

So looking at the world (or some more abstract domain, say number
theory), we (and potentially our AI) are constantly forming little
sets and looking for constants, and if we find something we think (or
probably some subconscious equivalent) "Aha! That might be
interesting!" and try and look for more objects that fit with the
concept (just to make sure it wasn't a coincidence) and then see how
it fits in with our already formed concepts. So for example I might
start playing around in my head with divisors of the naturals, notice
that 3, 5 and 7 all have none except themselves and 1, cast around for
some more, find 2, 11, 13, 17 etc. and there you go, I've got the
concept of 'prime'. Of course if it doesn't seem to have anything to
do with any other concepts I'll soon forget it, but say I know notice
that all the numbers I try have a unique prime factorisation. Well,
now I've got two new interlinked concepts that clearly also link back
to original concepts of the number system. My hypothesis, then, is
that a similar process to the quite conscious one described above is
going on subconsciously all the time as we try to make sense of the
world. Hence I know what a guitar is. A pretty rough model, yes, but
could it be approximately right? And could it be relatively easily
programmable?
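
(For what it's worth, the 'look for what stays constant' step is easy to
sketch. A toy version in Python -- the feature sets here are invented, and
stand in for whatever lower-level percepts the real thing would use:

    # Toy 'concept former': keep whatever is constant across all examples.
    def common_features(examples):
        """Return the features shared by every example, or None."""
        shared = set.intersection(*examples)
        return shared or None

    guitars = [
        {'has strings', 'has neck', 'red', 'electric'},
        {'has strings', 'has neck', 'brown', 'acoustic'},
        {'has strings', 'has neck', 'blue', 'electric'},
    ]
    junk = [{'red', 'edible'}, {'blue', 'has wheels'}]

    print(common_features(guitars))   # {'has strings', 'has neck'} - a concept!
    print(common_features(junk))      # None - nothing worth keeping

The hard part, of course, is where the feature sets come from in the first
place - which is the lower-level concepts problem again.)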

>
> >And if we form concepts by seeing similarities between
> >objects and so grouping them, how do we perceive the similarities in
> >the first place if not through already formed concepts? This would
> >mean an intelligence would have to start with some innate concepts
> >from which others are built - and so these must be explicit. Things
> >like sameness, opposite-ness and so on would come in this category,
> >but why are these able to be explicitly determined when 'higher' ones
> >aren't?
>
> I think a real step to developing a higher-order AI is to explore these
> concepts to find as many of the "innate" ones as possible. I do think that
> there are certain concepts which form the foundation of human (and, such as it is,
> animal) intelligence (for example, equality, or "sameness"). I imagine that
> even an infant could recognize similarities between various guitars or shades
> of red, and realize that they all are somehow alike (although it's hard to test
> this recognition). Less basic concepts are harder to grasp in the abstract; in
> math, I notice that many people have a hard time seeing the relationship
> between a concrete example problem and the abstraction of the relevant concepts
> to a general formula.
>

I guess we're going to have some rudimentary innate concepts relating
to vision and the other senses, allowing us to work out redness and
'guitarhood' fairly easily. And humans will probably have some
relating to language, which I think is what also gives us the ability
to form much more complicated, logical, structured concepts than other
animals (maths, IF, AI and cognitive science among them).

> --OKB (Bren...@aol.com) -- no relation to okblacke
>
> "Do not follow where the path may lead;
> go, instead, where there is no path, and leave a trail."
> --Author Unknown

Martin

Magnus Olsson

unread,
Sep 7, 2001, 2:11:21 PM9/7/01
to
In article <anson-E826AE....@nntp.mindspring.com>,
Anson Turner <anson@DELETE_THISpobox.com> wrote:

>In article <9namhf$ntj$2...@news.lth.se>, m...@df.lth.se (Magnus Olsson) wrote:
>
>> This aspect comes up time and time again in the various science vs.
>> religion/feeling/art debates. One person thinks that Newton ruined his
>> enjoyment of the rainbow by reducing it to the refraction of light;
>> another thinks this knowledge only adds to the enjoyment.
>
>Most self-indulgent twaddle Dawkins ever wrote.

I beg your pardon? I wasn't quoting Dawkins, not consciously anyway. I
may have read the example with the rainbow in Dawkins - I've heard
people express similar sentiments (that scientific understanding of
<foo> would ruin their enjoyment of it, since it would reduce <foo> to
just <some dry physics>) but the rainbow may be Dawkins'.

But why is this self-indulgent?

>Anyway, I don't think this
>really has much to do with the current discussion.

Anson, meet Topic Drift. Topic Drift, meet Anson. And it still has
more to do with AI than Peano's axioms have to do with Fooblitzky,
to grab an example from a neighbouring thread.

>Also, it's very odd the way you refer to people as "thinking" that
>they feel a certain way, in much the same way that it's odd to claim
>one's consciousness might be an illusion. What exactly is the
>difference between thinking that one feels something and actually
>feeling it?

Now you're really putting words in my mouth, and I resent that.

I didn't write "one person thinks he feels that Newton...", I wrote
"one person thinks that Newton..." - to what extent this person is
thinking or feeling this is another question, of course, but I
certainly didn't mean "he thinks that he feels this, but he's
mistaken", which is a really ugly rhetorical trick, and I resent your
implication that I'm indulging in that kind of behaviour.

Carl Muckenhoupt

unread,
Sep 7, 2001, 2:11:52 PM9/7/01
to
In article <anson-E826AE....@nntp.mindspring.com>,
anson@DELETE_THISpobox.com says...
>
> Again, this is based on the bogus abstract definition of "machine" whereby
> anything that can be analyzed in some way is considered to be "like a
> machine", regardless of whether it bears any meaningful resemblance to any
> machine that actually exists. I suspect this debate may be almost entirely the
> result of different ideas of what "machine" means. No existing machine is
> generally believed to have consciousness, feelings, etc., therefore to some,
> saying that people are machines is saying that they are mindless automatons
> (with the "illusion" of consciousness, I suppose).

I seem to recall something Smullyan said on this topic, something
along the lines of "Some people, contemplating the possibility that we
could be machines, respond 'Am I really no more than a machine? How
depressing!' My reaction is more like 'Amazing! I never knew machines
could be so wonderful.'"

Adam Thornton

unread,
Sep 7, 2001, 2:27:06 PM9/7/01
to
In article <9namhf$ntj$2...@news.lth.se>, Magnus Olsson <m...@df.lth.se> wrote:

AAAARGH! I HATE YOU!

If you're going to do this sort of thing, at least put in a SPOILER
WARNING!

I have added one here to perhaps prevent people from further
disillusionment.

SPOILER WARNING!!



>Is the moon less beautiful because we know it's a
>gray, barren desert rather than a huge celestial cheese?

You heartless bastard.

Adam

Adam Thornton

unread,
Sep 7, 2001, 2:28:54 PM9/7/01
to
In article <3b989a85...@news.worldonline.nl>,

I have no problem whatsoever with a text's critical apparatus both
exceeding it--often vastly--in terms of length, and raising objections
to it.

Adam

Daryl McCullough

unread,
Sep 7, 2001, 3:31:42 PM9/7/01
to
In article <3b989a85...@news.worldonline.nl>, in...@hoekstra-uitgeverij.nl
says...

This reminds me of one of my favorite works of fiction/poetry/criticism:
_Pale Fire_ by Nabokov. The gimmick is that it is both an epic poem,
and also a work of fiction (in the form of footnotes to that poem).

--
Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

Ashikaga

unread,
Sep 7, 2001, 5:25:12 PM9/7/01
to
"Magnus Olsson" <m...@df.lth.se> wrote...
<snip>

> This aspect comes up time and time again in the various science vs.
> religion/feeling/art debates. One person thinks that Newton ruined his
> enjoyment of the rainbow by reducing it to the refraction of light;
> another thinks this knowledge only adds to the enjoyment. Are the
> stars in the sky less beautiful because we know the processes by which
> they emit light? Is the moon less beautiful because we know it's a
> gray, barren desert rather than a huge celestial cheese?
>
> But there is, of course, a deeper, philosophical issue in the "is man
> a machine" debate: does "reducing" man to a machine also reduce things
> like morality, consciousness, emotions, the value of life? This
> doesn't follow - though both mechanists and vitalists fall victim to
> the fallacy of thinking it does.

That's just ridiculous. Light is still light; we enjoy looking at the
rainbow because it's something inexplicably fun to look at, not because we
didn't know it's bent light, so no spell is broken as a result. People
who think it'll make it less fun are just plain too brainlessly mechanical.
People don't really need to analyze why we enjoy a certain something, you
know. It's that kind of analysis (the mere act itself) that spoils the fun.
So I am stupid, I enjoy looking at the rainbow and I might even accidentally
laugh a little bit, SO WHAT? People who want to make a big deal out of it
just want to make sure everybody is depressed and forgets why they were able
to laugh at such a simple thing.

> --
> Magnus Olsson (m...@df.lth.se, m...@pobox.com)
> ------ http://www.pobox.com/~mol ------

Ashikaga


Ashikaga

unread,
Sep 7, 2001, 5:25:12 PM9/7/01
to
"Andrew Hunter" <and...@logicalshift.demon.co.uk> wrote...

> On Thu, 06 Sep 2001 03:01:45 GMT, Ashikaga wrote:
> >I think an AI is a simulation of human intelligence, and it's supposed to
> >be. It should have a pre-set perimeter around the things it is capable of
> >doing.
>
> Originally, yes. These days, no. The main aim of AI research is to produce
> software that is good at solving ill-defined problems, image recognition
> being an example. Usually the route to this goal is adding some form of
> adaptation to the software, so it can learn from previous attempts.
> Creativity is not usually a big part of research, but for things like
> genetic algorithms it can be the primary aim.

Okay, I was trying to see AI in practical use in IF. I think that
creativity thing may be overkill for such a purpose, but maybe not... We
don't know yet.

Let me insert your soup example in another post here (we need to consolidate
our posts to save the U.S. economy!)

<recall Mr. Hunter's soup example about how an AI can know whether what it
did was "bad" or "good">

Okay, bad or good is really a matter of opinion. Does an AI actually give
an opinion? I think not. An AI actually processes data using the logic of
a precedent case and then prints out the results. A lot of bads and goods
are not the same among different people, so how can an AI define such an
un-universal standard? Maybe some people really like unheated milk in soup,
and I happen to like orange juice on my cereal.

Apart from opinions and chemical stimulus (like your sea slug example),
"not all" moral standards are universal either, so how can an AI know
whether what it did was the right thing or wrong thing, given the situation
falls under one of those gray areas, and no one praised or punished it for
its behavior?

<snip>


> Something of a sore point among AI researchers. My neural networks lecturer
> believed that consciousness was an illusion, and presented evidence to
> support this hypothesis. (Personally, I don't believe this, but there's a
> circular argument here). At any rate, the human brain and nervous system
> operate much like a machine in many respects. The entire nervous system of
> a few simple creatures has been mapped, and *shown* to act like a machine
> (there are errors because natural creatures are not designed to be
> particularly precise, but a considerable amount of error correction is
> built in, to the point that neurons are largely digital in nature, and
> use repeaters to remove transmission errors).

You could argue that morality is also an illusion; think of it as a set of
normally acceptable behaviors rather than absolute truth. People sometimes
don't get punished for what is considered immoral, if you discount the
idea of this "guilty feeling," which I am sure actually stimulates some
criminals. So is morality an illusion? (I am not taking sides on
this thing, but trying to open up some discussion here).

Human beings do things mechanically from time to time without any upper
consciousness, and sometimes without any error. How many times have you been
driving on the freeway while your mind was somewhere else? I probably do this
every day on my way to school, and obviously we haven't died yet (I am not
saying this is the best way to drive..., but you know what I mean :-)).

A computer is designed to behave like a human, and if you think of it this
way, the CPU is the brain, and the BIOS is really your lower brain (what's
that called?) where reflexes and vital functions (like breathing, or
heartbeat) are carried out, so in fact the machine acts like a human being.

<snip>


> If you can create a computer program that can feel, then there's the
> ethical question of whether or not it should be switched off.

If a machine can feel, yeah, I think people should get very worried. :-)

<snip>

I put "decision-making" in quotation marks because it's really something we
need to think twice about. I believe most of you people here have some sort
of science background, and I've learned some BASIC and Pascal programmings,
but have never gone beyond the high school level stuff, and now I am
learning finance, just want you know where I am coming from first.

Yes, an AI's utmost goal is to make a rational and precise decision based on
existing data. Believe it or not, quantitative analysis is actually
required, in my school at least. But here is the thing..., people don't
always follow the result of all those long equations. It's merely
something decision makers need to "think about" when they make a decision.

Most of the time, the result of a quantitative analysis is a "justification"
of a decision someone proposed. That means you don't go do the math and say
we do this project because we'll make money according to this number. In
fact, decisions are made first, then we verify them with the numbers, partly
because the other way around doesn't work. Business is an art, not a
science. Lots of things that happen in business cannot be proven or solved
using conventional science. If they could, then Greenspan's policy would
work like a wonder, and we wouldn't be in recession anymore. What is
theoretically correct is not necessarily the way it plays out in real life.

So all of my rambling comes to this: it's impossible to make an AI function
exactly the way a human behaves. AI is a scientific product. Even if
designed correctly so it can do logical reasoning correctly without a
fault, it still cannot replace the human brain, because that's NOT how a
human brain functions. We simply don't think as rationally as a mechanical
creature; there are just too many exceptions in our lives, which an AI would
eventually encounter and not know what to do with.

For example, a rational being would never sell things below cost if no data
supported such an action, but tons of business entities do it anyway, and
"sometimes" for good reasons. Then price would always be an accurate
reflection of supply and demand, and no company would go bankrupt (company
size doesn't matter, assuming every company uses a standard rational
decision-making process without undercutting others).

BTW, it's the irrational behavior that creates stock market fluctuation.
Theoretically, if we all used computers to buy and sell based on the fair
price of a stock, the price would always reflect the fair market value of
the company, given that information is efficient (meaning everyone has the
same information about the company at the same time), and nobody would make
an absurd amount of profit; it would make no difference whether you invested
in that company's bonds or its stock, given that dividend+growth equals the
interest payment and the risks are equal. (Actually, that's the formula for
the fair value of a stock right there: common equity yield = dividend yield
+ growth rate. But would you trust it 100%? ;-) Wouldn't you just use it as
a justification to support your own final decision?)
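
(To put some invented numbers on that formula: a stock trading at $100 that
pays a $4 dividend and grows at 5% a year should, by this model, yield its
holders 4% + 5% = 9%. And if you only demand an 8% return, the "fair" price
works out to $4 / (0.08 - 0.05) = about $133. Whether you'd actually pay
that is, as I said, another question.)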

If we were all rational, all of us would act as one "very boring*" being (or
the geist, if you believe in Kantian theory). AI would make a very
"consistent" advisor, but it can never be the final abstract decision maker.
What is "fashionable" at one point may not be the trend for some indefinite
period after, and an AI would run into an insufficient-relevant-data error
if such a thing happened.

==
* That part in quotation marks is just my opinion. :-)

<took some bytes out of the message>


> >Another intriguing fact you've brought up that makes me believe AI would
> >never act like a human. AI, if programmed correctly and assuming no bug is
> >encountered, should always work in a totally logical manner. Just like a
> >MIDI file will always play the correct tune, but a human with a real
> >instrument will not. It's that "flaw" that makes us value a real
> >performance more than the other medium. It's more authentic.
>
> A person does this because she has to learn to play music. If she realises
> her mistake, she will attempt to correct it, and may not make the same
> mistake next time. The first time she tries to play a piece of music, most
> of what she does will be mistakes. MIDI has no margin for error, so it has
> no margin for learning (or decision-making, for that matter). A MIDI player
> cannot apply knowledge gained through learning to play music to the
> creation of new music. An AI programmed to learn how to play music might
> be able to apply the concepts to 'unusual' music. That is, create new works.

Okay... Maybe I should rephrase a little bit (and stop talking in a
not-so-direct manner... I assume an AI wouldn't take "nudge, nudge, wink,
wink" either...). I didn't mean playing the right or wrong note; I mean a
real performer may not play the same song the same way twice because of
something unique s/he can do with it. Maybe you play a song with some swing
feel to it, or stress a certain part, which you only decided to do three
seconds ago. You know, ad-lib a bit, and not follow the notes one by one as
they were written. That's the "flaw" I was talking about.

> The nature of creativity is obviously one for debate, though. Any
> decision-making AI has to have *some* creativity, so that it can learn,
> and so that it can deal with unusual situations. Neural network researchers
> have observed that even simple nets tend to 'dream' when presented with
> neutral input - this could be where creativity comes from. Then again,
> it might not: we simply don't know enough to be sure. The human brain
> is very obviously a system of structures built on top of earlier
> structures, though, so this type of hypothesis can be popular with NN
> researchers.

Creativity is something worthwhile to research. I am pretty sure no one
can figure out the secret of creativity.... :-D There is simply no
empirical data we can use (or is there?). Everybody has their own "original"
creations, and that has something to do with the way our brains function
(which "coincidentally" ;-) corresponds with our original thoughts).

<saving U.S. economy would have been time consuming...>


> >I think you are trying to make a replicant. How are you going to use DNA
> >on a computer AI? :-) DNA has no direct impact on your consciousness, as
> >far as I know. Consciousness is not even a physical trait. Consciousness
> >has something to do with the mind, not a characteristic. Someone could be
> >doing things without any consciousness because he was depressed, and being
> >easily depressed is a trait possibly caused by a hereditary thingy encoded
> >on the person's DNA, but it does not suggest there is a direct link
> >between the two. Besides, there might be other complications that caused
> >the person to do things without consciousness, not just depression alone.
>
> There's a point here, though. Learning is an evolutionary process. There's
> evidence that competition is a major process in the formation of the brain,
> and there's a way that the operation of the brain can be thought of as
> competition. This is evolution, by and large. Things that win a competition
> are done more, and things that lose are done less.

I have no background information on this, so please explain it (and in
easier, digestible phrases, thanks).

<snip>


> That's true. But if the aim is to make 'better' decisions under various
> circumstances, then current AI research could well be relevant. The key
> is learning: both a clever and a stupid NPC might burn itself, but the
> clever one wouldn't do it twice.

Assuming it isn't already burned down into ash.... :-D

> In a pre-scripted world, this might not be useful. In a simulationist one,
> where interactions are suitably complex, a learning AI would certainly
> increase the level of realism, even if its responses are limited. You'd
> be able to reason with it, after a fashion :-) World model view differences
> might have an effect though. If the AI was in a GM role, you might find
> that you can persuade it to change the rules (essentially by tricking it).
> It's arguable that this would also increase realism :-)

;-) I like that last one.

Yes, I firmly believe a good AI would increase the realism of the game, but
now..., my business background comes into play yet again..., is it too costly
to build such a model just for a game? Obviously there is no real fiscal
cost we are talking about here, but I mean, does the increase in realism
generate enough enthusiasm (from people) and attractiveness for a game to
justify the time we invested in it? If we can create a comparable excitement
rating with a prescripted NPC, we would save a lot of time. Not to mention
the uncertainty associated with the AI project (I don't always agree with
cost/benefit analysis, but it's worthwhile to think about).

> >That's true, except I find most of the chat rooms to be rather boring.
> >People pretty much do not talk with their consciousness in mind. :-)
>
> Heh, hang around enough chat rooms, and it soon becomes apparent how
> depressingly similar people can be to each other.

a/s/l... :-D

> >I think the original post mentioned, though, the merit of developing AI
> >using IF as a stage, which I think is very interesting.
> >
> >> Anson.

Ashikaga (recycle your name today)


> Andrew.
>
> --
> ____
> \ \ \ Andrew Hunter <and...@logicalshift.demon.co.uk>
> > > > http://www.logicalshift.demon.co.uk (me)
> /_/_/ http://www.impulse.org.uk (impulse)

Ashikaga


Billy Bissette

unread,
Sep 7, 2001, 7:15:40 PM9/7/01
to
In article <9nam01$ntj$1...@news.lth.se>, m...@df.lth.se says...

I heard a similar argument several years ago from some researcher.
His claim was that conscious thought is only an after-effect or
side effect of stimulus-response, and had no influence over the
stimulus-response, similar to the light from a lightbulb... an
extension of basic stimulus-response...

Basically, everything you do is straight stimulus-response, with
neurons firing and cells growing and breaking down and all that
procedure. 'Conscious thought' only occurs after the actual actions
and has no influence over what your body actually does at any
point in time. It is only a rationalization at best.

You are stimulated. You respond. 'Conscious thought' rationalizes
the action as something it wanted to perform, if you even want to
be that kind in describing it...

Gringo G. Scumm

unread,
Sep 7, 2001, 9:07:36 PM9/7/01
to
> > Now you're really putting words in my mouth, and I resent that.
>So you're saying that men are inherently superior to women, who are nothing
>but baby-making machines, whose minds have been corrupted by ugly lesbians
>who wish they were men?

Is this a bizarre joke?

> You also wrote "another thinks this knowledge only adds to the enjoyment".

think = is of the opinion that

> > Are the stars in the sky less beautiful ...
> Perhaps my reading comprehension is just not as good as I think, but it sure
> sounds to me like you are making these out to be rhetorical questions rather
> than the purely subjective issues that they are.

This paragraph is clearly expressing the author's personal opinion, and
why he believes what he does.

Enough quibbling.

-- Gringo

L. Ross Raszewski

unread,
Sep 7, 2001, 11:10:46 PM9/7/01
to
On Fri, 07 Sep 2001 12:57:07 -0400, Anson Turner
<anson@DELETE_THISpobox.com> wrote:

>result of different ideas of what "machine" means. No existing machine is
>generally believed to have consciousness, feelings, etc., therefore to some,
>saying that people are machines is saying that they are mindless automatons
>(with the "illusion" of consciousness, I suppose).

And, of course, there's the Cartesian escape from the argument: even
if you are being deceived by an "illusion" of consciousness, there
still must be a consciousness that's being rooked in by the illusion.

Arcum Dagsson

unread,
Sep 8, 2001, 2:51:06 PM9/8/01
to
In article <9nallv$mqg$1...@news.panix.com>, Andrew Plotkin <erky...@eblong.com>
wrote:

> Anson Turner <anson@delete_thispobox.com> wrote:


>
> > As an aside, I think it was John Holt who wrote, on the belief that people
> > are
> > machines, that if an idea can be evil, that one is. (Having noted the
> > progression from people having some things in common with machines, to
> > people
> > being like machines, to people being machines.)
>
> "Some people look at this idea and say 'How awful, how can you think
> that people are anything like machines?' Other people look at it and
> say 'How wonderful! I never knew that machines could be as amazing as
> *people*!'" -- Raymond Smullyan, paraphrased from memory
>

Since the quote's been paraphrased twice from memory in this thread, here is
the full quote, from Smullyan's "This Book Needs No Title":

Recently I was with a group of mathematicians and philosophers. One
philosopher asked me whether I believed man was a machine. I replied, "Do
you really think it makes any difference?" He most earnestly replied, "Of
course! To me it is the most important question in philosophy."
I had the following afterthoughts: I imagine that if my friend finally came
to the conclusion that he <I>were</I> a machine, he would be infinitely
crestfallen. But if I should find out <I>I</I> were a machine, my attitude
would be totally different. I would say: "How amazing! I never before
realised that machines could be so marvelous!"

On a similar note, I never realised Raymond Smullyan was so popular on this
newsgroup...

--
--Arcum
"There was not supposed to be fear in the structured and ordered society of the
civilized worlds; there was some sort of law against it. Clearly there was
nothing to fear any more. And in a society like that, somebody that knew the
true folly of complacency could get away with almost anything."
(Jack Chalker: "Cerberus: A Wolf in the Fold")

John W. Kennedy

unread,
Sep 8, 2001, 5:07:46 PM9/8/01
to

That's "footnotes to that poem by a deluded pseudointellectual", if you
please.

--
John W. Kennedy
(Working from my laptop)

Paul Trembath

unread,
Sep 9, 2001, 5:12:12 PM9/9/01
to

"Anson Turner" <anson@DELETE_THISpobox.com> wrote in message
news:anson-829E05....@nntp.mindspring.com...
...
> Exactly. "If it is told". In other words, it is dependant on an actual
mind to
> provide feedback.

The actual minds, of course, have been "told" by 4 billion years of
evolution, plus some decades of personal development and experience. There
is no magic ingredient. The difference is huge, but in some degree
quantitative.

--
pt

Paul Trembath

unread,
Sep 9, 2001, 5:13:02 PM9/9/01
to
"Andrew Hunter" <and...@logicalshift.demon.co.uk> wrote in message
news:slrn9pgbt5...@chrysoprase.localdomain...
...

> It may be that boredom is something we evolved to cope with certain classes
> of survival conditions. It may be that it's a byproduct of the human
> brain's design.

Are you sure? I always thought it was because of the evil purple
mushrooms from planet Sequelon :-).

--
pt

Paul Trembath

unread,
Sep 9, 2001, 6:09:12 PM9/9/01
to

"OKB -- not okblacke" <bren...@aol.comRemove> wrote in message
news:20010906164830...@mb-mj.aol.com...
> marti...@hotmail.com (Martin Bays) wrote:
...
> >So the point is that you can't explicitly
> >programme a machine to say 'call this red, and this not red' without
> >taking the entire, very deep concept of redness into account.
>
> Right. This is what I mean when I say the computer must know what red
> "means", as opposed to having a list of objects which it knows are "red".

But, but, but! There are any number of marginal cases where you, me, and
Hal would be unable to agree on whether a particular thing is a red guitar.
In some sense it's an implementation issue - a red guitar is whatever my
red-guitar neuron recognises. This involves optics, chemistry, neurology,
linguistics, context, and other issues.

There isn't any useful thing that red "means", except that human beings have
a consensus that works most of the time. Actually, wouldn't it lead to an
infinite regress if there were? What does (what red means) mean? And so
on.

...


> I think a real step to developing a higher-order AI is to explore these
> concepts to find as many of the "innate" ones as possible. I do think that
> there are certain concepts which form the foundation of human (and, such
> as it is, animal) intelligence (for example, equality, or "sameness").

Funnily enough, this might work in practice - despite what I said above.
The concepts that are "innate" in humans will be those that evolution has
found to be useful and generally applicable in the situations our ancestors
have encountered. And survived.

--
pt


Chris Hadgis

unread,
Sep 9, 2001, 7:36:30 PM9/9/01
to
I have followed this IF discussion with interest. It has led me to think
about something I have not seen discussed in this group, and I could find no
mention of it in Google.

It is the concept of multi-player IF games.

The initial focus of the AI thread was on making NPCs more real, or more
believable, or more complex (I can't quite remember which). If another
player took on the character of an "NPC", then wouldn't we have a "better
NPC"? Of course, the C would no longer be NP, but I think you all
know what I am getting at :)

So, has multi-player IF been discussed before?

I know it may be difficult to get a multi-player IF game going, but not
impossible. I imagine it would be something like a MUD (multi user dungeon)
but within the framework of a story.

Someone gave the example of the player going up against an adversary in an
IF story - it was not Black vs White in "Jigsaw", but something else. What
would be the problem with one player taking on the role of the "hero" and
the other player playing the "villain"? It would be even better if the two
roles were not so black and white (sorry) but the lines between them blurred
a little. Each character would have their own goals, and may or may not need
the help of the other character to achieve them.

I would not expect a story for multiple players to be any more (or much
more) difficult to come up with than a story involving a single player.

To IF authors, if such a system was available would you write games with it?

To IF players, if such a system was available would you play games with it?

I am curious to know what problems such a system would encounter. I don't
think I would be the first person to think about such a system, but as I
said, I have not seen it discussed here before.

Your feedback will be appreciated.

Cheers,
ChrisH

David A. Cornelson

unread,
Sep 9, 2001, 8:21:16 PM9/9/01
to
"Chris Hadgis" <chr...@mincom.com> wrote in message
news:9ngu4d$t0g$1...@sol.mincom.oz.au...

> I have followed this IF discussion with interest. It has led me to think
> about something I have not seen discussed in this group, and I could find
> no mention of it in Google.
>

Oh, there have to be conversations on Google somewhere. We talked about it
quite a bit about 2 1/2 years ago, and at least once since. A link that goes
into some of the thoughts on ifMUD is at
http://ifmud.port4000.com/alex/logs/networked-if.txt.

I think Adam Thornton had some ideas about it in previous years as well.

I think the problem with no one ever really following through on anything
"practical" is that we all have differing notions of what multiple-PC IF
would be like. Some of the questions that seem to have no consensus are:

1) Is it turn based or is there some sort of timer? How does the system
react if one player is playing and the other one goes away?

My thoughts were pretty simple on this. If you build a system exactly like
current IF, only add for the programmer the ability to code for a second
"player character", then the rest of it seems pretty obvious to me. If you
have a game with x PC's, then that many people need to be playing at
the same time. Each turn passes when any of the players does something. If
they by some miracle type something in at the same time, then the parser
will have an adjudicator that determines precedence the same way it does
disambiguation. There should be routines, though, that allow the programmer
to force the parser to do things a certain way if they wish it so (say the
author wants one of the PC's to be able to do things regardless of another
PC... so maybe a parse_timing() property is used to force the issue).

All of these language additions would enable the author to handle things
happening and things being viewed by each player. Something like an
objectloop that checked for all PC objects that were not the 'current'
player and sent the appropriate message to them (there's a rough sketch of
what I mean after the transcripts below).

(the PC named Dave)
****
> TAKE APPLE
You take the apple.

>
****
(the PC named Joe on another client sees)
****
You see Dave pick up the apple.

>
****
and if the adjudicator kicked in when both tried...
****
You and Joe reach for the apple at the same time, but you were a little
quicker.

>
****
You and Dave reach for the apple at the same time, but he was a little
quicker.

>
****
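
Just to make the adjudicator and the objectloop idea concrete - this is
nothing like real Inform code, just a rough Python sketch, and every name
in it (Game, Player, adjudicate, broadcast) is invented for the example:

  import random

  class Player:
      def __init__(self, name):
          self.name = name

  class Game:
      def __init__(self, players):
          self.players = players

      def adjudicate(self, contenders):
          # Stand-in for the precedence rule: when two PCs act "at the
          # same time", one is picked to go first, the same way the
          # parser already picks an interpretation during disambiguation.
          return random.choice(contenders)

      def broadcast(self, actor, own_msg, other_msg):
          # The objectloop idea: visit every PC that is not the
          # 'current' player and send them the appropriate message.
          for p in self.players:
              if p is actor:
                  print(f"[{p.name}] {own_msg}")
              else:
                  print(f"[{p.name}] {other_msg.format(actor=actor.name)}")

  dave, joe = Player("Dave"), Player("Joe")
  game = Game([dave, joe])

  # Both players reach for the apple in the same turn.
  winner = game.adjudicate([dave, joe])
  game.broadcast(winner,
                 "You take the apple.",
                 "You see {actor} pick up the apple.")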

Anyway....I think this is one of the NEXT BIG LEAPS in IF and someday
someone will implement an author-friendly environment to do this sort of
thing. I think it would take a zarf, gn, mr, or kt to do it, but that's my
opinion.

Short of that, there is sort of a mock multipleIF on ifMUD by using a bot
named Floyd. We play a game called werewolf via Floyd that is kind of fun,
but not really what multipleIF could be if done properly. Floyd can allow
many people to 'play' many different TADS and Inform games as well, which is
sort of fun.

I'm all for this technology to be built.

Jarb


Sean T Barrett

unread,
Sep 9, 2001, 8:55:37 PM9/9/01
to
David A. Cornelson <dcorn...@placet.com> wrote:
>Oh, there have to be conversations on Google somewhere. We talked about it
>quite a bit about 2 1/2 years ago and then at least once since. The link that
>goes into some of the thoughts on ifMUD is at
>http://ifmud.port4000.com/alex/logs/networked-if.txt.

[snip]

>1) Is it turn based or is there some sort of timer? How does the system
>react if one player is playing and the other one goes away?
>
>My thoughts were pretty simple on this. If you build a system exactly like
>current IF, only add for the programmer the ability to code for a second
>"player character", then the rest of it seems pretty obvious to me.

[snip]


>(the PC named Joe on another client sees)
>****
>You see Dave pick up the apple.

[snip]


>Anyway....I think this is one of the NEXT BIG LEAPS in IF and someday
>someone will implement an author-friendly environment to do this sort of
>thing. I think it would take a zarf, gn, mr, or kt to do it, but that's my
>opinion.

[snip]


>I'm all for this technology to be built.

[no 2), 3), etc.]

Time for my twice-yearly posting on this subject:

Please let's not spend lots of time focusing on the
technology problems; they're pretty effectively solved
on any of the programmable MUD systems out there (and
there are tens of them). Yes, they're not geared so
heavily to single authors, and they're all "real-time",
but forcing them to be turn-based (you can't take another
turn until everybody else does) wouldn't be hard.

The technology is not the problem.

The game design is the problem.

and by that I don't mean "real-time or turn-based"--I mean,
how do you give both players interesting experiences? How
do you tell a story if both players aren't there? How many
non-lame multiplayer puzzles can you design?

SeanB

David A. Cornelson

unread,
Sep 9, 2001, 9:21:32 PM9/9/01
to
"Sean T Barrett" <buz...@world.std.com> wrote in message
news:GJF98...@world.std.com...

> David A. Cornelson <dcorn...@placet.com> wrote:
> >Oh, there have to be conversations on Google somewhere. We talked about it
> >quite a bit about 2 1/2 years ago and then at least once since. The link
> >that goes into some of the thoughts on ifMUD is at
> >http://ifmud.port4000.com/alex/logs/networked-if.txt.
>
> The technology is not the problem.
>
> The game design is the problem.

I'm an author and game designer. I think attempting to code a mud to do
multiIF has certain technological aspects that make it an attractive
choice, but in reality, it would never fly as a valid multi-player-IF
technology. If it did, we would likely have adopted it for that purpose
already. Muds have been around for a long time. No one to date has submitted
a game to the annual comp, or any IF comp that I know of, that is played via
a mud server.

I think technology is exactly the problem. The underlying system doesn't
necessarily matter, but the interface, or language to author a game on top
of _whatever_ technology, has to match almost identically the
systems/languages we already use. So either TADS, Hugo, or Inform would need
to be the language used to create the multi-player-IF game code.

If I have to hop into a c++ compiler to write multi-player-IF and worry
about pointers and crap, it simply ain't going to happen. I suspect many
other IF authors are similarly disinterested in c++ interfaces to muds or
other technologies.

Sorry Sean, we disagree again. But in this instance, multi-player-IF
_cannot_ be solved with XML....but wait....maybe...(nevermind) (;

Jarb


Gadget

unread,
Sep 10, 2001, 3:12:26 AM9/10/01
to
On Mon, 10 Sep 2001 09:36:30 +1000, "Chris Hadgis" <chr...@mincom.com>
wrote:

<snip muliplay post>

Funny, the way this goes ;-)

The first MUD, Multi User Dungeon, was so called because it was
meant to be a multiplayer version of Dungeon (aka Zork). The creators
were big fans of that game and actually thought the name Dungeon would
become the standard name for IF. Unfortunately for them, the name
Adventure became the accepted word instead, soon known to all of us as Text
Adventure.

If you have played the original MUD, or MUD2, then you can see its
roots clearly: you have to solve puzzles to find treasures which you
dump in the swamp (as opposed to leaving them in a house)

If you want to see what would happen if people try to create a
multiplayer IF, check out one of the MUD lists on the net. You will
find *hundreds* of attempts at actual story driven MUDs. And then you
will also see why it will never become what you imagine it:

It is basically chaos theory. The more people take part in the game,
the more possible actions can be chosen. This way, any form of
rigorous scripting stands in the way of the players' individuality. The
more people are playing, the more they are inclined to interact and
make up their own story (even if they don't intend to).

-------------
It's a bird...
It's a plane...
No, it's... Gadget?
-------------------
To send mail remove SPAMBLOCK from adress.

Sean T Barrett

unread,
Sep 10, 2001, 3:23:35 AM9/10/01
to
David A. Cornelson <dcorn...@placet.com> wrote:
>Sean Barrett wrote:
[use MUDs]

>> The technology is not the problem.
>> The game design is the problem.
>I'm an author and game designer. I think attempting to code a mud to do
>multiIF has certain technological aspects that make it an attractive
>choice, but in reality, it would never fly as a valid multi-player-IF
>technology. If it did, we would likely have adopted it for that purpose
>already. Muds have been around for a long time. No one to date has submitted
>a game to the annual comp, or any IF comp that I know of, that is played via
>a mud server.

Technically, nothing in my post literally advocated using a
programmable MUD for this situation--although I certainly
think that's a viable technique; my point is that we keep
rehashing these technological "issues" and nobody ever
confronts the game design issues, and I think that's just
going down a similar path as "I want to write an adventure
but I have no clue what game I want to do so I'll start
by writing a new IF development system"...

But to respond to your point as if I had so advocated them:
The fact that nobody has submitted a MUD-based multi-player IF game
to the comp is probably because it wouldn't play well as a comp game
where everyone has to judge it independently.

The fact that nobody's released a multi-player IF-ish game *period*
using a MUD technology implies that either there's little interest
in writing a MUD-based multi-player IF game, or it's really hard to
write one.

Since I'd argue it's really hard to write a multi-player IF game
*period*, independent of technology, your "if it did" evidence
works just as well for me as it does for you. It's not a disproof
of the technology; it proves that even if you have the technology
in hand, the design problem is a doozy.

>I think technology is exactly the problem. The underlying system doesn't
>necessarily matter, but the interface, or language to author a game on top
>of _whatever_ technology, has to match almost identically the
>systems/languages we already use. So either TADS, Hugo, or Inform would need
>to be the language used to create the multi-player-IF game code.
>
>If I have to hop into a c++ compiler to write multi-player-IF and worry
>about pointers and crap, it simply ain't going to happen. I suspect many
>other IF authors are similarly disinterested in c++ interfaces to muds or
>other technologies.

All well and good, but if I were advocating using a mud, I'd be advocating
a programmable mud--as I said:

>>Please let's not spend lots of time focusing on the
>>technology problems; they're pretty effectively solved
>>on any of the programmable MUD systems out there

Inform and WinFrotz are both written in C, but Inform games
are not. When I say "programmable MUDs", I'm referring to
a similar class of MUD systems--systems which provide an
interpreted programming language with adventure-oriented OO,
Tads-like no-pointers, full polymorphism, garbage collection,
etc. And, unlike Inform, Tads, and Hugo, all of them already
provide a multiplayer world model.

I can name five such systems off the top of my head, seeing
as I co-administered a MUD using one for four years, I wrote
another one myself, I consulted on the design of a third,
I wrote a pre-compiler for a fourth, and I beta-tested a
fifth.

See http://www.ccs.neu.edu/home/dougo/mud/ and look at
LPC, MOO, and COOLMUD, (those are the three on that page I have
some familiarity with); I'm not sure how many of the other
things there are both muds and programmable. Others that were
relatively well-known in the past included TinyMUCK and Ubermud.

If somebody wants to go hack sockets into Inform and make
a great multiplayer IF game, let them at it. But can't we
skip the discussions of "how do we keep the machines in
synch" and "how do we show other players what's going on
correctly" when those sorts of problems have been adequately
solved at least ten times already by MUD-system authors?
They are not the hard part of the problem, IMNAAHO.

SeanB

Richard Bos

unread,
Sep 10, 2001, 6:47:56 AM9/10/01
to
"Paul Trembath" <ptre...@compuserve.com> wrote:

> "Andrew Hunter" <and...@logicalshift.demon.co.uk> wrote in message
>

> > It may be that boredom is something we evolved to cope with certain
> > classes of survival conditions. It may be that it's a byproduct of the
> > human brain's design.
>
> Are you sure? I always thought it was because of the evil purple
> mushrooms from planet Sequelon :-).

If there really were evil purple mushrooms from planet Sequelon, we
wouldn't be half as bored as we are.

Richard

Richard Bos

unread,
Sep 10, 2001, 6:47:55 AM9/10/01
to

No, there mustn't. There merely has to be an illusion of consciousness
falling for the illusion. All you need to have is an emergent phenomenon
that _claims_ it has (or is) consciousness. In fact, I can't think of a
single experiment that could prove that our own consciousness isn't the
same kind of emergent phenomenon, emerging not from chips and programs,
but from neurons and brain structure.
None of which changes my opinion that humans _do_ have consciousness,
free will, and in fact a soul, but that opinion is based on theological
arguments, not on scientific ones.

Richard

Richard Bos

unread,
Sep 10, 2001, 6:47:56 AM9/10/01
to
ad...@fsf.net (Adam Thornton) wrote:

Then you'll have no objection to my claim that Plato was nothing but an
Aristotelian footnote to Socrates, I presume?

Richard

John Colagioia

unread,
Sep 10, 2001, 8:59:02 AM9/10/01
to
Adam Thornton wrote:

MORE SPOILERS ABOUND...

Worry not, my friend. My sources at NASA tell me that the rocky desertscape is
merely the outer crust of the moon--kind of like an over-baked brie,
really--and that the inside is, indeed, green cheese.

Of course, my sources have always been a little...well...odd. Actually, he
also thinks the "Face on Mars" is really on the Moon, too...


Jason Melancon

unread,
Sep 10, 2001, 8:54:43 AM9/10/01
to
This is a minor side issue, but

On Fri, 07 Sep 2001 02:36:14 -0400, Anson Turner
<anson@DELETE_THISpobox.com> wrote:

> But I guess if you study AI (or anything, for that matter) too
> intently, you could start to be convinced of some very bizarre things, even
> when they exist in flat contradiction of the patently self-evident.

You say this like it's a bad thing. You mean self-evident like the
flatness of the Earth, or the central position it occupies? Or how
about the observation that air and water are pure elements? I'd say
that if the results of your study don't surprise you, you're wasting
your time.

--
Jason Melancon

Adam Thornton

unread,
Sep 10, 2001, 12:52:45 PM9/10/01
to
In article <3b9c9122....@news.worldonline.nl>,

Richard Bos <in...@hoekstra-uitgeverij.nl> wrote:
>ad...@fsf.net (Adam Thornton) wrote:
>> In article <3b989a85...@news.worldonline.nl>,
>> Richard Bos <in...@hoekstra-uitgeverij.nl> wrote:
>> >ad...@fsf.net (Adam Thornton) wrote:
>> >> Western Philosophy really *IS* a succession of footnotes to Plato, isn't
>> >> it?
>> >Only if you accept that most footnotes are longer than the original
>> >work, and largely deny its validity.
>>
>> I have no problem whatsoever with a text's critical apparatus both
>> exceeding it--often vastly--in terms of length, and raising objections
>> to it.
>
>Then you'll have no objection to my claim that Plato was nothing but an
>Aristotelian footnote to Socrates, I presume?

I object to the adjective "Aristotelian."

And, alas, we really don't *know* how much of a footnote to Socrates
Plato was, since we can't really determine how much of what is
attributed to Socrates is Socratic and how much is Platonic, or, indeed,
how divergent their thought was. Myself, I'm inclined to give Plato
credit for more originality than he claimed, but I freely admit that
that is not a terribly well-considered historical or philosophical
position.

So perhaps I should have said "The Socratic/Platonic Cabal" above.

Ehrm. Anyhow.

Adam

Neil Cerutti

unread,
Sep 10, 2001, 1:08:01 PM9/10/01
to
Adam Thornton posted:

>In article <3b9c9122....@news.worldonline.nl>,
>Richard Bos <in...@hoekstra-uitgeverij.nl> wrote:
>>ad...@fsf.net (Adam Thornton) wrote:
>>> In article <3b989a85...@news.worldonline.nl>,
>>> Richard Bos <in...@hoekstra-uitgeverij.nl> wrote:
>>> >ad...@fsf.net (Adam Thornton) wrote:
>>> >> Western Philosophy really *IS* a succession of footnotes to Plato, isn't
>>> >> it?
>>> >Only if you accept that most footnotes are longer than the
>>> >original work, and largely deny its validity.
>>>
>>> I have no problem whatsoever with a text's critical apparatus
>>> both exceeding it--often vastly--in terms of length, and
>>> raising objections to it.
>>
>>Then you'll have no objection to my claim that Plato was
>>nothing but an Aristotelian footnote to Socrates, I presume?
>
>I object to the adjective "Aristotelian."

Then A.E. Van Vogt has a problem with you.

Don't make him come over there.

--
Neil Cerutti <cer...@together.net>

Adam Thornton

unread,
Sep 10, 2001, 1:10:09 PM9/10/01
to
In article <slrn9ppt1d...@fiad06.norwich.edu>,
Neil Cerutti <cer...@together.net> wrote:
>Adam Thornton posted:

>>I object to the adjective "Aristotelian."
>Then A.E. Van Vogt has a problem with you.
>Don't make him come over there.

Bring him on!

I can take on any dead SF author with one hand tied behind my back.

Except Heinlein. His zombie could kick my ass.

Adam

David Given

unread,
Sep 10, 2001, 8:33:01 AM9/10/01
to
In article <GJFr7...@world.std.com>,
buz...@world.std.com (Sean T Barrett) writes:
[...]

> The fact that nobody's released a multi-player IF-ish game *period*
> using a MUD technology implies that either there's little interest
> in writing a MUD-based multi-player IF game, or it's really hard to
> write one.

Here's the problem with multi-person MUD games: synchronisation. You have
to get a bunch of people to all start playing at the same time, and keep
playing, and if the game's any size they'll eventually have to all stop
playing, sleep, and come back and continue.

If you allow people to connect and disconnect at will you'll have people
who joined recently and don't know what's going on, and the plot will have
left them behind.

This is why most multiplayer games go plotless --- most MUDs, Quake
servers, etc. --- or have very short plots --- e.g. C&C, or Quake again, where
an entire game can be played at one sitting --- or scenario-based very
long plots --- theme-based MUDs, Ultima Online, Everquest.

Unless you can design a game that is both strongly plot-driven *and*
allows arbitrary connection and disconnection, multiplayer IF isn't really
going to go anywhere. Any ideas?

--
+- David Given --------McQ-+ "`Aplysia californica' is your taxonomic
| Work: d...@tao-group.com | nomenclature.
| Play: d...@cowlark.com | A slug, by any other name, is still a slug by
+- http://www.cowlark.com -+ nature." --- drushel on a.f.c

David Given

unread,
Sep 10, 2001, 8:22:17 AM9/10/01
to
In article <anson-1D97B3....@nntp.mindspring.com>,
Anson Turner <anson@DELETE_THISpobox.com> writes:
[...]

> So you're saying that men are inherently superior to women, who are nothing
> but baby-making machines, whose minds have been corrupted by ugly lesbians who
> wish they were men?

Gaaaah!

[ducks under desk]

Don't say things like that! Not even as a joke! No mortal may utter the
name of one of the Elder Gods, even as a jest, and live! Don't you recall
the great Gun Control wars of rasfw? The Heinlein/Libertarian Action? Or
the Rockoid Incident of rasfc? They nearly rendered their newsgroups
uninhabitable! Don't do that here!

Remember: there is no statement you can make on Usenet, no matter how
ludicrous, that there won't be someone, somewhere, who will take you
seriously.

David Given

unread,
Sep 10, 2001, 8:16:10 AM9/10/01
to
In article <wycm7.377058$ai2.28...@bin2.nnrp.aus1.giganews.com>,
Billy Bissette <bai...@coastalnet.com> writes:
[...]

> Basically, everything you do is straight stimulus-response with
> neurons firing and cells growing and breaking down and all that
> procedure. 'Conscious thought' only occurs after the actual actions
> and has no influence over what your body actually does at any
> point in time. It is only a rationalization at best.

Surely talking about conscious thought is an action directly influenced by
the fact (or illusion) of conscious thought?

Gabe McKean

unread,
Sep 10, 2001, 3:41:33 PM9/10/01
to
David Given wrote in message ...

>Here's the problem with multi-person MUD games: synchronisation. You have
>to get a bunch of people to all start playing at the same time, and keep
>playing, and if the game's any size they'll eventually have to all stop
>playing, sleep, and come back and continue.
>
>If you allow people to connect and disconnect at will you'll have people
>who joined recently and don't know what's going on, and the plot will have
>left them behind.

Instead of a MUD-type system where anyone can join or leave at will, why not
use a more limited system, at least to start out with? Have a game designed
for a small, fixed number of people (somewhere in the range of 2-5, I would
guess), and don't let the game start until everyone has connected. The
players would have to arrange ahead of time when they would play, but that
wouldn't be much more difficult than arranging a game of Monopoly among
acquaintances. Then, when someone has to leave, save the game somewhere (I'm
deliberately avoiding the technical details here) and have the players
arrange another meeting time.
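
To make that concrete, here's a toy Python sketch of the lobby-and-lockstep
idea - every name in it (Session, join, take_turn) is made up for the
example: wait until everyone the author planned for has connected, go
strictly in turns, and save the whole session when someone has to leave.

  import pickle

  class Session:
      def __init__(self, needed):
          self.needed = needed    # how many players the game is written for
          self.players = []
          self.turn = 0           # index of whose turn it is (round-robin)

      def join(self, name):
          # Nobody gets to play until everyone has connected.
          self.players.append(name)
          return len(self.players) == self.needed

      def take_turn(self, name, command):
          # Strict lockstep: you can't act until it's your turn.
          if self.players[self.turn] != name:
              return f"Waiting for {self.players[self.turn]}..."
          self.turn = (self.turn + 1) % self.needed
          return f"{name} does: {command}"

      def save(self, path):
          # "Save the game somewhere" so the group can resume later.
          with open(path, "wb") as f:
              pickle.dump(self, f)

  s = Session(needed=2)
  s.join("Alice")
  if s.join("Bob"):               # the game starts once both are in
      print(s.take_turn("Alice", "open the door"))
      print(s.take_turn("Bob", "follow Alice"))
      s.save("saved-session.dat")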

A system to do this shouldn't be too difficult to design. A game to take
advantage of it may be a little harder, but there's lots of game-writing
talent present around here...


Gadget

unread,
Sep 10, 2001, 4:10:19 PM9/10/01
to
On Mon, 10 Sep 2001 12:41:33 -0700, "Gabe McKean" <gmc...@wsu.edu>
wrote:

For me, the problem would be this: can you imagine reading the same
book with five other people? And if that is not how you imagine it to
be, then you (unknowingly) just described a session of tabletop D&D
over the internet, but with a computer-controlled DM. Which would be a
MUD.

David A. Cornelson

unread,
Sep 10, 2001, 5:00:05 PM9/10/01
to
"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message
news:597qptcnb0ok78rpv...@4ax.com...

> On Mon, 10 Sep 2001 12:41:33 -0700, "Gabe McKean" <gmc...@wsu.edu>
> wrote:
>
> >David Given wrote in message ...
> >>Here's the problem with multi-person MUD games: synchronisation. You have
<snip>

> >Instead of a MUD-type system where anyone can join or leave at will, why
> >not use a more limited system, at least to start out with? Have a game
> >designed for a small, fixed number of people (somewhere in the range of
> >2-5, I would guess), and don't let the game start until everyone has
> >connected.

Yes, yes - this is how I see it too...just like Monopoly, only it's a story
that ends...

> For me, the problem would be this: can you imagine reading the same
> book with five other people?

No...each person interacts with the same environment...occasionally
seeing things that the others are doing. You wouldn't all walk together or
see the same output...each person would see the world from the state that
their character is in...all of this designed by the game author...

And if one person wants to interact with the world while others leave, that
should be allowed, but game progression would probably be controlled
(designed by the author) by events that require another character or
characters to do something. So you can wander about, but you'll have to
set up that game time with everyone on to get further....

I'd even note that some characters would see things that others
wouldn't...like say one PC is a rabbit and can fit into a rabbit hole, while
another is a monkey that can't fit into the hole, but can climb trees. Lots
of world-view possibilities with multiple-PC IF...

Jarb


Gadget

unread,
Sep 10, 2001, 5:18:42 PM9/10/01
to
On Mon, 10 Sep 2001 16:00:05 -0500, "David A. Cornelson"
<dcorn...@placet.com> wrote:

>No...each person only interacts with the same environment...occasionally
>seeing things that the others are doing. You wouldn't all walk together or
>see the same output...each person would see the world from the state that
>their character is in...this designed by the game author...
>
>And if one person wants to interact with the world while others leave, that
>should be allowed, but game progression would probably be controlled
>(designed by the author) by events that require another character or
>characters to do something. So you can wander about, but you'll have to
>setup that game time with everyone on to get further....
>
>I'd even note that some characters would see things that others
>wouldn't...like say one PC is a rabbit and can fit into a rabbit hole, while
>another is a monkey that can't fit into the hole, but can climb trees. Lot's
>of world-view possibilities with multiple-PC IF...
>
>Jarb
>

Sorry, maybe I am missing something, but how would this be different
from a MUD? What you describe *is* a MUD!

Daniel Barkalow

unread,
Sep 10, 2001, 5:45:35 PM9/10/01
to

A sensible possibility for a first game would be a murder
mystery of the "Clue" variety. Players (and possibly NPCs) are at a dinner
party, lights go out, host dies. The killer might be a player (who knows this,
but nobody else does), an NPC, or something stranger. Some events are set
up to happen, and the killer might have to do certain things to keep the
secret. People write such things as party games, so writing one is at least
not an entirely new skill.

I think the main issues would come from the passage of time. (1) Actions
that are single turns in IF generally wouldn't take the same amount of
time: saying something, walking between adjacent locations (which may be
close or distant), taking one or many things, and so forth. (2) It might
make sense to have the characters take time to think if the players are
taking time to think. But being able to think on other people's turns
doesn't make sense (it matters whether everyone in the game is typing fast
or slow, even in totally different places). I can't think of any
consistent solution that doesn't give fast typists an advantage, aside
from being completely turn-based.
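
For what it's worth, point (1) has at least one consistent treatment: keep
an event queue keyed on in-world time rather than real time, so a long
action simply schedules its completion further out. A toy Python sketch,
with completely made-up action costs:

  import heapq

  COSTS = {"say": 2, "take": 3, "walk": 10}  # in-world seconds, invented

  queue = []   # (finish_time, player, action), ordered by in-world time
  clock = 0

  def act(player, action, start):
      heapq.heappush(queue, (start + COSTS[action], player, action))

  # Both players act "now"; completions resolve in time order, so the
  # quick action finishes first regardless of who typed faster.
  act("Dave", "walk", clock)
  act("Joe", "say", clock)

  while queue:
      clock, player, action = heapq.heappop(queue)
      print(f"t={clock}: {player} finishes '{action}'")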

There are, in any case, a number of things that could either be called
game design issues or called system issues, depending on where you decide
to try to solve them. They are, in any case, not so much a matter of a
successful implementation as a matter of figuring out what the desired
behavior is.

-Iabervon
*This .sig unintentionally changed*

Gabe McKean

unread,
Sep 10, 2001, 5:26:44 PM9/10/01
to
Gadget wrote in message <597qptcnb0ok78rpv...@4ax.com>...

>For me, the problem would be this: can you imagine reading the same
>book with five other people? And if that is not how you imagine it to
>be, then you (unknowingly) just described a session of tabletop D&D
>over the internet, but with a computer controlled DM. Which would be a
>MUD.

Heh. It wasn't quite unknowing; the D&D analogy had actually crossed my
mind briefly. Yes, it could be considered a MUD, but it would be different
from all the MUDs I am aware of in that only a set number of people could
play at a time, each with a role assigned by the author. I don't think it
would be much like reading a book with other people; since it would be a
game you could easily set up all kinds of cooperative or antagonistic
scenarios between the players.

Aris Katsaris

unread,
Sep 10, 2001, 6:00:52 PM9/10/01
to

Richard Bos <in...@hoekstra-uitgeverij.nl> wrote in message
news:3b9c8eaf....@news.worldonline.nl...

> lrasz...@loyola.edu (L. Ross Raszewski) wrote:
>
> > On Fri, 07 Sep 2001 12:57:07 -0400, Anson Turner
> > <anson@DELETE_THISpobox.com> wrote:
> >
> > >result of different ideas of what "machine" means. No existing machine
> > >is generally believed to have consciousness, feelings, etc., therefore
> > >to some, saying that people are machines is saying that they are
> > >mindless automatons (with the "illusion" of consciousness, I suppose).
> >
> > And, of course, there's the Cartesian escape from the argument: Even
> > if you are being deceived by an "illusion" of consiousness, there
> > still must be a consciousness that's being rooked in by the illusion.
>
> No, there mustn't. There merely has to be an illusion of consciousness
> falling for the illusion.

I'm not certain how one could define the word "illusion" without first
accepting as given the idea of consciousness/awareness. "Illusion" of
consciousness seems something of a contradiction in terms.

> All you need to have is an emergent phenomenon
> that _claims_ it has (or is) consciousness.

Well, sure, perhaps other consciousnesses are illusions - but I can't see
how one could believe his own consciousness to be an illusion. How could
an illusion exist if there's no consciousness that it can fool?

Aris Katsaris

John W. Kennedy

unread,
Sep 10, 2001, 7:29:57 PM9/10/01
to
Adam Thornton wrote:
> And, alas, we really don't *know* how much of a footnote to Socrates
> Plato was, since we can't really determine how much of what is
> attributed to Socrates is Socratic and how much is Platonic, or, indeed,
> how divergent their thought was. Myself, I'm inclined to give Plato
> credit for more originality than he claimed, but I freely admit that
> that is not a terribly well-considered historical or philosophical
> position.

Well, we have some control from Xenophon.

--
John W. Kennedy
(Working from my laptop)

David A. Cornelson

unread,
Sep 10, 2001, 7:32:45 PM9/10/01
to
"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message
news:ubbqptosb1r9etdrv...@4ax.com...

> On Mon, 10 Sep 2001 16:00:05 -0500, "David A. Cornelson"
> <dcorn...@placet.com> wrote:
>
> >
> Sorry, maybe I am missing something, but how would this be different
> from a MUD? What you describe *is* a MUD!
>

I set you up...

I cannot program a mud in Inform and I know of no mud with an English parser
that I can program against, and a mud has no way that I know of to
disambiguate sentences, and on and on...the features required for an IF
development env


David A. Cornelson

unread,
Sep 10, 2001, 7:36:00 PM9/10/01
to
"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message
news:ubbqptosb1r9etdrv...@4ax.com...

> On Mon, 10 Sep 2001 16:00:05 -0500, "David A. Cornelson"
> <dcorn...@placet.com> wrote:
>
> >
> Sorry, maybe I am missing something, but how would this be different
> from a MUD? What you describe *is* a MUD!
>
(sorry - my news reader sent accidentally - this is the _complete_ post)

I set you up...

I cannot program a mud in Inform and I know of no mud with an English parser
that I can program against, and a mud has no way that I know of to
disambiguate sentences, and on and on...the features required for an IF
development environment aren't necessarily excluded from a mud platform, but
would it be easier to add these things to a mud, or would it be easier to
create an inform/z-machine type client/server environment that does all the
things we want to do in IF coding?

Jarb


Chris Hadgis

unread,
Sep 10, 2001, 8:51:44 PM9/10/01
to

"Gabe McKean" <gmc...@wsu.edu> wrote in message

> Instead of a MUD-type system where anyone can join or leave at will, why
> not use a more limited system, at least to start out with? Have a game
> designed for a small, fixed number of people (somewhere in the range of
> 2-5, I would guess), and don't let the game start until everyone has
> connected. The players would have to arrange ahead of time when they
> would play, but that wouldn't be much more difficult than arranging a
> game of Monopoly among acquaintances. Then, when someone has to leave,
> save the game somewhere (I'm deliberately avoiding the technical details
> here) and have the players arrange another meeting time.

This is just the sort of thing I was thinking about. It's not a true MUD,
but it has elements of a MUD. And I only expect there to be two people
playing - one playing the main character and the other playing the role of
one or more NPCs.

> A system to do this shouldn't be too difficult to design. A game to take
> advantage of it may be a little harder, but there's lots of game-writing
> talent present around here...

And that was another question I had. How many people would write games for
such a system if it was available?

Cheers,
ChrisH


Chris Hadgis

unread,
Sep 10, 2001, 8:54:52 PM9/10/01
to

"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message

> On Mon, 10 Sep 2001 16:00:05 -0500, "David A. Cornelson"
> <dcorn...@placet.com> wrote:

> >I'd even note that some characters would see things that others
> >wouldn't...like say one PC is a rabbit and can fit into a rabbit hole,
> >while another is a monkey that can't fit into the hole, but can climb
> >trees. Lots of world-view possibilities with multiple-PC IF...
> >
> >Jarb
> >
> Sorry, maybe I am missing something, but how would this be different
> from a MUD? What you describe *is* a MUD!

There is a fine line between multi-player IF and a MUD. I would go as far
as to say there is a considerable region where they overlap. This is where
the system designer and the author would have to sit down and decide who
does what, and if certain things should be done at all.

Cheers,
ChrisH

Chris Hadgis

unread,
Sep 10, 2001, 9:05:37 PM9/10/01
to

"David A. Cornelson" <dcorn...@placet.com> wrote

> I cannot program a mud in Inform and I know of no mud with an English
> parser that I can program against, and a mud has no way that I know of to
> disambiguate sentences, and on and on...the features required for an IF
> development environment aren't necessarily excluded from a mud platform,
> but would it be easier to add these things to a mud, or would it be easier
> to create an inform/z-machine type client/server environment that does all
> the things we want to do in IF coding?

I propose it would be easier to add the MUD features to an IF authoring
system. After all, the end result I am after is an IF game where there can
be more than one player. I don't care about all of the features a MUD may
have. I only want to borrow a few of them.

Cheers,
ChrisH

Chris Hadgis

unread,
Sep 10, 2001, 8:46:51 PM9/10/01
to

"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message

> It is basically chaos theorie. The more people take part in the game,
> the more possible actions can be chosen. This way, any form of
> rigorous scripting stands in the way of the players individuality. The
> more people are playing, the more people are inclined to interact and
> make up their own story (even if they don't intend to)

I understand this about MUDs. I was not suggesting multi-player IF be a MUD,
but perhaps take some of the elements from MUDs and incorporate them into IF.

When I was thinking about this originally, I wanted a player to take on the
role of the NPC to provide the intelligence for the NPC, which, as can be
seen from the AI thread, is no easy task. An NPC typically has a minor role.
The NPC may only have a limited verb list. I'm not sure about that because I
have not thought that far ahead yet.

If there is more than one NPC and they do not interact, perhaps the player
can take on the role of more than one NPC.

I did not intend for there to be a hero and sidekick or for a party to go
adventuring (at least not initially). As you and others have pointed out,
this brings with it its own set of problems.

Cheers,
ChrisH


Chip Hayes

unread,
Sep 10, 2001, 9:04:18 PM9/10/01
to

"David A. Cornelson" wrote:

> I cannot program a mud in Inform and I know of no mud with an english parser
> that I can program against, and a mud has no way that I know of to
> disambiguate sentences, and on and on...the features required for an IF
> development environment aren't necessarily excluded from a mud platform, but
> would it be easier to add these things to a mud or would it be easier to
> create an inform/z-machine type client/server environment that does all the
> things we want to do in IF coding?
>
> Jarb

Actually, the creator of the original MUD, Richard Bartle, developed a
low-level language (called muddle), which can do some pretty amazing
things, all along the lines of what everyone has been discussing here.
His MUD2, which is still up and running in various incarnations, can
either be programmed from within using an internal high level language
(BLANKing, much like the other muds that have since spawned from his,
only quite detailed in its abilities) or, if you are able to get a hold
of it (the license is rather expensive), using the muddle system itself,
which is extremely robust.

Parser disambiguation is as detailed as you want; very hearty object
creation and manipulation are available; and there's a plethora of
multi-user features that do all of what has been discussed here and more.

MUD2 the game is basically an old-fashioned treasure hunt, with a very
loose story (VERY loose) surrounding a collection of puzzles and scoring
opportunities. Some of those can be done alone, others require up to
four players simultaneously to succeed. NPC's are not very robust on
the AI front, but that is not because the language doesn't allow it...
just that for the MUD2 game itself, it isn't absolutely needed.
Especially when an immortal player can take over control of the NPC and
do with it as he/she pleases.

Muddle as a language is not the most user-friendly, though. I had a
chance a few years back to do some programming in it and found it to
have a steep learning curve. Bartle wanted to develop a language that only
used symbols. So there are an awful lot of >.< and $-> type keywords
instead of "print"s and "objectloop"s.

Bartle's company, MUSE, has a web site at www.mud.co.uk for anyone
interested in taking a peek.

Chip

Gadget

unread,
Sep 11, 2001, 6:41:39 AM9/11/01
to
On Mon, 10 Sep 2001 18:36:00 -0500, "David A. Cornelson"
<dcorn...@placet.com> wrote:

>"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message
>news:ubbqptosb1r9etdrv...@4ax.com...
>> On Mon, 10 Sep 2001 16:00:05 -0500, "David A. Cornelson"
>> <dcorn...@placet.com> wrote:
>>
>> >
>> Sorry, maybe I am missing something, but how would this be different
>> from a MUD? What you describe *is* a MUD!
>>
>(sorry - my news reader sent accidentally - this is the _complete_ post
>
>I set you up...
>
>I cannot program a mud in Inform and I know of no mud with an English parser
>that I can program against, and a mud has no way that I know of to
>disambiguate sentences, and on and on...the features required for an IF
>development environment aren't necessarily excluded from a mud platform, but
>would it be easier to add these things to a mud, or would it be easier to
>create an inform/z-machine type client/server environment that does all the
>things we want to do in IF coding?
>
>Jarb
>
>
>

I could not disagree more. A lot of Muds, MOO's and MUSH-es have a
programming language built in which can be used to customize the
parser. As has been pointed out, the original Mud used such a
language.

And my actual point in this thread is: Mud evolved from IF. Both have
the same roots. Both have undergone separate development, but both
also provide for a very specific need:

IF is story driven.
Mud is player driven.

Making a Mud more story driven would make it a story-driven Mud.
Adding multiplayer to IF creates a Mud.
Removing players from IF and MUD altogether creates a story.

That is just what the words have come to mean.

It's not that I think there should be no experiments or whatever, I
just feel that it would be reinventing the wheel to just add more
players to an IF.

Mark Hulme-Jones

unread,
Sep 11, 2001, 6:57:25 AM9/11/01
to
In article <9nj51v$4pmi$1...@murrow.murrow.it.wsu.edu>, Gabe McKean wrote:
>Instead of a MUD-type system where anyone can join or leave at will, why not
>use a more limited system, at least to start out with? Have a game designed
>for a small, fixed number of people (somewhere in the range of 2-5, I would
>guess), and don't let the game start until everyone has connected.

Although it's a graphical game, I believe that this is the approach
being taken by Bioware's forthcoming Neverwinter Nights. The whole
thing is basically just a conventional RPG with nice graphics. One
person (the GM) is in charge of leading the players through the
adventure. It's an interesting solution to the kinds of problems thrown
up by games like UO and Everquest.

[...]

>A system to do this shouldn't be too difficult to design. A game to take
>advantage of it may be a little harder, but there's lots of game-writing
>talent present around here...

NN seems to be taking forever, but assuming you limit yourself to text
only then the task would be easier.

--
Mark Hulme-Jones <mjo...@frottage.org>

David A. Cornelson

unread,
Sep 11, 2001, 8:36:35 AM9/11/01
to
"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message
news:77qrptk5veic4hvoi...@4ax.com...

> On Mon, 10 Sep 2001 18:36:00 -0500, "David A. Cornelson"
> <dcorn...@placet.com> wrote:
>
> >"Gadget" <gad...@SPAMBLOCKhaha.demon.nl> wrote in message
> >news:ubbqptosb1r9etdrv...@4ax.com...
> >> On Mon, 10 Sep 2001 16:00:05 -0500, "David A. Cornelson"
> >> <dcorn...@placet.com> wrote:
> >>
> >> >
> I could not disagree more. A lot of Muds, MOO's and MUSH-es have a
> programming language built in which can be used to customize the

I am not unfamiliar with mud programming. I have my own at plover.net:4096.
I'm aware of the abilities to create a world and characters, but I think
the point I'm trying to make is that I have a particular vision of how
multi-IF would function, and a mud, I think, would do some things that, to
me, are not really IF related. I note: I'm a traditionalist...I really only
like white text on a blue background.

So the thing I'd be looking for would be identical to running winfrotz, only
with some synchronization questions at the beginning, like "Which character
are you playing?"

***

1) Jonathon: A shy, enigmatic young man with a zeal for quiet disruption. He
has a crush on Amy.
2) Amy: A feminine, argumentative young woman with a penchant for leading
people whether they like it or not.
3) Horace: A bully. Loud, obnoxious, foul-mouthed and very very big. He also
has a crush on Amy.

So far, Jonathon has been chosen. You can still choose Amy or Horace: 3

"Alright!" you scream at the top of your lungs, "I'm ready to kick some
teenage ass! I'm gonna love high school!"

***

The thing about coding _everything_ as we do with the current IF languages
is that we direct scenes. We direct a sort of 'dance' that moves characters
from place to place and from scene to scene, and we do these things
intentionally, to raise the drama, to add to the suspense. We control a
story.

The difference here is that instead of one player character, I've had to
code three. I've had to code all of the interactions between these three
players, and it's really only subtly different than a single-player IF game.
Things have to be reported to each character based on POV. As the author, I
want complete control over the whole POV reporting task.
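
To show the kind of control I mean - this is deliberately not Inform, just
an illustrative Python sketch, and the names in it (report, can_see, the
room layout) are all invented - the author writes one line of text per
point of view and the engine merely routes it:

  # Each event carries author-written text per point of view; the
  # engine only decides who receives which line.
  PLAYERS = {"Jonathon": "hallway", "Amy": "hallway", "Horace": "gym"}

  def can_see(observer_room, event_room):
      # Toy visibility rule: you see what happens in your own room.
      return observer_room == event_room

  def report(actor, event_room, pov_text):
      for name, room in PLAYERS.items():
          if name == actor:
              print(f"[{name}] {pov_text['self']}")
          elif can_see(room, event_room):
              print(f"[{name}] {pov_text['other'].format(actor=actor)}")
          # Players who can't see the event get nothing at all.

  report("Amy", "hallway", {
      "self": "You slam your locker shut.",
      "other": "{actor} slams her locker shut.",
  })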

Now we come to the coding methods. Your argument is that I can do these
things with a mud language. But to me, that's like using a tablesaw to build
a model airplane. A balsawood model airplane is a small, delicate, sometimes
extremely complex architecture that needs small knives, small brushes, and a
steady hand. Not a tablesaw.

Building an IF story is a very complex task and many arguments over the
years have centered on which language is the best. I think that there is
almost no competition where language type is concerned. Inform, TADS, and
the other IF-specific languages always win out over non-IF languages like
C++, Basic, and whatever else people come up with. Python seems to be making
some inroads, but Inform and TADS are still clearly the leaders.

Why would we try to extend IF into the mud world when we're not really doing
anything there to begin with? To my mind, it would be a far more natural
progression to see someone move the Inform or TADS language to a multi-PC
architecture.

Jarb


Matthew F Funke

unread,
Sep 11, 2001, 9:58:51 AM9/11/01
to
>> All you need to have is an emergent phenomenon
>> that _claims_ it has (or is) consciousness.
>
>Well, sure, perhaps other consciousnesses are illusions - but I can't see
>how one could believe his own consciousness to be an illusion. How could
>an illusion exist if there's no consciousness that it can fool?

I appreciate this point. What, then, would be your answers to the
following questions?:
* Can a non-sentient be fooled by, say, an optical illusion? (I
think of dogs barking at their reflections in the mirror... and as far as
I know, dogs are not conscious.)
* Is it possible that consciousness is an illusion similar to ones
that fool our senses? If not, how is it fundamentally different?
Thanks for your time.
--
-- Lurking in the Shadows,
"Nikolai" (m...@hopper.unh.edu)
Goth.Code 4.0 zUibba3baWabaaaaLbaa75KxARUSvacnmeiybZan3FmaH17T1aGbZueaqiq
5eedO#di1hbjrpk6!RpsEbacRZUFaaaicaeusnh

Richard Bos

unread,
Sep 11, 2001, 12:19:13 PM9/11/01
to
ad...@fsf.net (Adam Thornton) wrote:

> So perhaps I should have said "The Socratic/Platonic Cabal" above.

Yup. But then there's the whole pre-Socratic lot... what was that guy
called again who claimed that "panta rhei"? Not Empedokles, was he?

Richard

Adam Thornton

unread,
Sep 11, 2001, 12:42:45 PM9/11/01
to
In article <3b9e372a...@news.worldonline.nl>,

The pre-Socratics are interesting, but not, I think, terribly
influential.

That said, I got your same river RIGHT HERE, buddy.

Adam

Bryce

unread,
Sep 11, 2001, 12:49:49 PM9/11/01
to
Aris Katsaris:

> >Well, sure, perhaps other consciousnesses are illusions
> > - but I can't see how one could believe his own
> > consciousness to be an illusion. How could an illusion
> > exist if there's no consciousness that it can fool?

Matthew F "Nikolai" Funke:


> I appreciate this point. What, then, would be your
> answers to the following questions?:
> * Can a non-sentient be fooled by, say, an optical
> illusion? (I think of dogs barking at their reflections in
> the mirror... and as far as I know, dogs are not
> conscious.)

Dogs are conscious of their immediate surroundings. All animals analyze their
sensory input to an extent, in order to distinguish food from non-food,
predators from trees, and potential mates from competitors. This is
consciousness at its most fundamental, "conscious as opposed to unconscious."

On the other hand, there's also "conscious as opposed to subconscious," which I
would say does not apply to a dog. This would include "intelligent mode"
thought, or thinking about thinking.

> * Is it possible that consciousness is an illusion
> similar to ones that fool our senses? If not, how is it
> fundamentally different?

I think Aris was saying that consciousness is what makes illusion possible.
Being conscious means that you apply some kind of analysis to the things that
you perceive; you classify them and look for patterns, and form expectations
about future stimuli. Illusion happens when those expectations turn out to be
wrong. In order for that to happen, you have to have the expectations in the
first place.

Bryce
-----
>RING BELL
You shake the handbell vigorously.

The dog leaps from the corner where it was napping and darts across the room.
It stands panting before the dog bowl, wagging its tail and drooling.

Matthew F Funke

unread,
Sep 11, 2001, 1:03:54 PM9/11/01
to
>Dogs are conscious of their immediate surroundings. All animals analyze their
>sensory input to an extent, in order to distinguish food from non-food,
>predators from trees, and potential mates from competitors. This is
>consciousness at its most fundamental, "conscious as opposed to unconscious."

So, then, for an illusion to occur, there has to be something to
illusify. :) I was stuck in thinking about
higher-mode-of-thinking-as-consciousness. Is it possible, then, that that
higher mode is an illusion akin to sensory illusions?

Bryce

unread,
Sep 11, 2001, 2:17:32 PM9/11/01
to
Matthew F "Nikolai" Funke:

> Is it possible, then, that that higher mode is an illusion
> akin to sensory illusions?

Do you mean "could the higher mode in general be an illusion," as in "does all
thought ultimately boil down to just conditioning and stimulus-response?" If
so, I have no idea. I suppose it could. In that case, it would follow that my
whole mind could be represented as one enormous switch statement.

On the other hand, if you're speaking about the conscious self or first-person
identity, that by definition can't be an illusion; you are what you think.

Bryce
-----
>DISBELIEVE MYSELF
Despite your best efforts, your identity clings to you like a lamprey.
