
Erasmatron-generated story sample


Jorn Barger

Jan 12, 1998

For the last few months I've been keeping mum about Chris Crawford's
Erasmatron, because I was doing some consulting for him, and felt a
little conflict of interest (though he's continually assured me I'm free
to say whatever I like).

But now the consulting is done-- and Chris has added a 'save text'
button to the Erasmaganza story reader-- so here's a sample log of a
very brief interaction with the game. As far as I know, this is the
most sophisticated storytext yet generated anywhere, by a considerable
margin:

> "Shattertown Sky, v.1.05" - Copyright 1997 Laura J. Mixon.
>
> Sky woke up. Sky rolled over and went back to sleep. Sky reawakened a
>while later. Sky got dressed quietly. Mara woke up. "hello, Boss Lady,"
>Sky said glumly to Mara.

This is a summary of my interactions-- the game itself has more detail.

> "Get a life," Mara snapped. "For Christ's sake."
> Mara gave Sky her usual warnings. Sky left the Scoop offices. Sky
>continued on her paper route. Sky passed through the Floes on her paper
>delivery route.

Sky is supposed to be a gender-neutral name.

> Later that day...

The game has a clock that shows the passage of time.

> Sky happened to come across Doc. Doc sat on a rise, staring out at some
>old ruins and looking thoughtful. Doc was a Head. Sky gave a copy of
>the Scoop to Doc. Doc appeared distracted and didn't respond.
> Sky asked Doc if he was OK.
> "Did I ever tell you," Doc asked Sky, "what my mama said about the
>Shattering? Why it happened, I mean." "Go on," Sky said.

This allows repeat players to skip the exposition.

> Doc told Sky what his mama said about the Shattering - how she thought it
>was caused by the heat of everyone's intolerance and hatred for each
>other. And he told her he didn't hold out a lot of hope for us
>survivors any more, because we're all just as bad as the old ones were.
>"There, there," Sky said quietly, touching Doc's hand. "There, there."
> "Thanks," Doc said.
> Sky blushed and told Doc, "Aw, gee..."

This opens a path for Doc to flirt with Sky. He's not in the mood now,
though.

> "I'd better be going," Doc said.
>
> The next day...
> Cat happened across Rocky, who was lying supine with a broken, bleeding
>leg.. "Are you all right?" Cat asked Rocky.

This is happening in Sky's absence, and will provide fuel for gossiping.

> The next day...
> Norm sought out Sky. "Brute and Faith are fighting!" he said. "Please
>talk to them, would you?" "I'll take care of it," Sky assured Norm.
>
> Later that day...
> Faith died.
> Sky saw something and headed over to it. Faith was lying prone. She
>was either unconscious or dead.
>
> Later that day...
> Sky bent over the still form of Faith, and then shook her head grimly at
>Doc. "She is dead, Jim," she said.
> "Oh, jeez!" Doc said. "That's terrible, Sky. Tell me it isn't true."
> "I wish I knew what to think. It's horrible, isn't it?" Sky said. "To
>think that the other day Faith was up and walking around, and the next
>minute someone turned her into ground round." She shuddered. "Times
>like this, I wish I lived on a desert island all alone. Away from the
>madness."
> "Thanks for your insights," Doc told Sky.

This sets up the murder mystery for the rest of the game.


The Win95 version is due out in a month or so; the Mac version is
downloadable free now. It requires a PPC, but a 68k version will be
created if there's enough demand.

See <URL:http://www.mcs.net/~jorn/html/ai/crawford.html> for links and
commentary.


j

Matthew Amster-Burton

Jan 12, 1998

jo...@mcs.com (Jorn Barger) wrote:

>For the last few months I've been keeping mum about Chris Crawford's
>Erasmatron, because I was doing some consulting for him, and felt a
>little conflict of interest (though he's continually assured me I'm free
>to say whatever I like).

>See <URL:http://www.mcs.net/~jorn/html/ai/crawford.html> for links and
>commentary.

Or, for another take:

http://www.xyzzynews.com/xyzzy.14f.html

Matthew

Julian Arnold

Jan 12, 1998

In article <1d2qhe7.nv7...@jorn.pr.mcs.net>, Jorn Barger
<URL:mailto:jo...@mcs.com> wrote:
> But now the consulting is done-- and Chris has added a 'save text'
> button to the Erasmaganza story reader-- so here's a sample log of a
> very brief interaction with the game.

[...transcript snipped...]

We can only pray that Erasmapronouns are introduced soon...

Seriously, a transcript such as this tells me nothing. It could easily
be the output from a standard IF game, or indeed a non-interactive piece
of writing.

Now, a transcript which included the *input* and showed how it related
to the output might be more impressive. Or maybe not if Neil deMause is
to be believed.

> As far as I know, this is the
> most sophisticated storytext yet generated anywhere, by a considerable
> margin:

Um, in what way? Is the thing actually generating sentences on the fly,
or just spitting out pre-formed chunks of text in response to
anticipated game states (like normal IF)?

Jools
--
"For small erections may be finished by their first architects; grand
ones, true ones, ever leave the copestone to posterity. God keep me from
ever completing anything." -- Herman Melville, "Moby Dick"


Jorn Barger

Jan 12, 1998

Matthew Amster-Burton <mam...@u.washington.edu> wrote:
[...]
> >See <URL:http://www.mcs.net/~jorn/html/ai/crawford.html> for links and
> >commentary.
>
> Or, for another take:
>
> http://www.xyzzynews.com/xyzzy.14f.html

Thanks for the tip... not.

The Shattertown demo, I'll grant, is only gradually evolving towards
ready-for-primetime stature, but this 'review' was totally pinheaded.

Don't expect a game, yet. It's still a research project.

j

Thomas Aaron Insel

Jan 13, 1998

jo...@mcs.com (Jorn Barger) writes:

> Matthew Amster-Burton <mam...@u.washington.edu> wrote:
> > Or, for another take:
> > http://www.xyzzynews.com/xyzzy.14f.html

> Thanks for the tip... not.

> The Shattertown demo, I'll grant, is only gradually evolving towards
> ready-for-primetime stature, but this 'review' was totally pinheaded.

> Don't expect a game, yet. It's still a research project.

I know I'm not a regular member of this community, but I feel I should
defend Neil deMause's review. For a product that's being advertised
for sale, the Erasmatron is quite lacking in polish and value. It's
possible that an interesting piece of art could be made with it, but I
don't expect to see one.

The ``best'' available storyworld, the product of months of work, is
just plain boring. Perhaps it's my own stupidity which keeps me from
solving the mystery, and I'm willing to overlook obvious bugs, such
as the murdered character (in an obvious non-Haunting encounter)
asking me if I've solved the murder yet. However, what's left is not
very exciting -- watching the clock spin, seducing other characters
to come pick berries, watching endless streams of characters repeat
``I overheard Jed refuse May's chicken soup,'' and fondling a useless
inventory menu.

I know that the author carries a good deal of credibility, and it's
quite possible that the Erasmatron will develop into something good,
but most people don't sell their unfinished ``research projects'' for
two hundred dollars.

Tom
--
Thomas Insel (tin...@jaka.ece.uiuc.edu)
"If you think the United States has stood still, who built the largest
shopping center in the world?" -- Richard M. Nixon

Stephen Granade

Jan 13, 1998

On Mon, 12 Jan 1998, Jorn Barger wrote:

> Matthew Amster-Burton <mam...@u.washington.edu> wrote:
> [...]
> > >See <URL:http://www.mcs.net/~jorn/html/ai/crawford.html> for links and
> > >commentary.
> >

> > Or, for another take:
> >
> > http://www.xyzzynews.com/xyzzy.14f.html
>
> Thanks for the tip... not.
>
> The Shattertown demo, I'll grant, is only gradually evolving towards
> ready-for-primetime stature, but this 'review' was totally pinheaded.

How so?

> Don't expect a game, yet. It's still a research project.

In that case it should be advertised as such, and not as a polished,
finished authoring system for which I should be willing to shell out
money.

Stephen

--
Stephen Granade | Interested in adventure games?
sgra...@phy.duke.edu | Check out
Duke University, Physics Dept | http://interactfiction.miningco.com


Jorn Barger

Jan 13, 1998

Julian Arnold <jo...@arnod.demon.co.uk> wrote, quoting me:

> > As far as I know, this is the
> > most sophisticated storytext yet generated anywhere, by a considerable
> > margin:
>
> Um, in what way? Is the thing actually generating sentences on the fly,
> or just spitting out pre-formed chunks of text in response to
> anticipated game states (like normal IF)?

See <URL:http://www.mcs.net/~jorn/html/ai/crawford.html> for a quick
overview of the technology. Basically (a rough code sketch follows this list):

- Every event corresponds to a 'verb' object
- The set of verbs is shared equally by all characters
- All the author-programming involves writing code for verbs
- for NPCs, this programming determines which response to a verb will be
chosen
- The human player will be offered the identical set of responses
- All of Sky's actions in the sample were chosen by the player (me)
- (Though some of these were single-option forced choices)
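
Purely as illustration -- this is my own toy pseudo-code, not the
Erasmatron's internals, and every class and name below is invented --
the list above boils down to something like:

# Toy sketch only; not the Erasmatron's actual data structures.

class World:
    def __init__(self, response_weights):
        # response_weights maps a response name to trait weightings,
        # e.g. {"console": {"kindness": 1.0}, "snap_back": {"temper": 1.0}}
        self.response_weights = response_weights

class Character:
    def __init__(self, name, traits):
        self.name = name
        self.traits = traits          # e.g. {"kindness": 0.7, "temper": 0.2}

    def inclination(self, response, world):
        weights = world.response_weights.get(response, {})
        return sum(self.traits.get(t, 0.0) * w for t, w in weights.items())

class Verb:
    def __init__(self, name, responses):
        self.name = name
        self.responses = responses    # the same reaction list for everyone

def react(actor, verb, world, is_player=False):
    if is_player:
        # The human player is offered the identical menu of responses.
        for i, r in enumerate(verb.responses):
            print(i, r)
        return verb.responses[int(input("choice> "))]
    # An NPC picks whichever response its author-written scoring likes best.
    return max(verb.responses, key=lambda r: actor.inclination(r, world))

The thing to notice is that the NPC branch and the player branch draw on
the identical response list; only the selection mechanism differs.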

In my opinion, this technology is better adapted to alife/ virtual
worlds than to goal-driven storytelling like murder mysteries, but Chris
understands it better than me and he doesn't seem very interested in the
alife aspect.


j

Jorn Barger

Jan 13, 1998

On raif, Thomas Aaron Insel <tin...@jaka.ece.uiuc.edu> wrote, quoting
me:

> > Don't expect a game, yet. It's still a research project.
>
> I know I'm not a regular member of this community, but I feel I should
> defend Neil deMause's review. For a product that's being advertised
> for sale, the Erasmatron is quite lacking in polish and value. It's
> possible that an interesting piece of art could be made with it, but I
> don't expect to see one.

Now, be clear about the 'Tron vs the 'Ganza (vs Shattertown):

- The Erasmatron is the $200 tool for building worlds, and is highly
polished

- The Erasmaganza is the free story reader, and has some bugs

- Shattertown is the free sample storyworld, and has serious problems
still

> The ``best'' available storyworld, the product of months of work is
> just plain boring. Perhaps it's my own stupidity which keeps me from
> solving the mystery, and I'm willing to overlook obvious bugs, such
> as the murdered character (in an obvious non-Haunting encounter)
> asking me if I've solved the murder yet. However, what's left is not
> very exciting -- watching the clock spin, seducing other characters
> to come pick berries, watching endless streams of characters repeat
> ``I overheard Jed refuse May's chicken soup,'' and fondling a useless
> inventory menu.

One of my recommendations as their consultant was that they had to
'manage expectations' carefully, so that people wouldn't be put off by
this sort of problem. As I see it, Shattertown is a teaching tool that
points the way towards the first generation of playable storyworlds.

Writing storyworlds requires an absolutely *new* set of skills that you
have to be prepared to spend not months but *years* refining, imho.
Without the 'Tron you can't even begin, though.

> I know that the author carries a good deal of credibility, and it's
> quite possible that the Erasmatron will develop into something good,
> most people don't sell their unfinished ``research projects'' for
> two hundred dollars.

Again, the 'Tron is quite polished, and provides a huge value to
researchers. The Shattertown storyworld is unfinished... but it's free.

j

Jorn Barger

Jan 13, 1998

Neil K. <fake...@anti-spam.address> wrote, quoting me:

> > The Shattertown demo, I'll grant, is only gradually evolving towards
> > ready-for-primetime stature, but this 'review' was totally pinheaded.
>
> Why was it totally pinheaded?

Zero attempt to understand the new technology. Total dismissal of the
whole technology, based on zero understanding.

(Were you the author?)


Andrew Plotkin

Jan 13, 1998

Jorn Barger (jo...@mcs.com) wrote:
> - The Erasmatron is the $200 tool for building worlds, and is highly
> polished

> - The Erasmaganza is the free story reader, and has some bugs

> - Shattertown is the free sample storyworld, and has serious problems
> still

> One of my recommendations as their consultant was that they had to
> 'manage expectations' carefully, so that people wouldn't be put off by
> this sort of problem. As I see it, Shattertown is a teaching tool, that
> points the way towards the first generation of playable storyworlds.

> Writing storyworlds requires an absolutely *new* set of skills, that you
> have to be prepared to spend not months but *years* refining, imho.
> Without the 'Tron you can't even begin, though.

Well, yes and yes and yes and yes. And (...counts on fingers) yes.

But as a player, I'm not impressed, and it's the totality of the thing
I'm not impressed by. It's like the competition game judging; you can
explain reasons why your game didn't work right, and I'll nod and say
"Yup, that's why you got a low score."

If Er. storyworlds have potential as an art form, it's not being
demonstrated to me. This may be my own closed-mindedness and my own
problem. But since I'm also a potential author, and I'm not interested in
learning the tool, it's Chris Crawford's problem too.

When another storyworld comes out, I'll try it again.

--Z

--

"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the
borogoves..."

Andrew Plotkin

Jan 13, 1998

Thomas Aaron Insel (tin...@jaka.ece.uiuc.edu) wrote:

> > The Shattertown demo, I'll grant, is only gradually evolving towards
> > ready-for-primetime stature, but this 'review' was totally pinheaded.

> > Don't expect a game, yet. It's still a research project.

> I know I'm not a regular member of this community, but I feel I should
> defend Neil deMause's review. For a product that's being advertised
> for sale, the Erasmatron is quite lacking in polish and value. It's
> possible that an interesting piece of art could be made with it, but I
> don't expect to see one.

I tried it. I very quickly got something like "Sky attacks Sky! Sky is
hurt!". I also saw "nothing" being passed around like an object.

My conclusion was, it's not interesting and it doesn't work.

I realize that these bugs are the result of very deep and complicated
simulation -- as opposed to IF game code, which is practically the
definition of shallow programming. But mimesis is falling out the
fifth-story window, here, and being riddled with bullets on the way down.
And Chris Crawford doesn't seem to be heading in a direction which will
produce something I'll enjoy.

Daryl McCullough

Jan 13, 1998

jo...@mcs.com (Jorn Barger) says...

>Now, be clear about the 'Tron vs the 'Ganza (vs Shattertown):
>

>- The Erasmatron is the $200 tool for building worlds, and is highly
>polished
>
>- The Erasmaganza is the free story reader, and has some bugs
>
>- Shattertown is the free sample storyworld, and has serious problems
>still

I think I may be repeating what has already been said, but how
are we supposed to judge the Erasmatron other than by what it
produces?

If everybody loved Shattertown, then people would be clamoring
for a tool to produce Shattertown-like games. But if Shattertown
is lousy, what incentive is there for anyone to use the Erasmatron?

In this newsgroup, most people became interested in tools for
interactive fiction (such as Graham Nelson's Inform)
through their experience in gaming, in particular the
Infocom games. If Chris wants people to use the Erasmatron,
he needs to pique their interest by giving us an example of
the great games that could be built using it. In my opinion,
it's kind of backwards to build a great tool, and *then*
hope somebody will come up with a nifty use for it.

Daryl McCullough
CoGenTex, Inc.
Ithaca, NY

Jorn Barger

Jan 13, 1998

On raif, Daryl McCullough <da...@cogentex.com> wrote:
> [...] If Chris wants people to use the Erasmatron,
> he needs to pique their interest by giving us an example of
> the great games that could be built using it.

This is exactly right. And my posts here are attempts to find people
who want to try to write that game, not to find readers for Shattertown.

(If you notice, Chris is doing almost no publicity himself yet-- even
the Wired.com article was fallout from _my_ postings.)

Shattertown has improved a *lot* in the last few releases, since the
Xyzzy review even, and may still become a great game. Chris's Morte
D'Arthur will come out eventually-- but I have no idea where it falls in
Chris's list of priorities. Obviously, it was more efficient for him to
finish the tools before finishing the game, and he hired Laura to write
Shattertown with this in mind.

I'm looking at doing various experiments, myself, but I expect them to
be more like alife than stories.


My challenge to raif readers is: what do *you* think the next generation
of IF will look like, if not like the E-tron???


j

Andrew Plotkin

Jan 13, 1998

Jorn Barger (jo...@mcs.com) wrote:

> My challenge to raif readers is: what do *you* think the next generation
> of IF will look like, if not like the E-tron???

To me, this is like asking "What will the next generation of the novel
look like?"

It'll look like the current generation, but with different words between
the covers.

If the E-tron is a successful new thing, it won't be the next generation
of "classic Colossal Caves" IF. It'll be a new thing.

Damien Neil

Jan 13, 1998

On Tue, 13 Jan 1998 13:30:18 -0500, Jorn Barger <jo...@mcs.com> wrote:
>My challenge to raif readers is: what do *you* think the next generation
>of IF will look like, if not like the E-tron???

Rather like the current one, but more so. The commercial market will
spend increasing amounts of money on special effects (i.e., pretty
graphics). The regulars of r.a.i-f will turn out text adventures --
most will be junk, some will be brilliant.

We've seen games in the past that take a simulationist approach to
characters. While Chris's work is clearly far ahead of what has been
done before, it doesn't appear to be fundamentally different in character.
I'm certain you can get some interesting results out of it -- but I don't
feel it will produce a revolution. Computer controlled personalities
are currently (and will remain for the foreseeable future) a bundle of
variables which circumstances tweak. They are a cheap copy of humanity,
and show this fact too glaringly.

Having said that, I'd be delighted if someone could come along and prove
me wrong. I'm just not betting on it.

Are you and/or Chris aware of Selmer Bringsjord? He's a philosophy
professor at Rensselaer Polytechnic Institute, and has been working on
automated story generation for some time -- some of his work sounds
quite a bit like the Erasmatron. Last I heard, he was working on
modelling betrayal. (Fascinating word, betrayal. Such a slippery one
to define -- every time you get close, it twists off in another
direction.)

- Damien

Graham Nelson

Jan 13, 1998

In article <1d2sqa9.1tr...@jorn.pr.mcs.net>, Jorn Barger
<URL:mailto:jo...@mcs.com> wrote:
>
> My challenge to raif readers is: what do *you* think the next generation
> of IF will look like, if not like the E-tron???

Why do you think there will be a "next generation of IF", in any
sense implying a transformation of the form? IF is just IF: a
different kind of IF would be something else, in the same way that
IF is a different kind of ordinary fiction, and exists perfectly
happily alongside. There's no sense in which one replaces the
other.

--
Graham Nelson | gra...@gnelson.demon.co.uk | Oxford, United Kingdom


Julian Arnold

Jan 13, 1998

In article <1d2sqa9.1tr...@jorn.pr.mcs.net>, Jorn Barger
<URL:mailto:jo...@mcs.com> wrote:
> [...]

> My challenge to raif readers is: what do *you* think the next generation
> of IF will look like, if not like the E-tron???

First priorities: Next generation IF (what are we up to now, 3rd or
4th?) will have a more powerful parser[1]; acceptable input will be even
closer to natural language than it is now. Great leaps forward will be
made in character interaction; both NPCs and PCs will be able to
converse with each other in a realistic way (not just "ask X about Y"
and variants, but real questions) and NPCs will have some sort of
knowledge representation; NPCs will be able to perform any action which
the PC can, without the programmer having to write lots of special case
code.

I only expect to see one of these things implemented in any serious and
useful way (we can already do it, and it's a mystery to me why it is not
done). Other things can be fudged and bluffed with existing tools, to a
point.

Jools

[1] I'll check out the URLs that you've mentioned this evening, but as I
understand it from the XYZZY review, the Erasma[tron/ganza] doesn't take
parsed input at all, but rather you select actions from a very limited
menu (like CYOA).

Er, no, the parsed input is everything.

Nate

Jan 14, 1998

hmm...having looked at the site, I think I agree with your statement,
as far as it goes. However, I believe that making a good-quality IF
game (as opposed to an interesting virtual world) this way is simply
beyond our current understanding. I think that within a few years,
the technology will be there (CyberLife's president wants a-life as
complex as a human within twenty years, and he may get there) to
create a very realistic storyworld; however, I'm not sure it's within
human capability to project the NPC actions well enough to make an
interesting game.

A key part of making IF interesting is setting up player expectations,
then allowing the player to fulfill those expectations. If the NPCs
are actually thinking, they may not decide to go along with the
author. The Norns in Creatures are far simpler than humans, but the
Creatures website contains many examples of them behaving in totally
unexpected ways.

If the E-tron, or anything like it, is to produce good IF, we will
need to completely redefine NPC interaction and plot structure in IF.
That said, the way the E-tron conceives reality could allow for
unparalleled realism--someday.

Carl Burke

Jan 14, 1998

Damien Neil wrote:
...

> Are you and/or Chris aware of Selmer Bringsjord? He's a philosophy
> professor at Rensselaer Polytechnic Institute, and has been working on
> automated story generation for some time -- some of his work sounds
> quite a bit like the Erasmatron. ...

Is that part of the Oz project? That whole project sounds
similar to the Erasmatron, or at least it overlaps it.

Personally, when the Erasmatron comes out for Win95 I'll
probably buy it, just to be able to experiment with it.
It looks like verb construction is overly complicated,
though; it seems like you could go a long way by having
templates based on Polti's 36 plots and Murray's "needs".
In fact, that's what I'll probably play with once I have
the 'Tron in hand, some kind of front end to build basic
verb structures for character interactions. The 'Tron may
not be the perfect engine, it's probably a long way from that,
but at least it seems to have some useful personality
maintenance built in. Easier and quicker than building
my own engine, anyway.
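
The front end I have in mind might be no fancier than this kind of
thing (my own back-of-the-envelope Python, nothing to do with the
'Tron's real file format; the "Polti" entries are just placeholder
strings):

# Throwaway sketch of a template front end: expand a named dramatic
# situation into stub verb structures an author could then flesh out.
# Hypothetical only -- not the Erasmatron's actual data format.

PLOT_TEMPLATES = {
    # A couple of Polti-style situations, each as (verb, possible reactions).
    "supplication": [("beg_for_help", ["grant_help", "refuse_help"]),
                     ("refuse_help", ["beg_again", "leave_angrily"])],
    "vengeance":    [("insult", ["forgive", "swear_revenge"]),
                     ("swear_revenge", ["attack", "plot_quietly"])],
}

def build_verb_stubs(situation, protagonist, antagonist):
    """Return stub verb records for two characters in the given situation."""
    return [{"verb": verb,
             "actor": protagonist,
             "target": antagonist,
             "reactions": reactions}
            for verb, reactions in PLOT_TEMPLATES[situation]]

print(build_verb_stubs("vengeance", "Sky", "Doc"))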

--
--------------------------------------------------
Carl Burke, cbu...@mitre.org -- le nu ko batci mi kei cu zdile
My opinions are mine and mine alone, unless you
agree with them. Then I'll share.
--------------------------------------------------
"hee hee hee....I'm the jolly evil elf!" - Jess Nevins
--------------------------------------------------

Edan Harel

Jan 14, 1998

Julian Arnold <jo...@arnod.demon.co.uk> writes:


>[1] I'll check out the URLs that you've mentioned this evening, but as I
>understand it from the XYZZY review, the Erasma[tron/ganza] doesn't take
>parsed input at all, but rather you select actions from a very limited
>menu (like CYOA).

Yeah, from what I remember of it (I played whatever was the first
example they had). It got terribly boring because there were only
2 or 3 choices, and it only seemed to end up with a fight in the bar
where there weren't even sides. Someone would fight me. Then someone might
side with me, but a few moves down the road, I'd be fighting with the
guy who sided with me.

I wasn't very impressed.

Edan
--
Edan Harel edh...@remus.rutgers.edu McCormick 6201
Research Assistant Math and Comp Sci Major Computer Consultant
USACS Member Math Club Secretary

Jorn Barger

Jan 14, 1998

Carl Burke <cbu...@mitre.org> wrote:
> [...] it seems like you could go a long way by having
> templates based on Polti's 36 plots and Murray's "needs".
> In fact, that's what I'll probably play with once I have
> the 'Tron in hand

Yes, go for it! There's no inheritance hierarchy for verbs, but that
can be added once someone figures out what the top-level verbs should
be. (My fractal thicket approach is at:
<URL:http://www.mcs.net/~jorn/html/ai/thicketfaq.html>)


j

Alan Conroy

Jan 15, 1998

>Why do you think there will be a "next generation of IF", in any
>sense implying a transformation of the form? IF is just IF: a
>different kind of IF would be something else, in the same way that
>IF is a different kind of ordinary fiction, and exists perfectly
>happily alongside. There's no sense in which one replaces the
>other.

Yes, Yes, YES! There may very well be a new generation of adventure
games (a category in which I include IF as a sub-genre), but IF is IF.
If you make any significant changes to it, it will cease to be IF and
become something else. IF will remain alongside the new genre.

To make the point using one example: a common complaint I've heard
about IF is that it does not allow unlimited scope of actions. The
fact is, once you allow that, you cannot allow a canned conclusion to
the storyline. In fact, it doesn't take much for the entire storyline
to become invalid. IF's strength is precisely in this limitation. IF
provides fiction written by someone: setup, plot development, climax,
and conclusion. If you don't follow that, it is no longer IF. I'm in
no way denying the appeal of an open-ended virtual world, but it is,
by its very nature, not IF. You can probably think of other so-called
"improvements" which end-up gutting the very foundation of what IF is.

IF will improve. But it will be evolutionary, not revolutionary.

- Alan Conroy

I wanna be a firetruck when I grow up...

Jeff Hatch

Jan 15, 1998

Graham Nelson wrote:
>
> In article <1d2sqa9.1tr...@jorn.pr.mcs.net>, Jorn Barger
> <URL:mailto:jo...@mcs.com> wrote:
> >
> > My challenge to raif readers is: what do *you* think the next generation
> > of IF will look like, if not like the E-tron???
>
> Why do you think there will be a "next generation of IF", in any
> sense implying a transformation of the form? IF is just IF: a
> different kind of IF would be something else, in the same way that
> IF is a different kind of ordinary fiction, and exists perfectly
> happily alongside. There's no sense in which one replaces the
> other.

People are basically people, but one generation of humans still succeeds
another as the years go by.

I don't think the phrase "next generation of IF" necessarily implies a
drastic change that changes IF into "something else." I certainly would
consider Zork and Trinity to be fundamentally different kinds of games.
I don't think this process of evolutionary change will slow down.

The Erasmatron seems to be a different beast entirely, and I personally
doubt any Erasmatron creations will achieve the quality of modern
"adventure games" for another decade at least.

-Rúmil

Jeff Hatch

Jan 15, 1998

Julian Arnold wrote:

> Jorn Barger wrote:
> > My challenge to raif readers is: what do *you* think the next generation
> > of IF will look like, if not like the E-tron???
>
> First priorities: Next generation IF (what are we up to now, 3rd or
> 4th?) will have a more powerful parser[1]; acceptable input will be even
> closer to natural language than it is now. Great leaps forward will be
> made in character interaction; both NPCs and PCs will be able to
> converse with each other in a realistic way (not just "ask X about Y"
> and variants, but real questions) and NPCs will have some sort of
> knowledge representation; NPCs will be able to perform any action which
> the PC can, without the programmer having to write lots of special case
> code.
>
> I only expect to see one of these things implemented in any serious and
> useful way (we can already do it, and it's a mystery to me why it is not
> done). Other things can be fudged and bluffed with existing tools, to a
> point.
[snip]

I agree, mostly. I'd also add that next generation IF will allow the player
to make a wider variety of choices and win the game. (That is, it will
incorporate some CYOA concepts, without entirely abandoning the focused
plot and puzzle-based storytelling of traditional IF.)

Which one of these things you mention is the one "we can already do"?
Obviously realistic NPC interaction is difficult. But I'm not sure what
you mean by a "more powerful parser," since I've seldom found existing
parsers too weak for my needs, except of course when interacting with
NPCs. What kind of sentences do you envision which would need a better
parser?

The last one seems like a fairly easy unsolved problem. As far as I can
tell, the sentence, "john, go north, then turn on the TV and set the
dial to 14" wouldn't work in any existing language without special code,
but it wouldn't really be terribly hard to allow. It's similar to the
concept of changing who the PC is, really, except that event messages
need to be suppressed when the acting character isn't in the room.
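
Roughly what I have in mind, in throwaway Python rather than any real IF
language (all the names here are mine, not TADS's or Inform's):

# Sketch of ordering an NPC around while suppressing event messages
# the player isn't present to see. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    exits: dict = field(default_factory=dict)    # direction -> Room

@dataclass
class Thing:
    name: str
    location: Room
    on: bool = False

@dataclass
class Actor:
    name: str
    location: Room

def report(player, actor, text):
    # Only narrate what the player could actually witness.
    if actor.location is player.location:
        print(text)

def order_go(player, actor, direction):
    actor.location = actor.location.exits[direction]
    report(player, actor, f"{actor.name} arrives in the {actor.location.name}.")

def order_switch_on(player, actor, device):
    if device.location is actor.location:
        device.on = True
        report(player, actor, f"{actor.name} turns on the {device.name}.")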

-Rúmil

Stephen Granade

Jan 15, 1998

Jorn Barger wrote:

> My challenge to raif readers is: what do *you* think the next generation
> of IF will look like, if not like the E-tron???

I'll give you my thoughts on one possible direction. Bear in mind that
what I'm going to describe is not really "next-gen" IF, any more than TV
was "next-gen" radio.

What I envision is a setup which is a lot like a play, only with you
either a) acting one of the roles, or b) directing one or more of the
characters.

In playing a role, you would step into a character in the story and act it
out. The computer-controlled characters would be able to react to your
actions, yet be of a mind to keep the story going. Ever read Walter
Miller's short story "The Darfsteller"? The play-machine in that story
approaches what I'm thinking of. Half the fun of acting is watching your
partners act and react to what you do; if a computer could begin to match
that kind of malleable response, it could be fun. There's a question of
what the computer would do if you deviated too far from the story;
probably it would de-emphasise your role in the game/play.

In directing a role, the computer would act out the story and you would
have the leeway to change how a character reacts (within limits) and
watch the changes ripple throughout the rest of the story. Or the
computer could set up the scenario, give you the characters, and let you
choose a character's attitudes and responses to see where the story plays
out.

Of course, all of this presupposes computer programs capable of modelling
humans and doing a good job at it.

Jorn Barger

Jan 15, 1998

Jeff Hatch <je...@hatch.net> and many others wrote variations on:
> [...] I don't think the phrase "next generation of IF" necessarily implies a
> drastic change that changes IF into "something else." [...]

I meant: what are the intermediate steps between Zork ...and the
Holodeck?


j

Daniel Shiovitz

Jan 15, 1998

In article <Pine.SUN.3.91.980115...@bigbang.phy.duke.edu>,
Stephen Granade <sgra...@phy.duke.edu> wrote:
>Jorn Barger wrote:
>
>> My challenge to raif readers is: what do *you* think the next generation
>> of IF will look like, if not like the E-tron???
>
>I'll give you my thoughts on one possible direction. Bear in mind that
>what I'm going to describe is not really "next-gen" IF, any more than TV
>was "next-gen" radio.
>
>What I envision is a setup which is a lot like a play, only with you
>either a) acting one of the roles, or b) directing one or more of the
>characters.
[..]

This is roughly what I'm thinking of. Pretty similar.
http://rhodes.www.media.mit.edu/people/rhodes/Papers/aaai95.html

>Stephen
--
(Dan Shiovitz) (d...@cs.wisc.edu) (look, I have a new e-mail address)
(http://www.cs.wisc.edu/~dbs) (and a new web page also)
(the content, of course, is the same)

Brock Kevin Nambo

Jan 15, 1998

Jeff Hatch wrote in message <34BDEC...@hatch.net>...

>event messages
>need to be suppressed when the acting character isn't in the room.

Ack, I knew I forgot something... thanks for the reminder.

>>BKNambo. hehehe.
--
http://come.to/brocks.place | World Domination Through Trivia!
oah123 (in chatquiz, 12/27/97): "did you guys know during the SPIN cycle the
clothes are like being spun really fast? LOL i just found that out!"

Mary K. Kuhner

Jan 15, 1998

I spent an afternoon poking around on the Erasmatron
web site, and it's interesting stuff. It strikes me,
though, that the biggest problem in getting playable,
interesting games (rather than alife exercises) out
of it will be focus.

In writing a story or a conventional IF game, the author
picks, out of the vast number of events that happen
"in the area of" the story, those that are at least
somewhat helpful in telling it--they work to show
plot, or character, or setting, or theme, or to control
pacing.

The Erasmatron doesn't really let you do this, because
(unless you script it really heavily, in which case there
is not much point--conventional IF seems easier) you
can't decide which of the actors' actions the player
should be presented with and which she shouldn't. This
means that it's hard to prevent the player from
having to deal with long stretches of meaningless stuff
(hearing gossip that she's already heard, witnessing
interactions she doesn't care about, etc.) and that
there is a real risk of the interesting stuff happening
off-stage and never being adequately communicated.

I would like to have a copy of the thing, though it's
not worth $200 to me at the moment, but I suspect it's
an art form for the amusement of the programmer, not
for the amusement of a player, at least not after the
initial novelty wears off. Though maybe you could
overcome this to some extent by doing a very limited
scope game--the opposite direction from "Shattertown
Sky." Something like "Who Goes There?" maybe, with
a small stage, few actors, and an intense overriding
preoccupation such that not very many boring things
are likely to happen.
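
For what it's worth, the sort of focus I mean could at least be crudely
faked by scoring each simulated event before narrating it -- my own
sketch below, in Python, not anything the Erasmatron actually provides
as far as I know:

# Toy "narrative focus" filter: narrate only events that are novel,
# involve the player, or carry enough author-assigned plot weight.
# Purely illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    text: str
    participants: tuple
    plot_weight: float = 0.0

def relevance(event, player, seen):
    score = 0.0
    if event not in seen:
        score += 1.0                  # novelty: don't repeat old gossip
    if player in event.participants:
        score += 2.0                  # the player is on stage
    return score + event.plot_weight  # author-assigned importance

def narrate(events, player, threshold=1.5):
    seen = set()
    for ev in events:
        if relevance(ev, player, seen) >= threshold:
            print(ev.text)
        seen.add(ev)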

Mary Kuhner mkku...@genetics.washington.edu

Brock Kevin Nambo

Jan 15, 1998

support the smith bill wrote in message
>Note that I haven't actually run the program. I don't have Win95. Oh
>well.
>

I'm not an expert, but isn't it a Mac program?

>>BKNambo

Julian Arnold

Jan 15, 1998

In article <34BDEC...@hatch.net>, Jeff Hatch
<URL:mailto:je...@hatch.net> wrote:
> [...]

> I agree, mostly. I'd also add that next generation IF allow the player
> to make a wider variety of choices and win the game. (That is, it will
> incorporate some CYOA concepts, without entirely abandoning the focused
> plot and puzzle-based storytelling of traditional IF.)
>
> Which one of these things you mention is the one "we can already do"?

The bit about NPCs performing any action without special code.

> Obviously realistic NPC interaction is difficult. But I'm not sure what
> you mean by a "more powerful parser," since I've seldom found existing
> parsers too weak for my needs, except of course when interacting with
> NPCs. What kind of sentences do you envision which would need a better
> parser?

That's basically it. I'm really thinking of being able to have more
open-plan conversations with NPCs. I would be surprised if this is ever
really possible though.

> The last one seems like a fairly easy unsolved problem. As far as I can
> tell, the sentence, "john, go north, then turn on the TV and set the
> dial to 14" wouldn't work in any existing language without special code,
> but it wouldn't really be terribly hard to allow. It's similar to the
> concept of changing who the PC is, really, except that event messages
> need to be suppressed when the acting character isn't in the room.

Pretty much so.

Jools

Andrew Plotkin

Jan 15, 1998

support the smith bill (unava...@this.time) wrote:
> On Tue, 13 Jan 1998 20:35:18 GMT, erky...@netcom.com (Andrew Plotkin)
> wrote:

> >> My challenge to raif readers is: what do *you* think the next generation
> >> of IF will look like, if not like the E-tron???
> >

> >To me, this is like asking "What will the next generation of the novel
> >look like?"
> >
> >It'll look like the current generation, but with different words between
> >the covers.

> That's making the assumption that the mechanics of IF are fully
> perfected and that as far as mimesis, interaction, etc. this is as
> good as it's going to get.

Does it? You know, I think I *do* assume that. With the very strong
caveat that I'm talking about "IF as we know it", Colossal Cave model IF.

I'm willing to be proven wrong, of course. But I don't know of any
changes which can be made with available programming techniques. I do
feel like we're at a local maximum.

The E-tron is far, far away from this local maximum -- which means you
can't tell whether it's higher or lower; the question isn't even
meaningful. It changes so many things that it cannot be regarded as an
improvement or next generation of what we have now.

> IMO without more technological advances IF
> will quickly burn itself out as a literary form.

It already did. In the late 1980's. Doesn't seem to have stopped us.

> There are a LOT of
> frustrating technological limitations in writing IF that sharply limit
> the kind of stories that can be done in IF.

It is frustrating that we don't have AI, yes.

On the other hand, does not the novel have just as many technological
limitations? Limitations are the same thing as structure. We're a long
way from exploiting all the possible things you can do with IF-structure-
as-we-know-it.

Russell "Coconut Daemon" Bailey

Jan 16, 1998

Hmmm... so the issue is how to properly prune the choices. I don't
know much about the Erasmatron in practice, only owning a PC, but I've
read Chris Crawford's essays, and the following solution sounds like it
*might* be possible in a future E-Tron type system:

Like any other character, the player is assumed to have attributes and
motivations. Therefore, the actions of the player could theoretically
be chosen in the same manner as the actions of an NPC. In that way,
the E-Tron could push a player linearly through a story. This would be
the first step. The second would be adjusting for more than one
possible choice for each scenario. We could expand the number of
choices above one, yet still maintain control, by evaluating the
options by more than one attribute/motivation set, each of which would
correspond to a certain player personality. This would be clearer in an
example:

You are Javier Gyffes, a guardsman at the Bastille. There is one
prisoner who wears a mask of black velvet (or iron, if you prefer
Alexandre Dumas:)) at all times. Orders have been given that if anyone,
prisoner or jailer, sees the man's face without a mask, that they both
be killed immediately. One day, the mask's fastening bursts, and you
see his face. He is the king's exact double. This choice could be made
according to two obvious personalities: Royalist or Revolutionary.
That is, a Royalist player would slay the man immediately. A
Revolutionary would attempt to befriend him. Neither would be
interested in the number of other actions associated with him, such as
flirting, telling him that prisoner 24609 has been imprisoned for
stealing a loaf of bread, or casually mentioning that young Pierre has
trampled old woman Maria's prized garden.

I know this is all simplified, and that it would probably require a lot
of revision, but the essential notion of evaluating choices by multiple
personalities and then presenting the "winning" options to the player
would allow focus to be narrowed without over-scripting.
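
In code, the pruning might look something like this (a toy Python
sketch of my own, not anything taken from Crawford's essays):

# Score candidate actions against several assumed player personalities;
# only actions that come out on top for at least one personality get
# offered on the menu. Hypothetical illustration only.

PERSONALITIES = {
    "royalist":      {"loyalty": 1.0, "mercy": 0.1},
    "revolutionary": {"loyalty": 0.0, "mercy": 0.9},
}

def score(action, traits):
    return sum(traits.get(t, 0.0) * w for t, w in action["appeal"].items())

def offered_choices(actions, per_personality=1):
    offered = set()
    for traits in PERSONALITIES.values():
        ranked = sorted(actions, key=lambda a: score(a, traits), reverse=True)
        offered.update(a["name"] for a in ranked[:per_personality])
    return sorted(offered)

guardsman_options = [
    {"name": "slay the prisoner",       "appeal": {"loyalty": 1.0}},
    {"name": "befriend the prisoner",   "appeal": {"mercy": 1.0}},
    {"name": "flirt with the prisoner", "appeal": {}},
]
print(offered_choices(guardsman_options))   # the flirt never makes the cut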

Russell

Jeff Hatch

Jan 16, 1998

Julian Arnold wrote:
[snip]

> > Obviously realistic NPC interaction is difficult. But I'm not sure what
> > you mean by a "more powerful parser," since I've seldom found existing
> > parsers too weak for my needs, except of course when interacting with
> > NPCs. What kind of sentences do you envision which would need a better
> > parser?
>
> That's basically it. I'm really thinking of being able to have more
> open-plan conversations with NPCs. I would be surprised if this is ever
> really possible though.

Well, "more" open-plan conversations should be possible. I expect NPC
interaction to be slightly better in a decade or so. But I don't expect
anything remotely resembling real-life conversations. That's one reason
why I'm skeptical of the "simulationist" approach to IF; machines can't
simulate people well.

-Rúmil

Richard G Clegg

Jan 16, 1998

Jeff Hatch (je...@hatch.net) wrote:

: Well, "more" open-plan conversations should be possible. I expect NPC


: interaction to be slightly better in a decade or so. But I don't expect
: anything remotely resembling real-life conversations. That's one reason
: why I'm skeptical of the "simulationist" approach to IF; machines can't
: simulate people well.

Open plan conversation I see as a rather more difficult goal. What
I'd like to see in I-F, and what the Erasmatron goes some way towards, is
NPCs whose actions are less hard-wired. At the moment, NPCs either
wander at random, stay still or follow the player, and their actions are
very predetermined. What an approach like the Erasmatron could give us
is NPCs who seem to interact with the environment in a semi-predictable
way. As a programmer, I can "see the strings" on NPCs in current games.
It would be nice to see NPCs with a "tendency" to wander about and
fiddle with things. This doesn't need to be quite as complete as the
Erasmatron approach, but this kind of thing - which has been
half-heartedly tried in some dreadful old games, "Valhalla" for example -
never really worked.

Convincing dialogue is much further off. Essentially, convincing
dialogue with an NPC is a natural language processing problem and
therefore "very hard", but persuading NPCs to react in a more interesting
way to their environments should not, in theory, be as difficult.
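
Even something as crude as the following (a throwaway Python sketch of
my own, not tied to any particular system) would make NPCs feel a bit
less hard-wired:

# NPCs with a mild tendency to wander and fiddle with props each turn.
# Illustrative only.

import random
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    exits: list = field(default_factory=list)     # neighbouring Rooms
    props: list = field(default_factory=list)     # names of fiddle-able objects

@dataclass
class NPC:
    name: str
    location: Room
    tendencies: dict = field(
        default_factory=lambda: {"wander": 0.3, "fiddle": 0.2})

def npc_turn(npc, log, rng=random):
    roll = rng.random()
    if roll < npc.tendencies["wander"] and npc.location.exits:
        npc.location = rng.choice(npc.location.exits)
        log.append(f"{npc.name} wanders into the {npc.location.name}.")
    elif roll < sum(npc.tendencies.values()) and npc.location.props:
        log.append(f"{npc.name} fiddles with the {rng.choice(npc.location.props)}.")
    # otherwise the NPC just stands there this turn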

--
Richard G. Clegg Only the mind is waving
Dept. of Mathematics (Network Control group) Uni. of York.
email: ric...@manor.york.ac.uk
www: http://manor.york.ac.uk/top.html


Jorn Barger

Jan 16, 1998

Jeff Hatch <je...@hatch.net> wrote:
> Well, "more" open-plan conversations should be possible. I expect NPC
> interaction to be slightly better in a decade or so. But I don't expect
> anything remotely resembling real-life conversations. That's one reason
> why I'm skeptical of the "simulationist" approach to IF; machines can't
> simulate people well.

Imho, the big obstacle is to get a general model of human motivations,
which is necessary for understanding metaphors. And the way to get this
model is via alife/virtual-worlds, and the way to move forward in
alife/vw's is... Erasmatazz.


j

Graham Nelson

Jan 16, 1998

In article <erkyrathE...@netcom.com>, Andrew Plotkin
<URL:mailto:erky...@netcom.com> wrote:
>
> It is frustrating that we don't have AI, yes.

Well... it is and it isn't. The ethical debate over genetic
engineering will be as nothing by comparison.

Andrew Stern

Jan 16, 1998

On Fri, 16 Jan 1998 11:24:00 -0500, jo...@mcs.com (Jorn Barger) wrote:

>Jeff Hatch <je...@hatch.net> wrote:
>> That's one reason
>> why I'm skeptical of the "simulationist" approach to IF; machines can't
>> simulate people well.
>
>Imho, the big obstacle is to get a general model of human motivations,
>which is necessary for understanding metaphors. And the way to get this
>model is via alife/virtual-worlds, and the way to move forward in
>alife/vw's is... Erasmatazz.

.... as well as a few other projects moving forward in this arena,
specifically virtual characters that have motivations, goals,
personality and emotions. The Virtual Petz characters -- Dogz and
Catz -- are implemented in this way. (I've been lurking on this list
for a while, and here's a good opportunity for me to bring this up to
the IF community!)

Although they are animal characters and therefore don't use formal
spoken language, Dogz and Catz are intelligent autonomous characters
that form a true relationship with the 'player' (their owner) as they
grow up over time. They are the most expressive and interactive
animated virtual characters made to date -- you can directly touch,
pet and pick them up, and they respond immediately with sophisticated
gesture, sound and animation. They have a variety of objects and toys
in their environment, with enough critical mass and complexity such
that little dramatic situations and stories seem to emerge (downloadable
demos are at www.pfmagic.com).

Is this of any interest to the IF community? I think so. Even though
it's very open-ended, free-form, unstructured play (different from
today's IF experience), you can see the potential of where this is
going. Similar to the efforts of the Oz project at CMU, the Improv
project at NYU and others (see my webpage www.netcom.com/~apstern for
a complete list of related projects), intelligent and autonomous
virtual characters will be the fundamental building blocks for the
interactive storytelling of tomorrow. I imagine these characters will
be combined with narrative techniques and plot craftsmanship pioneered
from IF projects. I think people from both camps will have to come
together to make this happen... this will be the next step towards the
Holodeck. (BTW, I'm sure you all know about this, but the new book
"Hamlet on the Holodeck: The Future of Narrative in Cyberspace" by
Janet Murray is an excellent discussion of this topic!)

Andrew Stern
www.netcom.com/~apstern, aps...@ix.netcom.com
www.pfmagic.com, and...@pfmagic.com


Andrew Stern

Jan 16, 1998

On Thu, 15 Jan 1998 07:27:10 GMT, al...@accessone.com (Alan Conroy)
wrote:

>Yes, Yes, YES! There may very well be a new generation of adventure
>games (a category in which I include IF as a sub-genre), but IF is IF.
>If you make any significant changes to it, it will cease to be IF and
>become something else. IF will remain alongside the new genre.

Except that if you take the words "interactive fiction" literally,
that term is too broad to be defined as what is currently thought of
as IF. For example, in its most general and literal interpretation,
the term "IF" should include non-text interactive experiences (ie,
graphical and animated ones), or any form of fiction where a "user"
can interact and change it. I don't think there are any non-text
experiences equivalent to the text IF yet, but there will be... :-)

The same can be said for the terms AI or A-life... right now A-life
refers to a biological/genetic approach to simulating life, but this
is too narrow as well. You can get what appears to be a very lifelike
artificial character without using a rigorous biological approach.

So perhaps what is currently called IF should be termed "text-based
IF" or even "hypertext fiction"? (although I know HTF is considered
another slightly different flavor than IF...)

Jeff Hatch

Jan 16, 1998

Jorn Barger wrote:
[snip]

> Imho, the big obstacle is to get a general model of human motivations,
> which is necessary for understanding metaphors. And the way to get this
> model is via alife/virtual-worlds, and the way to move forward in
> alife/vw's is... Erasmatazz.

Andrew Stern wrote:
> .... as well as a few other projects moving forward in this arena,
> specifically virtual characters that have motivations, goals,
> personality and emotions. The Virtual Petz characters -- Dogz and
> Catz -- are implemented in this way. (I've been lurking on this list
> for a while, and here's a good opportunity for me to bring this up to
> the IF community!)

[snip]


> Is this of any interest to the IF community? I think so. Even though
> it's very open-ended, free-form, unstructured play (different from
> today's IF experience), you can see the potential of where this is
> going. Similar to the efforts of the Oz project at CMU, the Improv
> project at NYU and others (see my webpage www.netcom.com/~apstern for
> a complete list of related projects), intelligent and autonomous
> virtual characters will be the fundamental building blocks for the
> interactive storytelling of tomorrow. I imagine these characters will
> be combined with narrative techniques and plot craftsmanship pioneered
> from IF projects. I think people from both camps will have to come
> together to make this happen... this will be the next step towards the
> Holodeck. (BTW, I'm sure you all know about this, but the new book
> "Hamlet on the Holodeck: The Future of Narrative in Cyberspace" by
> Janet Murray is an excellent discussion of this topic!)

I was thinking about Jorn Barger's question, "What will be the next step
toward the Holodeck?" I came up with mostly the same answer. I don't
think "the interactive storytelling of tomorrow" will be much like the
Erasmatron, or like traditional "adventure game" interactive fiction,
but more like a mix of both.

Well, more precisely, I think the interactive storytelling of the future
will be like that. Not tomorrow. No general model of human behavior
that I've heard of is sufficiently convincing yet, and I don't think one
will be for several years. I expect a more-or-less traditional IF model
to be far more successful than the Erasmatron for at least another
decade. But I expect text-based IF to gradually borrow more and more
traits from Erasmatron-style systems.

Time will tell.

-Rúmil

Trevor Barrie

Jan 16, 1998

In article <34BDEC...@hatch.net>, Jeff Hatch <je...@hatch.net> wrote:

>The last one seems like a fairly easy unsolved problem. As far as I can
>tell, the sentence, "john, go north, then turn on the TV and set the
>dial to 14" wouldn't work in any existing language without special code,

TADS will parse that sentence just fine, and I'd sort of assumed Inform
could handle it as well. As far as I can tell, the only problem is writing
your responses to check that the player can actually see/hear the current
actor.

Magnus Olsson

Jan 17, 1998

In article <69ofhm$np$1...@drollsden.ibm.net>,

The problem is not really parsing, but disambiguation and scope. What
happens if there's a B/W TV in the room that I and John are in right
now, and a colour TV in the room to the north, and I give the command
"John, go north and turn on the TV"?

Three possibilities (a toy sketch of one policy follows the list):

1) John goes north. From the north, you hear John saying:
"But the B/W TV isn't here!"

(This is what happens if disambiguation is done with the scope
John and I have before the command).


2) John goes north. From the north, you hear voices from the colour TV.

(Reasonable, and what the player probably meant - but how does
the parser know that? And what if the player hasn't been in the
northern room, se he doesn't know there's a TV in there, but John
has).

3) John asks you: "Which TV do you mean, the colour TV or the B/W TV?"

(Playing it safe, but John seems irritatingly literal-minded! And
the same objection as 2) holds if the player hasn't been north
yet).
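
To make the trade-off concrete, here is a toy disambiguator in Python
(my own sketch -- not how TADS or Inform actually do it) that attempts
option 2 and falls back to option 3 when the actor's surroundings don't
settle it:

# Disambiguate against what the *ordered actor* can see after carrying
# out the earlier commands (e.g. after "go north"). Illustrative only.

from dataclasses import dataclass, field

class AmbiguityError(Exception):
    pass

@dataclass
class Thing:
    names: tuple                      # e.g. ("tv", "colour tv")

@dataclass
class Room:
    objects: list = field(default_factory=list)

@dataclass
class Actor:
    location: Room

def resolve(noun, actor, player_has_seen):
    candidates = [o for o in actor.location.objects if noun in o.names]
    if len(candidates) == 1:
        return candidates[0]              # option 2: do the obvious thing
    known = [o for o in candidates if o in player_has_seen]
    if len(known) == 1:
        return known[0]                   # prefer an object the player knows about
    raise AmbiguityError('Which "%s" do you mean?' % noun)   # option 3: ask back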

--
Magnus Olsson (m...@df.lth.se, zeb...@pobox.com)
------ http://www.pobox.com/~zebulon ------
Not officially connected to LU or LTH.

Trevor Barrie

Jan 17, 1998

In article <69r0j9$ecd$1...@bartlet.df.lth.se>,
Magnus Olsson <m...@bartlet.df.lth.se> wrote:

>>TADS will parse that sentence just fine, and I'd sort of assumed Inform
>>could handle it as well. As far as I can tell, the only problem is writing
>>your responses to check that the player can actually see/hear the current
>>actor.
>
>The problem is not really parsing, but disambiguation and scope. What
>happens if there's a B/W TV in the room that I and John are in right
>now, and a colour TV in the room to the north, and I give the command
>"John, go north and turn on the TV"?

Hmmm. Just a second, I'll whip up a two-room game and check.

[Elevator music]

Okay, it seems that the default when using TADS w/ Worldclass is that if
the player knows there's a TV in the other room, John will turn that on;
otherwise he'll try to turn on the black-and-white TV (and fail, of course).
Seems like a reasonably good way of handling things.

Joe Mason

Jan 18, 1998

In article <34bfa00...@nntp.ix.netcom.com>,
Andrew Stern <aps...@ix.netcom.com> wrote:
>
>Although they are animal characters and therefore don't use formal
>spoken language, Dogz and Catz are intelligent autonomous characters
>that form a true relationship with the 'player' (their owner) as they
>grow up over time. They are the most expressive and interactive
>animated virtual characters made to date -- you can directly touch,
>pet and pick them up, and they respond immediately with sophisticated
>gesture, sound and animation. They have a variety of objects and toys
>in their environment, with enough critical mass and complexity such
>that little dramatic situations and stories seem emerge (downloadable
>demos are at www.pfmagic.com).

They were a big thing on my floor last term - everybody had one (I'm on a floor
of Computer Science geeks, by the way). The guy down the hall wanted to train
a guard dog, so he repeatedly tormented it - dropped it on its head, held out
food then moved it away, etc. Now it growls and tries to bite the pointer.

I was impressed.

Joe

Nate

Jan 19, 1998

On Fri, 16 Jan 1998 18:49:39 GMT, aps...@ix.netcom.com (Andrew Stern)
wrote:

>[...full quote of Andrew Stern's Dogz and Catz post snipped...]

IMHO, Creatures is a much better implementation of a-life than
Catz/Dogz, since Norns do have a rudimentary spoken language, and you
can (try to) breed them to improve. Also, they're so darned cute...


FemaleDeer

unread,
Jan 21, 1998, 3:00:00 AM1/21/98
to

>From: Graham Nelson <gra...@gnelson.demon.co.uk>
>Date: Fri, Jan 16, 1998 07:45 EST

>Well... it is and it isn't. The ethical debate over genetic
>engineering will be as nothing by comparison.
>

Well, let's stay with this "generation of IF" for a while, please! It took me 2
years to learn Inform as well as I know it now (and I am still learning more
all the time).

But I think AIs will raise tons of moral questions as well. If/when we have
"real" AIs (an arguable term), what will happen if one is ordered to do
something that goes against its own internal moral code? Would an AI have an
internal moral code? Maybe it would be logic. What if it is ordered to do
something illogical? And then there is the whole issue of a "thinking being" --
if we had "real" thinking AIs, would they still be JUST tools? Usable as
tools? Would that be fair?

And maybe AIs would raise some moral questions about their own usage
themselves.

FD Who thinks we will probably never have "real" thinking AIs.


------------------------------------------------------------------------------
Femal...@aol.com "Good breeding consists in
concealing how much we think of ourselves and how
little we think of the other person." Mark Twain

Russell "Coconut Daemon" Bailey

unread,
Jan 22, 1998, 3:00:00 AM1/22/98
to

> I don't really see how, unless you equate intelligence with life.

That's why it'll be such a debate. :)

Russell

Magnus Olsson

unread,
Jan 23, 1998, 3:00:00 AM1/23/98
to

In article <34C7D6...@erols.com>,

Russell \"Coconut Daemon\" Bailey <cctd...@erols.com> wrote:
>> I don't really see how, unless you equate intelligence with life.
>
>That's why it'll be such a debate. :)

Already today, children are grieving over their "dead"
tamagochi. Imagine what it will be like when the game characters
actually behave like living people, with their own thoughts and
feelings. Will it be considered murder to kill a game NPC?

Russell "Coconut Daemon" Bailey

unread,
Jan 23, 1998, 3:00:00 AM1/23/98
to

> Already today, children are grieving over their "dead"
> tamagochi. Imagine what it will be like when the game characters
> actually behave like living people, with their own thoughts and
> feelings. Will it be considered murder to kill a game NPC?

Depends. Can they be brought back? If so, it will only be assault :).

Russell

Magnus Olsson

unread,
Jan 24, 1998, 3:00:00 AM1/24/98
to

In article <34C95C...@erols.com>,

Russell \"Coconut Daemon\" Bailey <cctd...@erols.com> wrote:

We've opened a philosophical can of worms here...

Suppose you have an intelligent and (supposedly) self-aware entity, such
as your (*very hypothetical*) Turing-complete AI program. Suppose
further that you terminate such a program, and then, ten years later,
bring it back in exactly the same state as before its termination.
Would that then be the same "person", or a clone?

Make the same thought experiment with a human being. Suppose that you
can (as in Star Trek :-)) record the exact state of a human being's
body. Then you kill the person. Then you re-assemble an exact clone,
down to the quantum state of her brain (never mind that this seems to
be impossible if our current understanding of quantum mechanics is
correct).

Would you then have resurrected the person? Or would it just be an
exact clone, who *believes* she's the same person?

And suppose you make *two* clones?

Perhaps it's impossible to do such things with living creatures (in
fact, I'm almost convinced it *is* impossible). But then, on the other
hand, it might be practically impossible to do so with an AI as well
(if, say, the AI is implemented as the state of a huge neural network
or cellular automaton a la Greg Egan's "Permutation City").

And even if you can save the exact state of a person (AI or living
being) - would it then be murder to erase the backup tapes?

I won't even try to answer these questions :-).

Sean T Barrett

unread,
Jan 25, 1998, 3:00:00 AM1/25/98
to

Julian Arnold <jo...@arnod.demon.co.uk> wrote:
>In article <34BDEC...@hatch.net>, Jeff Hatch
>> Which one of these things you mention is the one "we can already do"?
>
>The bit about NPCs performing any action without special code.

"NPCs use the same verbs as PCs".

If the Erasmatron people have their way, nobody
else is going to be able to do the rather obvious
combination of a shared verb base and AIs which
make plans.

Their patent on Erasmatron covers exactly this,
as far as I can read it (I'm not a lawyer!);
see the abstract on their site. Storytelling
where the plot is made up of subplots, e.g. verbs,
etc. etc. yadda yadda.

Shared verbs have been done before, for example
on muds (e.g. LPmuds). Plan-forming AIs have been
done before. See... oh, how about SHRDLU?

I think the Erasmatron stuff is a very important
and correct next step for games:
improve NPCs _somehow_ without just waiting for
"all of AI" to be solved. AIs with internal
motivations, goals, plan forming, etc. are great.
A knowledge database, the ability to reason and
connect other ideas--all useful steps forward for
game AIs. Maybe more useful for sim-y computer
games than "pure" IF, but definitely a good step.

However, I am so tired of software patents. I'm going
to boycott it for that reason, and I'd recommend that
anyone else who disapproves of software patents,
or thinks they've gone and patented something they
shouldn't have, do so as well.

Sean Barrett

Sean T Barrett

unread,
Jan 25, 1998, 3:00:00 AM1/25/98
to

Trevor Barrie <tba...@ibm.net> wrote:
>Okay, it seems that the default when using TADS w/ Worldclass is that if
>the player knows there's a TV in the other room, John will turn that on;
>otherwise he'll try to turn on the black-and-white TV (and fail, of course).
>Seems like a reasonably good way of handling things.

Depends on how believable you want the character to be.
Why does John know what the player knows? I suppose
it's a fine behavior for rec.arts.int-fiction, but
I don't think so for comp.ai.games.

Sean barrett

Magnus Olsson

unread,
Jan 25, 1998, 3:00:00 AM1/25/98
to

In article <EnCnG...@world.std.com>,

Sean T Barrett <buz...@world.std.com> wrote:
>Trevor Barrie <tba...@ibm.net> wrote:
>>Okay, it seems that the default when using TADS w/ Worldclass is that if
>>the player knows there's a TV in the other room, John will turn that on;
>>otherwise he'll try to turn on the black-and-white TV (and fail, of course).
>>Seems like a reasonably good way of handling things.
>
>Depends on how believable you want the character to be.
>Why does John know what the player knows?

It's more complicated than that. The crux is that if the player says
"John, go north and turn on the TV", the interpretation of the sentence
is different depending on whether John, or the player, or both, know that
there is a TV in the room to the north.

If the player doesn't know that, what does he mean? Is he just assuming
that there is one? Or does he mean that John should pick up the TV,
carry it with him to the northern room, and turn it on there? Or that
he should go north, and use the remote to turn on the TV in the first
room?

And in real life John's actions will probably be influenced by his
(John's) knowledge of what the player knows. "Let's see, Paul told me to
go north and turn on the TV. But he can't know that there's a TV in
here, so he must have meant something else. Or maybe I misheard him."

The notion of scope enters here as well, in addition to knowledge: the
words "the TV" must in some sense be in scope for the player when the
command is given, and for John either when the command is given, or
after he has walked north (i.e. either John immediately knows which TV
is meant, or he doesn't know about it until he has actually walked
north and sees it.)

The way WorldClass handles this is just an approximation. It may be
"reasonable"; but AI it's not.

>I suppose
>it's a fine behavior for rec.arts.int-fiction, but
>I don't think so for comp.ai.games.

Now, now, there's no reason to get rude, is there?

Russell "Coconut Daemon" Bailey

unread,
Jan 25, 1998, 3:00:00 AM1/25/98
to

> "NPCs use the same verbs as PCs".
>
> If the Erasmatron people have their way, nobody
> else is going to be able to do the rather obvious
> combination of a shared verb base and AIs which
> make plans.
>
> Their patent on Erasmatron covers exactly this,
> as far as I can read it (I'm not a lawyer!);
> see the abstract on their site. Storytelling
> where the plot is made up of subplots, e.g. verbs,
> etc. etc. yadda yadda.

Jorn, is this true?

Russell

Jorn Barger

unread,
Jan 25, 1998, 3:00:00 AM1/25/98
to

If you know anything about Chris, you can be sure he's not going to use
the patent in a destructive way. He has a position-statement in the
works, that will make this explicit.


j

Anon

unread,
Jan 25, 1998, 3:00:00 AM1/25/98
to


Jorn Barger wrote:

I was under the impression that patents had to be based on _very_ specific
implementations... is this not true? (people can't patent the concepts of
AND, OR, XOR... so how can they patent shared verb?)


Sean T Barrett

unread,
Jan 26, 1998, 3:00:00 AM1/26/98
to

Magnus Olsson <m...@bartlet.df.lth.se> wrote:
>>Depends on how believable you want the character to be.
>>Why does John know what the player knows?
>
>It's more complicated than that. The crux is that if the player says
>"John, go north and turn on the TV", the interpretation of the sentence
>is different depending on whether John, or the player, or both, know that
>there is a TV in the room to the north.

If this is an AI question, I'd say that's false.
Whether the player knows is irrelevant; your
followup comment below is correct (and was what
I had intended to imply by my above comment):

>And in real life John's actions will probably be influenced by his
>(John's) knowledge of what the player knows. "Let's see, Paul told me to
>go north and turn on the TV. But he can't know that there's a TV in
>here, so he must have meant something else. Or maybe I misheard him."

Yes. What matters for John's understanding of the sentence
has nothing to do with the world, or Paul's knowledge; rather
it has to do with John's knowledge of the world and his knowledge
of Paul's knowledge. Approximating "knowledge of the world"
via the world is ok--it represents perfect knowledge of the
world, which in this case isn't too bad (e.g. knowing all of
the surrounding locations and how to get to them); but approximating
"knowledge of the player's knowledge" with the player's knowledge
(in this case, Paul knows exactly what objects the player
knows about) is silly; it makes all characters mind readers.

If John has some reason to _believe_ the player can't know
about the TV to the north (because John is guarding the door
to the north, and nobody should get in there without getting
past John, and he'd KNOW about that), then John might react
differently--as you say, "But he _can't_ know that there's
a TV in here".

But the original claim was that using the flag for whether
the player had been in the room was appropriate. Clearly,
this is _nothing_ like "But he can't know there's a TV in
there"--John is magically determining this fact.

>The notion of scope enters here as well, in addition to knowledge: the
>words "the TV" must in some sense be in scope for the player when the
>command is given, and for John either when the command is given, or
>after he has walked north (i.e. either John immediately knows which TV
>is meant, or he doesn't know about it until he has actually walked
>north and sees it.)

This is slightly oversimplifying it. Unless Paul
writes the instructions down, and John doesn't look
at each instruction until he's carried out the one
before it, John _will_ try to understand it
_immediately_. But he can make inferences. Here's my
stab at a complete set of plausible ways of
understanding that order:

1. John knows about the TV in this room and not the other one:
1.a. John infers from the player's order that there's
a TV in the room to the north
1.b. It never crosses John's mind that there might be another TV
1.c. John realizes there might be a TV to the north, and asks
the player to disambiguate.

2. John knows about this TV and about the other TV, and has
no reason to think Paul might not know about the other one.
2.a. John assumes the other TV is meant.
2.b. John assumes this TV is meant.
2.c. John asks the player to disambiguate.

3. John knows about this TV and about the other TV, and has
some reason to believe Paul can't know about the other one.
3.a. John assumes this TV is meant.
3.b. John determines that his belief about Paul's knowledge might
be false, and asks Paul to disambiguate. ("How'd you know
there was a TV to the north?")
3.c. The existence of the other TV is supposed to be kept secret;
asking for disambiguation would violate this goal, so John
interprets it as being about this TV (John "keeps a poker
face" about his knowledge of the other TV).

4. John knows about this TV and knows for a fact there's no
TV in the other room.
4.a. John assumes this TV is meant.

I'd say all of the above are plausible real life; in
many of the cases multiple reactions from John are
(to me) plausible. But none of these in any way
depend on looking at "what rooms the player has seen".
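
Since the cases above tangle easily in prose, here is one rough Python
sketch of how they might be encoded. It is not anyone's actual library
code (not TADS, WorldClass or Erasmatron); the flags are hypothetical,
and where several reactions are plausible the function just picks one of
them arbitrarily:

def interpret_tv_order(knows_about_remote_tv, thinks_paul_can_know_it,
                       remote_tv_is_secret):
    """John's reading of "go north and turn on the TV".

    knows_about_remote_tv: True / False / None (None = no idea either way).
    In this sketch John always knows about the TV in his own room.
    """
    if knows_about_remote_tv is None:
        # Case 1a: infer from the order that there must be a TV up north.
        return "go north and look for the TV Paul apparently knows about"
    if knows_about_remote_tv is False:
        # Case 4a: John knows for a fact there is no TV in the other room.
        return "assume the local TV is meant"
    # John knows about both TVs (cases 2 and 3).
    if thinks_paul_can_know_it:
        # Case 2c: both TVs are live candidates, so ask.
        return "ask Paul which TV he means"
    if remote_tv_is_secret:
        # Case 3c: keep a poker face and act only on the local TV.
        return "assume the local TV, reveal nothing"
    # Case 3b: John's belief about Paul's knowledge may be wrong.
    return "ask: how'd you know there was a TV to the north?"

print(interpret_tv_order(None, True, False))   # case 1
print(interpret_tv_order(True, True, False))   # case 2
print(interpret_tv_order(True, False, True))   # case 3
print(interpret_tv_order(False, True, False))  # case 4

Note that nothing in the sketch consults "what rooms the player has seen";
everything hangs off John's own knowledge and his beliefs about Paul's.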

Further variations:
In any case where John assumes this (local) TV is
meant, if it doesn't make sense to him to try to turn it
on after going north (there's no remote), he should
respond that the orders don't make sense. "I can't
turn on the TV if I go north first!".

I'm sure all of this may strike some people as overkill.
I'll reiterate what I said originally:

>>Depends on how believable you want the character to be.

moving on...

>The way WorldClass handles this is just an approximation. It may be
>"reasonable"; but AI it's not.
>
>>I suppose
>>it's a fine behavior for rec.arts.int-fiction, but
>>I don't think so for comp.ai.games.
>
>Now, now, there's no reason to get rude, is there?

This was not intended as a rude comment; it was intended
as a matter-of-fact one. There are two classes of people
reading this, and I'm trying to make sure I'm covering
all the bases. You said it above: it may be "reasonable"
(for IF), but AI it's not.

Discussion of "what it should do" as above is appropriate
to c.a.g and probably to r.i.f; the description of what
TADS does is useful; the claim that this is "acceptable"
is fine for implementing an IF, but is clearly not in
any sense related to AI, and hence not very interesting
from a c.a.g. standpoint. Hence, I'd say that the
described behavior of what John would do meets exactly
the description that you think is rude.

Sean Barrett

Sean T Barrett

unread,
Jan 26, 1998, 3:00:00 AM1/26/98
to

Jorn Barger <jo...@mcs.com> wrote:
>If you know anything about Chris, you can be sure he's not going to use
>the patent in a destructive way. He has a position-statement in the
>works, that will make this explicit.

I'll happily retract my worries about it, then.
I've never seen anything in writing about patents
from Crawford, although I haven't read everything.

I certainly wouldn't be surprised about him
being anti-patent, except that applying for a
patent is pretty unnecessary for defensive
purposes, unless you expect to need to
cross-license with someone.

Sean Barrett

Jorn Barger

unread,
Jan 26, 1998, 3:00:00 AM1/26/98
to

Anon <yello...@usa.net> wrote:
> I was under the impression that patents had to be based on _very_ specific
> implementations... is this not true? (people can't patent the concepts of
> AND, OR, XOR... so how can they patent shared verb?)

There's an interesting occasional 'Patent Newsletter' posted to
comp.software-eng, where corporations are shown trying to claim ideas
like webpage bookmarks!

Richard Stallman told me that my DecentWrite wp specs violate some
patent about keeping the page number steady in the window-frame, as you
scroll a page...

So unfortunately, this seems to be an area where the patent office is
utterly clueless, and what matters most is how high-priced your lawyers
are...?


j

Stephen Robert Norris

unread,
Jan 26, 1998, 3:00:00 AM1/26/98
to

In article <1d3f7u2.z40...@jorn.pr.mcs.net>,
jo...@mcs.com (Jorn Barger) intoned:

> If you know anything about Chris, you can be sure he's not going to use
> the patent in a destructive way. He has a position-statement in the
> works, that will make this explicit.
>
>
> j

I'd be astonished if there weren't prior art. I don't think it would
pass the non-obviousness rule either (I thought of it when playing with
ADL as an undergraduate and I'm not especially perceptive...).

Stephen

Matthew T. Russotto

unread,
Jan 26, 1998, 3:00:00 AM1/26/98
to

In article <34CC386D...@usa.net>, Anon <yello...@usa.net> wrote:

}I was under the impression that patents had to be based on _very_ specific
}implementations... is this not true? (people can't patent the concepts of
}AND, OR, XOR... so how can they patent shared verb?)

It's supposed to be true, but it isn't. There's a patent out there on
the combination of RLE and Huffman encoding -- also a patent on RLE
alone, I believe both owned by Hitachi. There's one on exponentiation
on a finite field, which is definitely mathematics (that one is part
of RSADSI's megapatent).
--
Matthew T. Russotto russ...@pond.com
"Extremism in defense of liberty is no vice, and moderation in pursuit
of justice is no virtue."

Coach

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

In article <6ac8p9$ogf$1...@bartlet.df.lth.se>, m...@bartlet.df.lth.se (Magnus
Olsson) wrote:

>In article <34C95C...@erols.com>,
>Russell \"Coconut Daemon\" Bailey <cctd...@erols.com> wrote:
>>> Already today, children are grieving over their "dead"
>>> tamagochi. Imagine what it will be like when the game characters
>>> actually behave like living people, with their own thoughts and
>>> feelings. Will it be considered murder to kill a game NPC?
>>
>>Depends. Can they be brought back? If so, it will only be assault :).
>
>We've opened a philosophical can of worms here...
>
>Suppose you have an intelligent and (supposedly) self-aware entity, such
>as your (*very hypothetical*) Turing-complete AI program. Suppose
>further that you terminate such a program, and then, ten years later,
>bring it back in exactly the same state as before its termination.
>Would that then be the same "person", or a clone?

Depends. Under Asimov's rules, it would be automatically bound to
subservience, no matter its intellect. But that presumes, of course, that
Asimov's rules are hardwired into its personality matrix, and there's no
reason to assume that they would be in all cases.

The issue, I think, is where you draw the line between Turing-complete and
sentient. A fairly complex Eliza system with a fair amount of state memory
might be able to pull it off (more complex than what we have now, of
course), but that wouldn't make it a qualified psychologist. On the other
hand, how do we necessarily know what an artificial intelligence would be
like? We only have human and animal intelligence data; what would an
intelligent plant be like?

>Make the same thought experiment with a human being. Suppose that you
>can (as in Star Trek :-)) record the exact state of a human being's
>body. Then you kill the person. Then you re-assemble an exact clone,
>down to the quantum state of her brain (never mind that this seems to
>be impossible if our current understanding of quantum mechanics is
>correct).

>Would you then have resurrected the person? Or would it just be an
>exact clone, who *believes* she's the same person?

For all intents and purposes it *is* the same person, so why not treat it
that way?

>And suppose you make *two* clones?

Funny thing, Star Trek actually did deal with this several times. The Trek
philosophy seems to be that regarding the existence of separate entities
created at the expense of the original subject(s), restoration of the
original subjects is morally binding, no matter the nature of the created
being. However, if there is a simple duplication (as in the case of
Commander William Riker and Lieutenant Thomas Riker), then there is no
problem, and the duplicate receives all the rights and privileges of the
source being, under an altered identity.

>Perhaps it's impossible to do such things with living creatures (in
>fact, I'm almost convinced it *is* impossible). But then, on the other

Maybe, maybe not. With our current understanding of science it is, but
energy teleportation has been done, so who knows?

>hand, it might be practically impossible to do so with an AI as well
>(if, say, the AI is implemented as the state of a huge neural network
>or cellular automaton a la Greg Egan's "Permutation City").
>
>And even if you can save the exact state of a person (AI or living
>being) - would it then be murder to erase the backup tapes?

Assuming you've somehow destroyed the original, yes. Definitely yes,
though derezzing or duplicating the original being would probably be a
fairly serious crime in and of itself; an invasion of privacy not too
different from rape, I imagine.

>I won't even try to answer these questions :-).

Oh, why not? It's fun!

/Coach

--
Brian "Coach" Connors conn...@bc.edu

Cinnte, ta fhios agam labhairt Gaeilge. Cad chuige?

Magnus Olsson

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

In article <connorbd-270...@shiv1p5.bc.edu>,

Coach <conn...@bc.edu> wrote:
>In article <6ac8p9$ogf$1...@bartlet.df.lth.se>, m...@bartlet.df.lth.se (Magnus
>Olsson) wrote:
>
>>In article <34C95C...@erols.com>,
>>Russell \"Coconut Daemon\" Bailey <cctd...@erols.com> wrote:
>>>> Already today, children are grieving over their "dead"
>>>> tamagochi. Imagine what it will be like when the game characters
>>>> actually behave like living people, with their own thoughts and
>>>> feelings. Will it be considered murder to kill a game NPC?
>>>
>>>Depends. Can they be brought back? If so, it will only be assault :).
>>
>>We've opened a philosophical can of worms here...
>>
>>Suppose you have an intelligent and (supposedly) self-aware entity, such
>>as your (*very hypothetical*) Turing-complete AI program. Suppose
>>further that you terminate such a program, and then, ten years later,
>>bring it back in exactly the same state as before its termination.
>>Would that then be the same "person", or a clone?
>
>Depends. Under Asimov's rules, it would be automatically bound to
>subservience, no matter its intellect.

I take it you're joking, because this statement is an utter non sequitur...

>But that presumes, of course, that
>Asimov's rules are hardwired into its personality matrix, and there's no
>reason to assume that they would be in all cases.

Why would they? I'm not saying that Asimov's laws, or something like them,
are implausible, but they are by no means necessary.

>The issue, I think, is where you draw the line between Turing-complete and
>sentient. A fairly complex Eliza system with a fair amount of state memory
>might be able to pull it off (more complex than what we have now, of
>course), but that wouldn't make it a qualified psychologist.

This is, of course, the crux of the debate. An interesting modern
development is that more and more people seem to think that it's
possible to pass the Turing test without any kind of "true" AI. But I
think that says more about the human way of interpreting incomplete
data than anything else.

On the other hand, a "real" AI, even one as human-like as Mr. Data
in Star Trek TNG, would not necessarily pass the Turing test. A space
alien probably wouldn't either.

>>Make the same thought experiment with a human being. Suppose that you
>>can (as in Star Trek :-)) record the exact state of a human being's
>>body. Then you kill the person. Then you re-assemble an exact clone,
>>down to the quantum state of her brain (never mind that this seems to
>>be impossible if our current understanding of quantum mechanics is
>>correct).
>
>>Would you then have resurrected the person? Or would it just be an
>>exact clone, who *believes* she's the same person?
>
>For all intents and purposes it *is* the same person, so why not treat it
>that way?

For the following reason: Suppose it was *you* who were to be
disassembled into atoms, recorded, and then re-created. Even given
1000% reliable technology, so you could be absolutely sure that the
clone would be an exact copy of you, would you be happy to go through
such a process?

I wouldn't. Why? Because when you disassemble me into atoms, I die. And
how does the fact that a perfect copy of me is created an hour later
affect my status as dead? Am I reincarnated in the new body? Perhaps,
but I wouldn't want to bet my life (literally) on it.

>>Perhaps it's impossible to do such things with living creatures (in
>>fact, I'm almost convinced it *is* impossible). But then, on the other
>
>Maybe, maybe not. With our current understanding of science it is, but
>energy teleportation has been done, so who knows?

Well, it's not the "teleportation" part I'm worried about, it's the
recording of the state of the original. For sufficiently simple systems
(such as a single photon, which I think the experiment you're
referring to deals with), I think teleportation is indeed possible,
because you can record the complete state of the original system and
then create a new system in an identical state. For a living being, I
think this isn't possible. If not because of the sheer complexity of
the task, then because recording the quantum state of a system
tends to destroy the very state you're recording.

Coach

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

In article <6al5rr$ljk$1...@bartlet.df.lth.se>, m...@bartlet.df.lth.se (Magnus
Olsson) wrote:

>In article <connorbd-270...@shiv1p5.bc.edu>,
>Coach <conn...@bc.edu> wrote:
>>In article <6ac8p9$ogf$1...@bartlet.df.lth.se>, m...@bartlet.df.lth.se (Magnus
>>Olsson) wrote:
>>
>>>In article <34C95C...@erols.com>,
>>>Russell \"Coconut Daemon\" Bailey <cctd...@erols.com> wrote:
>>>>> Already today, children are grieving over their "dead"
>>>>> tamagochi. Imagine what it will be like when the game characters
>>>>> actually behave like living people, with their own thoughts and
>>>>> feelings. Will it be considered murder to kill a game NPC?
>>>>
>>>>Depends. Can they be brought back? If so, it will only be assault :).
>>>
>>>We've opened a philosophical can of worms here...
>>>
>>>Suppose you have an intelligent and (supposedly) self-aware entity, such
>>>as your (*very hypothetical*) Turing-complete AI program. Suppose
>>>further that you terminate such a program, and then, ten years later,
>>>bring it back in exactly the same state as before its termination.
>>>Would that then be the same "person", or a clone?
>>
>>Depends. Under Asimov's rules, it would be automatically bound to
>>subservience, no matter its intellect.
>
>I take it you're joking, because this statement is an utter non sequitur...

From a legal standpoint, it would be an issue, though.

>>But that presumes, of course, that
>>Asimov's rules are hardwired into its personality matrix, and there's no
>>reason to assume that they would be in all cases.
>
>Why would they? I'm not saying that Asimov's laws, or something like them,
>are implausible, but they are by no means necessary.
>
>>The issue, I think, is where you draw the line between Turing-complete and
>>sentient. A fairly complex Eliza system with a fair amount of state memory
>>might be able to pull it off (more complex than what we have now, of
>>course), but that wouldn't make it a qualified psychologist.
>
>This is, of course, the crux of the debate. An interesting modern
>development is that more and more people seem to think that it's
>possible to pass the Turing test without any kind of "true" AI. But I
>think that says more about the human way of interpreting incomplete
>data than anything else.

Do explain.

>On the other hand, a "real" AI, even a very human-like one as Mr. Data
>in Star Trek TNG, would not necessarily pass the Turing test. A space
>alien probably wouldn't either.

The problem is that the Turing Test is by definition subjective. I'd be
willing to bet that on some level, Data and the alien would. I think the
big issue is that apart from some basic ideas about intuition, nobody
really understands what intelligence truly is; a program can be
self-aware, but nobody's going to argue that a Lisp Machine is sentient
because it knows how to garbage collect. It certainly doesn't consist of
analytical ability; a bee can pick out the meaning of its sister's dance
without a great deal of strain, but not even a whole hive is capable of
displaying what we would consider intelligence.

>>>Make the same thought experiment with a human being. Suppose that you
>>>can (as in Star Trek :-)) record the exact state of a human being's
>>>body. Then you kill the person. Then you re-assemble an exact clone,
>>>down to the quantum state of her brain (never mind that this seems to
>>>be impossible if our current understanding of quantum mechanics is
>>>correct).
>>
>>>Would you then have resurrected the person? Or would it just be an
>>>exact clone, who *believes* she's the same person?
>>
>>For all intents and purposes it *is* the same person, so why not treat it
>>that way?
>
>For the following reason: Suppose it was *you* who were to be
>disassembled into atoms, recorded, and then re-created. Even given
>1000% reliable technology, so you could be absolutely sure that the
>clone would be an exact copy of you, would you be happy to go through
>such a process?

Yes.

>I wouldn't. Why? Because when you disassemble me into atoms, I die. And
>how does the fact that a perfect copy of me is created an hour later
>affect my status as dead? Am I reincarnated in the new body? Perhaps,
>but I wouldn't want to bet my life (literally) on it.

The crux of the point here seems to be whether or not we assume that it's
a proven technology. As far as I'm concerned, if my copy is entirely
identical to me to all intents and purposes, I couldn't care less. It
would still essentially be me.

>>>Perhaps it's impossible to do such things with living creatures (in
>>>fact, I'm almost convinced it *is* impossible). But then, on the other
>>
>>Maybe, maybe not. With our current understanding of science it is, but
>>energy teleportation has been done, so who knows?
>
>Well, it's not the "teleportation" part I'm worried about, it's the
>recording of the state of the original. For sufficiently simple systems
>(such as a single photon, which I think the experiment you're
>referring to deals with), I think teleportation is indeed possible,
>because you can record the complete state of the original system and
>then create a new system in an identical state. For a living being, I
>think this isn't possible. If not for the sheer complexity of the
>task, as for the fact that recording the quantum state of a system
>tends to destroy the very state you're recording.

Which is why it can't be done with today's science. Don't forget, we know
what we know, but we don't necessarily know what we don't know...

(So call me a hopeless romantic. I'm not a big sci-fi fan, but it's nice
to dream...)

Russell "Coconut Daemon" Bailey

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

> the patent in a destructive way. He has a position-statement in the
> works, that will make this explicit.

Well, I'm glad about that... I'm toying with an AI engine right now, and
it uses a common-verb system. I probably won't distribute it, but it's
good to know that Chris is going to be reasonable.

As for Chris, I've read a lot of his essays, but didn't really know
if he was the territorial type...

Russell

Russell "Coconut Daemon" Bailey

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

> I was under the impression that patents had to be based on _very_ specific
> implementations... is this not true? (people can't patent the concepts of
> AND, OR, XOR... so how can they patent shared verb?)

But there is at least one patent on the use of XOR. Don't know the
details, but it caused Commodore grief with the Amiga.

Russell

John Francis

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

In article <34CE50...@erols.com>,

Russell \"Coconut Daemon\" Bailey <cctd...@erols.com> wrote:

There was an attempt to claim that the use of XOR to display a graphical
cursor was a patented technique. I was working at Apollo at the time,
and we decided not to license the technique, but instead to be prepared
to defend our use of this technology with the two standard defenses
against this sort of thing:

1) The technique was in use prior to the date of the patent

2) The technique was "obvious to a skilled practitioner in the art".

Showing either of these to be true is sufficient to overturn the patent.


Note that the granting of a patent does not in itself set any legal
precedent to influence the outcome of a challenge in the courts -
that only happens when a suit is actually tried. The grant of a
patent just says that (as far as the patent office can tell) the
patent claim covers a patentable invention, and one that doesn't
appear to have been the subject of a prior patent. They explicitly
do *not* claim to have adjudicated the technical merits of the
patent. (e.g. I'm a patent clerk, not Albert Einstein :-)

Many holders of dubious patents have chosen not to pursue their
rights through the courts, but instead to just continue to rake
in the fees from folks who have chosen to pay, rather than fight.
Should they go to court and lose, they wouldn't be able to collect
*any* fees.
--
John Francis jfra...@sgi.com Silicon Graphics, Inc.
(650)933-8295 2011 N. Shoreline Blvd. MS 43U-991
(650)933-4692 (Fax) Mountain View, CA 94043-1389
Unsolicited electronic mail will be subject to a $100 handling fee.

Trevor Barrie

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

In article <6al5rr$ljk$1...@bartlet.df.lth.se>,
Magnus Olsson <m...@bartlet.df.lth.se> wrote:

>On the other hand, a "real" AI, even a very human-like one as Mr. Data
>in Star Trek TNG, would not necessarily pass the Turing test. A space
>alien probably wouldn't either.

Well, that's presumably because the Turing test is designed to be as
rigorous as possible. Passing a Turing test is near-incontrovertible
evidence of sentience, but failing one is no evidence for lack of
sentience.

David Glasser

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

Coach <conn...@bc.edu> wrote:

> In article <6ac8p9$ogf$1...@bartlet.df.lth.se>, m...@bartlet.df.lth.se (Magnus
> Olsson) wrote:
>

> >And even if you can save the exact state of a person (AI or living
> >being) - would it then be murder to erase the backup tapes?
>
> Assuming you've somehow destroyed the original, yes. Definitely yes,
> though derezzing or duplicating the original being would probably be a
> fairly serious crime in and of itself; an invasion of privacy not too
> different from rape, I imagine.

Piers Anthony's* book Split Infinity involves, when a man finds out
that a female is a robot, him forcing her to show him where her "data
dump port" (or some such nonsense) is, and then sticking a plug into it
to try to figure out who sent her, and why.

This is referred to by the girl as rape, and she notes that it is even
physically similar to that act.

*Yes, I did read Piers Anthony once.
--David Glasser
gla...@NOSPAMuscom.com

David Glasser

unread,
Jan 27, 1998, 3:00:00 AM1/27/98
to

Magnus Olsson <m...@bartlet.df.lth.se> wrote:

> In article <connorbd-270...@shiv1p5.bc.edu>,
> Coach <conn...@bc.edu> wrote:

> >For all intents and purposes it *is* the same person, so why not treat it
> >that way?
>
> For the following reason: Suppose it was *you* who were to be
> disassembled into atoms, recorded, and then re-created. Even given
> 1000% reliable technology, so you could be absolutely sure that the
> clone would be an exact copy of you, would you be happy to go through
> such a process?
>

> I wouldn't. Why? Because when you disassemble me into atoms, I die. And
> how does the fact that a perfect copy of me is created an hour later
> affect my status as dead? Am I reincarnated in the new body? Perhaps,
> but I wouldn't want to bet my life (literally) on it.

Ah, but let's say that, instead of disassembling you, we simply wrote
down the position and momentum of every particle in your body, and then
set up the same number of particles in that arrangement. OK, maybe
simply isn't the correct word, and quantum mechanics may prove that
impossible, but...

> >Maybe, maybe not. With our current understanding of science it is, but
> >energy teleportation has been done, so who knows?
>
> Well, it's not the "teleportation" part I'm worried about, it's the
> recording of the state of the original. For sufficiently simple systems
> (such as a single photon, which I think the experiment you're
> referring to deals with), I think teleportation is indeed possible,
> because you can record the complete state of the original system and
> then create a new system in an identical state.

But I thought that the whole point of the Uncertainty Principle (or
something related to that) was that you can't record the entire state of
a photon!

--David Glasser, king of inconsistency
gla...@NOSPAMuscom.com

ct

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

In article <1d3j0kz.1b6...@usol-phl-pa-037.uscom.com>, David Glasser

Gunther Schmidl

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

>But I thought that the whole point of the Uncertainty Principle (or
>something related to that) was that you can't record the entire state of
>a photon!

Ah, but Heisenberg helps in this matter. AFAIK, the professor at the
University of Innsbruck *used* the principle of Heisenberg to "beam" a
photon from one place to another.

--
+------------------------+----------------------------------------------+
+ Gunther Schmidl + "I couldn't help it. I can resist everything +
+ Ferd.-Markl-Str. 39/16 + except temptation" -- Oscar Wilde +
+ A-4040 LINZ +----------------------------------------------+
+ Tel: 0732 25 28 57 + http://gschmidl.home.ml.org - new & improved +
+------------------------+---+------------------------------------------+
+ sothoth (at) usa (dot) net + please remove the "xxx." before replying +
+----------------------------+------------------------------------------+

Magnus Olsson

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

In article <6algvu$nv$1...@drollsden.ibm.net>,

Of course, that depends on how you define the Turing test. If you
define it as "An entity passes the Turing test if it is impossible to
distinguish a text-only communication with it from one with a human",
then you're right.

However, with the current "popular" definition, "An entity passes the
Turing test if a limited (in time and context) text-only communication
with it can make some humans believe that the entity is human", then
it's an entirely different matter.

And when most people talk about "passing the Turing test", at least in
a non-scientific context, they mean something more like the second
definition. This is, of course, partly due to a misunderstanding (and
misrepresentation by sensationalist journalists), but also because
the second definition is much more practical.

Graham Nelson

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

In article <1d3j071.3h...@usol-phl-pa-037.uscom.com>, David Glasser

<URL:mailto:gla...@NOSPAMuscom.com> wrote:
>
> > Piers Anthony's* book Split Infinity involves, when a man finds out
> that a female is a robot, him forcing her to show her where her "data
> dump port" (or some such nonsense) is, and then sticking a plug into it
> to try to figure out who sent her, and why.
>
> This is referred to by the girl as rape, and she notes that it is even
> physically similar to that act.

Don't you just love the way Piers Anthony feels that a metaphor is
just not finished until it's explained to you in words of one
syllable?

> *Yes, I did read Piers Anthony once.

I find that an amusing conversation to have with sci-fi readers
is the "When did you stop reading Piers Anthony?" game. Typically,
it goes like this:

"'Macroscope' isn't all that bad."
"Well... I did just read the Vicinity Cluster trilogy."
"In some ways, the first of the Xanth books was a semi-
original idea."

... about 30 moves omitted ...

"Well, anyway, I've decided to get 'The Color of Her Panties'
out of the library, not to buy it."

Has anyone here read what is possibly the worst SF novel of all
time, his dufferpiece "Triple Detente"?

--
Graham Nelson | gra...@gnelson.demon.co.uk | Oxford, United Kingdom


Stephen Granade

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

On Wed, 28 Jan 1998, Graham Nelson wrote:

> I find that an amusing conversation to have with sci-fi readers
> is the "When did you stop reading Piers Anthony?" game.

Is this not like the question, "When did you stop beating your spouse?" It
presupposes my guilt.

> Has anyone here read what is possibly the worst SF novel of all
> time, his dufferpiece "Triple Detente"?

Heh. Yes*, though my memories of it are (thankfully) sketchy at best.

Stephen

* Okay, so now I'm confirming my guilt. So sue me.

--
Stephen Granade | Interested in adventure games?
sgra...@phy.duke.edu | Check out
Duke University, Physics Dept | http://interactfiction.miningco.com


Magnus Olsson

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

In article <connorbd-270...@shiv1p34.bc.edu>,

Coach <conn...@bc.edu> wrote:
>In article <6al5rr$ljk$1...@bartlet.df.lth.se>, m...@bartlet.df.lth.se (Magnus
>Olsson) wrote:
>
>>In article <connorbd-270...@shiv1p5.bc.edu>,
>>Coach <conn...@bc.edu> wrote:
>>>>Suppose you have an intelligent and (supposedly) self-aware entity, such
>>>>as your (*very hypothetical*) Turing-complete AI program. Suppose
>>>>further that you terminate such a program, and then, ten years later,
>>>>bring it back in exactly the same state as before its termination.
>>>>Would that then be the same "person", or a clone?
>>>
>>>Depends. Under Asimov's rules, it would be automatically bound to
>>>subservience, no matter its intellect.
>>
>>I take it you're joking, because this statement is an utter non sequitur...
>
>From a legal standpoint, it would be an issue, though.

Well, it could be an issue. There used to be a category of humans
"bound to subservience" (slaves) in the laws of many countries, and
sometimes killing such a person didn't count as murder. We don't want
that kind of law anymore.

>>This is, of course, the crux of the debate. An interesting modern
>>development is that more and more people seem to think that it's
>>possible to pass the Turing test without any kind of "true" AI. But I
>>think that says more about the human way of interpreting incomplete
>>data than anything else.
>
>Do explain.

Well, what I meant was that it's easy to fool people, because we are
looking for certain kinds of patterns, which means that we will
sometimes find them even when they aren't there.

And I have the impression that an increasing number of both AI
researchers and psychologists believe that it doesn't take "real" AI
to simulate human conversation to the point where you'll fool most
people.

>>For the following reason: Supose it was *you* who were to be
>>disassembled into atoms, recorded, and then re-created. Even given
>>1000% reliable technology, so you could be absolutely sure that the
>>clone would be an exact copy of you, would you be happy to go through
>>such a process?
>
>Yes.
>
>>I wouldn't. Why? Because when you disassemble me into atoms, I die. And
>>how does the fact that a perfect copy of me is created an hour later
>>affect my status as dead? Am I reincarnated in the new body? Perhaps,
>>but I wouldn't want to bet my life (literally) on it.
>
>The crux of the point here seems to be whether or not we assume that it's
>a proven technology.

No, that's not the crux at all. I stressed the assumption of proven
technology just because it's pretty obvious that most people would be
uneasy about using a teleporter that might turn them into frogs :-).

>As far as I'm concerned, if my copy is entirely
>identical to me to all intents and purposes, I couldn't care less. It
>would still essentially be me.

I hope you realize what you're saying here. You're saying that you
couldn't care less if you were killed and your body disintegrated,
as long as there was a perfect copy at hand to replace you.

Well, to each his own. I can only say that *I* would care. I'd care
very much, thank you.

Ralph Barbagallo

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

How can you be 'reasonable' with a software patent? Isn't it basically
a rule that you have to enforce your patent in order for it to be valid? Or
is that just with trademarks? Let's wait and see what actually happens before
we applaud someone for being 'reasonable' with a software patent.
--
*Ralph Barbagallo http://www.cs.uml.edu/~rbarbaga *rbar...@cs.uml.edu*
"I have known many game designers; they encompass a broad range of
personalities. Yet all these disparate people share one common trait;
they all sport towering egos."--Chris Crawford, 1987.

Andrew Plotkin

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

Stephen Granade (sgra...@lepton.phy.duke.edu) wrote:
> On Wed, 28 Jan 1998, Graham Nelson wrote:

> > I find that an amusing conversation to have with sci-fi readers
> > is the "When did you stop reading Piers Anthony?" game.

> Is this not like the question, "When did you stop beating your spouse?" It
> presupposes my guilt.

I have not stopped reading Piers Anthony.

I haven't read Piers Anthony in years, true. I certainly don't expect
I'll ever buy another Piers Anthony book. But I still have a bunch --
notably _Tarot_, and the first few Proton/Phaze books -- and I have no
reason to think I won't re-read them someday.

(That's not dodging the question. I hadn't read any David Eddings in about
five years, either -- unless you count skimming bits of _Belgarath_ in a
bookstore. Then a few months ago I pulled out the Belgariad and re-read
it. Had just as much fun as I remembered having when I was a kid.)

--Z

--

"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the
borogoves..."

Chris Marriott

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

In article <6ao1f7$2p9$1...@jupiter.cs.uml.edu>, Ralph Barbagallo
<rbar...@cs.uml.edu> writes

> How can you be 'reasonable' with a software patent? Isn't it basically
>a rule that you have to enforce your patent in order for it to be valid? Or
>is that just with trademarks? Let's wait and see what actually happens before
>we applaud someone for being 'reasonable' with a software patent.

As a professional programmer I am *totally* against software patents,
and am very disappointed that a man such as Chris Crawford, for whom I
have a great deal of respect, should get involved in the whole sordid
business.

Thank goodness that the European Union has the good sense not to permit
such abominations here in Europe.

Chris

----------------------------------------------------------------
Chris Marriott, Microsoft Certified Solution Developer.
SkyMap Software, U.K. e-mail: ch...@skymap.com
Visit our web site at http://www.skymap.com

Zaphod Beeblebrox

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

Magnus Olsson wrote:
>
> I hope you realize what you're saying here. You're saying that you
> couldn't care less if you were killed and your body disintegrated,
> as long as there was a perfect copy at hand to replace you.
>
> Well, to each his own. I can only say that *I* would care. I'd care
> very much, thank you.

No you wouldn't - you'd be programmed not to.

Tony Blews

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

Quoth Stephen Granade <sgra...@lepton.phy.duke.edu>:

>On Wed, 28 Jan 1998, Graham Nelson wrote:

>> I find that an amusing conversation to have with sci-fi readers
>> is the "When did you stop reading Piers Anthony?" game.

>Is this not like the question, "When did you stop beating your spouse?" It
>presupposes my guilt.

I once wrote a mud based on Xanth themes, but I see your point.

>> Has anyone here read what is possibly the worst SF novel of all
>> time, his dufferpiece "Triple Detente"?
>Heh. Yes*, though my memories of it are (thankfully) sketchy at best.

I remember it all too well. Then again, I remember walking across a
farmyard too.
--
Tony Blews, tony @ netlrp.uk.com (autoresponder)
http://jumper.mcc.ac.uk/~tonyb/

David Glasser

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

Graham Nelson <gra...@gnelson.demon.co.uk> wrote:

> In article <1d3j071.3h...@usol-phl-pa-037.uscom.com>, David Glasser
> <URL:mailto:gla...@NOSPAMuscom.com> wrote:
> >
> > Piers Anthony's* book Split Infinity involves, when a man finds out
> > that a female is a robot, him forcing her to show her where her "data
> > dump port" (or some such nonsense) is, and then sticking a plug into it
> > to try to figure out who sent her, and why.
> >
> > This is referred to by the girl as rape, and she notes that it is even
> > physically similar to that act.
>
> Don't you just love the way Piers Anthony feels that a metaphor is
> just not finished until it's explained to you in words of one
> syllable?

It always made me wonder what age group they were aimed at; they
suffered from what you say, yet have an extraordinary amount of sex
for books written in that style. The series I was talking
about earlier -- oh, I don't even want to keep thinking about it -- and
his short stories... not for kids.

> > *Yes, I did read Piers Anthony once.
>

> I find that an amusing conversation to have with sci-fi readers

> is the "When did you stop reading Piers Anthony?" game. Typically,
> it goes like this:
>
> "'Macroscope' isn't all that bad."
> "Well... I did just read the Vicinity Cluster trilogy."
> "In some ways, the first of the Xanth books was a semi-
> original idea."
>
> ... about 30 moves omitted ...
>
> "Well, anyway, I've decided to get 'The Color of Her Panties'
> out of the library, not to buy it."

I must say, though, that the Incarnations of Immortality are quite good.
And, as you say, the first Xanth book was pretty good. And that's about
it.

Then again, I did read about half of his books, and for some reason
bought most of them; I think they are listed on my web page for the next
sucker^H^H^H^H^H^Hconnoisseur of science fiction and fantasy to take off
my hands.

> Has anyone here read what is possibly the worst SF novel of all
> time, his dufferpiece "Triple Detente"?

That's not the one that was originally called "3.14 Erect" or whatever,
right?

--David Glasser, who really should *not* have read his short stories
gla...@NOSPAMuscom.com

Russell "Coconut Daemon" Bailey

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

Whether something passes a "Turing Test" depends largely on who's
grading it.

Russell

Russell "Coconut Daemon" Bailey

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

> > Well, to each his own. I can only say that *I* would care. I'd care
> > very much, thank you.
>
> No you wouldn't - you'd be programmed not to.

Not necessarily. However, the point stands that not all humans would
agree, so human-level AIs would probably develop different notions
about it. Religious faith/belief in spiritual immortality modifies this
response, because it would insist that an exact copy of a person is not
that person.

Russell

Trevor Barrie

unread,
Jan 28, 1998, 3:00:00 AM1/28/98
to

In article <6an4lf$rqg$1...@bartlet.df.lth.se>,
Magnus Olsson <m...@bartlet.df.lth.se> wrote:

>>Well, that's presumably because the Turing test is designed to be as
>>rigorous as possible. Passing a Turing test is near-incontrovertible
>>evidence of sentience, but failing one is no evidence for lack of
>>sentience.
>
>Of course, that depends on how you define the Turing test. If you
>define it as "An entity passes the Turing test if it is impossible to
>distinguish a text-only communication with it from one with a human",
>then you're right.

I didn't realize there was another definition.

>However, with the current "popular" definition, "An entity passes the
>Turing test if a limited (in time and context) text-only communication
>with it can make some humans believe that the entity is human", then
>it's an entirely different matter.

Well, yes... in that case, Eliza probably passed the Turing Test decades
ago.

JC

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

On Mon, 26 Jan 1998 00:25:45 GMT, buz...@world.std.com (Sean T Barrett)
wrote:


>Magnus Olsson <m...@bartlet.df.lth.se> wrote:
>>>Depends on how believable you want the character to be.
>>>Why does John know what the player knows?
>>
>>It's more complicated than that. The crux is that if the player says
>>"John, go north and turn on the TV", the interpretation of the sentence
>>is different depending on whether John, or the player, or both, know that
>>there is a TV in the room to the north.

>If this is an AI question, I'd say that's false.
>Whether the player knows is irrelevant; your
>followup comment below is correct (and was what
>I had intended to imply by my above comment):

>>And in real life John's actions will probably be influenced by his
>>(John's) knowledge of what the player knows. "Let's see, Paul told me to
>>go north and turn on the TV. But he can't know that there's a TV in
>>here, so he must have meant something else. Or maybe I misheard him."


Ok, what I'm going to say is not directly related to this discussion, but
it's something which I think people tend to overlook when considering this
topic.

The command "John, go north and turn on the TV" is an abstraction. In real
life there are many facets to communication, both verbal and non-verbal,
which to help qualify it. Such as: tone, timing, hand gestures, etc.

The real life equivalent of an IF command directed at a character is like
them having a piece of paper with typed simple verb-noun commands (or
something along those lines) put in their hand. Imagine trying to
communicate with people if this was your only means.

As an example of a possible real life situation:

You point to the bedroom and say to John "Could you go in there and turn
on the TV?" Your request meets a blank look on John's face.
'Whoops', you think. 'I forgot to tell him that I'd moved the TV in
there the other day'. "I put the TV in the bedroom," you offer as an
explanation for his puzzlement.

You would base your communication upon your knowledge of what John knows,
and many other things. And if an assumption of what he knows was wrong
there would (most likely) be some cue from him.

Another example:

In the previous example, John didn't know that there was a TV in the
bedroom. Yet, he might be able to infer from your plain and matter-of-
fact tone of voice that there was now a TV in the bedroom; that you knew
he'd be able to work it out.

In real life, the person making the request would use their knowledge of
the situation, of the other person, and of what you know and what (you
think) they know, to tailor the communication to make it less ambiguous.

The other person's comprehension of what you say is rarely done in a
vacuum either. If they can't disambiguate something they will make this
fact apparent, so that you can qualify the meaning.

What I'm saying is that disambiguation can occur at both ends of the
communication, yet it is "normally" only considered at the receiving end.
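
To make that point concrete, here is a tiny Python sketch (purely
hypothetical, not taken from any IF system) of disambiguation at the
issuing end: the speaker checks their own model of what the listener
knows and, if the key fact is believed missing, adds a clarifying
statement up front:

def issue_order(order, needed_fact, belief_about_listener_knowledge):
    """Tailor an order to what the speaker believes the listener knows."""
    utterance = []
    if needed_fact not in belief_about_listener_knowledge:
        # e.g. "I put the TV in the bedroom." from the example above
        utterance.append(needed_fact)
    utterance.append(order)
    return " ".join(utterance)

john_knows = {"there is a TV in the lounge"}
print(issue_order("Could you go in there and turn on the TV?",
                  "I put the TV in the bedroom.", john_knows))
# -> "I put the TV in the bedroom. Could you go in there and turn on the TV?"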

>Yes. What matters for John's understanding of the sentence
>=has nothing to do with the world=, or Paul's knowledge; rather
>it has to do with John's knowledge of the world and his knowledge
>of Paul's knowledge.

Are we still talking about real life here? If so, your statement is
incorrect. I would say that the majority of disambiguation would come from
cues in the communication from the other person. Tone of voice, hand
gestures, timing, etc. These sorts of things are certainly part of the
world.

> Approximating "knowledge of the world"
>via the world is ok--it represents perfect knowledge of the
>world, which in this case isn't too bad (e.g. knowing all of
>the surrounding locations and how to get to them); but approximating
>"knowledge of the player's knowledge" with the player's knowledge
>(in this case, Paul knows exactly what objects the player
>knows about) is silly; it makes all characters mind readers.

>If John has some reason to _believe_ the player can't know
>about the TV to the north (because John is guarding the door
>to the north, and nobody should get in there without getting
>past John, and he'd KNOW about that), then John might react
>differently--as you say, "But he _can't_ know that there's
>a TV in here".

>But the original claim was that using the flag for whether
>the player had been in the room was appropriate. Clearly,
>this is _nothing_ like "But he can't know there's a TV in
>there"--John is magically determining this fact.

>>The notion of scope enters here as well, in addition to knowledge: the
>>words "the TV" must in some sense be in scope for the player when the
>>command is given, and for John either when the command is given, or
>>after he has walked north (i.e. either John immediately knows which TV
>>is meant, or he doesn't know about it until he has actually walked
>>north and sees it.)

>This is slightly oversimplifying it. Unless Paul
>writes the instructions down, and John doesn't look
>at each instruction until he's read the next, John
>_will_ try to understand it _immediately_. But he
>can make inferences. Here's my stab at a complete
>set of plausible ways of understanding that order:

>1. John knows about the TV in this room and not the other one:
>1.a. John infers from the player's order that there's
> a TV in the room to the north
>1.b. It never crosses John's mind that there might be another TV
>1.c. John realizes there might be a TV to the north, and asks
> the player to disambiguate.

>2. John knows about this TV and about the other TV, and has
> no reason to think Paul might not know about the other one.
>2.a. John assumes the other TV is meant.
>2.b. John assumes this TV is meant.
>2.c. John asks the player to disambiguate.

>3. John knows about this TV and about the other TV, and has
> some reason to believe Paul can't know about the other one.
>3.a. John assumes this TV is meant.
>3.b. John determines that his belief about Paul's knowledge might
> be false, and asks Paul to disambiguate. ("How'd you know
> there was a TV to the north?")
>3.c. The existence of the other TV is supposed to be kept secret;
> asking for disambiguation would violate this goal, so John
> interprets it as being about this TV (John "keeps a poker
> face" about his knowledge of the other TV).

>4. John knows about this TV and knows for a fact there's no
> TV in the other room.
>4.a. John assumes this TV is meant.

>I'd say all of the above are plausible real life; in
>many of the cases multiple reactions from John are
>(to me) plausible. But none of these in any way
>depend on looking at "what rooms the player has seen".

>Further variations:
>In any case where John assumes this (local) TV is
>meant, if it doesn't make sense to him to try to turn it
>on after going north (there's no remote), he should
>respond that the orders don't make sense. "I can't
>turn on the TV if I go north first!".

>I'm sure all of this may strike some people as overkill.
>I'll reiterate what I said originally:

>>>Depends on how believable you want the character to be.
>
>moving on...
>
>>The way WorldClass handles this is just an approximation. It may be
>>"reasonable"; but AI it's not.
>>
>>>I suppose
>>>it's a fine behavior for rec.arts.int-fiction, but
>>>I don't think so for comp.ai.games.
>>
>>Now, now, there's no reason to get rude, is there?

>This was not intended as a rude comment, it was intended
>as a matter-of-fact one. There are two classes of people
>reading this, and I'm trying to make sure I'm covering
>all the bases. You said it above: it may be "reasonable"
>(for IF), but AI it's not.

>Discussion of "what it should do" as above is appropriate
>to c.a.g and probably to r.i.f; the description of what
>TADS does is useful; the claim that this is "acceptable"
>is fine for implementing an IF, but is clearly not in
>any sense related to AI, and hence not very interesting
>from a c.a.g. standpoint. Hence, I'd say that the
>described behavior of what John would do meets exactly
>the description that you think is rude.

Correctness is entirely separate from tone.


>Sean Barrett

';';James';';

Sean T Barrett

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

JC <jrc...@netspace.net.au> wrote:
>What I'm saying is that disambiguation can occur at both ends of the
>communication, yet it is "normally" only considered at the receiving end.

Of course, your example involved _first_ the receiving
end communicating that it needed disambiguation (the
blank look). If the sending end disambiguates without
requiring any communication from the receiver, then the
sender isn't being ambiguous.

>>Yes. What matters for John's understanding of the sentence
>>=has nothing to do with the world=, or Paul's knowledge; rather
>>it has to do with John's knowledge of the world and his knowledge
>>of Paul's knowledge.
>
>Are we still talking about real life here? If so, your statement is
>incorrect. I would say that the majority of disambiguation would come
>from cues in the communication from the other person: tone of voice,
>hand gestures, timing, and so on. These sorts of things are certainly
>part of the world.

Fine. If you want to get technical, refine it to
"What matter's for John's understanding of the
_communication act_" etc. etc.

I was trying to stick with the existing concrete
case. If you want to change the case you can,
but that doesn't make my claim incorrect, since
rewording my claim in the obvious way for your
case makes it true again.

I understand it's really your intention not to
dispute my claim, but rather to raise a different
issue; but to me it's largely irrelevant whether
a game displays:
John says, "I'm not sure what you mean."
or
John looks a little puzzled but says "Ok."

It's all text on a screen at this point.
The flavor and density of communication
are entirely different, of course, but
I don't see what lessons we can learn
from it to apply to this AI question.
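
The AI question itself, though, is easy to make concrete. Here's a rough
sketch (plain Python, nothing to do with TADS or WorldClass; every name
in it is invented) that collapses the cases I listed earlier into one
crude function: John's reading of the order turns on his knowledge of
the world and on what he believes Paul can know, never on which rooms
the player has actually visited, and whichever line of text the game
prints afterwards is just surface flavor.

def interpret_tv_order(john_knows_local_tv, john_knows_north_tv,
                       john_thinks_paul_can_know_north):
    """Which TV does John take "turn on the TV" to mean?  None = ask Paul."""
    candidates = []
    if john_knows_local_tv:
        candidates.append("local")
    if john_knows_north_tv:
        candidates.append("north")
    if not candidates:
        return None                  # John knows of no TV at all: ask
    if len(candidates) == 1:
        return candidates[0]         # only one TV ever crosses John's mind
    # John knows about both; fall back on his model of Paul's knowledge.
    if john_thinks_paul_can_know_north:
        return None                  # genuinely ambiguous: ask Paul which
    return "local"                   # "he can't know about the other one"

choice = interpret_tv_order(True, True, False)
if choice is None:
    print('John says, "I\'m not sure what you mean."')  # or a puzzled look
else:
    print(f'John nods and heads for the {choice} TV.')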

Sean Barrett

Patrick Kellum

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

In article <1d3kvg9.1hz...@usol-csh-pa-002.uscom.com>, David Glasser was talking about:

>It always made me wonder what age group they were aimed at; they
>suffered from what you say but yet have an extraordinary amount of sex
>for a book that used that type of writing. The series I was talking
>about earlier, oh, I don't even want to keep thinking, and his short
>stories...not for kids.

I have to step in and say I am a very big fan of Piers Anthony. Yes, his
books do sometimes have sex in them (Bio of a Space Tyrant being one of
the biggest offenders), but they are well written and original.
Personally, I think the Mode series was his greatest series yet, better
than nearly anything else out there IMO. Xanth is great; it contains
little or no sex and is aimed at a wide range of readers, from young kids
to old farts. And last but not least, the Author's Notes in his books are
great. How many other authors go out of their way not only to attempt to
answer their fan mail but also to mention readers in the Author's Notes
and use their ideas? (In Xanth he goes a little overboard, though.)

Just throwing in my 2 cents.

Patrick
---
A Title For This Page -- http://www.syix.com/patrick/
Bow Wow Wow Fan Page -- http://www.syix.com/patrick/bowwowwow/
The Small Wonder Page -- http://smallwonder.simplenet.com/
My Arcade Page -- http://ygw.bohemianweb.com/arcade/
"I have photographs of you naked with a squirrel." - Dave Barry

FemaleDeer

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

>From: pat...@syix.com (Patrick Kellum)
>Date: Thu, Jan 29, 1998 02:49 EST

Re: Piers Anthony

>How many other authors go out of their way to not only attempt to answer
>their fan mail but also mention readers in the authors notes

Not to mention the "Society for Creative Anachronism".

FD :-)
------------------------------------------------------------------------------
Femal...@aol.com "Good breeding consists in
concealing how much we think of ourselves and how
little we think of the other person." Mark Twain

Magnus Olsson

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

In article <34CFA6CB...@magrathea.com>,

Zaphod Beeblebrox <zap...@magrathea.com> wrote:
>Magnus Olsson wrote:
>>
>> I hope you realize what you're saying here. You're saying that you
>> couldn't care less if you were killed and your body disintegrated,
>> as long as there was a perfect copy at hand to replace you.
>>
>> Well, to each his own. I can only say that *I* would care. I'd care
>> very much, thank you.
>
>No you wouldn't - you'd be programmed not to.

I'm talking about me as a human person, not as a hypothetical "me as
an AI".

But this is getting so far off-topic that I think we shouldn't
proceed further. Not here, anyway.

Steve McKinney

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

And this, from Tony Blews:

>Quoth Stephen Granade <sgra...@lepton.phy.duke.edu>:
>
>>On Wed, 28 Jan 1998, Graham Nelson wrote:
>

>>> I find that an amusing conversation to have with sci-fi readers
>>> is the "When did you stop reading Piers Anthony?" game.

-snip-


>>> Has anyone here read what is possibly the worst SF novel of all
>>> time, his dufferpiece "Triple Detente"?
>>

>>Heh. Yes*, though my memories of it are (thankfully) sketchy at best.
>
>I remember it all too well. Then again, I remember walking across a
>farmyard too.

Speaking of farmyards, I remember reading a story called "In the
Barn."
--
Steve McKinney <sj...@bellsouth.net>

"Never let your sense of morals keep you from doing what is right."
--Isaac Asimov

Chris [Steve] Piuma

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

In article <1d3kvg9.1hz...@usol-csh-pa-002.uscom.com>,

gla...@NOSPAMuscom.com (David Glasser) wrote:
> It always made me wonder what age group they were aimed at; they
> suffered from what you say but yet have an extraordinary amount of sex
> for a book that used that type of writing. The series I was talking
> about earlier, oh, I don't even want to keep thinking, and his short
> stories...not for kids.

And yet, being thirteen or fourteen when I read those stories (along with
most of the rest of his, uh, oeuvre), I think I was able to appreciate
these stories all the more...

Remember the cow one? Remember the body suit one, where after coitus the
woman... ah, I see you do remember. You must admit, for pornography,
they're fairly, uh, insane.



> I must say, though, that the Incarnations of Immortality are quite good.
> And, as you say, the first Xanth book was pretty good. And that's about
> it.

Now, now, Macroscope really isn't all that bad...



> > Has anyone here read what is possibly the worst SF novel of all
> > time, his dufferpiece "Triple Detente"?

> That's not the one that was originally called "3.14 Erect" or whatever,
> right?

Uh, I don't think so. I don't remember much of anything about the plot of
Triple Detente. Three alien groups take over each other's planet in a
circle? A takes over B takes over C takes over A?

No, now that the plot of 3.14 Erect (which I never was able to find) is
coming back to me, it's nothing like Triple Detente.

--
Chris [Steve] Piuma, etc. Nothing is at: http://www.brainlink.com/~cafard
[Editor of _flim_, Keeper of the R.E.M. Lyric Annotations FAQ, MST3K #43136]
Again: haven't read Piers since I was fourteen or fifteen. Look, there,
near the mouse: the copy of The Crying of Lot 49 with the wild British
cover that I've almost finished reading...!

John W. Kennedy

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

In <6an4lf$rqg$1...@bartlet.df.lth.se>, m...@bartlet.df.lth.se (Magnus Olsson) writes:
>However, with the current "popular" definition, "An entity passes the
>Turing test if a limited (in time and context) text-only communication
>with it can make some humans believe that the entity is human", then
>it's an entirely different matter.

By that definition, Eliza passed the test back in the 60's.

Magnus Olsson

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

Yes. I think most of us agree that it's too simple a definition of AI :-).

However, there is a continuous scale between this definition and
the "strong" one (which is basically that the AI would make all
humans believe that it's human). Where do you draw the line?

Graham Nelson

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

In article <6apca1$5hb$2...@neko.syix.com>, Patrick Kellum

<URL:mailto:pat...@syix.com> wrote:
>
> I have to step in and say I am a very big fan of Piers Anthony. Yes, his
> books do sometimes have sex in them (Bio of a Space Tyrant being one of
> the biggest offenders), but they are well written and original.
> Personally, I think the Mode series was his greatest series yet, better
> than nearly anything else out there IMO. Xanth is great; it contains
> little or no sex and is aimed at a wide range of readers, from young kids
> to old farts. And last but not least, the Author's Notes in his books are
> great. How many other authors go out of their way not only to attempt to
> answer their fan mail but also to mention readers in the Author's Notes
> and use their ideas? (In Xanth he goes a little overboard, though.)


Wibble's half-brother Wobble looked down from the heights. The game now
stood at four-three to Gerec the golem. If he could just steer the ball
upwards, where the unicorn could give it a neat side-flick of her horn...
but under the rules of the Gaming Hall, that would only be allowed if he
could testify as to the colour of her underwear.

"Believe me," he said, "I do respect you and regret the necessity for
this," as his hand slid over her flank and magically the unicorn's thigh
seemed to become human and female. Strong magic indeed! "You know that I
am oriented on my lady Saddama, rider of gold Baath -" but he stopped and
blushed. She had become entirely human, and was naked, her honey-blonde
hair rippling in the light from the firepits!

"Oh, Wobble," she said sadly, "I know you can never truly be mine, when
you will have to be Champion soon, but at least once and probably lots of
times, would it be so terrible if I performed utterly obliging and
submissive acts on your body under the pretext of a set of rules in which
you had no choice but to enjoy yourself without obligation?"

Although he was a short man, he made up for it in passion, and she with
dexterity. Afterwards, as he lay across some of her chest, they decided to
re-order the socio-political structure of society so that each person's daily
wage was paid by the cake-slicing method. "If only every world had someone
of such brilliantly incisive social thought! Shall we do it again? You'll
never guess where my horn is now -"

"Oy!" called Gerec, "Are you still playing?"

---

Amazingly, my publishers resisted when I first asked to include this
Afterword, though my thousands of fan letters every hour beg to hear more
details of my digestive disorder and of the little shack where I write out
in the woods, eating pancakes with syrup and never suffering once from a
lack of invention like some SF authors I could mention! Why, only the other
day, after my second novel of the afternoon...

Russell "Coconut Daemon" Bailey

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

Ooh. Sorry, I meant "who's grading it."

Russell

Patrick Kellum

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

In article <ant291552f7fM+4%@gnelson.demon.co.uk>, Graham Nelson was talking about:

[ Graham-generated Piers Anthony sample clipped ]

Ok, I'll admit that was funny. But, I think you're focusing more on his
older works and his porn which isn't the best (he's elderly and still
writting porn, cut him some slack :-) His newer books are a grat
improvement over the older works.

Andrew Plotkin

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

Patrick Kellum (pat...@syix.com) wrote:
> His newer books are a grat
> improvement over the older works.

You suddenly started speaking another language, or something.

David Glasser

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

Patrick Kellum <pat...@syix.com> wrote:

> I have to step in and say I am a very big fan of Piers Anthony. Yes, his
> books do sometimes have sex in them (Bio of a Space Tyrant being one of
> the biggest offenders), but they are well written and original.
> Personally, I think the Mode series was his greatest series yet, better
> than nearly anything else out there IMO. Xanth is great; it contains
> little or no sex and is aimed at a wide range of readers, from young kids
> to old farts. And last but not least, the Author's Notes in his books are
> great. How many other authors go out of their way not only to attempt to
> answer their fan mail but also to mention readers in the Author's Notes
> and use their ideas? (In Xanth he goes a little overboard, though.)

Ok, I went a bit overboard. He is a pretty good writer. He gets a bit
boring, though, and some of his books are terrible, but some are good.
As I mentioned, I love the Incarnations, and Mode is pretty good (odd,
but good).

> Just throwing in my 2 cents.

--David Glasser
gla...@NOSPAMuscom.com

David A. Cornelson

unread,
Jan 29, 1998, 3:00:00 AM1/29/98
to

In article <ant291552f7fM+4%@gnelson.demon.co.uk>,

Graham Nelson <gra...@gnelson.demon.co.uk> wrote:
>
> In article <6apca1$5hb$2...@neko.syix.com>, Patrick Kellum
> <URL:mailto:pat...@syix.com> wrote:
> >
> > I have to step in and say I am a very big fan of Piers Anthony. Yes his

Anthony is a homophobic cretin. He couldn't write his way out of a box.
I read (sorry to admit) Bio of a Space Tyrant, and I was disgusted with
his rules for society. Kill all of the bigots (even granting that some
people are evil, you just can't have someone at the top deciding who
the evil people are), suppress all homosexuality (probably even in
himself - some of the best people I know are on the other team), and
all of this 'for the good of the universe'.

Please.

Try reading something significant like Dune or Stranger in a Strange
Land. Outside of SF and fantasy, try a little Mark Helprin, but I don't
want you to drown, so maybe try Pat Conroy; nope, still too much. John
Irving, there ya go; he'll knock you on the head with some good ol'
symbolism.

David A. Cornelson, Chicago

-------------------==== Posted via Deja News ====-----------------------
http://www.dejanews.com/ Search, Read, Post to Usenet
