AI and NPCs


Julian Arnold

Jan 9, 1996
Branko Collin (U24...@vm.uci.kun.nl) wrote:
> The Turing test, for those who do not know, is a test whereby people
> have to sit behind a terminal and have to hold a conversation using
> screen and keyboard. At the end they have to guess whether they talked to
> a computer program or to a human being sat at another terminal. A
> computer labelled by the testers as a human passes the Turing test.

And computers actually pass this test?!?
--
Jools Arnold jo...@arnod.demon.co.uk


Branko Collin

Jan 9, 1996
There's an article on comp.ai.nat-lang that might be of interest to
you. In fact, there are probably a lot more articles in the Artificial
Intelligence newsgroups which may be of interest, but this one
specifically caught my eye. It is called:
Losing the Loebner Competition Forced me to Re-evaluate my Humanity.

As it is rather lengthy and not entirely on-topic for this group, I
thought it wise not to crosspost it.

The Loebner competition is a rather controversial competition, where
programs have to take the Turing test. The controversy seems to stem
mostly from the fact that the organiser, Hugh Loebner, has
named the prize after himself :-).

The Turing test, for those who do not know, is a test whereby people
have to sit behind a terminal and have to hold a conversation using
screen and keyboard. At the end they have to guess whether they talked to
a computer program or to a human being sat at another terminal. A
computer labelled by the testers as a human passes the Turing test.

The post is the runner-up's report on why he feels he did not win.
The major issue is that he tried to give his program a personality to
emulate. The winner's program, on the other hand, did not have
a personality, but contained an impressive array of different ways
of saying "I don't understand what you're saying".
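A minimal sketch of the winning strategy as Branko describes it (hypothetical code, not the actual contest entry): the program never parses its input at all, and simply cycles through varied ways of admitting confusion so the transcript never sounds repetitive.

```python
import itertools

# Hypothetical sketch, not the real Loebner entry: rotate through
# many phrasings of "I don't understand" so the same canned line
# never appears twice in a row.
EVASIONS = itertools.cycle([
    "I don't understand what you're saying.",
    "I'm not sure I follow you.",
    "Could you put that another way?",
    "Hmm, you've lost me there.",
])

def respond(user_input: str) -> str:
    """Ignore the input entirely; return the next canned non-answer."""
    return next(EVASIONS)

print(respond("What do you think of the weather?"))
# -> I don't understand what you're saying.
print(respond("Why won't you give me a straight answer?"))
# -> I'm not sure I follow you.
```

On a transcript this reads as polite, varied confusion, which (per the post) was apparently enough to beat a program that attempted a personality.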

Anyway, read it for yourself; if nothing else, it makes for interesting reading.

.......................................................................
. Branko Collin . | , .
. . met |/ .
. // u24...@vm.uci.kun.nl . |\ .
. \X/ bco...@mpi.nl . | \ .
.......................................................................

Larry Smith

Jan 10, 1996

In article <19960109....@arnod.arnod.demon.co.uk>, jo...@arnod.demon.co.uk (Julian Arnold) writes:

>Branko Collin (U24...@vm.uci.kun.nl) wrote:
>> The Turing test, for those who do not know, is a test whereby people
>> have to sit behind a terminal and have to hold a conversation using
>> screen and keyboard. At the end they have to guess whether they talked to
>> a computer program or to a human being sat at another terminal. A
>> computer labelled by the testers as a human passes the Turing test.

>And computers actually pass this test?!?

Have for years. Ever since Eliza, in fact. As it turns out,
it is fairly easy to fool people who have little or no computer
experience. Heck, I've met people who are convinced their _cars_
are sentient, never mind something as anthropomorphic as a computer
programmed to reflect grammatical English back at you in a
clever simulation of sentience. Eliza was notorious for sucking
people in, it scared Weintraub (I think that's his name - the
author of Eliza) how easy it was to fool people. Eliza doesn't
impress computer types because they quickly spot the pattern of
responses, but non-computer people are seldom so sensitive.

--
| .-. .---..---. .---. .-..-. |"In general, the art of government consists |
| | |__ | | || |-< | |-< > / | in taking as much money as possible from |
| `----'`-^-'`-'`-'`-'`-' `-' | one ... citizen ... to give to the other." |
| My opinion alone, every word. | - Voltaire, "Money" (1764). |

John Weir

Jan 10, 1996
In article <4d0nlp$a...@nntpd.lkg.dec.com>, lar...@zk3.dec.com wrote:

> In article <19960109....@arnod.arnod.demon.co.uk>,
>jo...@arnod.demon.co.uk (Julian Arnold) writes:
> >Branko Collin (U24...@vm.uci.kun.nl) wrote:
> >> The Turing test, for those who do not know, is a test whereby people
> >> have to sit behind a terminal and have to hold a conversation using
> >> screen and keyboard. At the end they have to guess whether they talked to
> >> a computer program or to a human being sat at another terminal. A
> >> computer labelled by the testers as a human passes the Turing test.
>
> >And computers actually pass this test?!?
>
> Have for years. Ever since Eliza, in fact. As it turns out,


I thought that the actual Turing test set up seven specific tests for
computer AI. If you follow Turing's rules (hence the Turing Test), programs
such as Eliza would fail.


j. weir

tes...@wt.com.au

Jan 11, 1996
jo...@arnod.demon.co.uk (Julian Arnold) wrote:

>Branko Collin (U24...@vm.uci.kun.nl) wrote:
>> The Turing test, for those who do not know, is a test whereby people
>> have to sit behind a terminal and have to hold a conversation using
>> screen and keyboard. At the end they have to guess whether they talked to
>> a computer program or to a human being sat at another terminal. A
>> computer labelled by the testers as a human passes the Turing test.

>And computers actually pass this test?!?

>--

There is an old Lisp psychologist-emulating program that did fairly
well. It just turns your statements into questions and repeats them
back to you.
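The reflection trick Geoff describes fits in a few lines. A toy sketch (the word list and phrasing are my own, not the original Lisp program's):

```python
# Toy Eliza-style reflection, assuming nothing about the original
# Lisp program beyond the behaviour described above: swap the
# first- and second-person words, then wrap the result in a question.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Turn the user's statement into a question aimed back at them."""
    words = statement.strip().rstrip(".!?").lower().split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my job."))
# -> Why do you say you are unhappy with your job?
```

Everything outside the pronoun table comes straight back out, which is why the output feels eerily attentive while understanding nothing.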

As to a program emulating normal conversation? Programmers are reduced
to seeing how many seconds it takes for the person to wake up to the fact
that they are talking to a computer, and getting really depressed about
the low numbers on their stopwatches.

Geoff

Branko Collin

Jan 11, 1996
In article <19960109....@arnod.arnod.demon.co.uk>

jo...@arnod.demon.co.uk (Julian Arnold) writes:

>
>Branko Collin (U24...@vm.uci.kun.nl) wrote:
>> The Turing test, for those who do not know, is a test whereby people
>> have to sit behind a terminal and have to hold a conversation using
>> screen and keyboard. At the end they have to guess whether they talked to
>> a computer program or to a human being sat at another terminal. A
>> computer labelled by the testers as a human passes the Turing test.
>
>And computers actually pass this test?!?

Oh yes, a program like Eliza has fooled many people into thinking they
were talking to another human being.

But the Turing test was originally devised as a test to decide if
a computer (program) is intelligent. In this it has failed, for
two reasons (as far as I can tell):
-there is no good definition of intelligence.
-the human factor: people can be fooled into thinking they are dealing
with an intelligent being.

Andrew C. Plotkin

Jan 11, 1996
U24...@vm.uci.kun.nl (Branko Collin) writes:
> But the Turing test was originally devised as a test to decide if
> a computer (program) is intelligent. In this it has failed, for
> two reasons (as far as I can tell):
> -there is no good definition of intelligence.
> -the human factor: people can be fooled into thinking they are dealing
> with an intelligent being.

I view the Turing test *as* a definition of intelligence. The idea
being, if something can fool me into thinking it's a person, on what
basis can I possibly refuse to grant it the rights of a person?

But this gets far off-topic. I don't expect a game to pass the Turing
test any time soon -- possibly not in my lifetime. That's not what I
want in a game.

--Z

"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."

Julian Arnold

Jan 11, 1996
Xiphias Gladius (i...@cs.brandeis.edu) wrote:
> As I understand it, the concept of a Turing Test really is a functional
> definition of intelligence. The concept is that something *is*
> intelligent when it can function as something intelligent.
>
> Basically, a computer that passes the Turing Test is one that acts
> intelligent. If I am typing over a terminal, and I can't tell whether
> I'm talking to a reasonably intelligent, if ignorant, human, or a
> computer, then the computer passes, and is intelligent.

But I don't see that this is much of a test of intelligence, at least not the
computer's intelligence. This is mostly new to me, but Eliza, the programs
from the article Jorn posted, and our own IF NPCs are just using various
crude techniques to trick human users into thinking they have some sort of
intelligence.

Eliza, for instance, merely repeats parrot-like (okay, with some simple
modification) the user's input. None of the methods seem able to adequately
cope with anything which hasn't been directly hardwired into them, and even
then are prone to grievous mistakes.

Surely these programs or algorithms or whatever are just reflections of the
intelligence of the programmer, not the computer. In other words, the
"intelligent" computer is merely a crudified mouthpiece for the human
programmer. They have no capacity for learning or understanding, merely a
bit of simplistic repetition and faulty stimulus response.
--
Jools Arnold jo...@arnod.demon.co.uk


Xiphias Gladius

Jan 11, 1996
lar...@enemax.zk3.dec.com (Larry Smith) writes:

>In article <19960109....@arnod.arnod.demon.co.uk>, jo...@arnod.demon.co.uk (Julian Arnold) writes:
>>Branko Collin (U24...@vm.uci.kun.nl) wrote:
>>> The Turing test, for those who do not know, is a test whereby people
>>> have to sit behind a terminal and have to hold a conversation using
>>> screen and keyboard. At the end they have to guess whether they talked to
>>> a computer program or to a human being sat at another terminal. A
>>> computer labelled by the testers as a human passes the Turing test.

>>And computers actually pass this test?!?

> Have for years. Ever since Eliza, in fact. As it turns out, it is
> fairly easy to fool people who have little or no computer
> experience.

As I understand it, the concept of a Turing Test really is a functional
definition of intelligence. The concept is that something *is*
intelligent when it can function as something intelligent.

Basically, a computer that passes the Turing Test is one that acts
intelligent. If I am typing over a terminal, and I can't tell whether
I'm talking to a reasonably intelligent, if ignorant, human, or a
computer, then the computer passes, and is intelligent.

So far, people have been able to make programs that can fool people
for brief periods of time on narrow subjects, or that simulate humans
with certain disorders.

Hell, *I* can write a program that simulates a human with catatonia.

I think most of us could write programs that simulated someone who was
dead. . .

A program that simulated someone with paranoid schizophrenia actually
fooled certain psychiatrists in the late seventies or early eighties.
They hooked the thing up to an Eliza program, and it quickly fell
apart. . .

"Why am I here?"

"Why do you ask why you are here?"

"Why do you want to know why I ask why you are here?"

"Why do you ask why do you want to know why I ask why you are here?"

"Why do you want to know why do you ask why do you want to know why I
ask why you are here?"

No, I'm serious. . . two programs based on the question reflection
method just don't work well together. . . .
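The runaway exchange above is easy to reproduce: if both programs answer by wrapping the last line in another question, every turn can only make the sentence longer. A hypothetical sketch using the same naive reflection idea:

```python
# Two question-reflecting programs feeding each other: each turn
# wraps the previous line in another "Why do you say ...?", so the
# conversation grows without bound instead of going anywhere.
def reflect(line: str) -> str:
    """Answer by bouncing the other speaker's line back as a question."""
    return "Why do you say " + line.rstrip("?.") + "?"

line = "Why am I here?"
for _ in range(3):
    line = reflect(line)
    print(line)
# First turn prints: Why do you say Why am I here?
```

Neither side ever introduces new content, so the loop never terminates on its own; it only nests deeper.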

In the yearly Turing Competition, questioners are given a terminal,
and a subject, and may ask only questions relating to that subject.
After a brief period of interaction, they are given a chance to guess
whether they were talking to a human or a program.

Some programs can act as well as humans in narrow subjects.

- Ian

Brian J Parker

Jan 12, 1996
I'd like to add my $.02 to the discussion of the Turing test. I don't
have my cognitive psych textbooks in front of me, but I took a Behavioral
Models class last semester, and we discussed this.

Turing's suggested definition of AI was an old one, and more of a
philosophical statement than a practical one. He was advocating "black
box" psychology; the idea that it doesn't matter what goes on INSIDE the
mind, all that was worth studying was the input and output. Therefore,
looking at intelligence in that light, he was (IMHO) suggesting that what
happened INSIDE the computer didn't matter... only that it functioned
similarly to the real "black box" of the human mind.

Cheers,

--
Brian J. Parker | VOX (home): (412) 688-0171
Student Technical Analyst | WEB: [http://www.pitt.edu/~bjp3]

Rosetta Stone Home Page: [http://www.pitt.edu/~bjpst6/rosettastone]

Felton42

Jan 12, 1996
Branko Collin (U24...@vm.uci.kun.nl) wrote:
> [...] A computer labelled by the testers as a human passes the
> Turing test.

Can you imagine how few computers perceive intelligence on our side
of the screen?

(No smiley... I hate 'em.)

--- Churl

Neil K. Guy

Jan 12, 1996
Xiphias Gladius (i...@cs.brandeis.edu) wrote:

> As I understand it, the concept of a Turing Test really is a functional
> definition of intelligence. The concept is that something *is*
> intelligent when it can function as something intelligent.

> Basically, a computer that passes the Turing Test is one that acts
> intelligent. If I am typing over a terminal, and I can't tell whether
> I'm talking to a reasonably intelligent, if ignorant, human, or a
> computer, then the computer passes, and is intelligent.

The problem I have with this definition of intelligence, popular as it
is, is that it really doesn't measure the abilities of the computer
program IMO. It measures the ability (or the culpability as appropriate)
of the human tester more than anything... :)

- Neil K.

--
Neil K. Guy * ne...@sfu.ca * te...@tela.bc.ca
49N 16' 123W 7' * Vancouver, BC, Canada

Larry Smith

Jan 12, 1996

In article <4d3gff$b...@news.cc.brandeis.edu>, i...@cs.brandeis.edu
(Xiphias Gladius) writes:
>lar...@enemax.zk3.dec.com (Larry Smith) writes:

>>In article <19960109....@arnod.arnod.demon.co.uk>,
>>jo...@arnod.demon.co.uk (Julian Arnold) writes:
>>>Branko Collin (U24...@vm.uci.kun.nl) wrote:
>>>> The Turing test, for those who do not know, is a test whereby people
>>>> have to sit behind a terminal and have to hold a conversation using
>>>> screen and keyboard. At the end they have to guess whether they talked to
>>>> a computer program or to a human being sat at another terminal. A
>>>> computer labelled by the testers as a human passes the Turing test.

>>>And computers actually pass this test?!?

>> Have for years. Ever since Eliza, in fact. As it turns out, it is
>> fairly easy to fool people who have little or no computer
>> experience.

>As I understand it, the concept of a Turing Test really is a functional
>definition of intelligence. The concept is that something *is*
>intelligent when it can function as something intelligent.

I think you misinterpret the above facts. Yes, the Turing test was
proposed as a functional test. Turing presumed that if an intelligent
being had an intelligent conversation with another intelligent being
without being sure the other being was a computer, then the computer
was, in fact, intelligent by any reasonable standard. Problem is, he
didn't specify the nature of the conversation. He apparently presumed
a wide-ranging, unbound conversation. Well, Eliza can handle that.
Not well - not well enough to fool anyone with enough computer savvy
to see her little grammar transformation tricks - but well enough to
really and thoroughly fool lots of non-computer literate people and do
it on a regular basis. I was helping do such studies when I was in
school years ago. This is _not_ a tribute to Eliza, Ian, it is, if
anything, a stinging indictment of the cognitive abilities of a large
percentage of humanity. What Weintraub did, in essence, was to prove
that the Turing test could be passed by a very simple algorithm - so
unless we want to tighten up the terms and bounds, the test as it stands
is not very helpful. Keeping up a conversation with a human being, it
turns out, does not require much intelligence, if you choose the right
human beings to converse with.

>Basically, a computer that passes the Turing Test is one that acts
>intelligent. If I am typing over a terminal, and I can't tell whether
>I'm talking to a reasonably intelligent, if ignorant, human, or a
>computer, then the computer passes, and is intelligent.

Sure. And for many people, a computer running Eliza _is_ "intelligent".
Eliza _does_ pass the Turing test for large numbers of people. But is
Eliza "really" intelligent? No. This is a warning that we are dealing
with things that are very poorly defined.

>So far, people have been able to make programs that can fool people
>for brief periods of time on narrow subjects, or that simulate humans
>with certain disorders.

Only Parry simulates a person with a disorder, and I've no idea how
well Parry fools people. But neither Eliza nor Racter are in any
way limited to narrow subjects, nor have they only fooled people for
brief periods of time. Most non-computer literate people, in fact,
_never_ catch on to Eliza's semantic tricks, and of those that do not,
many people don't _believe_ the researcher when her tricks are explained
to them, because she "seemed" so intelligent. I've had this argument
with test subjects myself.

>Some programs can act as well as humans in narrow subjects.

You are persisting in trying to understand this as a critique of
the test or the methodology. It isn't. It is a critique of education,
and what constitutes the perception of intelligence in our
society. We aren't talking about narrow subjects here - that just
makes it easier to try to fool better-educated people, but _no_
computer-literate person can be fooled for long by a program - but
it's rather easy for non-computer-literate types to be fooled, and
fooled permanently - and even continue to be fooled when shown
the trick. We will continue to have this problem until we know what
intelligence really _is_ and how it works.

Adrian Preston

Jan 12, 1996
Julian Arnold (jo...@arnod.demon.co.uk) wrote:

: Branko Collin (U24...@vm.uci.kun.nl) wrote:
: > The Turing test, for those who do not know, is a test whereby people
: > have to sit behind a terminal and have to hold a conversation using
: > screen and keyboard. At the end they have to guess whether they talked to
: > a computer program or to a human being sat at another terminal. A
: > computer labelled by the testers as a human passes the Turing test.

: And computers actually pass this test?!?

Worse than that, _people_ actually fail it!

It's true.

--
+----------------------------------------------------------------------+
Adi, Lecturer in ??????? | no .sig
Kingston University | no .sig
+----------------------------------------------------------------------+


Kenneth Jason Fair

Jan 13, 1996
In article <4d68ei$d...@usenet.srv.cis.pitt.edu>, bjp...@pitt.edu (Brian J
Parker) wrote:

>Turing's suggested definition of AI was an old one, and more of a
>philosophical statement than a practical one. He was advocating "black
>box" psychology; the idea that it doesn't matter what goes on INSIDE the
>mind, all that was worth studying was the input and output. Therefore,
>looking at intelligence in that light, he was (IMHO) suggesting that what
>happened INSIDE the computer didn't matter... only that it functioned
>similarly to the real "black box" of the human mind.

Precisely. Turing never meant his comments to be taken as an actual test
for intelligence. What he was trying to point out was that there is no
magic involved in determining what "intelligence" is all about. He was
trying to say that only the empirical results were what mattered.

In fact, if you think about it a bit, you begin to realize that the
Turing Test as it is often cited would leave out things that we might
consider intelligence. As a hypothetical, suppose an alien visitor
comes down to Earth and we figure out a way to talk to it. If we were
to put the alien in one box and a human in another, it would not be hard
for the questioner to determine which was the human, since there would
be knowledge common to humans (like facial expressions) that the alien
would not know. That lack of a common knowledge base would trip up our
alien (or our computer). And yet this alien race was scientifically
advanced enough to communicate, to travel through space, use tools,
make deductions, and so on.

Another example happened to me a couple of years ago when I was working
in Europe. One of the guys there was Norwegian, but had been living in
the U.S. for about eight years and spoke English without a trace of an
accent, so much so that we Americans thought of him as American when
conversing with our French colleagues. But one day, two of us started
to speak in Pig Latin, confusing both the French guys and the Norwegian.
The Americans had all learned Pig Latin as kids, so we just assumed that
everyone who spoke English, especially those who spoke it as well as the
Norwegian did, would know it too.

--
KEN FAIR - U. Chicago Law | Power Mac! | Net since '90 | Net.cop
kjf...@midway.uchicago.edu | CABAL(tm) Member | I'm w/in McQ - R U?
The Constitution is more than simply the words. It includes
all of the legal, political, and social history of America.

Nele Abels

Jan 13, 1996
On 10 Jan 1996, Larry Smith wrote:

> clever simulation of sentience. Eliza was notorious for sucking
> people in, it scared Weintraub (I think that's his name - the
> author of Eliza) how easy it was to fool people. Eliza doesn't
> impress computer types because they quickly spot the pattern of
> responses, but non-computer people are seldom so sensitive.

I have problems believing that. Eliza is so exorbitantly un-intelligent
that every person with a minimal feeling for language should notice after
the third sentence that something is wrong. I mean, you simply die of
boredom after talking for five minutes with this would-be psychiatrist who
is not even able to form syntactically correct sentences. After all, what
can you expect from a 5Kb Basic-programme? :-)

For those interested, there is an ELIZA lying around in gmd
/unprocessed, right now.

Nele

ct

Jan 13, 1996
In article <19960111....@arnod.arnod.demon.co.uk>,
Julian Arnold <jo...@arnod.demon.co.uk> wrote:

>Eliza, for instance, merely repeats parrot-like (okay, with some simple
>modification) the users input.

yep...

> None of the methods seem able to adequately
>cope with anything which hasn't been directly hardwired into them, and even
>then are prone to grievious mistakes.

yep, yep...

>They have no capacity for learning or understanding, merely a
>bit of simplistic repetition and faulty stimulus response.

...yep, sounds like human intelligence to me!

cynical regards, ct

ps Has anyone implemented Eliza in Inform yet?


Alexander Schwarz

Jan 14, 1996
>> >> The Turing test, for those who do not know, is a test
>> >> whereby people have to sit behind a terminal and have
>> >> to hold a conversation using screen and keyboard. At
>> >> the end they have to guess whether they talked to a
>> >> computer program or to a human being sat at another
>> >> terminal. A computer labelled by the testers as a
>> >> human passes the Turing test.
>>
>> >And computers actually pass this test?!?
>>
>> Have for years. Ever since Eliza, in fact. As it turns
>> out,

No computer has EVER passed the Turing test. Some days ago,
someone posted a story of the competition Loebner sets up in
the US every year. So every year, some program is awarded
best and gets some money, but Loebner has promised a prize
of $100,000 for the first program that really passes as a
human.

Alex


Xiphias Gladius

Jan 15, 1996
jo...@arnod.demon.co.uk (Julian Arnold) writes:
>Xiphias Gladius (i...@cs.brandeis.edu) wrote:

>> Basically, a computer that passes the Turing Test is one that acts
>> intelligent. If I am typing over a terminal, and I can't tell whether
>> I'm talking to a reasonably intelligent, if ignorant, human, or a
>> computer, then the computer passes, and is intelligent.

> But I don't see that this is much of a test of intelligence, at
> least not the computer's intelligence. This is mostly new to me,
> but Eliza, the programs from the article Jorn posted, and our own IF
> NPCs are just using various crude techniques to trick human users
> into thinking they have some sort of intelligence.

> Eliza, for instance, merely repeats parrot-like (okay, with some
> simple modification) the user's input. None of the methods seem able
> to adequately cope with anything which hasn't been directly
> hardwired into them, and even then are prone to grievous mistakes.

Right. And none of them pass the Turing Test. They don't function as
intelligent entities.

Eliza merely repeats, as you've pointed out. The fact that certain
humans using certain psychology methods do the same thing does *not*
make Eliza intelligent, although I might argue that it makes the
psychologists nonsentient. . .

Now, if you were to type into a computer terminal, "So, how do you
like baseball?" and it replied, "Actually, I'm not familiar with that
-- what is it?" and you described it, and it asked intelligent
questions, and appeared to formulate an opinion, and could do this on
any subject. . . then it might start getting close to passing the
Turing Test.

- Ian


Xiphias Gladius

Jan 15, 1996
lar...@enemax.zk3.dec.com (Larry Smith) writes:

> (Xiphias Gladius) writes:

>> Basically, a computer that passes the Turing Test is one that acts
>> intelligent. If I am typing over a terminal, and I can't tell whether
>> I'm talking to a reasonably intelligent, if ignorant, human, or a
>> computer, then the computer passes, and is intelligent.

> Sure. And for many people, a computer running Eliza _is_ "intelligent".
> Eliza _does_ pass the Turing test for large numbers of people. But is
> Eliza "really" intelligent? No. This is a warning that we are dealing
> with things that are very poorly defined.

Allow me to point out a semantic subtlety in my original
definition. . .

If *I* am typing over a terminal, and *I* can't tell, etc. . .

I understand that if you use Joe Random off the street as the Turing
Tester that you can get skewed results, since Joe Random may not be
aware of the ideas behind the test, and may not even be aware of the
*possibility* that he or she is talking to a program.

If I grabbed a person off the street, and said, "Here. Sit at this
terminal and type questions and answers in and have a conversation,"
and the person had never heard of Eliza or the Turing Test, the very
*concept* of having a conversation with a program may not cross his or
her mind. Thus, he or she won't be *looking* for evidence for or
against the conversationalist's humanity.

Even if they're briefed on what they're supposed to be doing, unless
they have a strong philosophical *and* logical background, they won't
be able to understand the question of intelligence well enough to
formulate any tests or interpret the data they do have.

So, the Turing Test assumes a certain degree of intelligence,
education, *and* familiarity with the subject on the part of the
tester.

I think that I (barely) fulfil these qualifications, because I've been
thinking about them for years. It certainly sounds as if you do, Mr
Smith. I have a suspicion that *most* of the people on
rec.arts.int-fiction would, in fact, make good Turing Testers, since
writing text adventures requires a creative mind, a working knowledge
of computers, and, in many cases, experience with dealing with
computer automata, which is how this whole thread started. . .

- Ian

Jorn Barger

Jan 15, 1996
In article <4dcgu4$d...@news.cc.brandeis.edu>,

Xiphias Gladius <i...@cs.brandeis.edu> wrote:
>I understand that if you use Joe Random off the street as the Turing
>Tester that you can get skewed results, since Joe Random may not be
>aware of the ideas behind the test, and may not even be aware of the
>*possibility* that he or she is talking to a program.

I think the Loebner Contest (some years) *has* used 'Joe Random'
as the judge, and it resulted in scarily high ratings for Joe
Weintraub's eliza-like programs...


j

Trevor Barrie

Jan 15, 1996
jo...@arnod.demon.co.uk (Julian Arnold) wrote:

>> Basically, a computer that passes the Turing Test is one that acts
>> intelligent. If I am typing over a terminal, and I can't tell whether
>> I'm talking to a reasonably intelligent, if ignorant, human, or a
>> computer, then the computer passes, and is intelligent.

>But I don't see that this is much of a test of intelligence, at least not the
>computer's intelligence.

Seems like the perfect test of intelligence to me. If the computer can
communicate in a manner indistinguishable from that of an intelligent
being, what basis do we have to question whether it is, in fact,
intelligent? Isn't this how we recognize intelligence in people, after
all?

>This is mostly new to me, but Eliza, the programs
>from the article Jorn posted, and our own IF NPC's are just using various
>crude techniques to trick human users into thinking they have some sort of
>intelligence.

Of course they are, but none of these programs can come close to
passing a full Turing Test.


Larry Smith

Jan 15, 1996

In article <Pine.OSF.3.91.960113151844.5291G-100000@leofric>, Nele Abels <ab...@coventry.ac.uk> writes:
>On 10 Jan 1996, Larry Smith wrote:

>> clever simulation of sentience. Eliza was notorious for sucking
>> people in, it scared Weintraub (I think that's his name - the
>> author of Eliza) how easy it was to fool people. Eliza doesn't
>> impress computer types because they quickly spot the pattern of
>> responses, but non-computer people are seldom so sensitive.

>I have problems believing that. Eliza is so exorbitantly un-intelligent
>that every person with a minimal feeling for language should notice after
>the third sentence that something is wrong.


Yes. They _should_. No, they _didn't_. This is more a data point
for human shrinks than AI researchers, however. The single biggest
factor in Eliza's ability to fool people was the education level of
the human she conversed with. The higher it was, the less likely,
and the shorter time, she fooled them. The people she fooled longest,
and who were most likely to refuse to see _how_ she fooled them, were
among the high-school dropouts. Not unexpected in retrospect, but a
real shocker that _any_ human could be fooled that way. Among that
group, Eliza was actually perceived as very human - partly, for one
rather bizarre example, because she, too, made spelling mistakes!
Some people noticed she didn't spell very well, but they never twigged
to the fact that she was spelling _their_ way, every time! I also
wonder if the human expectations were set for more indulgence by her
"doctor" status - I wonder if she doesn't "sound" high-brow because of
the context of the contact, but I can't recall those details right now
(if I ever read them). A number of papers were done from the study in
various computer and psych rags, which I didn't pay much attention to
after the initial study, or I'd cite them.

Don't misunderstand me, I, too, find it impossible to think that
Eliza could _ever_ fool a human being. Yet I _saw_ her do it. I
can only conclude that when _I_ "sense" intelligence in another I
am using very different criteria and signals than another human
being with a different background and training might. It may be an
important cultural data point as well. I can only speculate _how_
Eliza managed this, but I can't dispute she did so.


> I mean, you simply die of
>boredom after talking for five minutes with this would-be psychiatrist who
>is not even able to form syntactically correct sentences. After all, what
>can you expect from a 5Kb Basic-programme? :-)

Weintraub's original version was in Lisp, I think, and the 5k Basic
version was more limited - but it was still much the same kind of
creature. And yes, to anyone on the ball, she _is_ terribly, terribly
boring.

Larry Smith

Jan 16, 1996

In article <4dcg53$d...@news.cc.brandeis.edu>, i...@cs.brandeis.edu (Xiphias Gladius) writes:

>Eliza merely repeats, as you've pointed out. The fact that certain
>humans using certain psychology methods, do the same thing does *not*
>make Eliza intelligent, although I might argue that it makes the
>psychologists nonsentient. . .

I'll go along with _that_! =)

Hr.Martin Jerabek

Jan 17, 1996
Larry Smith (lar...@enemax.zk3.dec.com) wrote:

: Weintraub's original version was in Lisp, I think, and the 5k Basic


: version was more limited - but it was still much the same kind of
: creature. And yes, to anyone on the ball, she _is_ terribly, terribly
: boring.

OK, since nobody put it right till now: the guy who wrote Eliza is
called Joseph Weizenbaum, not Weintraub. You might want to check out
his book "Computer Power and Human Reason".

Jerry

--
Martin "Jerry" Jerabek in /earth/europe/austria/vienna
mailto: jer...@rm6208.gud.siemens.co.at
Do you really think Siemens cares what I say?

Jacob Solomon Weinstein

Jan 17, 1996
edh...@eden-backend.rutgers.edu (Bozzie) writes:

>Edan Harel
>Who wonders how many posters to r.a.i-f are really AI machines in
>disguise.

Please tell me more about why you wonders how many posters to r.a.i-f are
really AI machines indisguise.

Bozzie

Jan 17, 1996
In article <1770BACC1S...@vm.uci.kun.nl>
U24...@vm.uci.kun.nl (Branko Collin) writes:

> But the Turing test was originally devised as a test to decide if
> a computer(program) is intelligent. In this it has failed, because of
> two reasons (as far as I can tell):
> -there is no good definition of intelligence.

Exactly. If you had a program that played chess and won all the time,
and you had one that used the same strategy as human chess winners, which
would be more intelligent? A programmer would say the first (always
wins, therefore it's smart and intelligent). A psychologist might say
the second (acts like a human). In fact, neither of these is
intelligence.

The question is, what is true intelligence? Is it self awareness?
Critical/logical thinking (ie, if a then b and b then c, then if a then
c)? Is it creativity? Is it the ability to see in a new way? Is it
the ability, devoid of any random elements, to still be
unpredictable? Is it the ability of the program to learn (another hard
word to define; literally it would mean creating a file with all its
memories for it to learn from. I helped make a tic-tac-toe game like
this.)

> -the human factor: people can be fooled into thinking they are dealing
> with an intelligent being.

And then, of course, there are humans that can fool us into thinking
they're not intelligent beings ;-)


Bozzie

Jan 17, 1996
In article <4d3gff$b...@news.cc.brandeis.edu>
i...@cs.brandeis.edu (Xiphias Gladius) writes:

> Basically, a computer that passes the Turing Test is one that acts
> intelligent. If I am typing over a terminal, and I can't tell whether
> I'm talking to a reasonably inteligent, if ignorant, human, or a
> computer, then the computer passes, and is intelligent.
>

But the thing is, even though these programs could "simulate" a
human in one respect, can they do so well? Say I had a program, and
tested it by giving it some info (the number of different grades given
by different teachers during the year), and then wanted to ask him
about it. Suppose I ask him how many A's Mr Jones gave and I get 0.
Then I ask him how many F's he gave. 0. Now I continue to ask him,
and I learn that he gave no grades at all (he didn't teach this year).
Now, something *intelligent* would have told me that, but unless the
program is specifically designed to respond with that phrase (which
would be pointless, because there are tons of similar problems
that the programmer would have to write, which would take far too long),
it wouldn't say that unless it was specifically asked whether he worked
this year.
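The gap described here, between answering each query literally and volunteering the more informative general fact, can be shown with a toy sketch in Python (the teachers and grade data are invented):

```python
# Toy illustration (all data invented): a literal query-answerer
# vs. one that volunteers the more informative fact.
grades = {"Jones": {}, "Smith": {"A": 4, "B": 10}}

def literal_answer(teacher, grade):
    # Answers exactly what was asked, nothing more.
    return grades[teacher].get(grade, 0)

def helpful_answer(teacher, grade):
    # Notices the general fact behind the string of zeroes.
    if not grades[teacher]:
        return teacher + " gave no grades at all this year."
    return str(literal_answer(teacher, grade))

print(literal_answer("Jones", "A"))   # -> 0
print(helpful_answer("Jones", "A"))   # -> Jones gave no grades at all this year.
```

The second answerer needs that one extra check to be written in; nothing in the literal one would ever produce it unprompted.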

Bozzie

Jan 17, 1996
In article <4dcg53$d...@news.cc.brandeis.edu>
i...@cs.brandeis.edu (Xiphias Gladius) writes:

> Now, if you were to type into a computer terminal, "So, how do you
> like baseball?" and it replied, "Actually, I'm not familiar with that
> -- what is it?" and you discribed it, and it asked intelligent
> questions, and appeared to formulate an opinion, and could do this on
> any subject. . . then it might start getting close to passing the
> Turing Test.

Thing is, what are those opinions based on?

Say you describe a baseball as a round, white object to be thrown
around and hit by a stick-like bat (not a great def'n, I know). Then
the program could randomly pick out an adjective and say "I don't like
white, therefore I don't like baseball" (ok, in better terms, but
still). That's just a random opinion at worst, and at best the likes and
dislikes are pre-fed into the program.

Xiphias Gladius

Jan 17, 1996
lar...@enemax.zk3.dec.com (Larry Smith) writes:

> In article <Pine.OSF.3.91.960113151844.5291G-100000@leofric>, Nele
> Abels <ab...@coventry.ac.uk> writes:

>> I have problems believing that. Eliza is so exorbitantly
>> un-intelligent that every person with a minimal feeling for language
>> should notice after the third sentence that something is wrong.

> Yes. They _should_. No, they _didn't_. This is more a data point
> for human shrinks than AI researchers, however.

A data point *for* human shrinks, or *about* human shrinks? (Insert
smileys as needed. . .)

I know that, well, I did meet actual human psychiatrists who seemed
to emulate Eliza . . . the reflexive technique is actually used in
therapy. And I'd even met shrinks who used it in conversation. That
is, if possible, even MORE annoying than when a Lisp program does that
to you . . .

I wonder if some of the "fooling" was because people just don't, or
didn't, expect much intelligence from psychiatrists. In the past
decade or so, it seems to me that shrinks have gotten a *lot* better,
but I wonder if some of the confusion stemmed from the perception
that, at the time, shrinks *were* just that limited. . .

On a related question, and as an ObInt-Fiction, has anyone ever
written a psychiatrist actor, who, when confronted with a question
that it can't deal with, defaults to "Eliza-ing?" When I saw the TADS
port of Eliza, that seemed to be the obvious thing to do, but I'm
hardly a good enough programmer to figure out how to do so. . .
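The fallback idea might look something like this. A hypothetical sketch in Python rather than TADS; all the names and canned responses are invented:

```python
import random

# Hypothetical sketch (in Python rather than TADS; names and responses
# are invented): a psychiatrist actor with a few scripted replies that
# defaults to "Eliza-ing" on anything it can't handle.
SCRIPTED = {
    "hello": "Good afternoon. What brings you here?",
    "goodbye": "Our time is up. Same time next week?",
}

FALLBACKS = [
    "Why do you say that {topic}?",
    "Tell me more about {topic}.",
    "How does {topic} make you feel?",
]

def psychiatrist_reply(player_input):
    key = player_input.lower().strip()
    if key in SCRIPTED:
        return SCRIPTED[key]
    # Unhandled input: reflect the player's own words back at them.
    return random.choice(FALLBACKS).format(topic=player_input.lower())

print(psychiatrist_reply("hello"))
print(psychiatrist_reply("My mother hates the troll"))
```

The author writes the scripted part; the fallback guarantees the actor never falls silent, which is the whole appeal of the trick.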

- Ian

Branko Collin

Jan 18, 1996
In article <4dhuv5$3...@news.cc.brandeis.edu>

i...@cs.brandeis.edu (Xiphias Gladius) writes:
>
>On a related question, and as an ObInt-Fiction, has anyone ever
>written a psychiatrist actor, who, when confronted with a question
>that it can't deal with, defaults to "Eliza-ing?" When I saw the TADS
>port of Eliza, that seemed to be the obvious thing to do, but I'm
>hardly a good enough programmer to figure out how to do so. . .
>

Coming back to the original subject, the entry date for the next
Loebner prize ($2000 for humanest software) is somewhere in March. Has
anyone tried to make a program for the contest using an adventure creator?

Larry Smith

Jan 18, 1996

In article <4dit8l$e...@news.siemens.at>, jer...@mx5217.gud.siemens.co.at (Hr.Martin Jerabek) writes:
>Larry Smith (lar...@enemax.zk3.dec.com) wrote:

>: Weintraub's original version...

>OK, since nobody put it right till now: the guy who wrote Eliza is
>called Joseph Weizenbaum, not Weintraub. You might want to check out
>his book "Computer Power and Human Reason".

Well, I got _most_ of the letters right, and in more-or-less the
right order. Not bad for a 20-year-old organic database engine...

Arlo Smith

Jan 23, 1996
Xiphias Gladius (i...@cs.brandeis.edu) wrote:
: I wonder if some of the "fooling" was because people just don't, or

: didn't, expect much intelligence from psychiatrists. In the past

Well, they are medical doctors. After the patient cools his heels in the
waiting room for an hour, enters a room decorated with pictures of farm
animals, sits in a low chair (legs shortened by a handsaw) in the shadow
of a giant mahogany desk, spots (with terror) the doctor's three
published books on Inhibition, Melancholia, and Bed-Wetting propped
against a half-finished plastic model of a brain, he's ready to be fooled
by a sock with a cigar in its mouth. ;)

What it is, I think, is that people fooled by Eliza expected very little
from computers, not psychiatrists.

Anyway, I'm making some effort to give NPCs flexibility by assigning them
characteristics which are then evaluated to determine action, rather than
specifying what the action may be. This is probably a big joke and I'm
wasting my time. As for conversation, I can't see much hope. Intent on
creating a many-branched dialogue, I used a flow diagram to model possible
lines of conversation (Inspiration for Windows) and it reminded me of a
car company production schedule.
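The characteristic-evaluation idea can be sketched in Python. This is a hypothetical illustration; all trait names, weights, and candidate actions are invented:

```python
# Hypothetical sketch of trait-driven NPC behaviour (trait names,
# weights and actions are invented): instead of hard-coding what an
# NPC does, score each candidate action against its characteristics.

def choose_action(npc, actions):
    def score(action):
        return sum(npc.get(trait, 0.0) * weight
                   for trait, weight in action["weights"].items())
    # Pick the action that best fits this particular NPC.
    return max(actions, key=score)["name"]

actions = [
    {"name": "flee",  "weights": {"bravery": -1.0}},
    {"name": "fight", "weights": {"bravery": 1.0}},
    {"name": "bribe", "weights": {"greed": 1.0}},
]

guard = {"bravery": 0.9, "greed": 0.2}
coward = {"bravery": 0.1, "greed": 0.8}
print(choose_action(guard, actions))   # -> fight
print(choose_action(coward, actions))  # -> bribe
```

The appeal is that one list of actions serves every NPC; two characters with different numbers behave differently without any per-character code.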


root

Jan 26, 1996
Bozzie (edh...@eden-backend.rutgers.edu) wrote:
: In article <1770BACC1S...@vm.uci.kun.nl>
: U24...@vm.uci.kun.nl (Branko Collin) writes:

: > But the Turing test was originally devised as a test to decide if
: > a computer(program) is intelligent. In this it has failed, because of
: > two reasons (as far as I can tell):
: > -there is no good definition of intelligence.

: Exactly. If you had a program that played chess and won all the time,
: and you had one that did the same strategy as chess winners, which
: would be more intelligent? A programmer would say the first (always
: wins, therefore it's smart and intelligent). A psychologist might say
: the second (acts like a human). In fact both of these are not
: intelligence.

Being a programmer, I'd say no to the first. Performing a task well is not
intelligence.

: The question is, what is true intelligence? Is it self awareness?
: Critical\logical thinking (ie, if a then b and b then c, then if a then
: c). Is it creativity? Is it the ability to see in a new way? Is it
: the ability, that devoid of any rendom elements it will still be
: unpredictable? Is it the ability of the program to learn (another hard
: word to define. literally it would mean creating a file with all it's
: memories for it to learn from. I helped make a tic-tac-toe game like
: this.)

This is the central question of AI. The hell of it is, while we've come
up with all sorts of definitions over the years, none of them work. We
don't know what it is, and only have the dimmest sort of glimmerings as
to what we term intelligent and what we don't.

Your average cockroach for example is probably not intelligent. It's a
damn fine machine though. It works on a set of simple, interlocking
rules that cause it to behave the way it does. The thing is, nobody really
knows if people are just an extension of this (add several billion
rules, stir well, and gestate).

Trying to define intelligence falls down badly, because usually the few
things we try to define it in terms of (self-awareness, creativity, etc)
are as yet undefined and not understood either. One may as well say that
'An intelligent entity can glozrpak, and mixnoth, and is able to deal
with pluggandisp.' Plug in free-will, creativity, self-awareness,
adaptability, imagination, symbolic abstraction or any of the other
apparent qualities of intelligence that we do not understand.

: > -the human factor: people can be fooled into thinking they are dealing
: > with an intelligent being.

: And then, of course, there are humans that can fool us into thinking
: they're not intelligent beings ;-)

True. :)

D


root

Jan 26, 1996
Bozzie (edh...@eden-backend.rutgers.edu) wrote:

: Edan Harel


: Who wonders how many posters to r.a.i-f are really AI machines in
: disguise.

Hmm. That would be me. I have (on the order of) 10^23 neural interconnections
designed in a parallel-processing local and wide network environment
utilizing excitatory and inhibitory message passing and communications
schemas. I have a vocabulary of around 23,000 words (larger than the average
11,000 for this particular country) and am capable of performing contextual
transformations on data, coupled with fine-motor-sequencing in an activity
called typing. Alternatively, I can use an audio-vibratory I/O mechanism
in order to speak.

Internal message passing is an adaptive homeostatic process which can
deal (remarkably effectively) with a wide variety of transfer molecules,
and shortages due to irregular or poor intake of supplies.

On the whole, that it works at all is pleasantly surprising.

D


Carl Muckenhoupt

Jan 29, 1996
dan...@brisnet.org.au (root) writes:

>Bozzie (edh...@eden-backend.rutgers.edu) wrote:
>: In article <1770BACC1S...@vm.uci.kun.nl>
>: U24...@vm.uci.kun.nl (Branko Collin) writes:

>: > But the Turing test was originally devised as a test to decide if
>: > a computer(program) is intelligent. In this it has failed, because of
>: > two reasons (as far as I can tell):
>: > -there is no good definition of intelligence.

Er... isn't this the *reason* for the Turing test?
If we had a precise definition of intelligence, it would be (relatively)
easy to detect. We don't, so we have to use our judgement instead. The
Turing test is just a way of formalizing that judgement.

--
Carl Muckenhoupt | Text Adventures are not dead!
b...@tiac.net | Read rec.[arts|games].int-fiction to see
http://www.tiac.net/users/baf | what you're missing!

Commander Space Dog

Jan 29, 1996
root (dan...@brisnet.org.au) wrote:

: Bozzie (edh...@eden-backend.rutgers.edu) wrote:
: : In article <1770BACC1S...@vm.uci.kun.nl>
: : U24...@vm.uci.kun.nl (Branko Collin) writes:

: : Exactly. If you had a program that played chess and won all the time,


: : and you had one that did the same strategy as chess winners, which
: : would be more intelligent? A programmer would say the first (always
: : wins, therefore it's smart and intelligent). A psychologist might say
: : the second (acts like a human). In fact both of these are not
: : intelligence.

I would propose that a program that used a human-like strategy, but also
had the capacity to learn from past mistakes, would better fit a
reasonable definition of intelligence than either of those two.
--
+-----------------------------------------------------------------------------+
| "No matter what happens, we'll still be able to enjoy cheese." |
| -The Hon. Dr. Bruce "Eddie" Zambini |
+-----------------------------------------------------------------------------+

Morten Pedersen

Jan 30, 1996

: : Edan Harel


: : Who wonders how many posters to r.a.i-f are really AI machines in
: : disguise.

: Hmm. That would be me. I have (on the order of) 10^23 neural interconnections
: designed in a parallel-processing local and wide network environment
: utilizing excitatory and inhibatory message passing and communications

You must be very clever - humans normally have about 100 billion neurons
with around 10,000 synaptic connections each, giving 10^15 interconnections.
Thus you are about 100 million times more interconnected than most people.
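The arithmetic, for anyone checking along at home:

```python
# Checking the arithmetic above.
neurons = 10**11                     # ~100 billion neurons
synapses_per_neuron = 10**4          # ~10,000 connections each
human = neurons * synapses_per_neuron
print(human == 10**15)               # -> True, 10^15 interconnections
print(10**23 // human)               # -> 100000000, i.e. 100 million times
```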


Morten

Sj Pover

Jan 31, 1996
I want to get back into IF. I previously used AGT and found it
pretty good, but it lacked the capability to handle some of the more
complex conditions I wanted to include. This was several years
ago. I stopped using it because it was too frustrating not being
able to program the neat (IMHO) things I had imagined.

I have recently discovered this group and was delighted to discover
the wealth of new (to me) development systems. I have studied the FAQ
and Inform Developers guide and enjoyed them tremendously. They do not,
however, give the kind of information I need to choose. TADS and
Inform seem to be the most sophisticated systems currently available,
(Comments?) but I am having a great deal of trouble deciding which
to get into bed (so to speak) with.

Cost (within reason), availability of source code and portability
to non-PC platforms are not important to me nor do I need any ability
to include graphics or sound in my game/story. I want a powerful
system that will allow me to design a large, complex environment and
sophisticated AI. Good development aids and debugging tools would be
nice. I have a fairly strong, although rusty, programming background
so programming, per se, does not frighten me.

In order to make a choice, it seems to me that one should get a really
good level of expertise in both. Being somewhat lazy, I would like
to hear the opinions of any of you who have done this already; if you
don't mind too much my piggy-backing on your efforts.

I look forward to any help you can give.

By the way, does anyone know why AGT was dropped? It seemed to be a
pretty useful tool for developing games of moderate complexity by
non-programmer types.

SJ
---
* DeLuxe2/386 1.25 #10827 * I tried the rest but bought the best!!!!

Carl D. Cravens

Jan 31, 1996
On Wed, 31 Jan 96 13:53:00 -0500, sj.p...@canrem.com (Sj Pover) wrote:
>however, give the kind of information I need to choose. TADS and
>Inform seem to be the most sophisticated systems currently available,
>(Comments?) but I am having a great deal of trouble deciding which

Unfortunately, you didn't exactly give us your criteria for choosing.

TADS and Inform are the two "top dogs"... Inform being free and TADS
costing $40 makes a big difference for a lot of people. Both have
excellent documentation. I personally couldn't get used to Inform's
style, having learned TADS first.

Both are equally flexible, although each has different strengths and
weaknesses. (I don't think Inform does dynamic creation/deletion of
objects yet, while TADS does, for instance. But you can't completely
rewrite the parser in TADS, while Inform's parser is written IN Inform's
language.) Both are well-supported by their authors. There are a large
number of interpreters available for Z-Machine (Inform) games, while
you're "stuck" with the one interpreter available for TADS. (I happen
to prefer the TADS interpreter to any Z-Machine ones I've seen, but
that's personal preference. I know others who feel the opposite.)

My advice is to pick up both of them and take a look... the one is free
and the other is shareware. (TADS shareware doesn't come with a lot of
documentation, but the manual you get for registering is well done.)
You can compare them to your heart's content before you commit to either
one.

Oh, one other thing, which kinda clinched it for me... TADS has a
run-time debugger that comes with the registered version.

--
Carl (rave...@southwind.net)
* Hey! We're out of wine, women, and song! !@#$*!?% NO MERRIER

Bozzie

Feb 1, 1996
dan...@brisnet.org.au (root) wrote:

>Being a programmer, I'd say no to the first. Performing a task well is not
>intelligence.

Well, perhaps that's true, but the point is, they're *both* not
intelligent. They don't have a reason for following the strategies they
follow. The closest I could come to a program that *reasoned*
out its moves is one that saves a record of every game and then, at every
move, examines the records and follows the move that won in previous
games (although it'd have to start out doing things randomly). I wrote a
program like this for checkers with some other people.
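That record-keeping scheme can be sketched in a few lines of Python. A hypothetical illustration of the idea (positions and moves are plain strings here), not the actual checkers program:

```python
import random

# Hypothetical sketch of the record-keeping learner described above:
# count how often each (position, move) pair appeared in a won game,
# and prefer moves with wins on record.
win_counts = {}

def record_win(history):
    # history: list of (position, move) pairs from a game that was won.
    for pos, move in history:
        win_counts[(pos, move)] = win_counts.get((pos, move), 0) + 1

def choose_move(position, legal_moves):
    scored = [(win_counts.get((position, m), 0), m) for m in legal_moves]
    best = max(scored)
    if best[0] == 0:
        # Nothing learned yet: start out doing things randomly.
        return random.choice(legal_moves)
    return best[1]

record_win([("start", "center")])
print(choose_move("start", ["corner", "center", "edge"]))  # -> center
```

Saving `win_counts` to a file between sessions gives exactly the "file with all its memories" mentioned earlier in the thread.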
>

[Problem of defining Intelligence snipped]

>This is the central question of AI. The hell of it is, while we've come
>up with all sorts of definitions over the years, none of them work. We
>don't know what it is, and only have the dimmest sort of glimmerings as
>to what we term intelligent and what we don't.

Well, I don't see anything wrong with the definition of intelligence as
being creative or unpredictable without having any "random" elements
inside your program. Course, is anything truly unpredictable? No
program can "randomly" pick a number; it generally follows an algorithm
for computing a number using an irrational number, such as pi.
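The point that a program's "randomness" is just an algorithm whose output could be predicted can be illustrated with a tiny generator. In practice most systems use something like a linear congruential generator rather than digits of pi; this is a generic sketch, not any particular language's actual generator:

```python
# A tiny linear congruential generator: every "random" number is fully
# determined by the seed, so the whole stream is predictable to anyone
# who knows the algorithm. (Constants are the well-known Numerical
# Recipes choices.)
def lcg(seed):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

a, b = lcg(42), lcg(42)
# Two generators with the same seed are perfectly predictable:
print([next(a) for _ in range(3)] == [next(b) for _ in range(3)])  # -> True
```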


>
>Your average cockroach for example is probably not intelligent. It's a

I think you're mistaking intelligence for being smart. A cockroach isn't
smart, but a computer program that beats the world's chess champion would
be smart. An encyclopedia can be considered smart (a collection of data),
or else a genius who makes amazing steps forward with sudden sparks of
insight can be considered "smart".

Intelligence, on the other hand, means the ability to understand. Now,
we don't know how much a cockroach knows, but I'm fairly sure that it
can't beat the world's champion or understand the basics of physics.
However (and I really wouldn't know, as I haven't been a cockroach for as
far back as I can remember), a cockroach *might* understand things in
its own universe. It might know not to step underneath a foot because
it might understand that it could get stepped on. Now I don't know if
this is an actual "understanding" or merely a natural response to
something learned over thousands of years as an evolutionary process, but
it's something to think about. If a bug can "understand" its own
universe, then it's just as intelligent as we are (or close, anyway).
Especially since we don't know that much about our *own* universe.

>damn fine machine though. It works on a set of simple, interlocking
>rules that causes it to behave like it does. The thing is, nobody really
>knows if people are just an extension of this (add several billion
>rules, stir well, and gestate).

Well, I would tend to think that people *aren't*. I mean, that's the
basic assumption, isn't it? I think therefore I am. If we're just a
natural process, there's no point to our arguing or not arguing, because
whatever will happen will happen and our fate is sealed.

>
>Trying to define intelligence falls down badly, because usually the few
>things we try to define it in terms of (self-awareness, creativity, etc)
>are as yet undefined and not understood either. One may as well say that

Now, that's true. But I don't see what's wrong with, say, using the
following test (albeit no computer will ever pass it). Take the
program and the inputs, with no random elements in the program
(just for simplification, because, as I said before, nothing in computers
*is* random, merely an algorithm to get numbers, which could be
predicted if you know the algorithm). If you *can't* predict the
outcome, the computer has artificial intelligence.

Of course, testing humans would be a bit more difficult. You'd have to
take all the sociological inputs and the genetic inputs (I had a
roommate who thought that every person was defined by their genetics and
society. A very pessimistic attitude, in my opinion.) and see if you
could predict the person. Presumably, there's something *more* to humans
(or any life). Call it a soul or what have you.

>'An intelligent entity can glozrpak, and mixnoth, and is able to deal
>with pluggandisp.' Plug in free-will, creativity, self-awaareness,
>adaptability, imagination, symbolic abstraction or any of the other
>apparent qualities of intelligence that we do not understand.

But that's just it, as "Intelligent Beings" we "understand" those terms.
Just not to the same level, because they are debatable.


Mark Borok

Feb 1, 1996
In article <7+AExwIe...@southwind.net>, rave...@southwind.net (Carl

D. Cravens) wrote:
>
> TADS and Inform are the two "top dogs"... Inform being free and TADS
> costing $40 makes a big difference for a lot of people. Both have
> excellent documentation. I personally couldn't get used to Inform's
> style, having learned TADS first.
>
I got TADS myself because somebody on this group complained about the
Inform documentation being too difficult.

Question: is there a good development system that accepts graphics? I've
been thinking it might be nice to combine text-based puzzles with visual
ones (such as a map, for example). Or is this anathema to text-adventure
enthusiasts? Most games seem to either be all-text or all-graphics (MYST)
or provide you with limited commands, which takes all the fun out of
experimenting with different commands and getting unexpected responses.
Would it be in bad taste if a future upgrade of TADS or INFORM had
graphics/sound support?

--Mark

Graham Nelson

Feb 2, 1996
In article <markb-01029...@brad.pcix.com>, ma...@pcix.com (Mark Borok) writes:
> In article <7+AExwIe...@southwind.net>, rave...@southwind.net (Carl
> D. Cravens) wrote:
>>
>> TADS and Inform are the two "top dogs"... Inform being free and TADS
>> costing $40 makes a big difference for a lot of people. Both have
>> excellent documentation. I personally couldn't get used to Inform's
>> style, having learned TADS first.
>>
> I got TADS myself because somebody on this group complained about the
> Inform documentation being too difficult.
>
I'm sorry about that! There was a minor flame skirmish on that theme,
and I do accept that beginners find the Designer's Manual hard going,
but quite a lot of people do like the manual (or so they're kind enough
to say to me, anyway). Perhaps it's worth a look.

> Question: is there a good development system that accepts graphics? I've
> been thinking it might be nice to combine text-based puzzles with visual
> ones (such as a map, for example). Or is this anathema to text-adventure
> enthusiasts? Most games seem to either be all-text or all-graphics (MYST)
> or provide you with limited commands, which takes all the fun out of
> experimenting with different commands and getting unexpected responses.
> Would it be in bad taste if a future upgrade of TADS or INFORM had
> graphics/sound support?
>
> --Mark

In a sense Inform can do this now. You'd have to do most of the graphics
programming in "assembly language" (fairly low level code), and you'd
need an interpreter capable of displaying Version 6 graphics, but these
are soon to become available.

As for bad taste... well, it would be sad-ish. In some ways. I think all
of the four Infocom graphics games would have been better without the
graphics (Shogun is actually not bad underneath the graphics, I think),
except possibly Journey. But Journey isn't really an adventure game at all.

Graham Nelson

Carl D. Cravens

Feb 2, 1996
On 2 Feb 1996 17:57:05 GMT, thar...@saucer.cc.umr.edu (Nudge Nudge) wrote:
> 8=> Dumb question, Bonni, what does IDE stand for?
>
>Integrated Debugging Environment.

Integrated Development Environment.

--
Carl (rave...@southwind.net)
* The thoughtless are rarely wordless. -- Howard W. Newton

Matthew Russotto

Feb 3, 1996

He's using 10^15 for himself, and the rest for handling the simulation we call
"reality". Yep, I'm a solipsist-by-proxy :-)


Sj Pover

Feb 3, 1996
u6...@wvnvm.wvnet.edu (bonni mierzejewska) wrote about IDE's

UU>as a shell for all the programming functions. An IDE has an editor (which
UU>can hopefully view more than one file at a time), a window or a viewport
UU>for error messages, sometimes an integrated debugger, and can run your
UU>compiler. A good versatile IDE will have editor options that control text
UU>and font styles as well as the usual cut-and-paste operations, and
UU>compiler options (Inform has whole bunches of these) that can be set.

What text editors do you like for programme development?

Sj Pover

Feb 3, 1996
Hi Adam,

AJ>I like TADS a little better; it's more C++ish and I like the way
AJ>inheritance makes defining default behaviors very easy.
I may have to get back to you about "inheritance defining default
behaviours" in a little while.

AJ>Inform feels more like "real programming".
I wonder, could you expand just a bit on this? This is the kind of
distinction I was looking for in a comparison between the two.

How easy/difficult did you find it to go from one to the other?
Which did you start with? Why did you change, or do you use one or
the other depending on what you want to write?

AJ>Either one will do you fine.
Probably, but I'm a bit anal.

AJ>Adam
Thanks a lot for your time.

Sandra

Uncle Bob Newell

Feb 3, 1996
For what it's worth, my revised and much expanded "Which Authoring System
is Better" FAQ will be released around February 20. This FAQ contains
about 54 typeset pages of either information or rambling, depending on
your viewpoint, on comparative evaluations of authoring systems. It
covers TADS, Inform, ALAN, Hugo, Archetype, AGT, GINAS, and a few others.

Right now it is out for limited review and will be published Real Soon.

Uncle Bob

Andrew C. Plotkin

Feb 3, 1996
nel...@vax.oxford.ac.uk (Graham Nelson) writes:
> As for bad taste... well, it would be sad-ish. In some ways. I think all
> of the four Infocom graphics games would have been better without the
> graphics (Shogun is actually not bad underneath the graphics, I think),
> except possibly Journey. But Journey isn't really an adventure game at all.

All right, I'll bite. Why is Journey not an adventure game?

It's fiction, it's interactive, it has pretty much the same kind of
puzzle situations that we're familiar with. (Although it's got a lot
more of the homogeneous-resource-allocation puzzle than most games.)

The command structure is the biggest change. Instead of a parser that
interprets input text as low-level (one-object-at-a-time) actions, you
have a menu of choices for dealing with a situation at a higher level
("Fred tries to make friends with the troll" or "Bob casts the
red-blue-red spell.") Is this the difference you mean? I wouldn't
label it as that significant.

--Z

"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."

Mark Borok

Feb 3, 1996
In article <1996Feb2.152855@oxvaxd>, nel...@vax.oxford.ac.uk (Graham
Nelson) wrote:

> In article <markb-01029...@brad.pcix.com>, ma...@pcix.com (Mark
Borok) writes:
> >>
> > I got TADS myself because somebody on this group complained about the
> > Inform documentation being too difficult.
> >
> I'm sorry about that! There was a minor flame skirmish on that theme,
> and I do accept that beginners find the Designer's Manual hard going,
> but quite a lot of people do like the manual (or so they're kind enough
> to say to me, anyway). Perhaps it's worth a look.

I'm sure Inform is a great system, but as a non-programmer I need a lot of
hand-holding. Once I've figured out TADS, I would be interested in
checking out Inform as well.

Anyway, Curses is the best game I've played so far (not that I've played
that many).


>
> > Question: is there a good development system that accepts graphics?

> As for bad taste... well, it would be sad-ish. In some ways. I think all


> of the four Infocom graphics games would have been better without the
> graphics (Shogun is actually not bad underneath the graphics, I think),
> except possibly Journey. But Journey isn't really an adventure game at all.

I agree that graphics, where used, should be used with restraint. In
particular, I think there are instances where visual clues and puzzles
could be incorporated into a game without losing good writing.

What is "Journey"? Is it still available from Activision?

--Mark
>
> Graham Nelson

athol-brose

Feb 3, 1996
In article <4l4jkp_00...@andrew.cmu.edu>,
Andrew C. Plotkin <erky...@CMU.EDU> wrote:
>All right, I'll bite. Why is Journey not an adventure game?

Well...

>The command structure is the biggest change. Instead of a parser that
>interprets input text as low-level (one-object-at-a-time) actions, you
>have a menu of choices for dealing with a situation at a higher level
>("Fred tries to make friends with the troll" or "Bob casts the
>red-blue-red spell.") Is this the difference you mean? I wouldn't
>label it as that significant.

I think this is the thing that makes Journey something less of an
adventure game (by classical, Zork-style definitions) and more of a
choose-your-own-adventure book translated to a computer. In many
situations, the actions you can take are severely curtailed. Also in
many spots, you may only take one action before the plot is moved
ahead, again like a pick-a-path. In the areas where you can stay in
one place, it is only until you do the correct action to move you
along... and you can never never go back to where you've been (though
arguably, this is because of the extended-quest format of Journey.)

I like the game, however; like it a lot. I still haven't finished it
-- haven't had the time to do much with my new copies of LTOI 1 & 2 at
all. I will sit down and play it through sometime, however, which is
more than I can say for, for instance, Shogun.

--r.

--
r. n. dominick -- cinn...@one.net -- http://w3.one.net/~cinnamon/

Sj Pover

Feb 4, 1996
FROM: BNE...@GOBBLERNET.SPUTNIX.COM

UB> For what it's worth, my revised and much expanded "Which Authoring System
UB> is Better" FAQ will be released around February 20. This FAQ contains
<snip>
I'm waiting with anticipation and impatience, Uncle Bob <BG>.
SJ

Christopher E. Forman

Feb 4, 1996
Mark Borok (ma...@pcix.com) wrote:
: I agree that graphics, where used, should be used with restraint. In
: particular, I think there are instances where visual clues and puzzles
: could be incorporated into a game without losing good writing.

Two words in defense of graphics: Automatic mapping. I agree, most of the
graphics in Infocom's later adventures were gratuitous, but that built-in
map was a great time-saver.

: What is "Journey"? Is it still available from Activision?

"Journey" was the first (and, incidentally, last) in Infocom's "Chronicles"
series. It combined role-playing concepts (in particular, the use of a
party of adventurers) with a pseudo-IF interface that was unfortunately
hampered by a select-a-command-from-this-nifty-menu interface. Activision
packaged it on the LTOI2 CD-ROM, but aside from that, they no longer offer
it. If you want to know more, I'd recommend looking at the back issues of
SPAG. One of them has a review of "Journey".

--
C.E. Forman cef...@rs6000.cmp.ilstu.edu
Read the I-F e-zine XYZZYnews, at ftp.gmd.de:/if-archive/magazines/xyzzynews,
or on the Web at http://www.interport.net/~eileen!
* Interactive Fiction * Beavis and Butt-Head * The X-Files * MST3K * C/C++ *

bonni mierzejewska

Feb 4, 1996
On 2 Feb 96 15:28:55 GMT, nel...@vax.oxford.ac.uk (Graham Nelson) wrote:

>I'm sorry about that! There was a minor flame skirmish on that theme,
>and I do accept that beginners find the Designer's Manual hard going,
>but quite a lot of people do like the manual (or so they're kind enough
>to say to me, anyway). Perhaps it's worth a look.

The only real difficulty I had with the manual was sometimes not being
able to find the actual syntax of a statement. Perhaps a supplementary
library reference would be in order? Something that gave all the Inform
commands and their syntax?

>> Would it be in bad taste if a future upgrade of TADS or INFORM had
>> graphics/sound support?

>...


>As for bad taste... well, it would be sad-ish. In some ways.

I have to admit, I had to restrain the impulse to respond directly to that
question with one line:
AAAAAAAAAAAAAAAAAAAAAAAAAAHHHHHHHHHHHHHHHHHH!!!!!!!! RUN AWAY!!!!

Am I a purist? A bigot? I don't know. I do confess that I have played
and enjoyed multimedia games such as King's Quest (all of them but VII!),
Jewels of the Oracle, and Karma: Curse of the Twelve Caves. Very minor
ascii graphics don't bother me - like the view of the garden hedge maze
from the Family Tree in Curses! or the games with Catharine in Magic
Toyshop - but I'm not sure how I'd react to graphics/sound sort of stuff.
It wouldn't feel like a text adventure, to me, with all that. But that
doesn't mean I wouldn't enjoy it, either. Maybe I'm just weird (but
Magnus Olsson has already figured that out O:). I like to keep the two
genres separate. Perhaps that's unrealistic? Hmm.

bonni
coming soon - 1996 IF Competition entry
coming whenever - JERUSALEM!
__ __
IC | XC | bonni mierzejewska "The Lone Quilter"
---+--- | u6...@wvnvm.wvnet.edu
NI | KA | Kelly's Creek Homestead, Maidsville, WV

JMurphy42

Feb 5, 1996
In article <1996Feb2.152855@oxvaxd>, nel...@vax.oxford.ac.uk (Graham
Nelson) writes:

>
>I'm sorry about that! There was a minor flame skirmish on that theme,
>and I do accept that beginners find the Designer's Manual hard going,
>but quite a lot of people do like the manual (or so they're kind enough
>to say to me, anyway). Perhaps it's worth a look.
>
>

Now, I don't understand that. I thought that the Manual was decent. I
mean, the examples were outlandish, and the reading matter chapter was
very hard to grasp, but considering the kind of manuals out there, the
Inform manual is one of the better ones I've seen.
On the other hand, considering that your (Graham's) Inform Manual is the
only one, if a person feels that they can't learn from it, that's a valid
excuse.

Bysshe.

"The ships hung in the air in much the same way that bricks don't."
-- Douglas Adams

Ville Lavonius

Feb 5, 1996
Sj Pover (sj.p...@canrem.com) wrote:
: u6...@wvnvm.wvnet.edu (bonni mierzejewska) wrote about IDE's

: UU>as a shell for all the programming functions. An IDE has an editor (which
: UU>can hopefully view more than one file at a time), a window or a viewport
: UU>for error messages, sometimes an integrated debugger, and can run your
: UU>compiler. A good versatile IDE will have editor options that control text
: UU>and font styles as well as the usual cut-and-paste operations, and
: UU>compiler options (Inform has whole bunches of these) that can be set.

: What text editors do you like for programme development?

Emacs. Few editors come even close (CodeWright and Epsilon, maybe).
Of course, DOS is a very, very restrictive environment to work in. It's
far easier and less stressful to work in a real multi-tasking system.
I'm using Linux and it works like a charm.

: SJ
--
Ville.L...@Helsinki.FI
http://www.cs.helsinki.fi/~lavonius/

Tucker

Feb 8, 1996
>By the way, does anyone know why AGT was dropped?

As I recall, the authors got bored and moved on to Bigger And Better
Things (tm). Thus, AGT became freeware/public domain (can't recall
which) and lost technical support in one swell foop.

As for compilers-- I'm an Inform man myself, but that's mainly because
it's what I stumbled onto first. <shrug> To each his/her/its own, I
suppose.
--tucker


Tucker

Feb 10, 1996
>I'm sure Inform is a great system, but as a non-programmer I need a lot of
>hand-holding.

In that case, from what I've heard (not personal experience), ALAN might be
a good choice, since both Inform and TADS presume some programming experience.

>I agree that graphics, where used, should be used with restraint. In
>particular, I think there are instances where visual clues and puzzles
>could be incorporated into a game without losing good writing.

Most definitely. Witness _Zork Zero_ for an excellent example. Double
Fanucci was a perfectly normal Infocom-style puzzle... but the graphics made
it a _lot_ more fun (and confusing...) Peggleboz and the Tower of Hanoi
could have been dispensed with, but I think that the use of graphics in
_Zork Zero_, as Steve Meretzky said in an article, "enhanced the game's
puzzles without taking away from the all-text 'feel' of it."


>What is "Journey"? Is it still available from Activision?

Journey is one of the last things that Infocom qua Infocom did. It's
available on the CD-ROM version of Lost Treasures II, and possibly in one of
the new collections.
--tucker


Chris Thomas

Feb 11, 1996
In article <60.8014.41...@canrem.com>, sj.p...@canrem.com (Sj
Pover) wrote:

> AJ>Inform feels more like "real programming".
> I wonder, could you expand just a bit on this? This is the kind of
> distinction I was looking for in a comparison between the two.

Inform makes it more difficult to produce a working program that does
what you want. That's usually what people mean when they make
noises about "real programming." I think computer science has been
endlessly corrupted by C in this regard. TADS feels like a
higher-level language.

--
Chris Thomas, c...@best.com

ler...@classic44.rz.uni-duesseldorf.de

Feb 12, 1996
Saluton!

Tucker (jota...@vt.edu) wrote:
: >By the way, does anyone know why AGT was dropped?


: As I recall, the authors got bored and moved on to Bigger And Better
: Things (tm). Thus, AGT became freeware/public domain (can't recall
: which) and lost technical support in one swell foop.

That's not _quite_ true; there's someone out there who still does
something for AGT - he did AGT 1.82 for DOS (I considered porting
that to my Amiga - at last, a source that's not in C - but as there's
a working Alan now...)

Ad Astra!
JuL

P.S.: Does anybody know what the differences between the AGT versions are?
The last Amiga one is 1.3, on DOS it's up to 1.5 or 1.7...?

ler...@sunserver1.rz.uni-duesseldorf.de / Never disturb a dragon, for you will
J"urgen ''JuL'' Lerch / be crunchy and taste good with ketchup!

Magnus Olsson

Feb 13, 1996
In article <ckt-110296...@ckt.vip.best.com>,
Chris Thomas <c...@best.com> wrote:
>In article <60.8014.41...@canrem.com>, sj.p...@canrem.com (Sj
>Pover) wrote:
>
>> AJ>Inform feels more like "real programming".
>> I wonder, could you expand just a bit on this? This is the kind of
>> distinction I was looking for in a comparison between the two.
>
>Inform is more difficult to use to make a working program which does
>what you want to do. That's usually what people mean when they make
>noises about "real programming."

Well, that's a possible explanation, I guess.

On a more charitable note, I think that it's not really because
Inform is more difficult than TADS, but because it's lower-level,
and less object oriented. Many problems that are solved by subclassing
and defining a few short methods in TADS are solved by writing
more "traditional" code in Inform. This doesn't mean that one is better
than the other, only that what you have to do in Inform is more like
what you're used to from C, Basic or Pascal. If you're used to Smalltalk,
I suppose TADS would be more like "real programming" (== what you're used
to) than Inform.

But don't be misled. If you're trying to do anything advanced (i.e.
something that involves more than just slight tweaking of the
library), then you'll have to do quite a lot of "real" programming in
TADS as well. Both TADS and Inform are quite normal programming
languages, with some OO capabilities and IF-oriented primitives. There's
nothing revolutionary or even new about either language.

Magnus Olsson (m...@df.lth.se)


Chris Thomas

Feb 16, 1996
In article <4fq7nd$3...@news.lth.se>, m...@marvin.df.lth.se (Magnus Olsson) wrote:

> In article <ckt-110296...@ckt.vip.best.com>,
> Chris Thomas <c...@best.com> wrote:

> On a more charitable note, I think that it's not really because
> Inform is more difficult than TADS, but because it's lower-level,
> and less object oriented. Many problems that are solved by subclassing
> and defining a few short methods in TADS are solved by writing
> more "traditional" code in Inform. This doesn't mean that one is better
> than the other, only that what you have to do in Inform is more like
> what you're used to from C, Basic or Pascal. If you're used to Smalltalk,
> I suppose TADS would be more like "real programming" (== what you're used
> to) than Inform.

I don't think so. If you were heavily involved in Smalltalk, you would
probably still be aware that most application development happens in C.

I want to note that I'm not complaining - Graham has done a wonderful
job. For programming IF, I don't like TADS much either. (Okay, maybe
I *am* complaining. Not about anyone's contributions, though.) I like
those portions of ZIL I've seen. Anyone interested in a ZIL -> ZIPcode
compiler?

> But don't be misled. If you're trying to do anything advanced (i.e.
> something that involves more than just slight tweaking of the
> library), then you'll have to do quite a lot of "real" programming in
> TADS as well. Both TADS and Inform are quite normal programming
> languages, with some OO capabilities and IF-oriented primitives. There's
> nothing revolutionary or even new about either language.

Truth, in the language-oriented sense. Inform and TADS are revolutionary
in that they've managed to bootstrap a healthy IF game writing community
to life.

--
Chris Thomas, c...@best.com

Magnus Olsson

Feb 19, 1996
In article <ckt-160296...@ckt.vip.best.com>,
Chris Thomas <c...@best.com> wrote:
>In article <4fq7nd$3...@news.lth.se>, m...@marvin.df.lth.se (Magnus Olsson) wrote:
>
>> In article <ckt-110296...@ckt.vip.best.com>,
>> Chris Thomas <c...@best.com> wrote:
>
>> On a more charitable note, I think that it's not really because
>> Inform is more difficult than TADS, but because it's lower-level,
>> and less object oriented. Many problems that are solved by subclassing
>> and defining a few short methods in TADS are solved by writing
>> more "traditional" code in Inform. This doesn't mean that one is better
>> than the other, only that what you have to do in Inform is more like
>> what you're used to from C, Basic or Pascal. If you're used to Smalltalk,
>> I suppose TADS would be more like "real programming" (== what you're used
>> to) than Inform.
>
>I don't think so. If you were heavily involved in Smalltalk, you would
>probably still be aware that most application development happens in C.

Of course, but I think you're missing my point: that we tend to define
"real programming" as what we're used to.

A common reaction among C programmers switching to C++ is "where did
the programming go?" Instead of a program consisting of a number of
relatively long functions, you get a program consisting of a much
larger number of objects and methods, and a typical method is
much shorter than a C function. Sometimes it seems as if the entire
program consists just of declarations :-). Also, the flow of control
is much less obvious than in a C program.

A similar thing happens when a C programmer starts using TADS: there
are a lot of declarations, rather little code, and the flow of control
is quite non-obvious. Inform is slightly less OO, slightly less
declaration-intensive, and requires slightly more program code (in all
the before and after rules, for example), so it may seem more familiar,
more like "real programming".

Also, to get back to my Smalltalk example, I would not be very surprised
if a Smalltalk programmer would say: "Sure, there's a lot of code
development going on in C, but that's not _real programming_, that's
just low-level coding." :-) Get my point?

>I want to note that I'm not complaining- Graham has done a wonderful
>job.

I'm certainly not complaining either. Both Inform and TADS are
marvellous creations and very much needed in the IF community.
They're not ideal tools, but they're a good deal better than anything
that's come before.

>>There's
>> nothing revolutionary or even new about either language.
>
>Truth, in the language-oriented sense. Inform and TADS are revolutionary
>in that they've managed to bootstrap a healthy IF game writing community
>to life.

Indeed.

Magnus Olsson (m...@df.lth.se)

Adam J. Thornton

Feb 19, 1996