Selfish meme literature recommendations?


Anthony Bailey

unread,
Jan 7, 1997, 3:00:00 AM
to

I recently read "The Mind's I" by Hofstadter & Dennett. One of the
pieces collected therein and commented thereon is from Richard
Dawkins' book "The Selfish Gene." I was particularly taken by the
concept of meme replication, competition and evolution, and the idea
that such significant quantum leap events as the birth of life and the
birth of human culture were in many ways analogous.

I'd now like to read a bit more on this. Dawkins' own book is an
obvious starting point, but since it was the work in which the idea
seems to have first originated and is now more than twenty years old,
I suspect there may have been some interesting developments and other
arguments since then. Can anyone recommend a text that would form a
good general introduction to the current thoughts on the selfish meme
concept for someone coming from a Hofstadtian background and without a
significant amount of academic biological and anthropological
grounding?

Many thanks.

ObHofstadter: The observation that MMF$$$ posts are archetypal memes
is no doubt a tired one, so I won't try that one. But it does get me
to thinking whether other ideas relevant to the a.f.h newsgroup might
have some interesting relations to Usenet.

For example, the Turing test as originally formulated seems to have
quite a high opinion of the intelligence level found in human
discourse. Such a level is currently well beyond our reach, but
less demanding testing grounds are also available. Does anybody else
think that a device that could take part in some Usenet discussion
and flaming without being noticed as non-human might be an interesting
idea for an AI project? (c: Indeed, are any of our regular readers
here in fact beta versions of such projects?

--
@*-.,_,.-`'-=*@@*=-`'-.,_,.-*@ For my contact details, and information on the
| Anthony. | bands Pulp, Saint Etienne, and the Kitchens OD,
@*-.,_,.-`'-=*@@*=-'`-.,_,.-*@ home's <URL:http://WWW.CS.Man.ac.UK/~baileya/>.
Anthony Bailey studies for a CompSci/Maths PhD at the University of Manchester

Richard Joly

Jan 7, 1997, 3:00:00 AM

Anthony Bailey <bai...@cs.man.ac.uk> wrote:
>I'd now like to read a bit more on this. Dawkins' own book is an
>obvious starting point, but since it was the work in which the idea

[cc via email too ]

Anthony,

look for

- the alt.memetics newsgroup
- the alt.memetics FAQ and bibliography

The most recent and most talked-about book on the newsgroup is

Aaron Lynch's
THOUGHT CONTAGION -
How belief spreads through society
The new science of memes
pub. fall 96

[ blurbs by Hofstadter and Dawkins appear on the cover ]

Aaron Lynch is very present on the newsgroup...he also maintains a
webpage, with links to another book on memes [ title does not come to
mind just yet...but that one is downloadable ]

If your newsserver does not carry the newsgroup, you can probably
access it thru Altavista or Dejanews.

hope this helps,

Richard Joly

--
Richard Joly
http://www.odyssee.net/%7Ericjoly/ononet.html --*Y. ONO/net Pages *


S M PRESNELL

Jan 8, 1997, 3:00:00 AM

Anthony Bailey wrote:
> Does anybody else think that a device that could take part in some
> Usenet discussion and flaming without being noticed as non-human
> might be an interesting idea for an AI project? (c: Indeed, are
> any of our regular readers here in fact beta versions of such
> projects?

Why do you say are any of our regular readers here in fact beta
versions of such projects? Please go on. :)

A lot of Eliza programs seem to rely on knowledge of a very specific
area of conversation, such as pets, or the weather. Normally, this
would be a drawback, since the program can't respond when the
questioner changes the subject. However, in a Usenet group, writers
are (theoretically) more inclined to stay on the topic in question.
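The keyword-driven approach described above can be sketched in a few lines. This is a hypothetical illustration, not any particular Eliza implementation: the rule table, response strings, and function name are all invented, and the off-topic fallback mirrors the single-topic limitation discussed.

```python
import random

# A minimal sketch of a keyword-driven Eliza-style responder:
# each rule maps a trigger word to a list of canned replies, and
# anything unrecognised gets an "off-topic" brush-off.
RULES = {
    "pet": ["Tell me more about your pet.",
            "How long have you had it?"],
    "weather": ["Has the weather been like this for long?",
                "Do you prefer it warmer or colder?"],
}

FALLBACK = "That seems off-topic for this group; let's stay on subject."

def reply(message: str) -> str:
    """Return a canned response for the first keyword found, else the fallback."""
    lowered = message.lower()
    for keyword, responses in RULES.items():
        if keyword in lowered:
            return random.choice(responses)
    return FALLBACK
```

The fallback is exactly the "drawback" noted above: on a general chat channel it gives the game away, but on a single-topic newsgroup it passes for ordinary netiquette.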

Usenet discussion is different from "normal" Turing test conditions
in that it consists of passages of text, rather than short immediate
answers. This might make it more difficult for the program to perform
well, since it has to come up with more (coherent) prose, and reply
appropriately to several points in a single message.

This has probably been discussed before, but a question seems to arise:
Does the questioner have to know that he is performing in a Turing test?
It seems to me that if a good Eliza program were released onto Usenet,
the rest of the group would not immediately know that it was not human.
It would take some time before a few members of the group would notice
that something was wrong, and eventually call the program's bluff.
Does this make the procedure a valid Turing test? Is this method
an equally (or perhaps _more_) valid way of testing a program than the
traditional "which one is the human?" method?

... Eliminate the impossible, and whatever is left, however improbable, is probably wrong.
Stuart Presnell

Hauke Reddmann

Jan 8, 1997, 3:00:00 AM

Anthony Bailey (bai...@cs.man.ac.uk) wrote:
: ...
: less demanding testing grounds are also available. Does anybody else

: think that a device that could take part in some Usenet discussion
: and flaming without being noticed as non-human might be an interesting
: idea for an AI project? (c: Indeed, are any of our regular readers
: here in fact beta versions of such projects?
:
I won't tell about ME (that's for me to know
and you to guess), but any sci.physics/sci.math/...
reader can tell you that the renowned
"Archimedes Plutonium" is the first suspect
in Artificial Killfilematerial ;-).
--
Hauke Reddmann <:-EX8
fc3...@math.uni-hamburg.de PRIVATE EMAIL
fc3...@rzaixsrv1.rrz.uni-hamburg.de BACKUP
redd...@chemie.uni-hamburg.de SCIENCE ONLY

Cerulean

Jan 9, 1997, 3:00:00 AM

S M PRESNELL <ph...@csv.warwick.ac.uk> wrote:

>Anthony Bailey wrote:
>> Does anybody else think that a device that could take part in some
>> Usenet discussion and flaming without being noticed as non-human
>> might be an interesting idea for an AI project? (c: Indeed, are
>> any of our regular readers here in fact beta versions of such
>> projects?

>Why do you say are any of our regular readers here in fact beta
>versions of such projects? Please go on. :)

Perhaps in good time I will go on.

--
___vvz /( Cerulean = Kevin Pease
<__,` Z / ( http://home.earthlink.net/~kpease
`~~~) )Z) (
/ (7 ( ,,`aw asnwe suewnH,,


Anthony Bailey

Jan 9, 1997, 3:00:00 AM

S M PRESNELL <ph...@csv.warwick.ac.uk> responded to a previous posting
from me:
Ant> Does anybody else think that a device that could take part in some
Ant> Usenet discussion and flaming without being noticed as non-human
Ant> might be an interesting idea for an AI project? (c: Indeed, are
Ant> any of our regular readers here in fact beta versions of such
Ant> projects?


> Why do you say are any of our regular readers here in fact beta
> versions of such projects? Please go on. :)

NO, BUT IF YOU WOULD GIVE ME A COOKIE, I WOULD GLADLY PAY YOU TUESDAY.
(c: I CAN RECOMMEND AN EXCELLENT BOOK ON THAT SUBJECT.



> A lot of Eliza programs seem to rely on knowledge of a very specific
> area of conversation, such as pets, or the weather. Normally, this
> would be a drawback, since the program can't respond when the
> questionner changes the subject. However, in a Usenet group, writers
> are (theoretically) more inclined to stay on the topic in question.

Indeed. There is even a set acceptable response along the lines of
"I'm not going to follow that line of argument since it seems
off-topic for me. Something I feel that is more relevant to readers of
alt.swedish.chef.bork.bork.bork is..."

Obviously one doesn't want one's AI device having to argue the
intricacies of whether or not something *is* on-topic. But it is
acceptable Usenet behaviour to only post on-topic stuff according to
charter and to not participate in whatever one sees as off-topic.

> Usenet discussion is different from "normal" Turing test conditions
> in that it consists of passages of text, rather than short immediate
> answers. This might make it more difficult for the program to perform
> well, since it has to come up with more (coherent) prose, and reply
> appropriately to several points in a single message.

I don't know from where you got the idea that Usenet postings had to
be coherent to be thought to come from a human user... (c: This is one
reason I find the idea interesting. There are enough relatively
illiterate Usenet posters for mistakes in grammar by a machine to not
cause too many suspicions.



> This has probably been discussed before, but a question seems to arise:
> Does the questioner have to know that he is performing in a Turing test?

In the test as envisaged by Turing, the tester is well aware that one
of the intelligences with which e is interacting is a machine, and is
allowed to go to any lengths in order to gather evidence to help em
decide which it is. Turing also seems to imply that the machine must
pass the test with most intelligent testers, including those with some
knowledge of AI or of human psychology in order to have passed the
Turing Test fully.

However, I seem to recall that in the original article e did not argue
that a machine that couldn't perform that well in this rather harsh
test was not showing signs of intelligence; just that we might as well
theorise that the machine is passing this very tough Turing Test as a
basis for discourse on whether or not that meant it was thinking.

> It seems to me that if a good Eliza program were released onto Usenet,
> the rest of the group would not immediately know that it was not human.
> It would take some time before a few members of the group would notice
> that something was wrong, and eventually call the program's bluff.

So has anyone done this sort of thing? It seems like an interesting
project to me; if I had the time and the relevant experience, I'd give
it a go on one of the more noisy newsgroups, at least.

> Does this make the procedure a valid Turing test? Is this method
> an equally (or perhaps _more_) valid way of testing a program than the
> traditional "which one is the human?" method?

See my preceding comments, and also Turing's original article (my copy
of which comes from "The Mind's I" (Hofstadter & Dennett) which is
presently at home, alas.) My intuition is that it would be a useful
and interesting experiment, certainly.

Michael Lazin

Jan 16, 1997, 3:00:00 AM

A modest proposal

Obviously it would not be difficult to write a program to compose
and mail eliza-type responses to posts on a newsgroup. To discover
whether such a program can fool human news readers, one must write the
program and test it out. Also quite obviously, the best place to test it
out is a newsgroup such as alt.fan.hofstadter or sci.cognitive,
comp.ai.philosophy or the like. It is true that the Turing test requires
that the questioners know in advance that one of the subjects they are
questioning is a computer, but regular readers of such newsgroups as those
listed above would have a certain advantage in this particular variation
of the Turing Test theme that, say, readers of alt.swedish.chef.bork.bork.bork
wouldn't have. Hey, if they can't tell the difference between human and
computer news posters, who can?

If we are going to be scientific about things, someone should program a
mail-sending Eliza-type program to spout out lengthy sentences with words
such as "consciousness", "memes" and "self-reference" in them. Such a
program could, hypothetically, send replies to posts not unlike these
listed below:

"Your theory of consciousness actually ignores the very phenomena which
it seeks to explain"

"There is an underlying dualism in your arguments which is doubtlessly a
symptom of a vicious meme which has lodged itself inside your cognitive
processes"

"The difficulty of emulating human cognitive experiences with a computer
lies in the fact that the mind is not software, but something that
is quite obviously affected by purely physical stimuli"

The program can run on an algorithm not unlike that of Wired magazine's
techno phrase generator program.
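A phrase generator of the kind proposed above amounts to gluing random fragments into one grammatical but vacuous sentence. The sketch below is purely illustrative; the fragment lists and the function name are invented, loosely echoing the sample replies given earlier.

```python
import random

# Sketch of a "techno phrase generator": pick one fragment from each
# column and join them into a buzzword-laden sentence.
OPENERS = ["Your theory of", "The underlying dualism in",
           "The self-reference implicit in"]
TOPICS = ["consciousness", "memes", "cognition"]
CLOSERS = ["ignores the very phenomena it seeks to explain.",
           "is a symptom of a particularly vicious meme.",
           "cannot be captured by purely physical stimuli."]

def pontificate(rng=random):
    """Assemble one sentence by choosing one fragment per column."""
    return " ".join([rng.choice(OPENERS), rng.choice(TOPICS),
                     rng.choice(CLOSERS)])
```

With three fragments per column this yields 27 distinct sentences; adding columns or fragments grows the space multiplicatively, which is what makes the trick cheap.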

O.K. Programmers, it's time for you to get to work. Who will be the first
to have such a program posting messages on this newsgroup?

;)


Anthony Bailey

Jan 17, 1997, 3:00:00 AM

je...@email.unc.edu (Michael Lazin) writes:
> Obviously it would not be difficult to write a program to compose
> and mail eliza-type responses to posts on a newsgroup. To discover
> whether such a program can fool human news readers, one must write
> the program and test it out. Also quite obviously, the best place
> to test it out is a newsgroup such as alt.fan.hofstadter or
> sci.cognitive, comp.ai.philosophy or the like.

First thing to say is that honesty is all here. Once the debugging is
done with, then if the experimenter chooses to run eir machine (either
in response to an existing post, or to create an original one) then e
is duty bound to post the result. No human intervention in what gets
posted should be allowed, otherwise we aren't facing a machine any
more. Obvious I know, but I can see it being a great temptation to do
the odd minor fix ("oh, I just didn't program that grammar rule
correctly... it meant to say *blah*") or decide not to post the
results of a particular run.

> It is true that the Turing test requires that the questioners know
> in advance that one of the subjects they are questioning is a
> computer, but regular readers of such newsgroups as those listed
> above would have a certain advantage in this particular variation of
> the Turing Test theme that, say, readers of
> alt.swedish.chef.bork.bork.bork wouldn't have.

Whilst not *quite* as strong as the original Turing Test, this is
still a pretty demanding criterion for the machine. I would have
thought that our readership would be able to spot the coded
correspondents a short way into a thread, as long as no humans were
being deliberately machine-like or evasive. (That is another thing
that might make this newsgroup a less than perfect place to try this
out. Although I can see it would be a lot of fun. [c:)

By all means do try it out if you feel like it, though; even if it
doesn't do well, the results will still be interesting.

My original post was made with regard to the fact that a lot of
conversation is of a lower intellectual standard than Turing envisages
in his Test, and that Usenet in particular provided an ideal testing
ground for a rather less stringent version of Turing. I think a
reasonably literate and intelligent forum within which the readers had
no special AI interests and had no forewarning that one correspondent
might be a machine might still present a considerable challenge, but
with a chance of some interesting amount of success. A forum within
which the conversation tended to be less intelligent would be kinder
still, and a high volume group of that kind might be the best place in
which to start.

> If we are going to be scientific about things, someone should progam
> a mail sending eliza type program to spout out lengthy sentences
> with words such as "consciousness" "memes" and "self-reference" in
> them.

That sort of thing wouldn't last many posts on here, I'd have
thought. However, it might do rather better on a less-informed and
less suspecting newsgroup... I do hope *somebody* gives *some* version
of this idea a go and tells us all what happens, anyway.

Rasmus Kaj

Jan 20, 1997, 3:00:00 AM

>>>>> "MH" == Marcus Hill <mar...@ma.man.ac.uk> writes:

MH> Anthony Bailey wrote:
>> A forum within which the conversation tended to be less
>> intelligent would be kinder still, and a high volume group of that
>> kind might be the best place in which to start.

...
MH> I think a low volume group frequented by people with varied
MH> backgrounds and a vague interest in AI would be a more taxing and
MH> valid testing ground. Can anyone think of such a group?

Are you hinting that you yourself in fact are a program?

// Rasmus :-)

Marcus Hill

Jan 20, 1997, 3:00:00 AM

Anthony Bailey wrote:
> A forum within
> which the conversation tended to be less intelligent would be kinder
> still, and a high volume group of that kind might be the best place in
> which to start.
>

Naah, it would be too easy to "cheat" in such a forum - for instance,
your program picks a random part of a random message, quotes it and
appends a canned phrase from a selection of things like

"If you rely think that, then u r an asshole"

"Get this crap off our newsgroup"

etc.


It would fit right in with a lot of posts in such groups, and
because people with half a brain just skip these posts without
further thought, those who might be able to spot the "poster" as
a machine would be ignoring it...
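The "cheat" described above fits in a few lines, which is rather the point. This is a hypothetical sketch: the flame strings and function name are invented, and the messages are assumed to arrive as plain strings.

```python
import random

# Sketch of the quote-and-flame cheat: take a random line from a
# random message, quote it Usenet-style, and append a canned insult.
FLAMES = ["Get this crap off our newsgroup.",
          "If you really think that, you haven't been paying attention."]

def flame_followup(messages, rng=random):
    """Quote one random line of one random message, then append a flame."""
    target = rng.choice(messages)
    line = rng.choice(target.splitlines())
    return "> " + line.strip() + "\n\n" + rng.choice(FLAMES)
```

No understanding of the quoted text is needed at all, which is why, as noted above, such a bot would blend into a noisy group while the readers most able to unmask it have already killfiled that style of post.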


I think a low volume group frequented by people with varied
backgrounds and a vague interest in AI would be a more taxing and
valid testing ground. Can anyone think of such a group?

******* LARS/NV at http://www.compsoc.man.ac.uk/~richc/lrp.html *******
There's nothing in life that can't be achieved with a little
persuasion and brute force.

Marcus.

Marcus Hill

Jan 24, 1997, 3:00:00 AM

Rasmus Kaj wrote:

>
> Are you hinting that you yourself in fact are a program?
>
>

You should know, you wrote me...
