I just finished Egan's AXIOMATIC, and I don't think I've enjoyed a
short-story collection this much for a long time. I was wondering though,
if when "Learning to Be Me" was published, Egan gave any credit to Daniel
Dennett, because the premise and action of the story are identical to a
story (originally a lecture) of Dennett's in THE MIND'S I.
Spoilers for both works follow.
Both stories imagine that you could produce a "backup" of someone's brain
by having a computer model of the brain receiving all the same sensory
input so that the backup thinks it is the person and thinks the body is
responding to the backup's commands, and that you could switch from one to
another without the brain, the backup, or any observer being able to tell
any difference.
Both stories recognize that if there were an instant's difference between
the backup's thoughts and the brain's thoughts, the illusion that the
backup was controlling the body would be shattered and that the two would
irrevocably diverge into different minds from that point. Both stories
have that happen and the backup finds itself a trapped observer in a body
the brain is controlling. Both stories cast doubt on whether the narrator
is the brain or the backup and both end with the tables being turned.
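[The irrevocable part is just sensitive dependence on initial conditions.
A toy Python sketch of my own -- assuming nothing about either story's
mechanism beyond "brains are chaotic": two identical update loops are fed
identical input, and one is nudged by the smallest amount a float allows.

def step(state, sensory):
    # Chaotic update rule: the logistic map standing in for "a brain";
    # any system with sensitive dependence on initial conditions would do.
    return (3.99 * state * (1.0 - state) + sensory) % 1.0

brain = 0.5
backup = 0.5 + 1e-15            # one instant's difference

for t in range(60):
    sensory = (t % 7) / 10.0    # both receive exactly the same input
    brain = step(brain, sensory)
    backup = step(backup, sensory)

print(abs(brain - backup))      # no longer infinitesimal: they have diverged

After sixty steps the difference is as large as the states themselves, and
nothing in the shared input stream will bring them back into lockstep.]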
Is this just a coincidence? Did Egan credit Dennett? Are the
similarities not really so great as to require crediting Dennett?
Enquiring Ndoli Devices want to know!
SMTIRCAHIAGEHLT
No, it isn't.
Plagiarism has a very specific meaning, and requires much more direct
duplication than just having the same premise; it would only be plagiarism
if the *wording* of substantial chunks of "Learning to be Me" had been the
same as the *wording* of substantial chunks of Dennett's article (which
I've also read, but don't recall the title offhand), and nobody is
claiming that.
It's not inconceivable that two people can independently come up with the
same idea -- which isn't all that unheard of -- or that Egan did, in fact,
take Dennett's idea and use it in the context of a science fiction story
rather than a thought experiment. The two are very different things.
Look at all the sf stories inspired by Schroedinger's Cat; they aren't
plagiarizing Schroedinger in any sense of the word.
Once you have the idea of the duplicated consciousness, and the inability
to tell which is which, the consequences you describe all follow rather
naturally, and don't indicate copying.
--
Andrea Leistra http://www-leland.stanford.edu/~aleistra
-----
Life is complex. It has real and imaginary parts.
> In article <Pine.A41.3.95L.98051...@login2.isis.unc.edu>,
> Michael Straight <stra...@email.unc.edu> wrote:
> >
> >I just finished Egan's AXIOMATIC, and I don't think I've enjoyed a
> >short-story collection this much for a long time. I was wondering though,
> >if when "Learning to Be Me" was published, Egan gave any credit to Daniel
> Dennett, because the premise and action of the story are identical to a
> >story (originally a lecture) of Dennett's in THE MIND'S I.
>
> No, it isn't.
>
> It's not inconceivable that two people can independently come up with the
> same idea -- which isn't all that unheard of -- or that Egan did, in fact,
> take Dennett's idea and use it in the context of a science fiction story
> rather than a thought experiment. The two are very different things.
Yes, but in this case we are talking about a story - I think it was
called "Where Am I?" (Either that or "Where was I?", which is also
in The Mind's I.)
I have to say that I didn't make that connection when I read Axiomatic,
although I certainly recognized that the issues of consciousness and
identity which Egan addresses in so much of his work are also addressed
in The Mind's I. I don't think it's anything more than that - two authors
working in similar domains.
Bob Hearn
b...@gobe.com
Uh, I think this is a not-good thing. Accusing someone of plagiarising another
writer, even as a question, is highly irresponsible, especially considering the
serious effect such an accusation can have on someone's reputation.
I have, for the sake of fairness to Egan (whom I don't know), changed the
subject line.
Bud Webster
Writer - Editor - Proofreader: Think of me as an infinite number of monkeys.
"Bubba Pritchert and the Space Aliens" now on the Web at www.wwco.com/scifi
The first science fiction magazine was published in 1926. By that time,
some of the best (and some of the worst) ideas had _already_ been used.
I can't say for certain that this one was used before either Egan or
Dennett was born -- but that's probably the way to bet.
In article <Pine.A41.3.95L.98051...@login2.isis.unc.edu>,
Michael Straight <stra...@email.unc.edu> wrote:
>
>I just finished Egan's AXIOMATIC, and I don't think I've enjoyed a
>short-story collection this much for a long time. I was wondering though,
>if when "Learning to Be Me" was published, Egan gave any credit to Daniel
>Dennett, because the premise and action of the story are identical to a
>story (originally a lecture) of Dennett's in THE MIND'S I.
>
>Spoilers for both works follow.
>
>Both stories imagine that you could produce a "backup" of someone's brain
>by having a computer model of the brain receiving all the same sensory
>input so that the backup thinks it is the person and thinks the body is
>responding to the backup's commands, and that you could switch from one to
>another without the brain, the backup, or any observer being able to tell
>any difference.
>
>Both stories recognize that if there were an instant's difference between
>the backup's thoughts and the brain's thoughts, the illusion that the
>backup was controlling the body would be shattered and that the two would
>irrevocably diverge into different minds from that point. Both stories
>have that happen and the backup finds itself a trapped observer in a body
>the brain is controlling. Both stories cast doubt on whether the narrator
>is the brain or the backup and both end with the tables being turned.
>
>Is this just a coincidence? Did Egan credit Dennett? Are the
>similarities not really so great as to require crediting Dennett?
>Enquiring Ndoli Devices want to know!
>
>SMTIRCAHIAGEHLT
>
--
Dan Goodman
dsg...@visi.com
http://www.visi.com/~dsgood/index.html
Whatever you wish for me, may you have twice as much.
> I just finished Egan's AXIOMATIC, and I don't think I've enjoyed a
> short-story collection this much for a long time. I was wondering though,
> if when "Learning to Be Me" was published, Egan gave any credit to Daniel
> Dennett, because the premise and action of the story are identical to a
> story (originally a lecture) of Dennett's in THE MIND'S I.
[snip]
> Is this just a coincidence? Did Egan credit Dennett? Are the
> similarities not really so great as to require crediting Dennett?
> Enquiring Ndoli Devices want to know!
>
I read _The Mind's I_ in the early '80s, so I must have read the story to
which you're referring, but I don't recall it. I certainly didn't model
"Learning to Be Me" on anything Dennett or anyone else had written (if I
had, I would have credited the source in the original publication, and
that credit would have been reprinted in the anthology).
The plot of "Learning to Be Me" was the most interesting thing I could
think of to happen to the narrator, given the premise; if the Dennett
story is as similar as you say, then I guess Dennett thought along similar
lines. But I doubt that I was even subconsciously influenced by having
read the Dennett story. I simply forgot about it. If I'd remembered it,
I probably wouldn't have written "Learning to Be Me" at all.
By the way (and not all that coincidentally, since it's obvious that
Dennett and I have very similar interests), Dennett's book _Consciousness
Explained_ is very explicitly credited as the influence for both a short
story of mine, "Mister Volition", and part of my latest novel, _Diaspora_
(in both cases along with Minsky's _The Society of Mind_).
--
Greg Egan
Email address (remove name of animal and add standard punctuation):
gregegan netspace zebra net au
When I read Dennett's story I thought it was *amazingly* similar to
Egan's. Neither is much of a story at all; each is basically an idea
dipped in exposition and sprinkled with a small amount of
characterisation. I think the similarity comes about as a result of
this; if they were novels the authors would have added enough material
to make the two stories distinct. As they are, however, either is
worth reading for the concept and little else.
jds
Is the one you're thinking of 'Where Am I?' (in _Brainstorms_, 1981,
1997)? The premise, the philosophical point, may be identical, but the
action certainly isn't, in detail.
>When I read Dennett's story I thought it was *amazingly* similar to
>Egan's. Neither is much of a story at all; each is basically an idea
>dipped in exposition and sprinkled with a small amount of
>characterisation. I think the similarity comes about as a result of
>this; if they were novels the authors would have added enough material
>to make the two stories distinct. As they are, however, either is
>worth reading for the concept and little else.
>
>jds
I disagree - if it's 'Where Am I?' (the one with the brain in the vat
and the 'two Dennetts'). It's well worth reading, and amusingly told,
but basically it's a thought-experiment in the form of a story. The
technical premises (the rays that are deadly to the brain but not to
other tissues, etc) are obviously ad hoc. (Nothing wrong with that -
Dennett's giving a talk on philosophy, not writing an SF story.)
I think 'Learning To Be Me' makes the point more elegantly, and in a
more emotionally engaging way (I still get a slight constriction in the
chest, like an incipient asthma attack, just thinking about it.) It
works as an SF story. The incidental details (like the hold-outs who
don't switch, and feel like they're living through 'Invasion of the
Body-Snatchers') add to the story - whereas the incidentals in Dennett's
(the warhead retrieval, etc) add little.
(I'd be among the hold-outs. I just flat out do not believe that
computers can have subjectivity. If the Ndoli implant ever comes along,
I'll regard those who switch as dead, replaced by meat puppets with
computers in their skulls.)
--
Ken MacLeod 'Civilized man takes for granted that order is better than
chaos and that, due to the natural order of the world,
certain things are simply impossible. An assault by flesh
eating ghouls, however, calls into question this assumption.'
- John Marmysz, _The Nihilist's Notebook_
>(I'd be among the hold-outs. I just flat out do not believe that
>computers can have subjectivity. If the Ndoli implant ever comes along,
Why not? I have serious doubts that a computer simulation of a person
could have a subjectivity which would be much like living in a body,
but why couldn't a program/computer have its own computerish subjectivity?
>I'll regard those who switch as dead, replaced by meat puppets with
>computers in their skulls.)
--
Nancy Lebovitz (nan...@universe.digex.net)
November '97 calligraphic button catalogue available by email!
Because it's just a machine.
We've had fifty years of AI being 'twenty years away'. We've had mad
scientists holding up pictures in front of their pet machines, and
watching lights come on inside in a similar pattern. 'Ah, you see, ze
neural netvork forms an *internal representation* of ze little house
picture ...' Nah. It's just some torch-bulbs coming on. We've had Hans
Moravec chilling our blood with tales of our bloodless replacements,
while building robots that can't cross the room without tripping over
their own feet.
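[For anyone who hasn't seen one of these demos, the whole trick fits in a
dozen lines of Python. A toy sketch of my own, not anyone's published
work: a Hopfield net stores one 'picture' and, shown a corrupted copy,
settles back into it. Whether the settled state deserves the name
'internal representation' is exactly what's in dispute here.

import numpy as np

rng = np.random.default_rng(0)
picture = rng.choice([-1, 1], size=25)        # stand-in for ze little house

W = np.outer(picture, picture).astype(float)  # Hebbian one-pattern storage
np.fill_diagonal(W, 0)                        # no self-connections

probe = picture.copy()
probe[:7] *= -1                               # corrupt part of the picture

for _ in range(10):                           # let the 'torch-bulbs' settle
    probe = np.sign(W @ probe).astype(int)

print(bool(np.array_equal(probe, picture)))   # True: stored pattern restored
]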
The more we find out about the brain, the less it seems like a computer,
and the more mind seems dependent on - in fact, to be the activity of -
very subtle arrangements of matter. To me that suggests that the
specific kinds of matter involved are necessary conditions for the
existence of consciousness.
The logic of the strong AI argument is that matter doesn't matter. You
can have some contraption made out of beer cans and string, and it could be
conscious so long as it could 'implement the algorithms'.
Bollocks. That way lies madness and Platonism.
In fact I'll stick my neck out and predict that no Turing-test-passing
program (or Ndoli Device, or Copy) will ever be developed. Even if I'm
wrong about that, however, I *still* wouldn't regard it as conscious.
It's no more conscious than a weather simulation is wet.
--
Ken MacLeod
...
>In fact I'll stick my neck out and predict that no Turing-test-passing
>program (or Ndoli Device, or Copy) will ever be developed. Even if I'm
>wrong about that, however, I *still* wouldn't regard it as conscious.
...
If you are wrong and such a program were developed, it probably
wouldn't care what you thought. :-)
-- Ernie Sjogren
> (I'd be among the hold-outs. I just flat out do not believe that
> computers can have subjectivity. If the Ndoli implant ever comes along,
> I'll regard those who switch as dead, replaced by meat puppets with
> computers in their skulls.)
Is it the difference in hardware that bothers you, or the fact that
your consciousness would have to be transferred to it, in some sense
destroying the *real* you?
If the former, what is it that makes carbon more conscious than silicon,
or any other material?
If the latter, does it bother you that your component molecules are
continually being replaced?
It frightens me that when something like the Ndoli device comes along, as I'm
sure it will, there will probably be a significant number of people who feel
as you do. If it happens in my lifetime, I'm not looking forward to having to
face a society that doesn't even believe I exist, refuses to grant me
rights, etc.
We could wind up with an entirely new kind of religious war. Speaking of Egan,
I guess _Diaspora_ explores this problem.
In article <6jkfbn$8...@universe.digex.net>, nan...@universe.digex.net
(Nancy Lebovitz) wrote:
> ... I have serious doubts that a computer simulation of a person
> could have a subjectivity which would be much like living in a body,
Why? Do you mean that simulating the external environment effectively would
be difficult or impossible? Just takes enough computing power.
Or do you mean that there's more to a person than just a brain? It's certainly
true that the brain's behavior (and perception!) is influenced by levels of
various chemicals & hormones produced by other parts of the body, but I see
no reason in principle these couldn't be simulated effectively as well.
> but why couldn't a program/computer have its own computerish subjectivity?
In the long run (and maybe the short run too, since providing a full humanlike
environment takes work), I'm sure this is how it will be. People will cast
off the limitations of flesh.
--
Bob Hearn
b...@gobe.com
> Because it's just a machine.
>
> We've had fifty years of AI being 'twenty years away'. We've had mad
> scientists holding up pictures in front of their pet machines, and
> watching lights come on inside in a similar pattern. 'Ah, you see, ze
> neural netvork forms an *internal representation* of ze little house
> picture ...' Nah. It's just some torch-bulbs coming on. We've had Hans
> Moravec chilling our blood with tales of our bloodless replacements,
> while building robots that can't cross the room without tripping over
> their own feet.
The lesson of all that is that computers that even *act* like people are
far more difficult than the early proponents thought, and might even be
technically impossible. I wouldn't argue with that.
But does that mean that if somebody *did* manage to make a computer act
like a human, it would still lack subjectivity? After all, it took five
billion years of tinkering to make the natural variety from brute matter,
and that doesn't make us any less genuine.
> The logic of the strong AI argument is that matter doesn't matter. You
> can have some contraption made out of beer cans and string, and it could be
> conscious so long as it could 'implement the algorithms'.
>
> Bollocks. That way lies madness and Platonism.
Nah. The check on madness and Platonism is just to realize that with
*just any* device-- say, a desktop PC-- you *couldn't* 'implement the
algorithms' in any practically efficient way. If you managed to make any
sentient beings, they'd be living off in some Greg Egan goblin world
living at one thought per million years, and you wouldn't have any
occasion to interact with them; probably the machine wouldn't survive
long enough for them to get any thinking done.
Another example is Hofstadter and Dennett's objection to Searle's
Chinese-room argument; if you tried to simulate a person by means of
another person executing a list of rules, you'd never get around to
finishing a single utterance.
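[The arithmetic behind that, using nothing but order-of-magnitude
textbook figures for the brain -- every number below is a rough
assumption, nothing more:

# Why the rule-follower never finishes an utterance: rough estimates only.
neurons           = 1e11    # neurons in a human brain
synapses_per      = 1e3     # synapses per neuron
events_per_sec    = 1e2     # signalling events per synapse per second
brain_ops_per_sec = neurons * synapses_per * events_per_sec   # ~1e16

rules_per_sec    = 1.0      # one person in the room, one rule per second
seconds_per_year = 3.15e7

years = brain_ops_per_sec / rules_per_sec / seconds_per_year
print(f"{years:.0e} years per simulated second")   # ~3e8: geological time
]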
I suspect that it's possible to build conscious beings out of something
other than meat, but a von Neumann-architecture computer is not the
correct way to do it; hardware is important because performance matters in
this line of work.
--
Font-o-Meter! Proportional Monospaced
^
Physics, humor, Stanislaw Lem reviews: http://world.std.com/~mmcirvin/
> The really interesting thing is that the inferences drawn in "The Story of
> a Brain" are almost *the exact opposite* of the ones you drew in "Dust"
> and _Permutation City_! Zuboff viewed the situation as a reductio ad
> absurdum; if an operationalist view of the situation regards a bunch of
> neurons firing at random times and places as just as good as a whole
> brain, then the operationalist view must have something wrong with it.
I've not read these, but this paragraph reminds me
(for some reason) of the definition of philosophy: that
field of study in which you kick up a lot of dust, then
complain that you can't see anything.
Paul
>(I'd be among the hold-outs. I just flat out do not believe that
>computers can have subjectivity. If the Ndoli implant ever comes along,
>I'll regard those who switch as dead, replaced by meat puppets with
>computers in their skulls.)
What if you didn't know who had a switch and who didn't?
--
John S. Novak, III j...@cris.com
The Humblest Man on the Net
>> ... I have serious doubts that a computer simulation of a person
>> could have a subjectivity which would be much like living in a body,
>Why? Do you mean that simulating the external environment effectively would
>be difficult or impossible? Just takes enough computing power.
Well I've never managed to convince myself that a sufficient
aggregation of digital hardware could ever manage to simulate a human
brain. But there are more ways to compute than just flipping bits. I
see no reason that, having learned at some point to understand
biological neurons _completely_, we could not build a brain out of a
sufficient aggregation of analog hardware.
> I read _The Mind's I_ in the early '80s, so I must have read the story to
> which you're referring, but I don't recall it. I certainly didn't model
> "Learning to Be Me" on anything Dennett or anyone else had written (if I
> had, I would have credited the source in the original publication, and
> that credit would have been reprinted in the anthology).
I remember thinking of Arnold Zuboff's thought experiment "The Story of a
Brain," reprinted in _The Mind's I_, when I read "Dust" and _Permutation
City_; the similarity there is fairly tenuous, but they both have to do
with the decomposition and scrambling of a thinking being over space and
time, and the implications thereof. In the Zuboff story it wasn't a
digitized Copy, but a biological brain that got sliced into individual
neurons, eventually scattered across the galaxy and made to fire out of
step.
The really interesting thing is that the inferences drawn in "The Story of
a Brain" are almost *the exact opposite* of the ones you drew in "Dust"
and _Permutation City_! Zuboff viewed the situation as a reductio ad
absurdum; if an operationalist view of the situation regards a bunch of
neurons firing at random times and places as just as good as a whole
brain, then the operationalist view must have something wrong with it.
--
> It's not inconceivable that two people can independently come up with the
> same idea -- which isn't all that unheard of -- or that Egan did, in fact,
> take Dennett's idea and use it in the context of a science fiction story
> rather than a thought experiment. The two are very different things.
> Look at all the sf stories inspired by Schroedinger's Cat; they aren't
> plagiarizing Schroedinger in any sense of the word.
Heck, many of the authors haven't even bothered to learn what Schroedinger
was talking about.
(It's gotten bad enough that I start groaning whenever I see an SF story
entitled "Schroedinger's Something-or-Other.")
I tried writing some (not very good) SF years and years ago, and I haven't
done it in a long time, in part because I haven't reconciled myself to the
fact that all of the basic story ideas are taken, and that you *can* put
your own unique stamp on an idea that's been done before. After all, I
enjoy *reading* SF stories to this day, even though, whenever I read a
recent one, there's a 95 percent chance that I can name at least one
earlier SF story with the same basic premise.
It reminds me of something "Babylon 5" writer J. Michael Straczynski said
about TV, though it applies to books as well: "Story ideas are worthless."
He was talking about people who try to sue TV (and novel) writers for
stealing their story ideas, which has created a climate in which popular
writers have to provably protect themselves from seeing any sort of
submitted fan material, lest they get in legal trouble-- even though,
really, the whole nasty business is stupid, because it's things other than
story ideas that make or break the show, or the story, or the novel.
The relationship between Dennett's tale and Egan's is a good example;
Dennett's is an amusing thought experiment, but it really isn't much of a
story, since the characters and situations are (intentionally) constructed
as silly cardboard props.
But it's hard to convince myself of all this when I'm trying to *write*. I
get a couple of sentences down and think "'All You Zombies' by Robert
Heinlein" or "_Blood Music_ by Greg Bear" or "'Altruizine' by Stanislaw
Lem" or something else I've read, and then I can't write any more. Which
is probably just as well.
If Ken were just another bigoted nut-case on the internet*, maybe not. But
what if it had to work with Ken and he kept ignoring its efforts to make
friendly small talk? Or worse, what if it didn't have any legal rights
because prejudices like Ken's were widespread?
*Not that I mean to suggest for a moment that he is; but our hypothetical
program might think so.
>> but why couldn't a program/computer have its own computerish subjectivity?
>
>In the long run (and maybe the short run too, since providing a full humanlike
>environment takes work), I'm sure this is how it will be. People will cast
>off the limitations of flesh.
>
Maybe. And maybe you'll just have to settle for improved flesh.
: If Ken were just another bigoted nut-case on the internet*, maybe not. But
: what if it had to work with Ken and he kept ignoring its efforts to make
: friendly small talk? Or worse, what if it didn't have any legal rights
: because prejudices like Ken's were widespread?
: *Not that I mean to suggest for a moment that he is; but our hypothetical
: program might think so.
What if they had to share a newsgroup with him? Would he be polite to the
simulated poster, which after all would not have to pass a Turing test,
given the number of posters to Usenet who demonstrably are insufficient to
that task?
Gary, amused that the degree of Ken MacLeod's "bigotry" against
AIs is now being debated.
--
Copyright 1998 by Gary Farber; Web Researcher; Nonfiction Writer,
Fiction and Nonfiction Editor; gfa...@panix.com; B'klyn, NYC, US
>The logic of the strong AI argument is that matter doesn't matter. You
>can have some contraption made out of beer cans and string, and it could be
>conscious so long as it could 'implement the algorithms'.
>
>Bollocks. That way lies madness and Platonism.
>
>In fact I'll stick my neck out and predict that no Turing-test-passing
>program (or Ndoli Device, or Copy) will ever be developed. Even if I'm
>wrong about that, however, I *still* wouldn't regard it as conscious.
>It's no more conscious than a weather simulation is wet.
>--
So, have you and Banksie had this argument often? :-)
--
The Turtle Moves!
-----------------
Gryffyd
>>Why not? I have serious doubts that a computer simulation of a person
>>could have a subjectivity which would be much like living in a body,
>>but why couldn't a program/computer have its own computerish subjectivity?
>
>Because it's just a machine.
And you are not? In what sense, exactly, are you not a machine?
>We've had fifty years of AI being 'twenty years away'. We've had mad
>scientists holding up pictures in front of their pet machines, and
>watching lights come on inside in a similar pattern. 'Ah, you see, ze
>neural netvork forms an *internal representation* of ze little house
>picture ...' Nah. It's just some torch-bulbs coming on.
Oh, ignore *him*. Professor Igor Alexander is... how shall I put it
carefully... adept more in publicising his great achievements than in
achieving greatness. :-) In the discussion about "linear reasoning"
(hiding in the "Male/Female Relations" thread), I've been explaining
that the naive optimism of some AI researchers arose from their failure
to grasp the complexities of "common sense". This failure in no way
proves that their overall aim is an impossible one. It'll just take a
wee bit longer. (In fact the latest _New Scientist_ has a fascinating
article about all this -- do read! It may be way over the top hype, but
it sounds damn interesting!)
> We've had Hans
>Moravec chilling our blood with tales of our bloodless replacements,
>while building robots that can't cross the room without tripping over
>their own feet.
Actually, the MIT robotics lab cracked that one years ago with their
"subsumption architecture" robots.
>The more we find out about the brain, the less it seems like a computer,
>and the more mind seems dependent on - in fact, to be the activity of -
>very subtle arrangements of matter. To me that suggests that the
>specific kinds of matter involved are necessary conditions for the
>existence of consciousness.
Sorry, but I see nothing of the sort in any findings about the
functioning of the brain. Could you be more specific as to what it is
you have in mind?
>The logic of the strong AI argument is that matter doesn't matter. You
>can have some contraption made out of beer cans and string, and it could be
>conscious so long as it could 'implement the algorithms'.
>
>Bollocks. That way lies madness and Platonism.
I am not sure "bollocks" is a valid intellectual argument. And I
disagree (a) that madness is in any way equatable with Platonism, (b)
that all interpretations of Platonism are necessarily wrong, and (c)
that that way lies anything of the sort. Want to go deeper into this?
>In fact I'll stick my neck out and predict that no Turing-test-passing
>program (or Ndoli Device, or Copy) will ever be developed. Even if I'm
>wrong about that, however, I *still* wouldn't regard it as conscious.
>It's no more conscious than a weather simulation is wet.
Arnautov Mantra No 1: read Lem! (Yeah, I am sure you have... :-)
Let's face it, my *only* experience of subjectivity is my own. I cannot
experience anybody else's. I accept that you probably also have
subjectivity, because your behaviour suggests this. (It is tempting to
say "because you are also a human being", but that depends on my
assessment of you being a human being *and* a hypothesis that all human
beings have subjectivity, not just me -- which in turn I can only deduce
from the behaviour of human beings in general.) Hence it is unclear to
me why one should a priori grant subjectivity to another human being,
but not to a machine, even if the machine exhibits a behaviour
suggestive of subjectivity. This is, after all, the essence of the
Turing Test.
--
Mike Arnautov
m...@mipmip.demon-co-antispam-uk
Replace dashes with dots and remove the antispam component.
By hypothesis, I wouldn't know for sure unless they told me. In the
context of Egan's story, the safe rule might be 'never trust anyone over
thirty'.
--
Ken MacLeod
[snip my jeers at current AI]
>The lesson of all that is that computers that even *act* like people are
>far more difficult than the early proponents thought, and might even be
>technically impossible. I wouldn't argue with that.
>
>But does that mean that if somebody *did* manage to make a computer act
>like a human, it would still lack subjectivity? After all, it took five
>billion years of tinkering to make the natural variety from brute matter,
>and that doesn't make us any less genuine.
>
IMO it depends *how* the 'acting like a human' came about. I'm sure it
would be possible to create a very good simulation of consciousness that
lacked consciousness (what in my novel _The Stone Canal_ the characters
call a 'flatline', after the Dixie Flatline in _Neuromancer_). I also
think it would be possible to construct an arrangement of matter that
feels and thinks (as you say, and as I entirely agree, natural selection
has done just that) but the actual existence of consciousness would
depend on the actual arrangement of matter that it was made of.
>> The logic of the strong AI argument is that matter doesn't matter. You
>> can have some contraption made out of beer cans and string, and it could be
>> conscious so long as it could 'implement the algorithms'.
>>
>> Bollocks. That way lies madness and Platonism.
>
>Nah. The check on madness and Platonism is just to realize that with
>*just any* device-- say, a desktop PC-- you *couldn't* 'implement the
>algorithms' in any practically efficient way. If you managed to make any
>sentient beings, they'd be living off in some Greg Egan goblin world
>living at one thought per million years, and you wouldn't have any
>occasion to interact with them; probably the machine wouldn't survive
>long enough for them to get any thinking done.
>
'Greg Egan goblin world' is more or less what I meant by 'madness and
Platonism', but was too much the gobsmacked fan to say. I really admire
Egan's work, and enthusiastically recommend it, but the 'dust'
hypothesis, Permutation City etc is, yes, madness and Platonism. It's
all very explicit in another of his stories, 'Transition Dreams', where
just *doing the maths* generates a consciousness somewhere.
>Another example is Hofstadter and Dennett's objection to Searle's
>Chinese-room argument; if you tried to simulate a person by means of
>another person executing a list of rules, you'd never get around to
>finishing a single utterance.
>
>I suspect that it's possible to build conscious beings out of something
>other than meat, but a von Neumann-architecture computer is not the
>correct way to do it; hardware is important because performance matters in
>this line of work.
>
I partly agree with that. I think it may be possible, some day, to build
conscious beings out of something other than meat. But if they *are*
conscious, it'll be because the stuff they're built of is doing -
*physically* doing - something sufficiently close to whatever it is
neurons physically do. (Which might require something all wet and icky.)
They won't be just *modelling* what neurons do.
--
Ken MacLeod
That's what worries me :-)
--
Ken MacLeod
<snip>
>>In fact I'll stick my neck out and predict that no Turing-test-passing
>>program (or Ndoli Device, or Copy) will ever be developed. Even if I'm
>>wrong about that, however, I *still* wouldn't regard it as conscious.
>>It's no more conscious than a weather simulation is wet.
>>--
>
>So, have you and Banksie had this argument often? :-)
>
Yes.
'MacLeod, *why* do your happy endings always involve wiping out billions
of AIs?'
(couple of rounds later)
'You think there's something special about *carbon*?'
--
Ken MacLeod 'Einstein and people like Einstein said that the world was
flat; Einstein and people like Einstein said that Man
would never travel faster than the speed of sound.'
- Lobsang Rampa (a.k.a. Cyril Hoskins of Plympton, Devon)
>If the former, what is it that makes carbon more conscious than silicon,
>or any other material?
>
Its physical properties. I don't rule out in principle that whatever
physical properties are necessary for an arrangement of matter to be
conscious could be achieved with other materials.
>If the latter, does it bother you that your component molecules are
>continually being replaced?
>
No. The new molecules are just as capable of sustaining consciousness as
the old ones. Trust me on this :-)
There are two separate questions here. The difference in hardware
bothers me because I think that only some specific arrangements of
matter can actually be conscious. Emulation doesn't cut it.
The second question assumes that the idea of consciousness being
'transferred' makes sense, and is not just a mechanistic relic of the
idea of a separable soul. I'm sure the widespread use of computers is
making it more intuitively plausible, but it's still wrong. A mind is
not a file.
As for 'in some sense destroying the *real* you' - yes, for some funny
reason I think having my skull cleaned out and filled with sponge would
destroy the real me. When this arrangement of matter dies, I die.
A thought experiment. You meet some godlike aliens who tell you that,
millions of years after you die, an exact replica of you will be created
with all your memories up until death. Its weal or woe depends on what
you do in your life here. You believe them (for whatever reason).
Is it rational to regard that future person as yourself?
Should you modify your behaviour in consequence?
Suppose the godlike aliens can create replicas *right now* of people who
have just died, and reward or punish the replicas. Do the answers
change?
Well, mine are no, no, and no.
>It frightens me that when something like the Ndoli device comes along, as I'm
>sure it will, there will probably be a significant number of people who feel
>as you do. If it happens in my lifetime, I'm not looking forward to having to
>face a society that doesn't even believe I exist, refuses to grant me
>rights, etc.
>We could wind up with an entirely new kind of religious war.
The Butlerian Jihad, as Frank Herbert called it. I'm fairly sure
something like the Ndoli device *won't* come along, but if it does, I
intend to be among the mujahedin.
HAH! Thats what YOU say, but can you PROVE it. NO! Just explain how MAN
could come about by BLIND CHANCE, from a PUDDLE of gue!
I suspect I already have been polite towards simulated posters and
mischievous AI experiments, on talk.origins.
--
Ken MacLeod
[...]
>In fact I'll stick my neck out and predict that no Turing-test-passing
>program (or Ndoli Device, or Copy) will ever be developed. Even if I'm
>wrong about that, however, I *still* wouldn't regard it as conscious.
>It's no more conscious than a weather simulation is wet.
Plagiarising Paul Dietz from, oh, four posts back:
"philosophy: that field of study in which you kick up a lot of dust, then
complain that you can't see anything"
I think pretty much everybody will/would form their own opinion on this,
primarily from the gut with a little retrospective intellectual
justification. I wonder what happens when a Machinist ("they're just machines")
has a pint with an android and discusses what's wrong with Arsenal's new
winger. Does his gut adjust, and his brain argue itself into sync a few days
later?
And as for the social tensions arising from differences in opinion... well,
that's another post.
-- Joel
[snip]
>I suspect I already have been polite towards simulated posters and
>mischievous AI experiments, on talk.origins.
>
>--
>Ken MacLeod
All together now: And where did all you zombies come from?
John Boston
>Let's face it, my *only* experience of subjectivity is my own. I cannot
>experience anybody else's. I accept that you probably also have
>subjectivity, because your behaviour suggests this. (It is tempting to
>say "because you are also a human being", but that depends on my
>assessment of you being a human being *and* a hypothesis that all human
>beings have subjectivity, not just me -- which in turn I can only deduce
>from the behaviour of human beings in general.) Hence it is unclear to
>me why one should a priori grant subjectivity to another human being,
>but not to a machine, even if the machine exhibits a behaviour
>suggestive of subjectivity. This is, after all, the essence of the
>Turing Test.
We're back to inductive reasoning again. It is a considerably smaller
inductive step to go:
I am conscious; other people act like me and have similarly constructed
brains --> Other people are conscious
than:
I am conscious; this machine acts in some respects like me and has a
'brain' which works on principles which are partly based on how my brain
is believed to work --> This machine is conscious
--
Mike Scott
mi...@moose.demon.co.uk
http://www.moose.demon.co.uk
What's this consciousness stuff?
I continue to regard the entire notion of 'Mind' as a holdover from the
presumption of a creator. People have a bunch of interesting capacities
for generalizing and a few more for abstracting, but those don't need any
kind of Mind to explain them.
>*physically* doing - something sufficiently close to whatever it is
>neurons physically do. (Which might require something all wet and icky.)
>They won't be just *modelling* what neurons do.
So, the artillery shell _doesn't_ land very very close to where the
ballistic calculation says it does?
--
goo...@interlog.com | "However many ways there may be of being alive, it
--> mail to Graydon | is certain that there are vastly more ways of being
dead." - Richard Dawkins, :The Blind Watchmaker:
It depends on how well the aliens answer the questions about what 'exact'
means, doesn't it?
Going to sleep doesn't remove me from existence, after all; neither does
travel, and there is no reason to suppose travel in time is fundamentally
distinct from travel in space.
What do you think of as the special limits of machines as compared to
organisms?
>
>We've had fifty years of AI being 'twenty years away'. We've had mad
>scientists holding up pictures in front of their pet machines, and
>watching lights come on inside in a similar pattern. 'Ah, you see, ze
>neural netvork forms an *internal representation* of ze little house
>picture ...' Nah. It's just some torch-bulbs coming on. We've had Hans
>Moravec chilling our blood with tales of our bloodless replacements,
>while building robots that can't cross the room without tripping over
>their own feet.
But if we wait long enough, we'll see computers making crazedly optimistic
promises, too. :-)
>
>The more we find out about the brain, the less it seems like a computer,
>and the more mind seems dependent on - in fact, to be the activity of -
>very subtle arrangements of matter. To me that suggests that the
Care to unpack that?
>specific kinds of matter involved are necessary conditions for the
>existence of consciousness.
Well, maybe. It could be that the consciousness is in the arrangements,
not the specific types of matter.
I find it plausible that we may not be able to program consciousness,
but we can evolve it in computers or some other complex non-organic
systems. In that case, it won't be possible to make large simple
reliable arbitrary modifications, but we can't do that with complex
non-conscious programs, either.
>The logic of the strong AI argument is that matter doesn't matter. You
>can have some contraption made out beer cans and string, and it could be
>conscious so long as it could 'implement the algorithms'.
>
>Bollocks. That way lies madness and Platonism.
Why?
>
>In fact I'll stick my neck out and predict that no Turing-test-passing
>program (or Ndoli Device, or Copy) will ever be developed. Even if I'm
>wrong about that, however, I *still* wouldn't regard it as conscious.
>It's no more conscious than a weather simulation is wet.
I'm not convinced that passing the Turing test is the important or
interesting part of artificial consciousness. Not all humans can
pass the Turing test, and neither can cats--and cats are obviously
conscious.
OBSF: Has there been any science fiction about a computer that appears
to be conscious, but isn't?
It behaves as though it thinks it's special, but it's just showing off.
<snip>
> I'm not convinced that passing the Turing test is the important or
> interesting part of artificial consciousness. Not all humans can
> pass the Turing test, and neither can cats--and cats are obviously
> conscious.
>
> OBSF: Has there been any science fiction about a computer that appears
> to be conscious, but isn't?
SPOILER AHEAD!
_Dreamships_, by Melissa Scott, concerns an AI called
Manfred, which is believed by some to be fully conscious,
true AI rather than a very advanced expert system. If it is,
it has complex legal, social, and political ramifications:
Can you own a true AI, or is that slavery? Can you justify
giving full political rights to a true AI, when substantial
portions of the human population of the planet don't have
full political rights? And how can you tell if something is
a true AI, or just a really good hoax?
In the end it's determined that Manfred is _not_ truly
conscious, full AI. And no, I have not done justice to the
book.
Lis Carey
> I'm not convinced that passing the Turing test is the important or
> interesting part of artificial consciousness. Not all humans can
> pass the Turing test, and neither can cats--and cats are obviously
> conscious.
They are?! Wait a minute while I try to wake my cats up long enough to
inform them of this part of their contract...
-- Dave Goldman
Portland, OR
>There are two separate questions here. The difference in hardware
>bothers me because I think that only some specific arrangements of
>matter can actually be conscious.
So what if one of these gadgets were a configuration of matter capable
of exhibiting consciousness? For what it's worth, you are espousing
the idea that consciousness is a physical, not a mathematical,
artifact. It's an idea that I agree with, based primarily on many AI
discussions which otherwise make No Fucking Sense when bent, twisted,
tied in knots and otherwise taken to extremes. I have an extreme
degree of skepticism that a digital computer can properly emulate
human consciousness, a skepticism which will not be withdrawn until we
actually discover what I think of as the completely unknown Physics of
Consciousness. Without that, we're really _all_ talking out of our
asses. Information, after all, has a physical basis. So may
consciousness.
(Well, I suppose a rigorous demonstrable proof that the universe
itself is discrete in time might make me change my tune, too. But
spare me the "Latest theories indicate...!" argument until they're
commonly accepted.)
But nothing I can think of would prevent me from replacing,
Moravec-style, each neuron of my brain with a sufficiently
sophisticated silicon counterpart, one at a time.
Is the resulting lump conscious? Presumably so. Was the
consciousness interrupted at any point during the replacement? Not
that anyone could possibly notice. No more so than the effects of
taking one shot of whisky, which would kill and _not_ replace many
brain cells.
Are the discarded neurons conscious? No, obviously not. What
alternative is left other than a transfer of consciousness?
>The second question assumes that the idea of consciousness being
>'transferred' makes sense, and is not just a mechanistic relic of the
>idea of a separable soul. I'm sure the widespread use of computers is
>making it more intuitively plausible, but it's still wrong. A mind is
>not a file.
I think that the idea of a Physics of Consciousness implies the
possibility of transferring consciousness from one place to another.
You have carefully staked out the idea that consciousness is a
physical effect, and not a mathematical effect, if I have understood
you correctly.
What, then, precludes the transfer of consciousness from one framework
to another?
>A thought experiment. You meet some godlike aliens who tell you that,
>millions of years after you die, an exact replica of you will be created
>with all your memories up until death. Its weal or woe depends on what
>you do in your life here. You believe them (for whatever reason).
I can't even think about this in the same terms you do, since the
responsibility for the actions of my then-clone would be placed on the
creators, not on me.
>What's this consciousness stuff?
>I continue to regard the entire notion of 'Mind' as a holdover from the
>presumption of a creator. People have a bunch of interesting capacities
>for generalizing and a few more for abstracting, but those don't need any
>kind of Mind to explain them.
Interesting, but semantically void.
Any argument you come up with against consciousness, I will refute by
incorporating that _into_ my definition of consciousness.
I know I'm conscious. Being a creep, I don't really care if you are
or not. (Which makes my idea of on-line entertainment rather odd,
perhaps, but there you go.)
> As for 'in some sense destroying the *real* you' - yes, for some funny
> reason I think having my skull cleaned out and filled with sponge would
> destroy the real me. When this arrangement of matter dies, I die.
>
> A thought experiment. You meet some godlike aliens who tell you that,
> millions of years after you die, an exact replica of you will be created
> with all your memories up until death. Its weal or woe depends on what
> you do in your life here. You believe them (for whatever reason).
>
> Is it rational to regard that future person as yourself?
>
> Should you modify your behaviour in consequence?
>
> Suppose the godlike aliens can create replicas *right now* of people who
> have just died, and reward or punish the replicas. Do the answers
> change?
>
> Well, mine are no, no, and no.
I'm not certain that the person who will wake up on my side of the bed
with my memories tomorrow is the same person who's sitting here typing on
this keyboard. But it doesn't seem irrational to act as if it will be,
partly because I'm inheriting useful things like memories, plans, and
material goods from the person who thought he was me yesterday, so I'm
used to the idea, and partly because time advances whether I want it to or
not, and if the future me isn't the same person as the current me, then
I've got a very limited lifespan.
--
Avram Grumer | av...@interport.net | http://www.users.interport.net/~avram/
If you want a picture of the future of Usenet,
imagine a foot stuck in a human mouth -- forever.
Hee.
This doesn't answer 'what consciousness is', at all.
It would appear to be 'knowing that you are yourself'; that appears to be
a consequence of a particular capacity for abstraction. It doesn't need to
be explained by Mind at all.
> OBSF: Has there been any science fiction about a computer that appears
> to be conscious, but isn't?
Sawyer's AI's in THE TERMINAL EXPERIMENT.
Rod Pennington
rod...@sage.net
I'm curious: as far as consciousness is concerned, do you think that the
essential feature of neurons is something other than their information
processing activities? If so, what? If not, why isn't modelling
sufficient, so long as the model is sufficiently complex and complete?
Regards,
Rod Pennington
rod...@sage.net
>> We've had Hans Moravec chilling our blood with tales of our
>> bloodless replacements, while building robots that can't cross the
>> room without tripping over their own feet.
>The ability to walk across the room is unrelated to intelligence or
>consciousness. Insects can do it, and they are neither intelligent
>nor (probably) conscious. Stephen Hawking can't do it.
Besides, Honda has some robots that do _really damned well_ and don't
even resort to looking distinctly non-human. The non-human ones do
considerably better, over considerably more diverse terrain.
And the RoboDyne Cybernetics proposed fractal robots just look
fuckin' amazing and elegant. Half their website is pure unabashed,
undeserved marketing hype, but the other half is fascinating.
>> The more we find out about the brain, the less it seems like a
>> computer, and the more mind seems dependent on - in fact, to be
>> the activity of - very subtle arrangements of matter. To me that
>> suggests that the specific kinds of matter involved are necessary
>> conditions for the existence of consciousness.
>I believe there is only one sort of matter in the universe.
>Namely that which follows the laws of physics.
Be fair.
He's obviously talking about a physics of consciousness idea.
_If_ there exists such a beast, _then_ it may be the case that only
certain configurations of matter can give rise to consciousness. This
is no sillier an idea, on the surface, than separating substances into
the conducting, non-conducting, and semi-conducting groups.
> The interesting question to me is whether the behavior of the neuron
> can be adequately simulated by something that's not significantly
> slower. For example, what if organisms (or at least conscious
> organisms) take advantage of the way atomic nuclei wiggle?
That seems unlikely, given that NMRI machines do not produce
vegetables.
> Well, I suppose that's possible, but it seems very unlikely. Neurons
> seem to be pretty well understood, and seem to be fairly simple
> in their operation.
I would dispute this assertion. New basic aspects of neuronal
function are being discovered all the time.
Paul
>I suspect I already have been polite towards simulated posters and
>mischievous AI experiments, on talk.origins.
Ken MacLeod posts to talk.origins? That's almost worth subscribing to
talk.origins to see.
--
Del Cotter d...@branta.demon.co.uk
The Alien Design Bibliography
http://www.branta.demon.co.uk/alien-design/
> A thought experiment. You meet some godlike aliens who tell you that,
> millions of years after you die, an exact replica of you will be created
> with all your memories up until death. Its weal or woe depends on what
> you do in your life here. You believe them (for whatever reason).
>
> Is it rational to regard that future person as yourself?
......
> Should you modify your behaviour in consequence?
I'm already trying to behave ethically, so no. But the replica _is_ human
and _I_ am responsible for its fate.
> Suppose the godlike aliens can create replicas *right now* of people who
> have just died, and reward or punish the replicas. Do the answers
> change?
>
> Well, mine are no, no, and no.
You'd let them torture someone just like you (or just someone) without
considering changing your behavior? That's a little... unethical, isn't
it?
It all depends on what they want one to do, of course.
On the Internet, nobody knows you're a colony of fire ants. Which is
to say that, at this point, since we have no real evidence of non-human
intelligence, and many Usenet posts suggest intelligence, we assume
that all those screen names and email addresses refer to human
beings.
More seriously, I don't know if we'll ever see AI. But I don't find the idea
of an intelligent machine any more implausible than the idea of an
intelligent animal.
Vicki Rosenzweig
v...@interport.net | http://www.users.interport.net/~vr/
"Typos are Coyote padding through the language, grinning."
-- Susanna Sturgis
I refuse to believe that the reproachful gaze of a cat who's insisting
that the food dish must be topped off or the world will be awry has
no consciousness behind it. (On the other hand, the structure of
the preceding sentence may have been generated by something almost
but not quite like consciousness.)
OBSF: _Arrive at Easterwine_ by R.A.Lafferty, in which it is suggested
that people don't have consciousness yet, but do get occasional
preliminary hints and flashes of it.
--
Aren't people just machines, too? We're just organized matter. What
else could we be? The fact that we weren't designed or built in a
factory isn't relevant.
> We've had fifty years of AI being 'twenty years away'.
The lack of a program that can pass a Turing test proves nothing,
except that nobody has yet built appropriate hardware, and/or that
nobody has yet written appropriate software. Or at least that nobody
has yet combined the two.
> We've had Hans Moravec chilling our blood with tales of our
> bloodless replacements, while building robots that can't cross the
> room without tripping over their own feet.
The ability to walk across the room is unrelated to intelligence or
consciousness. Insects can do it, and they are neither intelligent
nor (probably) conscious. Stephen Hawking can't do it.
> The more we find out about the brain, the less it seems like a
> computer, and the more mind seems dependent on - in fact, to be
> the activity of - very subtle arrangements of matter. To me that
> suggests that the specific kinds of matter involved are necessary
> conditions for the existence of consciousness.
I believe there is only one sort of matter in the universe.
Namely that which follows the laws of physics.
If each neuron in your head were to be replaced one at a time with a
tiny artificial machine which exactly duplicates its function, your
behavior wouldn't change. Would your consciousness gradually fade
away? Or suddenly wink out when the one neuron responsible for
consciousness was replaced? Or do you believe that even one neuron
does things which cannot be accurately reproduced by any artificial
machine?
> The logic of the strong AI argument is that matter doesn't matter.
> You can have some contraption made out of beer cans and string, and it
> could be conscious so long as it could 'implement the algorithms'.
Right. A wide variety of physical systems are capable of emulating
a "universal Turing machine," though they may do it very slowly and
inefficiently. And universal Turing machines are capable of emulating
the behavior of any matter to any desired precision, as far as anyone
can tell. (Counterexamples eagerly solicited.)
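[To make 'emulating a Turing machine' concrete, here is a bare-bones
simulator in Python -- a sketch of the idea, not a full universal
machine; the rule table is the classic 3-state busy beaver. Note that
nothing in the loop cares whether the table and tape underneath are
silicon, beer cans, or string.

from collections import defaultdict

rules = {  # (state, symbol) -> (symbol to write, head move, next state)
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'C'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'B'), ('C', 1): (1, +1, 'H'),  # 'H' = halt
}

tape, head, state, steps = defaultdict(int), 0, 'A', 0
while state != 'H':
    write, move, state = rules[(state, tape[head])]
    tape[head] = write
    head += move
    steps += 1

print(sum(tape.values()), "ones after", steps, "steps")   # 6 ones, 13 steps
]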
With today's hardware and software, the program which emulates an
entire human brain would require more memory than exists, and would
run far too slowly to be useful. Not to mention the difficulty of
analyzing an entire brain to get that information into the computer
in the first place. But these are problems in implementation, not
in fundamentals. An intelligent conscious machine in 1998 is like
a TV set in 1898, a moon rocket in 1928, or a chess playing automaton
in 1918. It's not like a faster-than-light starship or a perpetual
motion machine, which, as far as we know from physics, will never be
possible for fundamental physical reasons. (Of course new physics
could be discovered tomorrow.)
> In fact I'll stick my neck out and predict that no Turing-test-
> passing program (or Ndoli Device, or Copy) will ever be developed.
I've filed this away so it can be printed in a future edition of the
book which contains similar statements about how machines can never be
built that will fly, that will go to the moon, that will play master
level chess, that will go underwater without drowning their crew, that
can determine what the stars are made of, etc.
> Even if I'm wrong about that, however, I *still* wouldn't regard it
> as conscious.
A difficult philosophical problem. You know that you are conscious,
but you can't prove it to anyone else. I know that I am conscious,
but I can't prove it to anyone else. Are other people conscious? I
assume they are if they claim to be, but I have no proof. Are animals
conscious? I assume that dogs are and insects aren't. I could be
wrong. Maybe everything is conscious. Maybe nothing and nobody is
but me. Unlike the ability to pass a Turing test, this can never be
proven either way, as far as I can tell.
> It's no more conscious than a weather simulation is wet.
How can you tell that any weather is wet unless it's weather that you
experienced yourself? I could truthfully tell you about my getting
soaked in a thunderstorm while biking home from work. But you've
never met me. Perhaps I'm a simulation, living in a simulated world
which includes simulated weather.
Or maybe you are. How would you know? Maybe even now, your
programmer is peering in, watching an image of you getting soaked on
his computer screen. Does he have to worry that water will pour out
of his computer? Of course not. You and the water are at the same
level of abstraction, so you interact with each other. But you're
not at the same level of abstraction as the programmer in his nice
dry computer room, any more than I need fear that the next time I play
Doom that a monster will escape from the game and chase me across my
living room with fireballs.
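A toy illustration of the point, in Python (entirely hypothetical,
of course):

  # The agent and the rain live at the same level of abstraction, so
  # they interact; the programmer's machine stays dry either way.
  class World:
      def __init__(self):
          self.raining = True
          self.agent_wet = False
      def tick(self):
          if self.raining:
              self.agent_wet = True  # rain soaks the simulated agent

  w = World()
  w.tick()
  print("agent wet:", w.agent_wet)   # True -- but only in the world
  # No water reaches the host: the rain is a value in memory, not
  # something at the programmer's level of abstraction.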
Suppose programs capable of passing the Turing test are written, and
become common. Suppose half the "people" posting to the newsgroups in
2028 are programs. And that nobody could tell the difference. Would
you continue to claim that they aren't really conscious, no matter
what they write? No matter how vehemently they disagree with you?
What if some of them argue that only programs can be conscious but
flesh people can't be? Wouldn't they have just as much or as little
justification for that belief as you have for the opposite belief?
Is any rational counterargument possible?
(Actually, I think we have met, briefly, but I doubt you remember me.
It doesn't change my argument either way.)
--
Keith Lynch, k...@clark.net
http://www.clark.net/pub/kfl/
I boycott all spammers.
> In article <dave-17059...@pdx68-i48-32.teleport.com>,
> Dave Goldman <da...@rsd-remove-this-bit.com> wrote:
> >In article <6jn74u$t...@universe.digex.net>, nan...@universe.digex.net
> >(Nancy Lebovitz) wrote:
> >
> >> I'm not convinced that passing the Turing test is the important or
> >> interesting part of artificial consciousness. Not all humans can
> >> pass the Turing test, and neither can cats--and cats are obviously
> >> conscious.
> >
> >They are?! Wait a minute while I try to wake my cats up long enough to
> >inform them of this part of their contract...
> >
> Well, maybe *your* cats aren't conscious, even when they're awake.
>
> I refuse to believe that the reproachful gaze of a cat who's insisting
> that the food dish must be topped off or the world will be awry has
> no consciousness behind it.
Which brings up yet another aspect of the definition of consciousness --
can an entity be called "conscious" if it has never had the thought that
there are any other consciousnesses in the universe?
(My impression of the cat complaining about an unfilled food dish is that
it is performing the equivalent of whacking the side of the damn machine
until it starts working again.)
--
Well, I suppose that's possible, but it seems very unlikely. Neurons
seem to be pretty well understood, and seem to be fairly simple
in their operation. As living cells, they're about as complex as
any other living cell, but that complexity seems only to support their
operation as electrochemical switches. Something that's not a cell
can easily do the same thing.
Most of the complexity of the brain comes from the sheer number of
neurons, and the intricate ways in which they are hooked up. It seems
likely that an artificial neuron could be made to run at a million
times standard speed. Perhaps even faster if it can be made smaller
than a natural neuron.
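For what it's worth, the standard toy version of that "electrochemical
switch" view fits in a dozen lines of Python -- a leaky
integrate-and-fire unit, with illustrative parameters rather than
physiological ones:

  import random

  class Neuron:
      def __init__(self, threshold=1.0, leak=0.9, noise=0.02):
          self.potential = 0.0
          self.threshold = threshold
          self.leak = leak    # fraction of charge kept per tick
          self.noise = noise  # stand-in for thermal noise; any source
                              # with the same distribution would do
      def step(self, current):
          self.potential = self.potential * self.leak + current
          self.potential += random.gauss(0.0, self.noise)
          if self.potential >= self.threshold:
              self.potential = 0.0    # fire and reset
              return 1
          return 0

  n = Neuron()
  print([n.step(0.3) for _ in range(20)])  # a spike train of 0s and 1s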
AI AI AI Sounds like a song, perhaps the ship sang.
--
Precision Excision Revision Division
Shu...@nospam.worldnet.att.net
for email, you know what to do with the "nospam" part
Paul Dietz <di...@interaccess.com> wrote in article
<355E2C6B...@interaccess.com>...
>
> I've not read these, but this paragraph reminds me
> (for some reason) of the definition of philosophy: that
> field of study in which you kick up a lot of dust, then
> complain that you can't see anything.
Interesting definition. I tend to use one almost exactly opposite -
Philosophy is the study of that which is so obvious that nobody else
bothers with it. Of course, a lot of the time, the thing that is so
blindingly obvious is that the topic of discussion is a complete can of
worms and you'd be better off thinking about something that actually has a
potential solution - which, I suppose, fits in quite well with the initial
'dust' definition - and leads me to occasionally refer to the many
philosophers I studied for my degree as 'a pack of worm farmers'.
-Giles
> The interesting question to me is whether the behavior of the neuron
> can be adequately simulated by something that's not significantly
> slower. For example, what if organisms (or at least conscious
> organisms) take advantage of the way atomic nuclei wiggle?
You can postulate any level of dependence on the detailed chemistry and
physics of actual neurons you like, but I think you'll have a hard time
making a case for the possibility that anything at the nuclear level can
percolate up to a level that affects behaviour. Certainly, thermal noise
in chemical processes probably does affect the brain, but only to the
extent that it makes any neuron-level description probabilistic to some
degree. Any other noise source with the same probability distribution
would be just as good.
Some people do seem to believe that something physical that can have no
effect whatsoever on behaviour might nonetheless be the magic ingredient
of subjectivity, but I think that's absurd. Subjective experience is all
about the potential for behaviour to be affected by it. There might be
feelings we have trouble putting into words, but at the very least these
feelings make us respond to the question "What are you feeling?" with the
answer "I can't put it into words". It might, just conceivably, be
possible to build an AI zombie -- a program that passes the Turing test,
but has no conscious experience. But it would probably be a thousand
times harder to contrive such a thing than it would be to build a genuine
AI. Consciousness is probably an almost unavoidable side-effect of the
kinds of knowledge and behaviour that the Turing test guarantees to be
present.
--
Greg Egan
Email address (remove name of animal and add standard punctuation):
gregegan netspace zebra net au
>Some people do seem to believe that something physical that can have no
>effect whatsoever on behaviour might nonetheless be the magic ingredient
>of subjectivity, but I think that's absurd. Subjective experience is all
>about the potential for behaviour to be affected by it. There might be
>feelings we have trouble putting into words, but at the very least these
>feelings make us respond to the question "What are you feeling?" with the
>answer "I can't put it into words". It might, just conceivably, be
For me, a lot of what's interesting about those feelings is
the way they feel, not the behaviors they lead to. And it's certain
that one feeling that can't be put into words can be quite different
from another feeling that can't be put into words.
>possible to build an AI zombie -- a program that passes the Turing test,
>but has no conscious experience. But it would probably be a thousand
Depends on how tough the Turing test is. A lot of humor is based on
people being reflexive--and we are reflexive a lot of the time. (What's
made AI tough is that even those reflexes are a lot more complicated
than anyone realized.)
However, the hard part might be getting that little bit extra when
the person surprises you. I'm reminded of those programs that analyse the
music of a composer in terms of the probability of notes following each
other, and possibly other features like harmony. I've heard that the
result will sound like that composer on a bad day. It's a big jump
to get from there to sounding like that composer on a good day, not
to mention actually developing a new personal style.
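Those programs are (or at least can be) simple Markov chains; here's a
sketch of the idea in Python, with a melody made up for the occasion:

  import random
  from collections import defaultdict

  melody = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]

  # Record which notes follow which -- the "analysis".
  follows = defaultdict(list)
  for a, b in zip(melody, melody[1:]):
      follows[a].append(b)

  # Generate by drawing each next note from the observed successors:
  # locally plausible, globally aimless -- the composer on a bad day.
  note, output = "C", ["C"]
  for _ in range(7):
      note = random.choice(follows[note])
      output.append(note)
  print(output)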
I'm sure that (not to name names), there are some posters who would
be much harder to simulate than others. And I've noticed that even
some posters who seem to mostly post out of habit will occasionally say
something original.
>times harder to contrive such a thing than it would be to build a genuine
>AI. Consciousness is probably an almost unavoidable side-effect of the
>kinds of knowledge and behaviour that the Turing test guarantees to be
>present.
>
--
[posted and mailed]
I had no intention of making any sort of accusation. I was surprised by
the similarities between Egan's story and Dennett's and was merely curious
whether that sort of thing was common and acceptable in fiction. The
consensus seems to be that it is. I certainly wasn't trying to disparage
a story which I've been shoving under people's noses and saying, "Read
this; you'll love it. It'll blow your mind."
It wasn't until too late that I realized how inflammatory my original
subject header was. Would people discussing the AI stuff mind changing the
subject header as well?
Has Michael Straight Stopped Molesting Children Yet?
FLEOEVDETYHOEUPROEONREWMEILECSOFMOERSGTIRVAENRGEEARDSTVHIESBIITBTLHEEPSRIACYK
Ethical Mirth Gas/"I'm chaste alright."/Magic Hitler Hats/"Hath grace limits?"
"Irate clam thighs!"/Chili Hamster Tag/The Gilt Charisma/"I gather this calm."
I wrote:
> > I just finished Egan's AXIOMATIC, and I don't think I've enjoyed a
> > short-story collection this much for a long time. I was wondering though,
> > if when "Learning to Be Me" was published, Egan gave any credit to Daniel
> > Dennett, because the premise and action of the story is identical to a
> > story (originally a lecture) of Dennett's in THE MIND'S I.
> [snip]
>
> > Is this just a coincidence? Did Egan credit Dennett? Are the
> > similarities not really so great as to require crediting Dennnett?
> > Enquiring Ndoli Devices want to know!
On Sat, 16 May 1998, Greg Egan wrote:
> I read _The Mind's I_ in the early '80s, so I must have read the story to
> which you're referring, but I don't recall it. I certainly didn't model
> "Learning to Be Me" on anything Dennett or anyone else had written (if I
> had, I would have credited the source in the original publication, and
> that credit would have been reprinted in the anthology).
>
> The plot of "Learning to Be Me" was the most interesting thing I could
> think of to happen to the narrator, given the premise; if the Dennett
> story is as similar as you say, then I guess Dennett thought along similar
> lines. But I doubt that I was even subconsciously influenced by having
> read the Dennett story. I simply forgot about it. If I'd remembered it,
> I probably wouldn't have written "Learning to Be Me" at all.
>
> By the way (and not all that coincidentally, since it's obvious that
> Dennett and I have very similar interests), Dennett's book _Consciousness
> Explained_ is very explicitly credited as the influence for both a short
> story of mine, "Mister Volition", and part of my latest novel, _Diaspora_
> (in both cases along with Minsky's _The Society of Mind_).
On Sun, 17 May 1998, Ken MacLeod wrote:
> A thought experiment. You meet some godlike aliens who tell you that,
> millions of years after you die, an exact replica of you will be created
> with all your memories up until death. Its weal or woe depends on what
> you do in your life here. You believe them (for whatever reason).
>
> Is it rational to regard that future person as yourself?
>
> Should you modify your behaviour in consequence?
>
> Suppose the godlike aliens can create replicas *right now* of people who
> have just died, and reward or punish the replicas. Do the answers
> change?
>
> Well, mine are no, no, and no.
Orson Scott Card has a short story that might change your mind about that.
I can't remember the name of it; it's about the guy who gets a new body
cloned every time his current one gets too fat or too out-of-shape. Maybe
someone else can give you the title.
Egan's "Learning to Be Me" agrees with you, as Michael Straight reads it.
The ease with which subjective experience can be altered - dehydration or
Mg deficiency will make _anybody_ think funny - would tend to argue against
'no effect on behaviour' IMO. Being dehydrated affects how one behaves,
too.
Never mind that subjectivity looks an awful lot like the brain having less
bandwidth for integrating sensory data than the senses are able to
provide. People notice different things because it's not optional; they
_can't_ notice everything.
>possible to build an AI zombie -- a program that passes the Turing test,
>but has no conscious experience. But it would probably be a thousand
>times harder to contrive such a thing than it would be to build a genuine
>AI. Consciousness is probably an almost unavoidable side-effect of the
>kinds of knowledge and behaviour that the Turing test guarantees to be
>present.
Why is a sense that 'I'm me' a necessary function of knowing how to talk
like a human?
It's kind of you to say so, but I haven't even lurked on it for some
time.
--
Ken MacLeod - who evolved by RANDOM CHANCE from a PUDDLE
of LIFELESS goo.
If Searle can get away with handwaved 'causal powers' and Penrose with
handwaved 'quantum effects' ...
--
Ken MacLeod
Why? We can already make programs that can fool some people some of
the time when discoursing on some subjects. For that very reason I
don't think the Turing test is a particularly good one; demonstrating
that we can't tell the difference between humans and computers when
they are each communicating via teletype simply tells us that
teletypes don't pass enough information. Why *should* we think that we
can determine humanity from a lousy few thousand bytes of information?
Give me a computer program that can maintain meaningful conversations
on arbitrary subjects in a high-detail virtual-reality environment,
and I'll agree that the test may be valid.
jds
> >I'm curious, as far as consciousness is concerned do you think that the
> >essential feature of neurons is something other than their information
> >processing activities? If so, what? If not, why isn't modelling
> >sufficient, so long as the model is sufficiently complex and complete?
> >
>
> If Searle can get away with handwaved 'causal powers' and Penrose with
> handwaved 'quantum effects' ...
But I don't think that either of them really does get away with it.
Searle's argument is fatally flawed because he fails to explain why an
argument that conscious machines can't be built out of NAND gates
isn't also an argument that conscious machines can't be built out of
neurons. That's the one and only interesting question and he doesn't
even attempt to address it.
>>I'm curious, as far as consciousness is concerned do you think that the
>>essential feature of neurons is something other than their information
>>processing activities? If so, what? If not, why isn't modelling
>>sufficient, so long as the model is sufficiently complex and complete?
>
>If Searle can get away with handwaved 'causal powers' and Penrose with
>handwaved 'quantum effects' ...
Who's letting Penrose get away with that?
He may have refined his position since _The Emperor's New Mind_, but the
"we don't understand quantum gravity, and we don't understand
consciousness, so the two must be related" position was total
nonsense. Too bad, since the first two-thirds or so of the book were
quite interesting.
--
Andrea Leistra http://www-leland.stanford.edu/~aleistra
-----
Life is complex. It has real and imaginary parts.
>Why? We can already make programs that can fool some people some of
>the time when discoursing on some subjects.
This, I think, is the main problem with the Turing test as defined. I
do think that an operational definition of consciousness is necessary,
since, as has been pointed out on this thread, none of us have any way of
*knowing* that anyone else is conscious. We choose to believe so because
of people's actions. This would seem to suggest that a "perfect
simulation of consciousness" that is not itself conscious is a
contradiction in terms, as the only way we have of defining consciousness
in other entities is through how it looks from the outside.
But I think we'd all agree that Eliza is not conscious, and Eliza *has*
passed the Turing test in a limited sense. Requiring the person
administering the test to have some basic level of competence seems
unsatisfactory, and turning it into a consensus issue is...inelegant,
somehow. (The flip side, of people who fail the Turing test, which is a
common occurrence on Usenet, is also somewhat problematic.)
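(For anyone who hasn't seen it, the whole of Eliza's trick is a
handful of pattern-and-response rules. A cut-down sketch in Python,
with rules invented for the occasion:)

  import re

  RULES = [
      (r".*\bmother\b.*", "Tell me more about your family."),
      (r"I am (.*)",      "How long have you been {0}?"),
      (r"I feel (.*)",    "Why do you feel {0}?"),
      (r".*\?$",          "What do you think?"),
      (r".*",             "Please go on."),
  ]

  def respond(line):
      # First matching pattern wins; captured text is echoed back.
      for pattern, template in RULES:
          m = re.match(pattern, line, re.IGNORECASE)
          if m:
              return template.format(*m.groups())

  print(respond("I am worried about consciousness"))
  # -> How long have you been worried about consciousness?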
>For that very reason I don't think the Turing test is a particularly good
>one; demonstrating that we can't tell the difference between humans and
>computers when they are each communicating via teletype simply tells us
>that teletypes don't pass enough information. Why *should* we think that
>we can determine humanity from a lousy few thousand bytes of information?
What do you think Usenet is?
>Give me a computer program that can maintain meaningful conversations on
>arbitrary subjects in a high-detail virtual-reality environment, and I'll
>agree that the test may be valid.
Why do you need the VR environment? A program that could fool me on
Usenet (which, while the "meaningful" part may be debatable, certainly
meets the "arbitrary subjects" portion of your requirement) would be good
enough for me. I've only ever met a handful of people I know from Usenet
-- for all I know, the others could all be AIs. I think I have enough
information from their postings to decide that they're conscious (which,
since I know that we aren't anywhere close to having convincing AI yet,
makes me conclude that they're people). But *consciousness* is the issue,
and I maintain that that can be recognized solely through a textual
medium, without need for a "detailed VR environment".
>It might, just conceivably, be
>possible to build an AI zombie -- a program that passes the Turing test,
>but has no conscious experience. But it would probably be a thousand
>times harder to contrive such a thing than it would be to build a genuine
>AI.
This is interesting, Greg.
I've never put much stock in the Turing Test precisely because I've
always held the opinion that it would be easier to build a Zombie than
to build a true artificial consciousness.
Claiming that a device is sentient because it can fool the cognitive
capacities of a human being strikes me as being much the same as
saying that a CD is a live performance just because human ears can't tell the
difference.
The test says everything about the ability to _deceive_ a human, but
says nothing quantifiable about actual consciousness. It _can't_ say
anything quantifiable about consciousness, because we don't know _how_,
as a scientific race, to say anything along those lines ourselves.
Let's look a little bit closer at the actual test, Greg.
What happens if we build a device that can pass the Turing Test for,
say, Koko, acting through AOL? Have we made a statement, then, that
the device is conscious only to Koko? Or that the device has the
cognitive capacity of Koko? Either? Neither? Both?
Let's take the case of an autistic Tester. Let's assume we manage to
make a device that reliably passes the test only for (say) certain
types of autistic people. Is the device autistic? Is it sentient
only to autistics? Is it "only" as conscious as the autistics?
Perhaps it's differently conscious, to avoid placing a value judgement
on autistics.
Or, let's say that we build a gadget which normal people pass through
the Test but which, through some quirk, autistic people can nail every
single time. Same series of questions as above.
And all of these have assumed that one type of person or another is
the only filter we need. All autistics pass the machine through, all
normal humans fail the machine, and variations. What if two people of
roughly equal mental capacity have differing opinions?
Sorry, Greg, but I can't regard the Turing Test as anything but a
clever bit of chicanery.
Give me another way of determining whether something is sentient, then.
I can tell a CD from a live performance through various methods. But I
can't come up with any way of distinguishing a sentient entity from a
perfect imitation of a sentient entity -- and thus, I argue that it makes
no sense to talk about such an imitation.
This sort of thing skates on the edge of solipsism, really; I have no way
of *proving* that you are conscious, but I believe you are, even though I
can't possibly get outside of my own head to verify that. You convince my
cognitive capabilities that you're sentient, and that's good enough for
me. If you can come up with some *other* mechanism I can use for
verifying your sentience, strictly via Usenet, that doesn't rely on my own
cognitive capabilities, please suggest it.
I'm not fond of definitions that require physical interaction, simply
because there are many people whom I quite happily treat as sentient but
whom I have never met and presumably never will.
> In article <gregegan-180...@dialup-m1-61.perth.netspace.net.au>,
> Greg Egan <greg...@netspace.zebra.net.au> wrote:
>
> >Some people do seem to believe that something physical that can have no
> >effect whatsoever on behaviour might nonetheless be the magic ingredient
> >of subjectivity, but I think that's absurd. Subjective experience is all
> >about the potential for behaviour to be affected by it. There might be
> >feelings we have trouble putting into words, but at the very least these
> >feelings make us respond to the question "What are you feeling?" with the
> >answer "I can't put it into words". It might, just conceivably, be
>
> For me, a lot of what's interesting about those feelings is
> the way they feel, not the behaviors they lead to. And it's certain
> that one feeling that can't be put into words can be quite different
> from another feeling that can't be put into words.
I certainly agree that "what's interesting about those feelings is the way
they feel", and in ordinary situations vast amounts of subjective
experience will have no [practically observable] effect on behaviour
(though in the majority of cases I think there'd be subtle effects on
things like posture, facial expression, how long you pause before doing
some other activity, etc.).
But in examining the physical causes of these subtleties, the fact that
feelings might not *normally* lead to any particular behaviour isn't the
point. If you're capable of the behaviour of saying "feeling A was not
like feeling B", which involves the use of motor neurons, then whether you
actually say it or not, that says something about the kind of physical
processes, and the way they affect your neurons, that can ultimately be
responsible for the distinction between those feelings.
The fact that you've just typed something about the subtleties of your
subjective states is *behaviour*. Whatever physical process is
responsible for those subtleties must be capable of affecting the firing of
your neurons. People can find themselves at a loss for specific words to
describe aspects of subjectivity, but (short of certain pathological
situations) the very fact that we can always vocalise [or type, etc.]
*something* in response to those subtleties means that the physical
processes responsible for them can't be so obscure that, as far as our
neurons are concerned, they're either "invisible" (the neurons would fire
identically if the process wasn't happening) or "just noise" (the neurons
are randomly affected, and any other random process would have
essentially the same effect).
Isn't it possible that if someone invented a near-infinite data storage system
you could create a device that could record all the stimuli we perceive? You
could then give it built-in responses to pain, temperature, hunger, etc.
If you then gave this device the ability to randomly repeat recorded stimuli and
note which caused a positive response, you would have a learning system which
in a few years might ask you 'Please pass the silicon gel and the jumper lead,
the walk in the park has made me hungry.'
It probably would, given time, pass a Turing test.
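A crude sketch of such a device in Python (the stimuli and rewards are
invented, and a real version would of course be enormously harder):

  import random

  class Recorder:
      def __init__(self):
          self.log = []       # every stimulus ever perceived
          self.scores = {}    # stimulus -> accumulated reward

      def perceive(self, stimulus, reward):
          # Built-in responses (pain, hunger, etc.) supply the reward.
          self.log.append(stimulus)
          self.scores[stimulus] = self.scores.get(stimulus, 0) + reward

      def act(self):
          # Mostly repeat what earned a positive response; sometimes
          # replay something at random from the recording.
          if self.scores and random.random() < 0.8:
              return max(self.scores, key=self.scores.get)
          return random.choice(self.log)

  r = Recorder()
  r.perceive("ask for silicon gel", +1)
  r.perceive("walk into wall", -1)
  print(r.act())   # usually "ask for silicon gel"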
Ewan
> In article <gregegan-180...@dialup-m1-61.perth.netspace.net.au>,
> Greg Egan <greg...@netspace.zebra.net.au> wrote:
> >possible to build an AI zombie -- a program that passes the Turing test,
> >but has no conscious experience. But it would probably be a thousand
> >times harder to contrive such a thing than it would be to build a genuine
> >AI. Consciousness is probably an almost unavoidable side-effect of the
> >kinds of knowledge and behaviour that the Turing test guarantees to be
> >present.
>
> Why is a sense that 'I'm me' a necessary function of knowing how to talk
> like a human?
I should probably never use the words "Turing test", since there are so
many possible versions, and the phrase conjures up visions of those dumb
competitions in which people enter software deliberately designed to
cheat, rather than deliberately designed to think. I'm sure some
contrived conversational program *will* fool the judges in some limited
competition like that, long before there's real AI.
What I should have said is that any system that behaves indistinguishably
from a human being in all circumstances almost certainly feels like a
human -- but keyboard communication doesn't exactly span the range of
human experience, and if we're talking about an AI built from scratch
(rather than an uploaded human brain and body), it'd be missing the point
completely to stick the AI in a simulated human body and simulated
physical environment and see how it reacted to various situations. I
think testing an AI that's not even meant to mimic a human is going to be
quite subtle and difficult, and asking it which character in _Hamlet_ it
most relates to, etc., will be more or less irrelevant.
I don't think it's a binary question: I'm quite convinced that each
morning when I awake, I'm an at least slightly different person than the
day before, and some days more so. We're constantly evolving in
thought paradigms, memories, and personality, and are slightly different
physically as well. For me to worry about the hypothetical
copy-in-a-million-years would be to worry about an extension of the same
principle.
It's a given that some people take great care to let worry about long-term
effects shape today's actions, while others act based upon short-term
goals. We're just talking about scale, in Ken's thought-experiment, not a
difference in principle.
--
Copyright 1998 by Gary Farber; Web Researcher; Nonfiction Writer,
Fiction and Nonfiction Editor; gfa...@panix.com; B'klyn, NYC, US
> Mostly that a body (including brain) is very complex and subtle. We don't
> have any information about how much can be abstracted away about the
> brain and still have anything much like a human. The idea that you can
> adequately simulate a brain by on-off patterns of pseudo neurons is
> a *guess*. Note that I'm not postulating a soul (I'm agnostic about
> souls). Mere matter is complicated enough.
John Baez once posted, to one of the physics groups, a hunch to the effect
that the only way you could adequately emulate a brain would be to do it
on the level of accurate physical simulations of the individual neurons.
If you have to get the quantum and thermal mechanics right, that's a
pretty tall order-- you obviously couldn't do it with a computing machine
of any present-day architecture, though some exotic speculative ideas
currently in the works might suffice.
Greg Egan's guess in _Permutation City_ was that a considerably higher
level of abstraction would still be adequate. Personally, my guess is
that you don't have to go as far as simulating, say, nonlocal quantum
correlations (Penrose didn't convince me), but that you might still have
to go so far that the exercise will be futile for a long, long time.
But that *doesn't* mean that we are exempt from dilemmas over whether
machines are conscious in the foreseeable future. On the contrary, I
suspect that it will just make the dilemma more difficult, because the
first machines that somebody seriously considers granting sentient status
*won't act anything like humans at all*! They won't pass the Turing Test;
they won't converse like Oxford graduates; they'll be solid-state aliens
so bizarre that we'll be hard pressed to say whether they count as people
or not. And we may never know.
--
Font-o-Meter! Proportional Monospaced
^
Physics, humor, Stanislaw Lem reviews: http://world.std.com/~mmcirvin/
"Fat Farm," Omni, January 1980 issue.
Also see his "Hot Sleep," 1978, in which someone is repeatedly
executed (using a different body each time) until he consents to make
a public broadcast admitting that the Soviet takeover of America was a
good thing.
> In article <6jll47$4...@universe.digex.net>, nan...@universe.digex.net
> (Nancy Lebovitz) wrote:
>
> > Mostly that a body (including brain) is very complex and subtle. We don't
> > have any information about how much can be abstracted away about the
> > brain and still have anything much like a human. The idea that you can
> > adequately simulate a brain by on-off patterns of pseudo neurons is
> > a *guess*. Note that I'm not postulating a soul (I'm agnostic about
> > souls). Mere matter is complicated enough.
>
> John Baez once posted, to one of the physics groups, a hunch to the effect
> that the only way you could adequately emulate a brain would be to do it
> on the level of accurate physical simulations of the individual neurons.
> If you have to get the quantum and thermal mechanics right, that's a
> pretty tall order-- you obviously couldn't do it with a computing machine
> of any present-day architecture, though some exotic speculative ideas
> currently in the works might suffice.
>
> Greg Egan's guess in _Permutation City_ was that a considerably higher
> level of abstraction would still be adequate. Personally, my guess is
> that you don't have to go as far as simulating, say, nonlocal quantum
> correlations (Penrose didn't convince me), but that you might still have
> to go so far that the exercise will be futile for a long, long time.
Exactly how do you decide what's "adequate", though? I think the person's
behaviour, or potential behaviour, has to be the ultimate criterion for
relevance. If process A is going on inside a neuron, but it can never --
even in principle -- affect the person's behaviour any differently from
process B in a computer (and I'd argue that all quantum and thermal
details ever contribute is noise), then B is just as good as A in terms of
its effect on consciousness. The presence of *some* source of noise might
be essential to the way our minds work, but requiring that it be the
particular noise generated by the particular quantum chemistry of all the
particular proteins and other molecules that happen to be doing things
inside biological neurons (most of them simply to keep the cell alive and
structurally intact) seems as absurd to me as worrying about the detailed
physics of the transistors in a computer's logic gates.
> The really interesting thing is that the inferences drawn in "The Story of
> a Brain" are almost *the exact opposite* of the ones [Egan] drew in "Dust"
> and _Permutation City_! Zuboff viewed the situation as a reductio ad
> absurdum; if an operationalist view of the situation regards a bunch of
> neurons firing at random times and places as just as good as a whole
> brain, then the operationalist view must have something wrong with it.
Before I answer this, I'll add spoiler space.
By the way, has anyone remarked on the analogy between:
* the idea that time travel selects for those who won't modify the
past (Niven's time travel essays, Gerrold's _The Man Who Folded
Himself_).
* the post facto voting procedure for wave function collapse in
Egan's _Quarantine_.
Would one expect the Egan voting procedure to evolve a person toward
strong feelings? Toward selfishness?
SPOILER for _Permutation City_.
SPOILER for _Permutation City_.
SPOILER for _Permutation City_.
SPOILER for _Permutation City_.
SPOILER for _Permutation City_.
Seems to me that Egan also rubbishes the dust theory at the end of
_Permutation City_. Every possible sort of person is instantiated
in the dust, and the Permutation Burghers have picked one of those
possible people as the one they want to be. But Egan chooses instead
to follow a group of people that conspires to appear to be them: they
have the same memories and personalities, but are in fact at a level
of abstraction greater than that of the bugs, whereas they were
expecting to be at a level of abstraction less than that of the bugs.
Since every possibility is instantiated, Egan's choice is no more or
less valid than any other.
Essentially it's the "infinite number of monkeys writing Hamlet"
paradox. Shakespeare's genius is as much a reflection of what he
didn't write as of what he did: the monkeys can duplicate either but
not both at once, which makes their version of Hamlet worthless. I
think Egan is saying that building _Permutation City_ is worthless
for the same reason. Some would extend that to say that Egan's writing
_Permutation City_ is equally worthless for the same reason. I'd be
hard pressed to explain why I don't agree, and would probably have to
fall back on saying that at least it made me think.
----------------------------------------------------------------------
David Bofinger David.B...@dsto.defence.gov.au
----------------------------------------------------------------------
> On Mon, 18 May 1998 12:59:25 +0800, Greg Egan
> <greg...@netspace.zebra.net.au> wrote:
>
> >It might, just conceivably, be
> >possible to build an AI zombie -- a program that passes the Turing test,
> >but has no conscious experience. But it would probably be a thousand
> >times harder to contrive such a thing than it would be to build a genuine
> >AI.
>
> This is interesting, Greg.
> I've never put much stock in the Turing Test precisely because I've
> always held the opinion that it would be easier to build a Zombie than
> to build a true artificial consciousness.
[snip]
All you're really pointing out is that any particular implementation of
the Turing Test is potentially flawed, and that if you try to set it up to
give infallible (or even perfectly repeatable) yes/no answers, that can
lead to absurdities. I agree. But if you take the situation in "Learning
to Be Me" of putting a chip in someone's skull that then takes over
control of the body, and absolutely no one can ever tell the difference,
then I'd see that as a kind of Turing Test, and one which makes it highly
unlikely that the chip is not as conscious as any human brain.
I'm not suggesting that the Turing Test is the best possible criterion for
judging the presence of consciousness. If you offered me a computer
program that you claimed was conscious, I'd like to have a look at its
algorithmic structure as well as observing it in action. But I can't
offer any perfect criterion based on structure, either.
As for the ease of building zombies compared to conscious beings, I think
natural selection would have made *us* zombies if it really was simpler to
get all the same behaviour without creating consciousness in the process.
>Give me another way of determining whether something is sentient, then.
I don't have to be able to paint to criticize painting. I don't need
to be a novelist to criticize a novel. I don't need to have a better
approach to point out the fundamental flaws in a given idea.
>I can tell a CD from a live performance through various methods.
C'mon, take the analogy to the point where it makes sense. In one
case, you're using your own cognition to judge the cognition of
another artifact; in the other case, you're using your ears to judge
the source of a sound.
The point is that in _both_ cases, the subjective biological tool can
be fooled. When we come up with a hard way of analyzing problems like
this, we won't _need_ wishy-washy tests like the Turing Test.
>This sort of thing skates on the edge of solipsism, really; I have no way
>of *proving* that you are conscious, but I believe you are, even though I
>can't possibly get outside of my own head to verify that. You convince my
>cognitive capabilities that you're sentient, and that's good enough for
>me. If you can come up with some *other* mechanism I can use for
>verifying your sentience, strictly via Usenet, that doesn't rely on my own
>cognitive capabilities, please suggest it.
Few things would please me more.
It would be, to borrow a phrase, the most important discovery since
the discovery of the electron. If I _had_ a better approach, I'd be
founding an IEEE journal about it right now.
Doesn't change the fact I think the current Turing definition is full
of it.
At first I was going to answer "I don't think so" but after rereading
the start of Turing's original paper I changed this to a simple "No, he
dismissed them."
The "imitation game" introduced by Turing is first introduced by
considering a man trying to act as a woman (or is it the other way round?)
and an interrogator trying to determine who's who.
After dismissing audio transmission, he states that teletypes are ideal.
He then goes on to play the same "imitation game" with a machine as one
participant, and considers the question of whether the machine needs
human-looking skin, but he does not think this relevant, as it's not a
"beauty contest". A VR replica would presumably be dismissed on the same
grounds.
This is not to say that, had Turing not taken the poisoned apple and
lived longer, he wouldn't have formulated his test in another way; but
reading his original article only shows how dated it is.
--
// Home page http://www.dna.lth.se/home/Hans_Olsson/
// Email To..Hans.Olsson@dna.lth.se [Please no junk e-mail]
which makes it pretty clear those clones *are* separate people.
The guy walking out the door may think he's still the same, but for
the guy left behind everything changes...
Personally, I feel the same. If they clone me with intact memories
after my death, I'm still dead myself. Copying won't help me;
moving my "self" will.
John Varley expressed this the best, in many of his short stories.
: Also see his "Hot Sleep," 1978, in which someone is repeatedly
: executed (using a different body each time) until he consents to make
: a public broadcast admitting that the Soviet takeover of America was a
: good thing.
Which had, btw, a pretty stupid ending. "Gee, killing him repeatedly
doesn't make him do what we say, so let's ship him off to a space
colony full of similar people." Why not kill him for real instead?
Martin Wisse
No, the aliens are. But my point was that it isn't *me* up ahead.
>> Suppose the godlike aliens can create replicas *right now* of people
>> who
>> have just died, and reward or punish the replicas. Do the answers
>> change?
>>
>> Well, mine are no, no, and no.
>
>You'd let them torture someone just like you (or just someone) without
>considering changing your behavior? That's a little... unethical, isn't
>it?
>It all depends on what they want one to do, of course.
>
Oh, I'd consider changing my behaviour, but only in the sense of trying
to save the hostages, under duress and moral blackmail. That the hostage
has my memories is his misfortune, not mine :-)
Philosophical source of all this: Anthony Flew's articles on
resurrection, BTW.
--
Ken MacLeod
>I think that the idea of a Physics of Consciousness implies the
>possibility of transferring consciousness from one place to another.
>You have carefully staked out the idea that consciousness is a
>physical effect, and not a mathematical effect, if I have understood
>you correctly.
>
>What, then, precludes the transfer of consciousness from one framework
>to another?
I don't doubt in principle that an exact copy of a consciousness could
be made in another brain (or whatever), but I wouldn't call that a
'transfer' because it suggests a sort of electric soul flitting across
the gap.
ObSF: the bit in 'Beyond Bedlam' by Wyman Guin, where a personality
record is dissipated in the friction of a spinning wheel.
--
Ken MacLeod
> It might, just conceivably, be
>possible to build an AI zombie -- a program that passes the Turing test,
>but has no conscious experience. But it would probably be a thousand
>times harder to contrive such a thing than it would be to build a genuine
>AI. Consciousness is probably an almost unavoidable side-effect of the
>kinds of knowledge and behaviour that the Turing test guarantees to be
>present.
This is where I have doubts. The chess-playing program Deep Blue can
beat humans at chess, but it is not doing what humans do when they play
chess. I don't see why a similarly 'brute-force' solution to passing the
Turing test would necessarily be a thousand times harder than genuinely
conscious AI - and not, say, ten times easier.
But this isn't, really, my sticking point.
Reading the 'Orphanogenesis' chapter of _Diaspora_, there was no point
where I stopped thinking of the entity as an object, and began thinking
of it as a subject - although the point where I should have done so was
clear enough. It was not a failing on the part of your writing, but
(perhaps) on the part of my capacity for regarding such an entity as a
person: 'Nah, it's just some torch-bulbs coming on.'
In the same way, reading the rest of _Diaspora_, there was no point
where I stopped regarding the citizens of the polises as anything other
than the enemy. Given that they'd won, it was fascinating to follow
their adventures, but it was not a world in which I or anyone I could
identify with had any part. The machines had won and we had lost, and I
never stopped feeling that loss.
Other posters have raised points which, at the moment, would take more
time than I have to discuss one by one. My apologies to those whose
points I've missed. Briefly, I don't think consciousness is anything
spooky, and I think it can exist in different material frameworks. I
don't think, however, that it's just a matter of information processing
which can in principle be implemented on *any* framework. That may be
just because I haven't read enough on the subject. I'll come back on
this when I've read some more.
My real objection is not to the theoretical possibility of conscious AI,
but to the desirability of including it in our ethical sphere. I'm
confident that post-humans would not include us in theirs, and see no
reason to accept their views on the matter in advance.
--
Ken MacLeod
Since any physical process can be simulated in software, these two
sentences contradict each other.
--
Ross Smith ..................................... Wellington, New Zealand
<mailto:r-s...@ihug.co.nz> ........ <http://crash.ihug.co.nz/~r-smith/>
"Remember when we told you there was no future? Well, this is it."
-- Blank Reg
> My real objection is not to the theoretical possibility of conscious AI,
> but to the desirability of including it in our ethical sphere. I'm
> confident that post-humans would not include us in theirs, and see no
> reason to accept their views on the matter in advance.
Confident? Why?
Interesting premise: suppose it's pretty much a toss-up,
fifty-fifty odds on whether a Turing-capable species is
conscious or not.
Is it ethical to enslave zombie AIs? Are the first aliens
we meet conscious? If not, is it OK to enslave *them*?
--
Niall [real address ends in se, not es]
>ObSF: the bit in 'Beyond Bedlam' by Wyman Guin, where a personality
>record is dissipated in the friction of a spinning wheel.
Does anybody know where this tale might be located?
--
Pete McCutchen
>My real objection is not to the theoretical possibility of conscious AI,
>but to the desirability of including it in our ethical sphere. I'm
>confident that post-humans would not include us in theirs, and see no
>reason to accept their views on the matter in advance.
Why is it impossible that there would be some post-human equivalent of People
for the Ethical Treatment of Animals?
--
Pete McCutchen
Only if the simulation can be considered an *exact* representation of
what is going on in the physical process. And that requires that we
understand *exactly* how the process works. Any simulation of
something as complex as the human brain must have a load of
simplifications at this time, especially since there's a lot about the
brain we don't understand.
Even far in the future, if our knowledge of how things work is much
greater, it may be that the only way to represent the process
accurately is to replicate the physical structure of neurons, etc.
---------------------------------------------
Scott Beeler scbe...@mindspring.com
It sounds like a consideration of what is necessary to the test. If
physical appearance were necessary, then should a black-skinned human be
regarded differently from a white-skinned human? If gender were a
factor, should one distinguish between Grace Hopper and Bill Gates on
that basis? How should Turing himself be regarded?
The test by teletype is a conversation which masks many of the social
cues which are hooks for prejudice. And the problems of synthesizing
speech, or making a machine which moves like a man, are not entirely the
problems of making a convincing simulation of intelligence. I am, you
should note, less certain whether the problems of engineering perception
can be isolated from AI. The problems of continuous speech recognition
may be inextricably linked to the way in which humans understand speech.
--
David G. Bell -- Farmer, SF Fan, Filker, and Punslinger.
If it's only as successful as PETA, life will be pretty bad for a lot
of people.
--
Nancy Lebovitz (nan...@universe.digex.net)
May '98 calligraphic button catalogue available by email!
> My real objection is not to the theoretical possibility of conscious AI,
> but to the desirability of including it in our ethical sphere. I'm
> confident that post-humans would not include us in theirs, and see no
> reason to accept their views on the matter in advance.
I'm surprised to hear you say that: it seems very contrary to what you
wrote in The Star Fraction. Either you've changed your mind since
then, or else I've misinterpreted your book and/or your Usenet post.
>Why is it impossible that there would be some post-human equivalent
>of People for the Ethical Treatment of Animals?
Oh, great; my continued future in the hands of some AI PETA fanatic...
Perhaps there is a speed requirement, such that you're not conscious
unless you can react to the world within a certain amount of time
after a stimulus happens? [No, I have no idea _what_ that appropriate
level would be. We seem to react on about the millisecond level,
but that may not be a universal requirement.]
And different implementations may run at different speeds.
Tony Z
--
Over the altar, flame of anatomized fire,
the High Prince stood, gyre in burning gyre;
day level before him, night massed behind;
the Table ascended; the glories intertwined -- Charles Williams
>Ken MacLeod <k...@libertaria.demon.co.uk> wrote:
>
>
>
>>ObSF: the bit in 'Beyond Bedlam' by Wyman Guin, where a personality
>>record is dissipated in the friction of a spinning wheel.
>
>Does anybody know where this tale might be located?
It's the title story of the collection _Beyond Bedlam_ published in the
UK by Sphere in 1973. The title page says "Originally published under
the title LIVING WAY OUT", which may refer to a US publication under
that title.
Otherwise, check www.best.com/~contento.
--
Mike Scott
mi...@moose.demon.co.uk
http://www.moose.demon.co.uk
> Keith Lynch wrote:
> >
> > Also see his "Hot Sleep," 1978, in which someone is repeatedly
> > executed (using a different body each time) until he consents to make
> > a public broadcast admitting that the Soviet takeover of America was a
> > good thing.
>
> I think that was "A Thousand Deaths."
In the collection :Capitol: and not in the modern Worthing collection
but (I think) also in :Maps in a Mirror:. :Capitol: is, well, it isn't
exactly good, but I like it. It's sincere.
--
Jo - - I kissed a kif at Kefk - - J...@bluejo.demon.co.uk
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
http://www.bluejo.demon.co.uk - Blood of Kings Poetry; rasfw FAQ;
Reviews; Interstichia; Momentum - a paying market for real poetry.
>Is it the difference in hardware that bothers you, or the fact that
>your consciousness would have to be transferred to it, in some sense
>destroying the *real* you?
The second is what bothers me; I don't think there's any transfer of
consciousness at all, just duplication. I am agnostic on the issue of
computer consciousness.
Using Egan's setup, suppose that instead of replacing the brain with the
Ndoli device, you take one or the other and transplant it into an identical
body. Assuming that the device *is* conscious, you would then have two
people. I would therefore argue that before the operation, you would *also*
have two people. Dump the brain in favor of the crystal, and, in my view,
you have just killed one of them. As *I* would presumably be the one
getting killed, I would be rather reluctant to undergo the procedure in the
unlikely event that it was ever developed. I have nothing against copies
of myself, but I don't view them as an adequate compensation for dying.
Actually, this is just a variation on the old duplicating-disintegrating
teleporter problem, isn't it? (Well, with the "is the copy conscious"
question added.)
--
Justin Fang (jus...@ugcs.caltech.edu)
This space intentionally left blank.
> Ken MacLeod wrote:
> >
> > Briefly, I don't think consciousness is anything
> > spooky, and I think it can exist in different material frameworks. I
> > don't think, however, that it's just a matter of information processing
> > which can in principle be implemented on *any* framework.
>
> Since any physical process can be simulated in software, these two
> sentences contradict each other.
In theory, maybe. In practice, I suspect there are some physical
processes that are complex enough that they cannot be fully simulated by
any software that could run within the scope of the existing universe.
--
Avram Grumer | av...@interport.net | http://www.users.interport.net/~avram/
Go not to the Net for counsel, for it will say both "Me too!" and "Nazi!"