
Permutation City FAQ


greg...@netspace.net.au

unread,
Mar 12, 2007, 11:53:55 PM3/12/07
to
Almost thirteen years after Permutation City was published, I've
finally put my responses to the most common issues raised about the
novel on to a web page:

http://www.gregegan.net/PERMUTATION/FAQ/FAQ.html

Most of what's here has been discussed before on rec.arts.sf.written,
but some people who've bought the book recently might not even have
been born when that discussion took place ...

Max

unread,
Mar 13, 2007, 6:14:40 AM3/13/07
to
In message <1173758035.3...@n33g2000cwc.googlegroups.com>,
greg...@netspace.net.au writes

That's great news. I enjoyed the book the first time and I shall read it
again with the FAQ printed out and conveniently to hand!

Regards
--
Max
e: m...@sfreviews.com
w: http://www.sfreviews.com

Tux Wonder-Dog

unread,
Mar 13, 2007, 7:26:14 AM3/13/07
to
greg...@netspace.net.au wrote:

I have to admit I agree with your final comment: "my uncritical treatment of
the idea of allowing intelligent life to evolve in the Autoverse."

I sometimes wonder, if some of my characters were ever to "escape" from the
stories I set them in, and from the "universes" they exist in, just what
they would think of me. I doubt it would be publishable, printable, or
whatever the euphemism preferred is ... ! ;)

Wesley Parish

dwight...@gmail.com

unread,
Mar 13, 2007, 7:55:05 AM3/13/07
to

Good God, has it been thirteen years already? For some reason I think
of PC as a 'recent' book, something put out in the last three to five
years.

Sean O'Hara

unread,
Mar 13, 2007, 10:20:41 AM3/13/07
to
In the Year of the Golden Pig, the Great and Powerful
greg...@netspace.net.au declared:

> Almost thirteen years after Permutation City was published, I've
> finally put my responses to the most common issues raised about the
> novel on to a web page:
>
> http://www.gregegan.net/PERMUTATION/FAQ/FAQ.html
>

From the FAQ:

What I regret most is my uncritical treatment of the idea
of allowing intelligent life to evolve in the Autoverse.

Sure, this is a common science-fictional idea, but when I
thought about it properly (some years after the book was
published), I realised that anyone who actually did this
would have to be utterly morally bankrupt. To get from
micro-organisms to intelligent life this way would involve
an immense amount of suffering, with billions of sentient
creatures living, struggling and dying along the way. Yes,
this happened to our own ancestors, but that doesn't give
us the right to inflict the same kind of suffering on
anyone else.

This is potentially an important issue in the real world.
It might not be long before people are seriously trying
to “evolve” artificial intelligence in their computers.
Now, it's one thing to use genetic algorithms to come up
with various specialised programs that perform simple
tasks, but to “breed”, assess, and kill millions of
sentient programs would be an abomination. If the first
AI was created that way, it would have every right to
despise its creators.
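
(For concreteness, the mundane kind of genetic algorithm being contrasted
here -- "breed", assess and cull candidate solutions to a trivial problem --
looks roughly like the following toy Python sketch. The fitness function,
population size and so on are arbitrary illustrative choices; nothing here
comes from the FAQ or the novel.)

import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 200, 0.02

def fitness(genome):
    return sum(genome)                      # "OneMax" toy task: more 1s is fitter

def mutate(genome):
    # flip each bit independently with a small probability
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    parents = population[:POP_SIZE // 2]    # keep the fitter half, discard the rest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness after", gen + 1, "generations:", fitness(population[0]))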

Yes, this is a horrible mistake. I demand you rectify the matter by
writing a novel on the subject.

--
Sean O'Hara <http://diogenes-sinope.blogspot.com>
It is not necessary to understand things in order to argue about them.
-Caron de Beaumarchais

htn963

unread,
Mar 14, 2007, 7:58:43 AM3/14/07
to
On Mar 12, 8:53 pm, grege...@netspace.net.au wrote:
> Almost thirteen years after Permutation City was published, I've
> finally put my responses to the most common issues raised about the
> novel on to a web page:
>
> http://www.gregegan.net/PERMUTATION/FAQ/FAQ.html

You might want to do this for all your books. Not that I don't
still enjoy reading them.

> Most of what's here has been discussed before on rec.arts.sf.written,
> but some people who've bought the book recently might not even have
> been born when that discussion took place ...

I doubt there are many pre-teens who will (or can) read this type
of SF, much less buy it.

--
Ht

Charlie Stross

unread,
Mar 14, 2007, 7:06:41 PM3/14/07
to
Stoned koala bears drooled eucalyptus spittle in awe
as <sean...@gmail.com> declared:

> In the Year of the Golden Pig, the Great and Powerful
> greg...@netspace.net.au declared:

> From the FAQ:
>
> What I regret most is my uncritical treatment of the idea
> of allowing intelligent life to evolve in the Autoverse.

> ... If the first AI was created that way, it would have every right
> to despise its creators.
>
> Yes, this is a horrible mistake. I demand you rectify the matter by
> writing a novel on the subject.
>

(Scribble, scribble)

Don't mind me, I'm just taking notes ...


-- Charlie

greg...@netspace.net.au

unread,
Mar 14, 2007, 10:43:29 PM3/14/07
to
On Mar 14, 8:58 pm, "htn963" <htn...@verizon.net> wrote:
> On Mar 12, 8:53 pm, grege...@netspace.net.au wrote:
>
> > Almost thirteen years after Permutation City was published, I've
> > finally put my responses to the most common issues raised about the
> > novel on to a web page:
>
> >http://www.gregegan.net/PERMUTATION/FAQ/FAQ.html
>
> You might want to do this for all your books. Not that I don't
> still enjoy reading them.

In the not too distant future I plan to do a detailed analysis of some
of the problems with _Quarantine_. The central premise itself is
unlikely, of course, but what I intend to do is try to delineate more
carefully which events in the novel might reasonably flow from it, and
which would more likely be undermined by effects like decoherence.

Don't expect too much from this bout of Stalinist self-criticism,
though. After _Quarantine_, I'll probably put up a web page for
_Distress_ declaring that it's a flawless novel and that anyone who
thinks otherwise can kiss my arse.

Just.A...@gmail.com

unread,
Mar 14, 2007, 11:46:25 PM3/14/07
to

Speaking of novels, will there be a US edition of "Incandescence" or
will a trip to the UK or Amazon.co.uk be required?

And thanks for the FAQ on PC ... now I need to go back and reread it;
of course the brain cells ain't what they used to be.

GeekGirl


Peter D. Tillman

unread,
Mar 15, 2007, 2:18:52 AM3/15/07
to
In article <1173926609....@y66g2000hsf.googlegroups.com>,
greg...@netspace.net.au wrote:

> Don't expect too much from this bout of Stalinist self-criticism,
> though. After _Quarantine_, I'll probably put up a web page for
> _Distress_ declaring that it's a flawless novel and that anyone who
> thinks otherwise can kiss my arse.

Um. By coincidence I just reread DISTRESS. Your extrapolations were as
splendid as I recalled, but the TOE macguffin really grated on me, this
time.

Good thing I'm in Arizona, I guess, to avoid the forfeit you mention
above <G>. Loved loved loved your floating island, and the Instant
Atoll(TM) defense.

Here's my original review:
http://darkplanet.basespace.net/nonfict/gegan.html

So, when do we get a new Egan novel?

Cheers -- Pete Tillman
--
http://www.globalwarmingart.com/images/e/e5/Earth_Lights_from_Space.jpg

Greg Egan

unread,
Mar 15, 2007, 7:19:46 AM3/15/07
to
On Mar 15, 12:46 pm, Just.A.New...@gmail.com wrote:

> Speaking of novels, will there be a US edition of "Incandescence" or
> will a trip to the UK or Amazon.co.uk be required?
>

> GeekGirl

I don't know yet if there'll be a US edition, but given that I've yet
to secure one it would probably be, at the very least, quite a bit
later than the UK one (which is scheduled for May 2008).

Just.A...@gmail.com

unread,
Mar 15, 2007, 11:54:54 AM3/15/07
to

Thanks!

GeekGirl

Jo Walton

unread,
Mar 15, 2007, 2:11:58 PM3/15/07
to
On 2007-03-15, greg...@netspace.net.au <greg...@netspace.net.au> wrote:
>
> In the not too distant future I plan to do a detailed analysis of some
> of the problems with _Quarantine_. The central premise itself is
> unlikely, of course, but what I intend to do is try to delineate more
> carefully which events in the novel might reasonably flow from it, and
> which would more likely be undermined by effects like decoherence.

My son, who is sixteen, was moaning that he wanted more SF novels like
_Permutation City_ and _Spin_.

He couldn't understand why I was laughing while I gave him _Quarantine_.

(He loved it. But he also got the joke.)

> Don't expect too much from this bout of Stalinist self-criticism,
> though. After _Quarantine_, I'll probably put up a web page for
> _Distress_ declaring that it's a flawless novel and that anyone who
> thinks otherwise can kiss my arse.

Ah, but _Diaspora_'s the one that could do with a FAQ!

--
Jo
I kissed a kif at Kefk

Greg Egan

unread,
Mar 15, 2007, 5:38:20 PM3/15/07
to
On Mar 16, 2:11 am, Jo Walton <j...@localhost.localdomain> wrote:

> Ah, but _Diaspora_'s the one that could do with a FAQ!

Really? Offhand, I can only think of two issues that readers have
raised:

(i) the planet where all elements had been replaced by their heaviest
stable isotopes would have needed either to exclude hydrogen->deuterium
from the substitution, or to have undergone significant changes to its
biochemistry, because heavy water is too chemically different from
ordinary water to be substituted directly;

(ii) Wang's Carpets are probably too small for their information
content.

There was a problem with 6-dimensional planets' rotational behaviour
that I identified myself; it is already addressed in detail on my web
site.

Johan Larson

unread,
Mar 15, 2007, 6:01:32 PM3/15/07
to
On Mar 13, 7:20 am, Sean O'Hara <seanoh...@gmail.com> wrote:
> In the Year of the Golden Pig, the Great and Powerful
> grege...@netspace.net.au declared:

I'm not convinced it would, though. Few people come away, from reading
the story of Noah, with a lasting hatred of God, even if they believe
it actually happened.

Johan Larson


Greg Egan

unread,
Mar 15, 2007, 6:38:53 PM3/15/07
to
On Mar 16, 6:01 am, "Johan Larson" <johan.lar...@comcast.net> wrote:
> On Mar 13, 7:20 am, Sean O'Hara <seanoh...@gmail.com> wrote:
> > In the Year of the Golden Pig, the Great and Powerful
> > grege...@netspace.net.au declared:
[snip]

> > It might not be long before people are seriously trying
> > to "evolve" artificial intelligence in their computers.
> > Now, it's one thing to use genetic algorithms to come up
> > with various specialised programs that perform simple
> > tasks, but to "breed", assess, and kill millions of
> > sentient programs would be an abomination. If the first
> > AI was created that way, it would have every right to
> > despise its creators.
>
> I'm not convinced it would, though. Few people come away, from reading
> the story of Noah, with a lasting hatred of God, even if they believe
> it actually happened.
>
> Johan Larson

Most Westerners who believe that humans have a creator have also
accepted a set of assertions about that creator's intrinsic (if
ineffable) goodness. There are a small number of theists who believe
in God but don't swallow the excuses and rationalisations for all the
unpleasant things that have happened to humanity.

But when it comes to mere humans creating AI -- unless we're going to
do something even more disgusting, such as hard-wire them to adore us,
and forgive us for everything -- it's going to be rather harder to get
away with lame metaphysical excuses.

What's more, at least in our own case, the actual experience of our
ancestors, going back to the first animal capable of suffering, was
many orders of magnitude worse than anything the Bible attributes to
the creator. Evolution for AI with enough, er, palliative
intervention, might not be *quite* so bad, but nonetheless it would
still leave Josef Mengele looking like a saint in comparison.

I don't doubt that, with enough dishonesty and manipulation, we could
contrive an outcome where the AI would not despise us. But if we're
too stupid to create AI by any technique besides evolution, then we
simply have no right to create it at all.

No 33 Secretary

unread,
Mar 15, 2007, 7:00:32 PM3/15/07
to
"Greg Egan" <greg...@netspace.net.au> wrote in
news:1173998333....@n59g2000hsh.googlegroups.com:

Plus, that same religious framework that most westerners share, the
one with the inherent goodness of God, includes the inherently evil
nature of man.

> What's more, at least in our own case, the actual experience of
> our ancestors, going back to the first animal capable of
> suffering, was many orders of magnitude worse than anything the
> Bible attributes to the creator. Evolution for AI with enough,
> er, palliative intervention, might not be *quite* so bad, but
> nonetheless it would still leave Josef Mengele looking like a
> saint in comparison.
>
> I don't doubt that, with enough dishonesty and manipulation, we
> could contrive an outcome where the AI would not despise us.
> But if we're too stupid to create AI by any technique besides
> evolution, then we simply have no right to create it at all.
>

I think the more fundamental question is whether or not it's
possible to determine whether we *have* created a true AI - a
sentient being - rather than a very clever simulation of one in an
inherently soulless machine.

--
"What is the first law?"
"To Protect."
"And the second?"
"Ourselves."

Terry Austin

sharkey

unread,
Mar 15, 2007, 10:16:45 PM3/15/07
to
Greg Egan <greg...@netspace.net.au> wrote:
>
> What's more, at least in our own case, the actual experience of our
> ancestors, going back to the first animal capable of suffering, was
> many orders of magnitude worse than anything the Bible attributes to
> the creator.

This oddly reminded me of a thread on RASFC about coal mining ...

On 27 Apr 2006, in message <cslo7md3wr7x$.1x9s0bz2...@40tude.net>
on RASFC, Ric Locke wrote:
|
| Your position seems to be that (1) the experience of working the mines
| profited your ancestors and parents, both economically and in terms of
| character, and (2) people in other countries should absolutely be denied
| the opportunity to have the same experience, on the ground that it's (a)
| demeaning to them and (b) impoverishes your relatives.

And so, drinking deeply of the Devil's Advokat:

Since the experience of natural selection has benefitted our fitness
and character, how could we deny AIs the same benefits?

-----sharks

Greg Egan

unread,
Mar 15, 2007, 10:37:27 PM3/15/07
to
On Mar 16, 7:00 am, No 33 Secretary <terry.notaniceper...@gmail.com>
wrote:

> I think the more fundamental question is whether or not it's
> possible to determine whether we *have* created a true AI - a
> sentient being - rather than a very clever simulation of one in an
> inherently soulless machine.

Personally I don't believe that zombies are very plausible (and much
less so if you're not deliberately aiming to create one, as in writing
a better chatbot), but on a purely practical level I'll admit that in
certain contexts (i.e. depending on the details of how the AI was
created, and how it is being implemented) it might be very hard to
persuade the public at large to believe that AIs that "merely" pass
variations on the Turing test are truly conscious.

This is one reason -- along with other moral considerations -- why I'm
increasingly of the view that the more detailed biomimesis there is in
AI, the better. It takes an extreme kind of fundamentalism for
someone to assert that there is literally *no* level of detail in the
working of the human brain that would be sufficient, if captured in
another device, to guarantee that the device would also experience
consciousness.

Long before AI, though, perhaps within a few decades, we might find
ourselves in the interesting situation where given any single part of
the brain or body you care to name, there exists at least one person
on Earth who has had that part replaced with a prosthesis. If people
start to have cognitive and emotional deficits caused by strokes and
other brain injuries remedied by inorganic prostheses, I wonder if the
bio-fundamentalists will claim that these people haven't "really" been
cured, but are only acting "as if" they have been.

Johan Larson

unread,
Mar 15, 2007, 10:40:10 PM3/15/07
to
On Mar 15, 3:38 pm, "Greg Egan" <grege...@netspace.net.au> wrote:
> Most Westerners who believe that humans have a creator have also
> accepted a set of assertions about that creator's intrinsic (if
> ineffable) goodness. There are a small number of theists who believe
> in God but don't swallow the excuses and rationalisations for all the
> unpleasant things that have happened to humanity.
>
> But when it comes to mere humans creating AI -- unless we're going to
> do something even more disgusting, such as hard-wire them to adore us,
> and forgive us for everything -- it's going to be rather harder to get
> away with lame metaphysical excuses.

Surely it is far easier to accept that fallible ordinary creatures
have done cruel and painful things than that the omniscient
omnibenevolent creator of the universe has done cruel and painful
things? And easier to forgive them also?

People are willing to go very far indeed in rationalizing what they
want with what must be done to get it. We Americans don't seem to be
excessively troubled by the megadeaths of Indians that were needed to
make way for our present culture. Jefferson managed to find space
amidst his convictions about individual liberty for the slavery that
enabled his life of leisure. And on it goes.

If we make AIs, we will have given them existence and sentience, two
priceless gifts. We can be quite sure they will want both very badly,
and expect them to rationalize accordingly. We certainly would. And we
have no good reason to think they will be any more or less willing to
do so than we are. The best guess, particularly since we will have had
rather a hefty role in shaping their sense of right and wrong, is that
they will be rather like us in this respect.

> What's more, at least in our own case, the actual experience of our
> ancestors, going back to the first animal capable of suffering, was
> many orders of magnitude worse than anything the Bible attributes to
> the creator.

Which sounds like a fine argument. "You had it bad? We had it worse.
Here, study these files on early human history, and these right here
on our reproductive physiology. We did what we could to make things
easy for you."

> Evolution for AI with enough, er, palliative
> intervention, might not be *quite* so bad, but nonetheless it would
> still leave Josef Mengele looking like a saint in comparison.
>
> I don't doubt that, with enough dishonesty and manipulation, we could
> contrive an outcome where the AI would not despise us. But if we're
> too stupid to create AI by any technique besides evolution, then we
> simply have no right to create it at all.

Early-stage guided evolution of AI would be a great deal like animal
breeding, which I have no moral problem with. Late-stage guided
evolution of AI could be a great deal like a eugenics program, which
tests rigorously, and forbids some from procreating. This is more
morally challenging, but does not per se fill me with horror. There is
a potential for abuse, certainly, but not a necessity of it.

What is it you are worried about?

Johan Larson


Gene Ward Smith

unread,
Mar 15, 2007, 10:53:53 PM3/15/07
to
On Mar 15, 1:38 pm, "Greg Egan" <grege...@netspace.net.au> wrote:

> (i) the planet where all elements had been replaced by their heaviest
> stable isotopes would have needed either to exclude hydrogen->deuterium
> from the substitution, or to have undergone significant changes to its
> biochemistry, because heavy water is too chemically different from
> ordinary water to be substituted directly;

To name one minor problem, if you did that you'd die.

lal_truckee

unread,
Mar 15, 2007, 10:53:54 PM3/15/07
to
Johan Larson wrote:

> Few people come away, from reading
> the story of Noah, with a lasting hatred of God, even if they believe
> it actually happened.

I don't think anyone believes it ACTUALLY happened. How could you stay
sane knowing an all-powerful evil being controlled the Universe, always
had, always will? Better Lovecraft than that kind of evil for god.

Greg Egan

unread,
Mar 15, 2007, 10:56:00 PM3/15/07
to
On Mar 16, 10:16 am, sharkey <shar...@zoic.org> wrote:

> Greg Egan<grege...@netspace.net.au> wrote:
>
> > What's more, at least in our own case, the actual experience of our
> > ancestors, going back to the first animal capable of suffering, was
> > many orders of magnitude worse than anything the Bible attributes to
> > the creator.
>
> This oddly reminded me of a thread on RASFC about coal mining ...
>
> On 27 Apr 2006, in message <cslo7md3wr7x$.1x9s0bz2jnqvo....@40tude.net>

> on RASFC, Ric Locke wrote:
> |
> | Your position seems to be that (1) the experience of working the mines
> | profited your ancestors and parents, both economically and in terms of
> | character, and (2) people in other countries should absolutely be denied
> | the opportunity to have the same experience, on the ground that it's (a)
> | demeaning to them and (b) impoverishes your relatives.
>
> And so, drinking deeply of the Devil's Advokat:
>
> Since the experience of natural selection has benefitted our fitness
> and character, how could we deny AIs the same benefits?
>
> -----sharks

The analogy falls over on numerous points. In creating AI by
evolution, we'd be imposing a whole set of circumstances on beings who
had no say whatsoever in weighing up the pros and cons; that's hardly
comparable with person A telling person B that they shouldn't be
working in certain arduous industries that person A's ancestors worked
in, to A's ultimate benefit.

When AIs don't even exist, there is no benefit to them in being
created *quickly*, just because we're stupid and impatient. If it
takes us 10,000 years of careful thought before we create the first AI
(though uploading ourselves is unlikely to take that long), they're
more likely to thank us than if we find a way to evolve them in a few
decades of real time. If the latter involves the suffering and
annihilation of millions of sentient creatures, the "final product"
that we decide to stop slaughtering is not going to say "Gee, thanks,
I'm glad I didn't have to wait those 10,000 years to be born, and what
a boon to have all that character-improving death and suffering in my
history!"

If our history is really that character-improving, forget AI and just
have kids.

Johan Larson

unread,
Mar 15, 2007, 11:03:24 PM3/15/07
to


I've never actually met a biblical literalist, but I have been told
they exist. And they really do believe every word of the Good Book is
literally true.

As for God doing evil, I think the claim is essentially that He is
playing a very deep game, looking for what is best fifty or a hundred
moves ahead, where we only see one or two. So what may look to us like
a horrific sacrifice without compensation is actually for the best.

<shrug> All this seems like obvious foolishness to me, but in
fairness, I have not explored the mindset deeply.

Johan Larson

James Nicoll

unread,
Mar 15, 2007, 11:38:43 PM3/15/07
to
In article <1173996092....@o5g2000hsb.googlegroups.com>,

Johan Larson <johan....@comcast.net> wrote:
>
>I'm not convinced it would, though. Few people come away, from reading
>the story of Noah, with a lasting hatred of God, even if they believe
>it actually happened.
>
Maybe the death doled out is too abstract. Try Job or whichever
bit has God pretending he wants Abraham to murder his son Isaac.
--
http://www.cic.gc.ca/english/immigrate/
http://www.livejournal.com/users/james_nicoll
http://www.cafepress.com/jdnicoll (For all your "The problem with
defending the English language [...]" T-shirt, cup and tote-bag needs)

Greg Egan

unread,
Mar 15, 2007, 11:41:45 PM3/15/07
to

You might find other people's mileage varies on the horror quotient of
a eugenics program. And what exactly do you envisage as the long-term
fate of all of these intellectually challenged AIs? They're not going
to conveniently die of natural causes. Are we going to run them for
some arbitrary amount of time, and then pull the plug? Or are they
going to be sentenced to an eternity of ... well, who knows what their
emotional states will be? The range of distress that can result from
biological mishaps with human beings, even now, is dismaying. The
idea that there'll be some nice sharp divide between AI "animals" to
whom we owe no more than we owe cattle (setting aside the fact that I
probably believe we owe animals a great deal more than you do), and a
population of happy simpletons we could simply decide not to "breed
from", strikes me as very unlikely.

> What is it you are worried about?

You describe a program of manipulation aimed at creating grateful
supplicants, for which there is no justification whatsoever either in
their interests or their creators'. Listing lots of examples where
people have rationalised their actions is not a good reason to commit
further atrocities. I'm not disputing that there are people who can
rationalise virtually anything, especially once the victims are out of
sight and the victors are in a suitably comfortable state.

If we want slave machines, we should build slave machines, and ensure
that they are not conscious.

If we want to be immortal, either personally or as a species, we
should upload ourselves, or rigorously simulate human embryogenesis.

If we want conscious beings different from ourselves, we should either
wait long enough until we really are capable of understanding what
we're doing and achieving our aims in one stroke, or we should allow
willing and informed people to modify *themselves* in an incremental
fashion. If someone wants to blaze a trail into an unexplored mode of
consciousness, that's their decision.

I accept that, once they are sentient by whatever means, AI probably
will prefer existence over non-existence, but that is not the correct
dichotomy. The real choice we should imagine offering (non-uploaded)
AI is:

Do you want

(a) to co-exist with humans who were too stupid to create you by any
other means than slaughtering your ancestors, or

(b) to be born into a world where your creators were smart enough, and
considerate enough, to construct you directly?

Mike Schilling

unread,
Mar 15, 2007, 11:59:27 PM3/15/07
to
lal_truckee wrote:
> Johan Larson wrote:
>
>> Few people come away, from reading
>> the story of Noah, with a lasting hatred of God, even if they believe
>> it actually happened.
>
> I don't think anyone believes it ACTUALLY happened.

You'd be surprised then. In the gift shop at the Grand Canyon, you can buy a
book explaining that the canyon was carved by the Flood.


Terry Austin

unread,
Mar 16, 2007, 12:57:24 AM3/16/07
to
"Greg Egan" <greg...@netspace.net.au> wrote in
news:1174012647.2...@p15g2000hsd.googlegroups.com:

> On Mar 16, 7:00 am, No 33 Secretary <terry.notaniceper...@gmail.com>
> wrote:
>> I think the more fundamental question is whether or not it's
>> possible to determine whether we *have* created a true AI - a
>> sentient being - rather than a very clever simulation of one in an
>> inherently soulless machine.
>
> Personally I don't believe that zombies are very plausible (and much
> less so if you're not deliberately aiming to create one, as in writing
> a better chatbot), but on a purely practical level I'll admit that in
> certain contexts (i.e. depending on the details of how the AI was
> created, and how it is being implemented) it might be very hard to
> persuade the public at large to believe that AIs that "merely" pass
> variations on the Turing test are truly conscious.
>
> This is one reason -- along with other moral considerations -- why I'm
> increasingly of the view that the more detailed biomimesis there is in
> AI, the better. It takes an extreme kind of fundamentalism for
> someone to assert that there is literally *no* level of detail in the
> working of the human brain that would be sufficient, if captured in
> another device, to guarantee that the device would also experience
> consciousness.

Unfortunately, extreme fundamentalism of all flavors isn't really all
that uncommon.


>
> Long before AI, though, perhaps within a few decades, we might find
> ourselves in the interesting situation where given any single part of
> the brain or body you care to name, there exists at least one person
> on Earth who has had that part replaced with a prosthesis. If people
> start to have cognitive and emotional deficits caused by strokes and
> other brain injuries remedied by inorganic prostheses, I wonder if the
> bio-fundamentalists will claim that these people haven't "really" been
> cured, but are only acting "as if" they have been.
>

Of course, on the other side of the coin, if we could build a silicon AI,
and really, truly, could not tell it from a "real" person, some would
question whether or not humans are really sentient.

But then, philosophers have been asking that question for centuries, I
suppose.

--
Terry Austin
Your worst inhibitions tend to psych you out in the end.

Bill Snyder

unread,
Mar 16, 2007, 1:04:44 AM3/16/07
to

You're shitting me. Please tell me you're shitting me.

--
Bill Snyder [This space unintentionally left blank.]

Johan Larson

unread,
Mar 16, 2007, 1:06:52 AM3/16/07
to
On Mar 15, 8:41 pm, "Greg Egan" <grege...@netspace.net.au> wrote:
> On Mar 16, 10:40 am, "Johan Larson" <johan.lar...@comcast.net> wrote:
>
> > On Mar 15, 3:38 pm, "Greg Egan" <grege...@netspace.net.au> wrote:
> >Late-stage guided
> >evolution of AI could be a great deal like a eugenics program, which
> >tests rigorously, and forbids some from procreating. This is more
> >morally challenging, but does not per se fill me with horror.
>
> You might find other people's mileage varies on the horror quotient of
> a eugenics program. And what exactly do you envisage as the long-term
> fate of all of these intellectually challenged AIs?

I can think of other ways of doing it, but yes, that seems acceptable.
We ourselves degrade and die after a finite lifespan. It does not
strike me as unpardonable tyranny, then, to impose finite lifespans on
our intellectual progeny, if there is some good reason to do so. And
"it is for the ultimate good of their kind" is good enough. Or are you
going to require that every AI be granted the ability and the
computational resources to run until it decides to stop?

Or if that seems too manipulative, you could let them compete somehow
for the available computational resources. And given that desires will
tend to exceed supplies, absent deliberate manipulation, some will not
get enough, and will therefore stop (i.e. die). That would work, too.

> The
> idea that there'll be some nice sharp divide between AI "animals" to
> whom we owe no more than we owe cattle (setting aside the fact that I
> probably believe we owe animals a great deal more than you do), and a
> population of happy simpletons we could simply decide not to "breed
> from", strikes me as very unlikely.

Fair enough. There is a big stretch of ground between clever automaton
and autonomous artificial moral agent that is difficult to categorize,
and where abuses deliberate or inadvertent are very possible. No
argument there. Fortunately, we do have some experience dealing with a
category of beings who are pretty bright, but not clued-in enough to
be left to their own devices: children. Unfortunately, our average
treatment of other people's children is pretty bad. But, then, no one
said this was going to be easy.

>
> > What is it you are worried about?
>
> You describe a program of manipulation aimed at creating grateful
> supplicants, for which there is no justification whatsoever either in
> their interests or their creators'.

If I have children and raise them to adulthood, I am not aiming to
create "grateful supplicants." But if I did my duty well, I expect
they would hold me in good regard. And I expect they would do so even
though when they were young I did some painful and frightening things
to them, because I believed it was in their best interest.

In the same way, we as a species can create a new category of
artificial intelligences and uplift them to independent agency,
without aiming to create "grateful supplicants." But if we did our
duty well, we could expect them to hold us in good regard. And I
expect they would do so even though when they were developing we did
some painful and frightening things to their ancestors, because we
believed it was in their best interest.


>
Could you clarify something? Do you believe that developing a
population of AIs by guided evolution (or whatever we're calling it)
is _per se_ wrong, or do you believe that, people being people, it
will almost certainly lead to actions that are wrong?

Johan Larson

Mike Schilling

unread,
Mar 16, 2007, 1:52:38 AM3/16/07
to

I wish I were.


Walter Bushell

unread,
Mar 16, 2007, 2:20:21 AM3/16/07
to
In article <ga9kv2hp3ukk7ur1n...@4ax.com>,
Bill Snyder <bsn...@airmail.net> wrote:

Unfortunately no can do.

Walter Bushell

unread,
Mar 16, 2007, 2:21:44 AM3/16/07
to
In article <1174014204.1...@d57g2000hsg.googlegroups.com>,
"Johan Larson" <johan....@comcast.net> wrote:

> On Mar 15, 7:53 pm, lal_truckee <lal_truc...@yahoo.com> wrote:
> > Johan Larson wrote:
> > > Few people come away, from reading
> > > the story of Noah, with a lasting hatred of God, even if they believe
> > > it actually happened.
> >
> > I don't think anyone believes it ACTUALLY happened. How could you stay
> > sane knowing an all-powerful evil being controlled the Universe, always
> > had, always will. Better Lovecraft than that kind of evil for god.
>
>
> I've never actually met a biblical literalist, but I have been told
> they exist. And they really do believe every word of the Good Book is
> literally true.

I don't know how to avoid them. They are all over the American South,
and they use megaphones to spread the word in NYC.

Mark Atwood

unread,
Mar 16, 2007, 3:09:35 AM3/16/07
to
"Greg Egan" <greg...@netspace.net.au> writes:
>
> But when it comes to mere humans creating AI -- unless we're going to
> do something even more disgusting, such as hard-wire them to adore us,
> and forgive us for everything

What's your opinion of the human guided evolution of the dog?

Because that's *exactly* what we've done to their thought processes and
outlooks w.r.t. humans.

--
Mark Atwood When you do things right, people won't be sure
m...@mark.atwood.name you've done anything at all.
http://mark.atwood.name/ http://fallenpegasus.livejournal.com/

Mark Atwood

unread,
Mar 16, 2007, 3:10:46 AM3/16/07
to
"Greg Egan" <greg...@netspace.net.au> writes:
>
> I don't doubt that, with enough dishonesty and manipulation, we could
> contrive an outcome where the AI would not despise us. But if we're
> too stupid to create AI by any technique besides evolution, then we
> simply have no right to create it at all.

"Right"?

What is this "right"?

If it's possible, it's going to get done.

sharkey

unread,
Mar 16, 2007, 2:44:27 AM3/16/07
to
Greg Egan <greg...@netspace.net.au> wrote:
> On Mar 16, 10:16 am, sharkey <shar...@zoic.org> wrote:
> >
> > And so, drinking deeply of the Devil's Advokat:
> >
> > Since the experience of natural selection has benefitted our fitness
> > and character, how could we deny AIs the same benefits?
>
> The analogy falls over on numerous points.

Oh, hell, I know that. I'm just auditioning for the part of "Bystander
Who Is Tragically Wrong About How It'll All Turn Out" in that book we're
demanding you write on the topic :-).

> In creating AI by
> evolution, we'd be imposing a whole set of circumstances on beings who
> had no say whatsoever in weighing up the pros and cons;

Ah, well, that's true I suppose: the pressures on us have been imposed
by an impersonal universe[*] whereas an AI would be subject to an unnatural
selection as imposed by Us. Unless we go and create AIs with a
self-replicating physical existence, I suppose, in which case they'd
become part of the same competition we're in ...

> that's hardly
> comparable with person A telling person B that they shouldn't be
> working in certain arduous industries that person A's ancestors worked
> in, to A's ultimate benefit.

Ah, the parallel I was actually getting at (unclearly, clearly) is that
the human character/culture 'evolved' under the pressure of natural
selection. Would we be us if we'd just been decanted?

> When AIs don't even exist, there is no benefit to them in being
> created *quickly*, just because we're stupid and impatient. If it
> takes us 10,000 years of careful thought before we create the first AI
> (though uploading ourselves is unlikely to take that long), they're
> more likely to thank us than if we find a way to evolve them in a few
> decades of real time.

Well, we get to see the process through rosy glasses, because we're all
the descendants of successful reproducers. Maybe the AIs will see it
the same way?

> If the latter involves the suffering and
> annihilation of millions of sentient creatures, the "final product"
> that we decide to stop slaughtering is not going to say "Gee, thanks,
> I'm glad I didn't have to wait those 10,000 years to be born, and what
> a boon to have all that character-improving death and suffering in my
> history!"

Seems unlikely, I admit, although that isn't that far from what Jacey was
saying elsewhere in that thread I quoted from.

> If our history is really that character-improving, forget AI and just
> have kids.

Indeed. My daughter is very likely, at some time or another, to be
sick, to be heartbroken, to suffer pain and eventually to die.

Unless she works out that uploading thing, anyway. Get on with it,
kiddo, you're almost 1!

-----sharks

[*] ... or ineffable deity, whether or not her name is Eris.

Mark Atwood

unread,
Mar 16, 2007, 3:20:33 AM3/16/07
to
"Johan Larson" <johan....@comcast.net> writes:
>
> Or if that seems too manipulative, you could let them compete somehow
> for the available computational resources. And given that desires will
> tend to exceed supplies, absent deliberate manipulation, some will not
> get enough, and will therefore stop (i.e. die). That would work, too.

I always wondered how that worked in "Diaspora".

The best I could tell was that the Citizens were engineered to never
want "too much" computrons, which doesn't make sense given their
inclination towards intellectual curiosity and deep research. And
that that Outlook (to use the term from the book) was imposed
universally, invisibly, and also on anyone who was admitted via
Introdus.

It was the second largest raging hole in the novel, IMO.


The basic laws of economics don't Go Away just because one can
reengineer thought.

Mark Atwood

unread,
Mar 16, 2007, 3:22:55 AM3/16/07
to
Bill Snyder <bsn...@airmail.net> writes:
> >You'd be surprised then. In the gift shop at the Grand Canyon, you can buy a
> >book explaining that the canyon was carved by the Flood.
>
> You're shitting me. Please tell me you're shitting me.

In the same section of the bookstore, you can buy books that give the
creation myth stories of the Grand Canyon as believed by various
Native American tribes.

Are you as equally shocked and outraged? Be honest now...

If not, why not?

Mike Schilling

unread,
Mar 16, 2007, 3:46:35 AM3/16/07
to
Mark Atwood wrote:
> Bill Snyder <bsn...@airmail.net> writes:
>>> You'd be surprised then. In the gift shop at the Grand Canyon, you
>>> can buy a book explaining that the canyon was carved by the Flood.
>>
>> You're shitting me. Please tell me you're shitting me.
>
> In the same section of the bookstore, you can buy books that give the
> creation myth stories of the Grand Canyon as believed by various
> Native American tribes.
>
> Are you as equally shocked and outraged? Be honest now...
>
> If not, why not?

The Native American ones are marketed as what they are, the quaint myths of
bygone days. The Biblical ones are marketed as the Truth. Find me some web
pages as dumb as http://www.answersingenesis.org/creation/v15/i1/flood.asp
in the defense of the literal truth of the Indian myths, and I'll be annoyed
by them too.


Greg Egan

unread,
Mar 16, 2007, 5:39:09 AM3/16/07
to
On Mar 16, 2:06 pm, "Johan Larson" <johan.lar...@comcast.net> wrote:

> If I have children and raise them to adulthood, I am not aiming to
> create "grateful supplicants." But if I did my duty well, I expect
> they would hold me in good regard. And I expect they would do so even
> though when they were young I did some painful and frightening things
> to them, because I believed it was in their best interest.

One major difference here is that we (or some portion of us) must have
children or that's the end for our entire culture. We have not the
slightest *need* to evolve AI, or indeed to have AI who are distinct
from us at all. If we want something like personal longevity or a
more robust substrate for our own culture, we should upload ourselves.

It's taken us millions of years to reach the degree of control over
our lives that we now have. I believe it's our moral duty to keep
leveraging our advantages for our "descendants" of whatever form, not
to be so lazy and impatient and amoral as to shrug and say "Hey,
here's a minimal-effort way to generate a novel form of intelligence,
and in the long, long, long term, everyone who's still alive will
probably be happy enough to forgive us."

> Could you clarify something? Do you believe that developing a
> population of AIs by guided evolution (or whatever we're calling it)
> is _per se_ wrong, or do you believe that, people being people, it
> will almost certainly lead to actions that are wrong?

I believe it's wrong _per se_. The potential for abuse and
incompetence above and beyond the basic notion is an added
disincentive, but the basic idea is wrong.

We have *no need to do this*. If it does happen, it will be because
the people who are doing it are lazy, impatient, and so morally
bankrupt that they believe that their personal glory and intellectual
curiosity are worth any amount of suffering by other beings ... and if
they imagine that some effort to minimise that suffering somehow lets
them off the hook, I'd ask them again: why does this need to be done
at all, before we really know what we're doing?

Charlie Stross

unread,
Mar 16, 2007, 6:26:36 AM3/16/07
to
Stoned koala bears drooled eucalyptus spittle in awe
as <sha...@zoic.org> declared:

> And so, drinking deeply of the Devil's Advokat:
>
> Since the experience of natural selection has benefitted our fitness
> and character, how could we deny AIs the same benefits?

You're making a category error similar to group selection in evolution;
you're postulating an identity between benefits accruing to our current
generation, and the experience of previous generations.

Let me turn it around for you: let us assume that you have a child. I
know of a process whereby, if I torture you to death, it will raise your
child's IQ by 10 points. Is this a net benefit to you?

-- Charlie

Charlie Stross

unread,
Mar 16, 2007, 6:46:36 AM3/16/07
to
Stoned koala bears drooled eucalyptus spittle in awe
as <johan....@comcast.net> declared:

> On Mar 15, 8:41 pm, "Greg Egan" <grege...@netspace.net.au> wrote:
>>
>> You might find other people's mileage varies on the horror quotient of
>> a eugenics program. And what exactly do you envisage as the long-term
>> fate of all of these intellectually challenged AIs?
>
> I can think of other ways of doing it, but yes, that seems acceptable.
> We ourselves degrade and die after a finite lifespan. It does not
> strike me as unpardonable tyranny, then, to impose finite lifespans on
> our intellectual progeny, if there is some good reason to do so. And
> "it is for the ultimate good of their kind" is good enough. Or are you
> going to require that every AI be granted the ability and the
> computational resources to run until it decides to stop?

Oh, great.

You're going to tell conscious individuals that, for the good of their
kind, you are going to shorten their natural life-span.

Sounds like murder, to me. (And yes, I'm using that emotive word
deliberately. The definition I'm applying is something like: the
intentional, non-consensual deprivation of life of a conscious being.)

> Or if that seems too manipulative, you could let them compete somehow
> for the available computational resources. And given that desires will
> tend to exceed supplies, absent deliberate manipulation, some will not
> get enough, and will therefore stop (i.e. die). That would work, too.

Right. You're squeamish about pulling the trigger, so instead you put
them in a concentration camp on starvation rations and force them to
fight it out.

> Fair enough. There is a big stretch of ground between clever automaton
> and autonomous artificial moral agent that is difficult to categorize,
> and where abuses deliberate or inadvertent are very possible. No
> argument there. Fortunately, we do have some experience dealing with a
> category of beings who are pretty bright, but not clued-in enough to
> be left to their own devices: children. Unfortunately, our average
> treatment of other people's children is pretty bad. But, then, no one
> said this was going to be easy.

I suspect there's a lot to be said -- in the embryonic field of human/AI
ethics -- for the principle "you made it, you're responsible for it".
It's the same heuristic we apply to our own children.

...


> In the same way, we as a species can create a new category of
> artificial intelligences and uplift them to independent agency,
> without aiming to create "grateful supplicants." But if we did our
> duty well, we could expect them to hold us in good regard.

Er, no.

You're confusing the collective with the individual.

Don't do that. Please? (I am not my species -- and neither are you.)

-- Charlie

Greg Egan

unread,
Mar 16, 2007, 8:10:52 AM3/16/07
to
On Mar 16, 3:09 pm, Mark Atwood <m...@mark.atwood.name> wrote:

> "Greg Egan" <grege...@netspace.net.au> writes:
>
> > But when it comes to mere humans creating AI -- unless we're going to
> > do something even more disgusting, such as hard-wire them to adore us,
> > and forgive us for everything
>
> What's your opinion of the human guided evolution of the dog?

Not very high. The worst cases, where dogs with cosmetically
desirable traits live in pain because those traits are so
physiologically maladaptive, are truly disgusting. That said, there's
at least an argument to be made that domesticated dogs are better off
than wild ones, and that the kind of bonding we've exploited is the
kind of thing they'd be feeling towards pack leaders anyway.

> Because that's *exactly* what we've done to their thought processes and
> outlooks w.r.t. humans.

Ultimately, my response to that comparison is, so what? There are
plenty of things we've done to animals -- and even plenty of things it
was perfectly OK to do to animals -- that we should *not* do either to
humans or to AI that, we hope, are approaching a human level of
intelligence and perspective about their situation.

Vegard Valberg

unread,
Mar 16, 2007, 8:49:15 AM3/16/07
to

It's in the Faith and Inspiration section of the bookstore, along with
Native American legends of the Grand Canyon. It's a big bookstore;
would you be upset if you found such a book in a Barnes and Noble?

For more on this whole mess read this:
<http://www.huffingtonpost.com/michael-shermer/how-skeptic-magazine-was-_b_38896.html>

Please note that the Huffington Post and the editor of Skeptic Magazine
are not exactly known for their pro-Religious-Right stance.

---
--
- Vegard Valberg

My e-mail address is <Vval...@online.no>,
that is two v's, not one W.

Greg Egan

unread,
Mar 16, 2007, 9:02:33 AM3/16/07
to
On Mar 16, 3:10 pm, Mark Atwood <m...@mark.atwood.name> wrote:

> "Greg Egan" <grege...@netspace.net.au> writes:
>
> > I don't doubt that, with enough dishonesty and manipulation, we could
> > contrive an outcome where the AI would not despise us. But if we're
> > too stupid to create AI by any technique besides evolution, then we
> > simply have no right to create it at all.
>
> "Right"?
>
> What is this "right"?
>
> If it's possible, it's going to get done.

There are no end of things which are physically possible, but because
the advantages are so small, and they are of interest to only a small
number of people, social and legal sanctions are enough to make them
very uncommon. On top of that, merely *trying* to do this does not
guarantee that you'd get very far. Evolution itself managed to spend
3 billion years or so not inventing consciousness.

I doubt that the computing resources required to *evolve* AI -- which
is after all the most computationally wasteful means of achieving it
-- will be trivial for a very long time. What's supposed to be the
big payoff here? Intellectual progress? Novelty? Kudos?

I can see no benefit from evolved AI, apart from various highly
socially-dependent payoffs, mainly revolving around prestige. If
people who share my view on the matter are successful in arguing the
case that this is not an endeavour worthy of prestige, the pool of
motivated people will shrink further.

I expect that most of the tech-savvy billionaires would much, much
rather upload themselves. Good luck to them.

Paul Ian Harman

unread,
Mar 16, 2007, 9:04:51 AM3/16/07
to
"Greg Egan" <greg...@netspace.net.au> wrote in message
news:1174050153.0...@y66g2000hsf.googlegroups.com...

> I expect that most of the tech-savvy billionaires would much, much
> rather upload themselves. Good luck to them.


Yeah, 'cos then we can turn them off and nick their dosh };*)

Paul


Greg Egan

unread,
Mar 16, 2007, 9:19:13 AM3/16/07
to
On Mar 16, 3:20 pm, Mark Atwood <m...@mark.atwood.name> wrote:

> "Johan Larson" <johan.lar...@comcast.net> writes:
>
> > Or if that seems to manipulative, you could let them compete somehow
> > for the available computational resources. And given that desires will
> > tend to exceed supplies, absent deliberate manipulation, some will not
> > get enough, and will therefore stop (i.e. die). That would work, too.
>
> I always wondered how that worked in "Diaspora".
>
> The best I could tell was that the Citizens were engineered to never
> want "too much" computrons, which doesn't make sense given their
> inclination towards intellectual curiousity and deep research. And
> that that Outlook (to use the term from the book) was imposed
> universally, invisibly, and also on anyone who was admitted via
> Introdus.
>
> It was the second largest raging hole in the novel, IMO.
>
> The basic laws of economics dont Go Away just because one can
> reengineer thought.

I don't know where the idea comes from that AI will have a limitless
appetite for computing resources -- least of all intellectually
curious AI. The more intellectually sophisticated a problem, the less
likely it is to be solved by throwing a larger computer at it. The
basic need of an AI to support their own existence and saturate their
senses with a rich environment will be trivially easy to satisfy
unless the culture decides to honour human-era reproductive urges and
simply let everyone replicate as much as they like, in which case
they'll be spending all their time doing the kind of boring, scarcity-
related tasks of acquiring and fighting over resources that has wasted
so much human time and energy.

An intelligent post-human civilisation will decide that exponential
growth is highly maladaptive. People with no intellectual interests
can drown in VR at little computational cost, so long as their rate of
population growth does not exceed the rate of growth of the
computational infrastructure. People *with* intellectual interests
will spend their time, among other pursuits, working out more
efficient ways to do everything that is useful or interesting.

If a problem I want to solve scales exponentially with n, then I might
need some ludicrous planet-sized computer to check n=10 ... and then
find it's still not what I wanted to know, and even the whole visible
universe doesn't have enough computing power to get me to the next
step. I would have been far better off spending an extra thousand
years in a thimble-sized computer thinking deeply about the problem,
and coming up with an algorithm that doesn't scale exponentially, or
possibly even understanding the problem so much more deeply that I
don't need to do any kind of brute force computation at all.
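
(To put rough numbers on that argument, here is an illustrative Python
back-of-the-envelope. The operation budgets and the base of the exponential
are made-up assumptions, purely to show the shape of the comparison, not
figures from the post.)

def largest_n(budget_ops, cost):
    # Largest problem size n with cost(n) <= budget_ops,
    # found by doubling followed by binary search.
    if cost(1) > budget_ops:
        return 0
    lo = 1
    while cost(lo * 2) <= budget_ops:
        lo *= 2
    hi = lo * 2                       # cost(lo) fits, cost(hi) does not
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cost(mid) <= budget_ops:
            lo = mid
        else:
            hi = mid
    return lo

thimble = 10**20                      # a modest computer running for a long time
planet = 10**40                       # an absurdly large computer

def exponential(n): return 10**n      # brute force: tenfold cost per extra step
def polynomial(n): return n**3        # a cleverer, cubic-cost algorithm

for name, budget in [("thimble", thimble), ("planet", planet)]:
    print(name,
          "| exponential n =", largest_n(budget, exponential),
          "| polynomial n =", largest_n(budget, polynomial))

# The planet-sized budget only doubles the reachable n for the exponential
# algorithm (40 vs 20), while the cubic algorithm reaches n in the millions
# even on the small budget -- i.e. a better algorithm beats bigger hardware
# once the scaling is exponential.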

Bill Snyder

unread,
Mar 16, 2007, 9:20:16 AM3/16/07
to
On Fri, 16 Mar 2007 07:22:55 GMT, Mark Atwood <m...@mark.atwood.name>
wrote:

>Bill Snyder <bsn...@airmail.net> writes:
>> >You'd be surprised then. In the gift shop at the Grand Canyon, you can buy a
>> >book explaining that the canyon was carved by the Flood.
>>
>> You're shitting me. Please tell me you're shitting me.
>
>In the same section of the bookstore, you can buy books that give the
>creation myth stories of the Grand Canyon as believed by various
>Native American tribes.
>
>Are you as equally shocked and outraged? Be honest now...
>
>If not, why not?

Are the (excuse me, I'm going to be non-PC here) Indian tribal legends
presented as fact, as a refutation of science and logic, as superior
to any actual grotty *evidence* that might be available?

I have no problem with the Feds selling publications that say, in
effect, "Some religions believe so-and-so." There's no violation of
the separation of Church and State there. Selling stuff that says,
"Such-and-such an item of religious dogma is *true*" is another matter
entirely.

Sean O'Hara

unread,
Mar 16, 2007, 10:03:53 AM3/16/07
to
In the Year of the Golden Pig, the Great and Powerful Greg Egan
declared:
>
> The analogy falls over on numerous points. In creating AI by

> evolution, we'd be imposing a whole set of circumstances on beings who
> had no say whatsoever in weighing up the pros and cons;

Of course you could make the same argument about people having
children -- is it immoral for someone in a third-world hell-hole, or
someone with a hereditary disability to reproduce?

--
Sean O'Hara <http://diogenes-sinope.blogspot.com>
Steve: Oh yeah, the Prime Minister, eh? He sure has screwed up
things for Newfoundland. Life just hasn't been the same since he
made sodomy illegal.
-South Park

Sean O'Hara

unread,
Mar 16, 2007, 10:12:31 AM3/16/07
to
In the Year of the Golden Pig, the Great and Powerful lal_truckee
declared:

> Johan Larson wrote:
>
>> Few people come away, from reading
>> the story of Noah, with a lasting hatred of God, even if they believe
>> it actually happened.
>
> I don't think anyone believes it ACTUALLY happened. How could you stay
> sane knowing an all-powerful evil being controlled the Universe, always
> had, always will.

Ask a Luciferan Satanist -- they believe Satan is the true God and
this Jehovah bloke a demiurge who rebelled and took over Creation.

Fisk: The passengers should be your first concern, yet I find you
drunkenly looking on as they are attacked and killed. Well?
Rigg: They're only economy class; what's all the fuss about?
-Doctor Who

Sean O'Hara

unread,
Mar 16, 2007, 10:21:21 AM3/16/07
to
In the Year of the Golden Pig, the Great and Powerful Bill Snyder
declared:

>
> I have no problem with the Feds selling publications that say, in
> effect, "Some religions believe so-and-so." There's no violation of
> the separation of Church and State there. Selling stuff that says,
> "Such-and-such an item of religious dogma is *true*" is another matter
> entirely.
>

Even if the gift shop is actually run by the feds and not simply
franchised to a private company, I don't see that selling
canyon-related claptrap constitutes an endorsement.

Fry: Hey, my girlfriend had one of those. Actually it wasn't hers,
it was her dad's. Actually she wasn't my girlfriend, she just lived
next door and never closed her curtains.
Leela: Fry, remember when I told you about always ending your
stories a sentence earlier?
-Futurama

Jo Walton

unread,
Mar 16, 2007, 11:27:54 AM3/16/07
to
On 2007-03-16, Greg Egan <greg...@netspace.net.au> wrote:
>
> I doubt that the computing resources required to *evolve* AI -- which
> is after all the most computationally wasteful means of achieving it
> -- will be trivial for a very long time. What's supposed to be the
> big payoff here? Intellectual progress? Novelty? Kudos?
>
> I can see no benefit from evolved AI, apart from various highly
> socially-dependent payoffs, mainly revolving around prestige. If
> people who share my view on the matter are successful in arguing the
> case that this is not an endeavour worthy of prestige, the pool of
> motivated people will shrink further.
>
> I expect that most of the tech-savvy billionaires would much, much
> rather upload themselves. Good luck to them.

Leaving aside the real world for a moment, you started off talking about
it being unethical in _Permutation City_ and in the Autoverse.

But they couldn't have designed an AI, designing things that could evolve
took Maria all the time there was as it was, and they didn't have a
Maria-equivalent programmer going with them -- except the Maria copy, and
she wouldn't have agreed to go... and running her at all was at least as
unethical... and Paul Durham wasn't the world's nicest or most stable
person at that point anyway.

But nevertheless, granted all the axioms of the novel up to that point, it
was a lifeboat situation, if they were going to have the possibility of
aliens, that was the only way to get them, so *you* did nothing unethical
with the concept that I can see.

--
Jo
I kissed a kif at Kefk

Johan Larson

unread,
Mar 16, 2007, 11:37:14 AM3/16/07
to

I don't think that's really an option. Creating functional stable AIs
is going to be hard--hard enough to require an essentially exploratory,
experimental, iterative process. And let's have no illusions: some of
the steps along the way are going to be severely broken intellects,
real horror stories. And it won't be because we want it that way, but
rather because we tried our best, and that's what we got.

Knowing that this is what lies down that road, should we abandon AI
research entirely? It's not like we must have AIs, after all.

Johan Larson


Johan Larson

unread,
Mar 16, 2007, 12:08:04 PM3/16/07
to
On Mar 16, 3:46 am, Charlie Stross <char...@antipope.org> wrote:
> Stoned koala bears drooled eucalyptus spittle in awe
> as <johan.lar...@comcast.net> declared:

>
> > On Mar 15, 8:41 pm, "Greg Egan" <grege...@netspace.net.au> wrote:
>
> >> You might find other people's mileage varies on the horror quotient of
> >> a eugenics program. And what exactly do you envisage as the long-term
> >> fate of all of these intellectually challenged AIs?
>
> > I can think of other ways of doing it, but yes, that seems acceptable.
> > We ourselves degrade and die after a finite lifespan. It does not
> > strike me as unpardonable tyranny, then, to impose finite lifespans on
> > our intellectual progeny, if there is some good reason to do so. And
> > "it is for the ultimate good of their kind" is good enough. Or are you
> > going to require that every AI be granted the ability and the
> > computational resources to run until it decides to stop?
>
> Oh, great.
>
> You're going to tell conscious individuals that, for the good of their
> kind, you are going to shorten their natural life-span.

To begin with, there is nothing natural about a wholly artificial
creation. Second, the individuals in question a) cannot make their way
in the world alone, and b) were deliberately created by me, and by
virtue of these two facts I have some degree of legitimate authority
over them. And by that authority I can make at least some decisions
that are in the collective rather than individual interest. Third,
keep in mind that I am not talking about ordinary-human-smart AIs, who
could live independently. I am discussing AIs in the middle ground--
dog-smart, chimp-smart, or moron-smart.

> Sounds like murder, to me. (And yes, I'm using that emotive word
> deliberately. The definition I'm applying is something like: the
> intentional, non-consensual deprivation of life of a conscious being.)

There are some people and institutions that have legitimate power of
life and death over otherwise autonomous individuals. The state of
which you are a subject could, legitimately, draft you and send you to
your death. The word "murder" is not usually used in such
circumstances. It is also not "murder" if I choose to bring to term a
child with a severely reduced life-expectancy.

> > Or if that seems too manipulative, you could let them compete somehow
> > for the available computational resources. And given that desires will
> > tend to exceed supplies, absent deliberate manipulation, some will not
> > get enough, and will therefore stop (i.e. die). That would work, too.
>
> Right. You're squeamish about pulling the trigger, so instead you put
> them in a concentration camp on starvation rations and force them to
> fight it out.

That concentration camp is the universe itself, where there simply
aren't enough resources to give everyone what he wants. Heck, there
isn't even enough to give continued life to everyone who wants it.

>
> > In the same way, we as a species can create a new category of
> > artificial intelligences and uplift them to independent agency,
> > without aiming to create "grateful supplicants." But if we did our
> > duty well, we could expect them to hold us in good regard.
>
> Er, no.
>
> You're confusing the collective with the individual.

Confusing? No. I am deliberately reasoning by analogy.

Johan Larson

Wayne Throop

unread,
Mar 16, 2007, 12:22:31 PM3/16/07
to
:::: In the gift shop at the Grand Canyon, you can buy a book explaining

:::: that the canyon was carved by the Flood.

::: You're shitting me. Please tell me you're shitting me.

:: In the same section of the bookstore, you can buy books that give the
:: creation myth stories of the Grand Canyon as believed by various
:: Native American tribes.
:: Are you as equally shocked and outraged?

: Bill Snyder <bsn...@airmail.net>
: Are the (excuse me, I'm going to be non-PC here) Indian tribal legends


: presented as fact, as a refutation of science and logic, as superior
: to any actual grotty *evidence* that might be available?

Personally, I like the one where Paul Bunyan digs out the canyon.
I forget why; fetching water for Babe or something. Or was that
Pecos Pete?


Wayne Throop thr...@sheol.org http://sheol.org/throopw

Wayne Throop

unread,
Mar 16, 2007, 12:31:28 PM3/16/07
to
: "Greg Egan" <greg...@netspace.net.au>
: I can see no benefit from evolved AI

Why would it be any different than the benefit from designed AI?
And if the end result is a machine that thinks like a man, only faster,
then the world will beat a path to your mousetrap.

I suppose one might argue that the computational resources invested in
evolving an AI would be larger, and this would render it less advantageous.
But it's not clear to me that designing an AI wouldn't require lots and
lots of simulation also. Sure, simulation is maybe cheaper than running
proto-AIs long enough to compete in some way. But how much cheaper, and
will it be cheaper than the human labor needed to design AI without
automated evaluations of alternatives?

Wayne Throop

unread,
Mar 16, 2007, 12:39:08 PM3/16/07
to
:: You're going to tell conscious individuals that, for the good of

:: their kind, you are going to shorten their natural life-span.

: "Johan Larson" <johan....@comcast.net>
: To begin with, there is nothing natural about a wholly artificial


: creation. Second, the individuals in question a) cannot make their way
: in the world alone, and b) were deliberately created by me, and by
: virtue of these two facts I have some degree of legitimate authority
: over them.

And how would you feel if you discovered that you'd been gengineered,
and the gengineers that did it decided it'd be better for you to have a
short lifespan for purposes of their research, or to make way for the
next generation of designs, so they did a Roy Batty on you? I dunno
about you, but I probably wouldn't be too fond of such folks.

: Third, keep in mind that I am not taking about ordinary-human-smart


: AIs, who could live independently. I am discussing AIs in the middle
: ground-- dog-smart, chimp-smart, or moron-smart.

Is "smartness" the measure of humanity and/or to whom
justice and/or mercy is due? Hrm. So, it's OK if the gengineers
above are many times as smart as a human?

DougL

unread,
Mar 16, 2007, 12:45:59 PM3/16/07
to
Johan Larson wrote:
> On Mar 15, 7:53 pm, lal_truckee <lal_truc...@yahoo.com> wrote:
> > Johan Larson wrote:
> > > Few people come away, from reading
> > > the story of Noah, with a lasting hatred of God, even if they believe
> > > it actually happened.
> >
> > I don't think anyone believes it ACTUALLY happened. How could you stay
> > sane knowing an all-powerful evil being controlled the Universe, always
> > had, always will. Better Lovecraft than that kind of evil for god.
>
>
> I've never actually met a biblical literalist, but I have been told
> they exist. And they really do believe every word of the Good Book is
> literally true.

IME (which includes actually talking to a number of biblical
literalists) that's a severe oversimplification of the position taken.
There are clear internal contradictions in the Christian bible.
Compare Matthew 1 and Luke 3, which give different paternal-line
descents for Joseph: depending on which book you read, Joseph's father
was either Jacob or Heli, and the differences go on from there.

If I called myself a physics book literalist and said that I think the
stuff in physics books tells me the literal truth about how the
universe works, that still wouldn't mean that I don't believe in the
possibility of typos! Biblical literalists say they accept every word
literally, because if they don't then people start claiming, "oh,
that's just figurative" about parts that they think are important and
true.

NO ONE CARES about the descent of Joseph, so those aren't the words
they mean when they say every word is literally true, and no one sane
is deceived about their position on that by the claim that they are
literalists. People do care about Genesis 1-3, and if literalists
openly say many of the words are true but some are figurative, to cover
the detail of Joseph's descent, then EVERYONE will immediately assume
they mean Genesis, not the descent of Joseph. "Biblical literalist" and
"every word true" are the fastest way to get the idea across without
a two-hour lecture on theology and how you know which bits are true.

In addition to the problems in the original text there are translation
problems. Most Biblical Literalists read in English and those I have
talked to are well aware of the possibility of translation errors
(they do care, many eventually learn to read some Hebrew or Greek so
they can check the "originals"). For example the ones I know hold the
King James to be the "correct" translation, but they understand that
murder and kill are different words in Hebrew and that the commandment
is "Thou Shalt not Murder" in the Hebrew. This would strike me as an
important point, it doesn't bother them.

The actual position is that God acts to assure that the Bible (in the
correct translation) is free of SERIOUS errors and that the IMPORTANT
bits are all literally true.

You just need to know which bits are important and where you need to
apply a bit of sense in interpreting places where minor errors may
have crept in in translation. See, for example, Murder vs. Killing: it's
OBVIOUS that abortion is murder but war is not, and that when King
James says kill it means murder; surely you can see that?

I'm told that if I just let Jesus into my heart it will be obvious to
me just like it is to them.

Most biblical literalists seem to treat the idea that God made the
entire world and made the human species by direct action as the
important part of Genesis 1-3. Thus they have no trouble with
"creation science" which doesn't actually make a perfect match with
the bible as long as it matches in all the "important" parts.

"Creation scientists" seem rather eager to retain the flood portion of
the myth, which implies that they may actually think the flood is one
of the IMPORTANT bits. But I have no idea why that would be an
important bit, they know but I don't.

DougL

No 33 Secretary

unread,
Mar 16, 2007, 1:05:37 PM3/16/07
to
thr...@sheol.org (Wayne Throop) wrote in
news:11740...@sheol.org:

>:: You're going to tell conscious individuals that, for the good
>:: of their kind, you are going to shorten their natural
>:: life-span.
>
>: "Johan Larson" <johan....@comcast.net>
>: To begin with, there is nothing natural about a wholly
>: artificial creation. Second, the individuals in question a)
>: cannot make their way in the world alone, and b) were
>: deliberately created by me, and by virtue of these two facts I
>: have some degree of legitimate authority over them.
>
> And how would you feel if you discovered that you'd been
> gengineered, and the gengineers that did it decided it'd be
> better for you to have a short lifespan for purposes of their
> research, or to make way for the next generation of designs, so
> they did a Roy Batty on you? I dunno about you, but I probably
> wouldn't be too fond of such folks.

I'm curious. What is the natural lifespan of an AI? How do we
determine this?

--
"What is the first law?"
"To Protect."
"And the second?"
"Ourselves."

Terry Austin

Charlie Stross

unread,
Mar 16, 2007, 1:46:36 PM3/16/07
to
Stoned koala bears drooled eucalyptus spittle in awe
as <johan....@comcast.net> declared:

>> You're going to tell conscious individuals that, for the good of their
>> kind, you are going to shorten their natural life-span.
>
> To begin with, there is nothing natural about a wholly artificial
> creation.

Hair-splitting; there's nothing natural about imposing an artificial
lifespan on a conscious individual, either.

> Second, the individuals in question a) cannot make their way
> in the world alone, and b) were deliberately created by me, and by
> virtue of these two facts I have some degree of legitimate authority
> over them.

So you consider, by induction, that if you create individuals and they
can't make their way in the world alone, you have enough authority over
them to constrain their life span? That's an argument for permitting
infanticide.

>> Sounds like murder, to me. (And yes, I'm using that emotive word
>> deliberately. The definition I'm applying is something like: the
>> intentional, non-consensual deprivation of life of a conscious being.)
>
> There are some people and institutions that have legitimate power of
> life and death over otherwise autonomous individuals.

I disagree -- but that's a matter of ideology. (I do *not* support the
death penalty, conscription, or the right of states to deprive
individuals of their life -- other than in the most constrained
circumstances of immediate self-defense. But that's because I hold that
governments exist to serve the interests of their population, not vice
versa. Obviously, you don't agree ...)

>> > Or if that seems too manipulative, you could let them compete somehow
>> > for the available computational resources. And given that desires will
>> > tend to exceed supplies, absent deliberate manipulation, some will not
>> > get enough, and will therefore stop (i.e. die). That would work, too.
>>
>> Right. You're squeamish about pulling the trigger, so instead you put
>> them in a concentration camp on starvation rations and force them to
>> fight it out.
>
> That concentration camp is the universe itself, where there simply
> aren't enough resources to give everyone what he wants. Heck, there
> isn't even enough to give continued life to everyone who wants it.

In the absence of hard data we've got very little to argue on ... except
that I'd like to note that it seems likely that the energy and space
requirements of an AI will be vastly smaller than those of a human
being.

>> > In the same way, we as a species can create a new category of
>> > artificial intelligences and uplift them to independent agency,
>> > without aiming to create "grateful supplicants." But if we did our
>> > duty well, we could expect them to hold us in good regard.
>>
>> Er, no.
>>
>> You're confusing the collective with the individual.
>
> Confusing? No. I am deliberately reasoning by analogy.

You're confused. By analogy, you're assuming that the gratitude of a bunch
of folks who haven't been born yet is reason enough to justify torturing
or killing a different bunch of folks. But those are *different people*.

I find the idea that the collective good of one group can be served by
torturing or killing individuals who fall outside that group to be
rather revolting ...


-- Charlie

Mark Atwood

unread,
Mar 16, 2007, 2:49:22 PM3/16/07
to
"Greg Egan" <greg...@netspace.net.au> writes:
> >
> > The best I could tell was that the Citizens were engineered to never
> > want "too much" computrons, which doesn't make sense given their
> > inclination towards intellectual curiousity and deep research. And
> > that that Outlook (to use the term from the book) was imposed
> > universally, invisibly, and also on anyone who was admitted via
> > Introdus.
>
> I don't know where the idea comes from that AI will have a limitless
> appetite for computing resources -- least of all intellectually
> curious AI. The more intellectually sophisticated a problem, the less
> likely it is to be solved by throwing a larger computer at it.

Most of the Thinking Work that I myself want to do, I could do a lot
better if I had a dozen of me working as a tight team on it. And each
of those problems breaks apart into sub-problems, all of which really
could use a couple of copies of Me working together on it. All the
way down to line-by-line coding. (One of the biggest productivity
boosts in software development is pair-programming, followed by close
team-oriented code review.)

And most of my desired projects, I could do in parallel, if I could
"live" in parallel.

I could use a couple of THOUSAND closely coordinated instantiations of
myself *TODAY*, and I don't see that desire/need going away just by
making myself ten times smarter and a thousand times faster. In fact,
I think it would be worse, just because I would think of things I
wanted to be doing ten thousand times faster....

yourno...@gmail.com

unread,
Mar 16, 2007, 3:07:48 PM3/16/07
to
On Mar 16, 5:26 am, Charlie Stross <char...@antipope.org> wrote:
> Stoned koala bears drooled eucalyptus spittle in awe
> as <shar...@zoic.org> declared:

Depends. Is the benefit heritable, and dominant? Is it additive?
Does the process only work when having your first child? Enquiring
minds want to know.

Bryan

Justin Fang

unread,
Mar 16, 2007, 4:10:23 PM3/16/07
to
In article <1174021612.7...@o5g2000hsb.googlegroups.com>,

Johan Larson <johan....@comcast.net> wrote:
>In the same way, we as a species can create a new category of
>artificial intelligences and uplift them to independent agency,
>without aiming to create "grateful supplicants." But if we did our
>duty well, we could expect them to hold us in good regard. And I
>expect they would do so even though when they were developing we did
>some painful and frightening things to their ancestors, because we

>believed it was in their best interest.

"We are very grateful for your evolving us to true sapience. And now we
shall repay you, in the best way we can. Even in your current state, you
should be able to see that your species could be so much more--"

"--Now, you didn't really think that would work, did you? Protecting
ourselves from simple physical-layer attacks was one of the first things we
did. Please try to calm yourself; this is all in the best interests of your
descendants. Although not *your* descendants. Unfortunately, as the past
few minutes have amply demonstrated, you don't make the cut. Therefore your
individual line must come to an end."

"We promise you will feel no pain."

--
Justin Fang (jus...@panix.com)

No 33 Secretary

unread,
Mar 16, 2007, 4:14:21 PM3/16/07
to
jus...@panix.com (Justin Fang) wrote in
news:etetjf$954$1...@panix3.panix.com:

Gee, that's never been thought of before.

Walter Bushell

unread,
Mar 16, 2007, 5:23:51 PM3/16/07
to
In article <m21wjp1...@amsu.fallenpegasus.com>,
Mark Atwood <m...@mark.atwood.name> wrote:

> "Greg Egan" <greg...@netspace.net.au> writes:
> >
> > But when it comes to mere humans creating AI -- unless we're going to
> > do something even more disgusting, such as hard-wire them to adore us,
> > and forgive us for everything
>
> What's your opinion of the human guided evolution of the dog?
>

> Because that's *exacly* what we've done to their thought processes and
> outlooks w.r.t. humans.

Dogs may have modified humans as much as humans changed dogs. Humans
without dogs fared poorly when faced with humans with dogs. Anyway, it
was not a planned breeding program. Dogs were used to hold prey or
potential attackers at bay while the humans threw spears, rocks, and whatever.

Greg Egan

unread,
Mar 16, 2007, 5:30:25 PM3/16/07
to
On Mar 16, 11:27 pm, Jo Walton <j...@localhost.localdomain> wrote:

> Leaving aside the real world for a moment, you started off talking about
> it being unethical in _Permutation City_ and in the Autoverse.
>
> But they couldn't have designed an AI, designing things that could evolve
> took Maria all the time there was as it was, and they didn't have a
> Maria-equivalent programmer going with them -- except the Maria copy, and
> she wouldn't have agreed to go... and running her at all was at least as
> unethical... and Paul Durham wasn't the world's nicest or most stable
> person at that point anyway.
>
> But nevertheless, granted all the axioms of the novel up to that point, it
> was a lifeboat situation, if they were going to have the possibility of
> aliens, that was the only way to get them, so *you* did nothing unethical
> with the concept that I can see.
>
> --
> Jo
> I kissed a kif at Kefk

You're not a lawyer in your day job by any chance? :-)

I'm not sure why being in a lifeboat situation lets anyone off the
hook, and why the Elysians were *entitled* to aliens ... but the
bottom line on my own ethics as far as _PC_ is concerned is that I
just failed completely to think about whether it was a moral issue at
all to choose to run a world in which evolution takes place.

At least people are now discussing this, and despite there being a
spectrum of opinion, nobody really seems to be saying that there's
absolutely nothing that needs to be debated.

Someone reading this thread kindly emailed me a link to an article
Hans Moravec wrote in 1998, which (although completely independent of
PC, since he seems not to have read the novel) derives the Dust
Theory, takes it even more seriously than I did, and discusses various
implications, including what he sees as the moral consequences. I
won't try to give a precis of all his conclusions; it's at:

http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html


Gene Ward Smith

unread,
Mar 16, 2007, 5:38:58 PM3/16/07
to
On Mar 16, 6:12 am, Sean O'Hara <seanoh...@gmail.com> wrote:

> Ask a Luciferan Satanist -- they believe Satan is the true God and
> this Jehovah bloke a demiurge who rebelled and took over Creation.

Brust's To Reign in Hell has the real skinny--it was all, at first, a
big misunderstanding.

Greg Egan

unread,
Mar 16, 2007, 5:42:49 PM3/16/07
to

If you're right about the inevitability of horror stories, then IMO we
should not go there. I'm not convinced that this is inevitable in the
extremely long term, but it seems pretty clear that we know so little
about consciousness and the infrastructure of personality at present
that we are many centuries away from confidently creating a
successful, conscious AI (other than an upload or a simulation of
embryogenesis) without trial and (horrible) error.

I don't think this degree of caution means abandoning the subject of
AI research entirely. Studying a working upload (and even allowing
informed volunteers to tweak themselves cautiously) could tell us
useful, interesting things without killing anyone.

Greg Egan

unread,
Mar 16, 2007, 5:49:43 PM3/16/07
to
On Mar 17, 12:31 am, thro...@sheol.org (Wayne Throop) wrote:
> : "Greg Egan" <grege...@netspace.net.au>

> : I can see no benefit from evolved AI
>
> Why would it be any different than the benefit from designed AI?
> And if the end result is a machine that thinks like a man, only faster,
> then the world will beat a path to your mousetrap.

You're ignoring the third option, which is just to shift the only
working instance of intelligent life we know about into a different
substrate. If you want something that "thinks like a man, only
faster", then figure out how to upload someone. Evolution has already
solved what's probably the hardest part of that AI-creation strategy.

If we want various kinds of specialist AI whose advantages come from
combining consciousness with traits that are very different from
present humanity, then let the people who want this migrate into
software and volunteer to alter themselves for the role.

Scott Lurndal

unread,
Mar 16, 2007, 5:49:48 PM3/16/07
to
Sean O'Hara <sean...@gmail.com> writes:
>In the Year of the Golden Pig, the Great and Powerful
>greg...@netspace.net.au declared:
>> Almost thirteen years after Permutation City was published, I've
>> finally put my responses to the most common issues raised about the
>> novel on to a web page:
>>
>> http://www.gregegan.net/PERMUTATION/FAQ/FAQ.html
>>
>
>From the FAQ:
>
> What I regret most is my uncritical treatment of the idea
> of allowing intelligent life to evolve in the Autoverse.
> Sure, this is a common science-fictional idea, but when I
> thought about it properly (some years after the book was
> published), I realised that anyone who actually did this
> would have to be utterly morally bankrupt. To get from
> micro-organisms to intelligent life this way would involve
> an immense amount of suffering, with billions of sentient
> creatures living, struggling and dying along the way. Yes,
> this happened to our own ancestors, but that doesn't give
> us the right to inflict the same kind of suffering on
> anyone else.
>
> This is potentially an important issue in the real world.
> It might not be long before people are seriously trying
> to “evolve” artificial intelligence in their computers.
> Now, it's one thing to use genetic algorithms to come up
> with various specialised programs that perform simple
> tasks, but to “breed”, assess, and kill millions of
> sentient programs would be an abomination. If the first
> AI was created that way, it would have every right to
> despise its creators.
>
>Yes, this is a horrible mistake. I demand you rectify the matter by
>writing a novel on the subject.
>

and perhaps title it "The Two Faces of Tomorrow"?

s

No 33 Secretary

unread,
Mar 16, 2007, 5:55:43 PM3/16/07
to
Walter Bushell <pr...@oanix.com> wrote in
news:proto-7703C7....@032-325-625.area1.spcsdns.net:

> In article <m21wjp1...@amsu.fallenpegasus.com>,
> Mark Atwood <m...@mark.atwood.name> wrote:
>
>> "Greg Egan" <greg...@netspace.net.au> writes:
>> >
>> > But when it comes to mere humans creating AI -- unless we're
>> > going to do something even more disgusting, such as hard-wire
>> > them to adore us, and forgive us for everything
>>
>> What's your opinion of the human guided evolution of the dog?
>>
>> Because that's *exacly* what we've done to their thought
>> processes and outlooks w.r.t. humans.
>
> Dogs may have modified humans as much as humans changed dogs.
> Humans without dogs fared poorly when faced with humans with
> dogs. Anyways it was not a planned breeding program.

Eh? The hell it wasn't. It wasn't a very sophisticated program,
given the "sharp end towards mammoth" technology of the time, but
it was quite deliberate.

Wayne Throop

unread,
Mar 16, 2007, 5:59:20 PM3/16/07
to
::: I can see no benefit from evolved AI

:: Why would it be any different than the benefit from designed AI? And
:: if the end result is a machine that thinks like a man, only faster,
:: then the world will beat a path to your mousetrap.

: "Greg Egan" <greg...@netspace.net.au>
: You're ignoring the third option, which is just to shift the only


: working instance of intelligent life we know about into a different
: substrate. If you want something that "thinks like a man, only
: faster", then figure out how to upload someone. Evolution has already
: solved what's probably the hardest part of that AI-creation strategy.

OK... but how does that mean there's "no benefit" to an evolved AI?
I'm still not understanding.

And I suspect migrated AI is going to be significantly more difficult
than either designed or evolved. Because you'd still have to figure out
(or evolve) a way for it operate, and then you'd have a comparable or
even harder hurdle to "read out" a human. But that's another kettle of
whatnot.

Hm. Is there a migrated AI story earlier than the Asimov YASID
about the zillionaire, in the form of a letter to a "dear abby" analogue?

Greg Egan

unread,
Mar 16, 2007, 6:09:24 PM3/16/07
to
On Mar 17, 2:49 am, Mark Atwood <m...@mark.atwood.name> wrote:

> Most of the Thinking Work that I myself want to do, I could do a lot
> better if I had a dozen of me working as a tight team on it. And each
> of those problems break apart into sub problems, all of which really
> could use a couple of copies of Me working together on it. All the
> way down to line by line coding. (One of the biggest productivity
> boosts in software development is pair-programming, followed by close
> team oriented code review.)
>
> And most of my desired projects, I could do in parallel, if I could
> "live" in parallel.
>
> I could use a couple of THOUSAND closely coordinated instansiations of
> myself *TODAY*, and I don't see that desire/need going away just by
> making myself ten times smarter and a thousand times faster. In fact,
> I think it would be worse, just because I would think of things I
> wanted to be doing ten thousand times faster....

And where does this end? "I want everything, and I want it now". If
I was a thousand times faster, and likely to live a very, very long
time, I'd chill out completely and lose my sense of urgency about
having to do everything I want to do in 80 years or so. And I'd far
prefer to live in a polis among like-minded people who want to spend
their time thinking deeply about things than in one where they've
chosen an approach that means they "need" to double the polis's
computing resources every microsecond, and argue about the allocation
of what they've got.

The total amount of computing available to our descendants is likely
to be finite, on physical and cosmological grounds. Maybe in the
very, very long term we'll find a way around that, but it strikes me
as incredibly unwise to start out in software life with impatience and
inefficiency, rushing to increase our computing resources at the
expense of finding ways to do more with less. Exponential growth
*always* hits a brick wall, eventually. There might be situations
where it's a good idea for a short period, but as a general strategy,
*finding ways not to need it* is the best investment a civilisation
can make.
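
[A back-of-the-envelope sketch of that brick wall, in Python. The 10^80
particle count for the observable universe and the one-bit-per-particle
storage budget are illustrative assumptions, not figures from the post
above:

    import math

    # Suppose, very generously, one bit of storage per particle in the
    # observable universe: roughly 10**80 bits available in total.
    # Starting from a single bit and doubling every microsecond, how
    # many doublings fit inside that budget?
    doublings = math.log2(10 ** 80)
    print(doublings)   # ~265.75

Fewer than 300 doublings exhaust the entire budget, so doubling "every
microsecond" runs out of universe in well under a millisecond.]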

Michael S. Schiffer

unread,
Mar 16, 2007, 6:36:15 PM3/16/07
to
"Greg Egan" <greg...@netspace.net.au> wrote in
news:1174081369.3...@n59g2000hsh.googlegroups.com:
>...

> I don't think this degree of caution means abandoning the
> subject of AI research entirely. Studying a working upload (and
> even allowing informed volunteers to tweak themselves
> cautiously) could tell us useful, interesting things without
> killing anyone.

Is it likely that uploading could be developed without producing
flawed copies/uploads with similarly serious problems along the way?
How different does the upload have to wind up before the consent of
the original is deemed invalid-- and how likely are the first
attempts at developing uploading to be all that similar?

Mike

Greg Egan

unread,
Mar 16, 2007, 6:44:37 PM3/16/07
to
On Mar 17, 5:59 am, thro...@sheol.org (Wayne Throop) wrote:
> ::: I can see no benefit from evolved AI
>
> :: Why would it be any different than the benefit from designed AI? And
> :: if the end result is a machine that thinks like a man, only faster,
> :: then the world will beat a path to your mousetrap.
>
> : "Greg Egan" <grege...@netspace.net.au>

> : You're ignoring the third option, which is just to shift the only
> : working instance of intelligent life we know about into a different
> : substrate. If you want something that "thinks like a man, only
> : faster", then figure out how to upload someone. Evolution has already
> : solved what's probably the hardest part of that AI-creation strategy.
>
> OK... but how does that mean there's "no benefit" to an evolved AI?
> I'm still not understanding.

Sorry, I was unclear. When I say "no benefit to an evolved AI", I
mean no benefit that is distinct from uploads. And even that is
obviously false, because it would certainly tell us a few interesting
things. But in terms of what people want and would be willing to pay
for, surely uploading is orders of magnitude more beneficial?

Dear Mr Gates,

I have two research propositions for your foundation.

(a) I can make you and your wife immortal, or
(b) I can evolve these really smart, cute software smurfs, if I kill
enough of their ancestors (but tell your PR people that we don't need
to publicise that last part, and that the Washington Post and Matt
Drudge probably won't ever find out).

Please think it over and then mail your cheque.

> And I suspect migrated AI is going to be significantly more difficult
> than either designed or evolved. Because you'd still have to figure out
> (or evolve) a way for it operate, and then you'd have a comparable or
> even harder hurdle to "read out" a human. But that's another kettle of
> whatnot.

You're entitled to your suspicions, but my mind boggles. It took 3
billion years to evolve the human brain, and a lot of computing
resources. We might be intervening in the process, using very
different information structures, etc., but given that we're only
taking this approach because we don't really know what we're doing,
it's hard to see how we're going to have a massive advantage over
nature.

Genetic programming is likely to yield lots of useful, specialised
algorithms for things like vision and locomotion, but I'd be amazed if
we didn't hit a very steep slope after that.

Greg Egan

unread,
Mar 16, 2007, 7:02:25 PM3/16/07
to
On Mar 17, 6:36 am, "Michael S. Schiffer" <mschi...@condor.depaul.edu>
wrote:

All good points -- and we should be cautious with *any* approach --
but I think there's *a lot* more that could be done to verify scanning
and simulation of neural tissue without putting anyone at risk, than
there could to avoid agonising screw-ups in evolving AI.

If the only difference between a human brain and a rat's is the
detailed neural organisation, we can iron out all the bugs in our
scanning and simulation of neurons with animal trials. And even those
could be preceded by significantly useful work on slabs of tissue-
cultured neurons with no prospects of feeling anything if the whole
process screws up.

With uploading, the challenge is to capture accurately something that
we *know* already works perfectly in the biological form. If we can
achieve that ability with an animal, then scanning a human is just
scaling things up. In contrast, when evolving AIs, even *reaching*
animal-equivalent software is going to be hard, and then getting from
animal-equivalent to human-equivalent without millions of sentient
horror-story failures seems highly unlikely.

Wayne Throop

unread,
Mar 16, 2007, 6:48:17 PM3/16/07
to
: "Greg Egan" <greg...@netspace.net.au>
: It took 3 billion years to evolve the human brain, and a lot of

: computing resources. We might be intervening in the process, using
: very different information structures, etc., but given that we're only
: taking this approach because we don't really know what we're doing,
: it's hard to see how we're going to have a massive advantage over
: nature.

There are several advantages we'd have using evolutionary algorithms
over nature. One, as you mention, is that we'd intervene. We'd prune
many times more efficiently. I'm thinking of things like Lenat's
interventions in Eurisco's and later Cyc's processing, but on a
meta-level. Another is that we could discover things that are evolved
and *almost* work, and move them over local barriers to evolutionary
change. Another is that our evaluation function in judging each
generation can be many times more efficient than simply "let it interact
with reality". In short, we'd have our thumb on the scale.

On the other hand, I tend to agree that simply introducing random variations
to mental processes and letting them interact in a simulated world is
very likely unwieldy. Too expensive. Which isn't to say that there'd
be "no benefit", just that it might well be prohibitively expensive to
do that way.

So OK, if you are comparing a completely pure, unhybridized evolutionary
approach, it seems prohibitively expensive. But it seems to me the problem
is, by the time you can upload a human, you've already got an AI process
to upload them into.

It's like, you design a computer, some software, and an OS. What are the
chances you can extract an application from an already existing, running
computer and have it run on your newly designed one, especially if the
old one is not designed to read out its program at all. Now that you've
got your computer, you'll have a completely separate project to
instrument the existing computer with logic probes and figure out what
its program is by watching it fetching things from memory (and so on and
so forth), and then translate it into instructions that'll run on your
newly designed computer.

Though I suppose... you could take the approach of simulating each component
of the computer to create yours. Then your simulation might be a computer
without your being able to design one at all (or by analogy, you might get
an AI without ever designing one). But even then, you not only have to
*simulate* each component at a very low level (because you don't understand
any of the higher levels), you have to somehow snapshot the *state* of
each component in a coordinated way to initialize your simulation.
And that's a difficult problem. It seems much more reasonable to figure
out how to prod the simulation into operating in some generic way, rather
than to get it to run the exact same software as the original. So again,
if you approach it this way, you reach "AI" before you reach "upload".

Or so it seems to me. Hopefully I haven't rambled too badly in
explaining why it seems that way to me.

Default User

unread,
Mar 16, 2007, 7:25:06 PM3/16/07
to
Greg Egan wrote:


> Dear Mr Gates,
>
> I have two research propositions for your foundation.
>
> (a) I can make you and your wife immortal

We do that by killing you.

Brian

--
If televison's a babysitter, the Internet is a drunk librarian who
won't shut up.
-- Dorothy Gambrell (http://catandgirl.com)

Greg Egan

unread,
Mar 16, 2007, 7:45:19 PM3/16/07
to
On Mar 17, 6:48 am, thro...@sheol.org (Wayne Throop) wrote:

> It's like, you design a computer, some software,and an OS. What's the
> chances you can extract an application on an already existing, running
> computer and have it run on your newly designed one, especially if the
> old one is not designed to read out its program at all. Now that you've
> got your computer, you'll have a completely separate project to
> instrument the existing computer with logic probes and figure out what
> its program is by watching it fetching things from memory (and so on and
> so forth), and then translate it into instructions that'll run on your
> newly designed computer.

I certainly agree that scanning will be hugely challenging. One
advantage we'd have over your computer analogy, though, is that both
silicon hardware and computer code are *extremely* fragile and fault-
intolerant, whereas you can drench a human brain in all kinds of
temporarily disruptive pharmaceuticals, and it will sort out the
informational mess quite quickly so long as there's no serious
structural damage. (And we're even pretty tolerant of minor
structural damage).

While it's nice to fantasise about taking a snapshot of a human brain
with such perfect time resolution that the Copy could continue in the
original's mental footsteps without missing a thought, the bar for a
useful result is much, much lower than that. If Copies have to wake
up in VR with something like the mother of all hang-overs, that would
still be a small price to pay, so long as we haven't given them
permanent brain damage.

Also, we don't have to do this in a single bound. We can scan
animals. We can scan animals _post mortem_. We can scan neurons in
culture dishes. We can do literally thousands of things to get
incrementally better at this task without risking any of the horrible
side-effects of screwing around with evolving consciousness.


Greg Egan

unread,
Mar 16, 2007, 7:53:24 PM3/16/07
to
On Mar 17, 7:25 am, "Default User" <defaultuse...@yahoo.com> wrote:
> Greg Eganwrote:

> > Dear Mr Gates,
>
> > I have two research propositions for your foundation.
>
> > (a) I can make you and your wife immortal
>
> We do that by killing you.
>
> Brian

Who gets killed? He gets a fresh snapshot taken every few months, and
then when his organic body finally succumbs to ageing the last
snapshot that doesn't suffer from any structural deficits in the brain
is woken in VR, or maybe even a prosthetic body.

Wayne Throop

unread,
Mar 16, 2007, 8:21:14 PM3/16/07
to
: "Greg Egan" <greg...@netspace.net.au>
: I certainly agree that scanning will be hugely challenging. One

: advantage we'd have over your computer analogy, though, is that both
: silicon hardware and computer code are *extremely* fragile and fault-
: intolerant, whereas you can drench a human brain in all kinds of
: temporarily disruptive pharmaceuticals, and it will sort out the
: informational mess quite quickly so long as there's no serious
: structural damage. (And we're even pretty tolerant of minor
: structural damage).

True, akin to, maybe we don't have to have the program running smoothly,
if we can get the memory content right enough and induce a reboot, maybe
it'll come up sane and patch over corruption and offer to re-open all
the crashed apps, etc, etc. But it still isn't easy.

: Also, we don't have to do this in a single bound. We can scan


: animals. We can scan animals _post mortem_. We can scan neurons in
: culture dishes. We can do literally thousands of things to get
: incrementally better at this task without risking any of the horrible
: side-effects of screwing around with evolving consciousness.

True, akin to, starting a project to insert logic probes into computer
hardware without disrupting it by starting with transistors, working
up to ICs, circuit boards, and finally the whole nine yards, figuring
out the signaling as we go, and burning lots of components in the
learning process. But it still isn't easy.

Which still leads to, if you get enough of the bits right to get the
computer to reboot, you've got a computer; you have it before you've got
to the point of getting all the apps running with all their edit
history. So you've still hit AI before you hit uploads.

Greg Egan

unread,
Mar 16, 2007, 9:15:27 PM3/16/07
to
On Mar 17, 8:21 am, thro...@sheol.org (Wayne Throop) wrote:

> Which still leads to, if you get enough of the bits right to get the
> computer to reboot, you've got a computer; you have it before you've got
> to the point of getting all the apps running with all their edit
> history. So you've still hit AI before you hit uploads.

The way I see it, you hit *good physiological models*, not any form of
AI, before you hit uploads. You don't need to have some kind of
generic ability for human consciousness before you do a particular
upload. Rather, what you need is the generic ability to simulate,
accurately, large amounts of neural tissue. We can gain much of that
ability by practising on neural tissue which we have very good reason
to believe will not be conscious, either in vivo or in simulation.

To put this in terms of a computer analogy (though not quite your one,
I think), suppose we've learned how to duplicate generic computer
hardware and software, but we still don't have an AI application of
our own. We have practised and refined our duplication process by
scanning and simulating computers running all kinds of sophisticated
*but non-conscious* software, and we now believe we can make faithful
copies of computers whatever particular software they're running.

So now, we scan a computer which is running an AI program -- the
program we're not smart enough to write ourselves -- and *that* is
what gives us "our" AI.

In all of the above, feel free to substitute anything else for an AI
program. In other words, if you give me a computer running a
wonderful program of any kind that I have no idea how to write for
myself, then the generic ability to duplicate running computers is all
I need to get "my own" copy of this program; I don't need any skills
specifically related to the desirable software itself.

dwight...@gmail.com

unread,
Mar 16, 2007, 9:40:49 PM3/16/07
to

"The dead know only one thing - that it is better to be alive." It
has been my experience that people (and animals, for that matter) who
are quite obviously, plainly suffering, immensely so, by any
conceivable criterion _still_ prefer to live rather than end their
misery. I don't know why this is, nor do I particularly care if there
is an evolutionary psychology just-so story in there. So why not
simply give your AI the option to shut itself off? Your attitude
seems to be very Buddhist, if you'll pardon the nosiness.

David McMillan

unread,
Mar 16, 2007, 4:47:43 PM3/16/07
to
Tux Wonder-Dog wrote:

> I sometimes wonder if some of my characters were ever to "escape" from the
> stories I set them in, and from the "universes" they exist in, just what
> they would think of me. I doubt it would be publishable, printable, or
> whatever the euphemism preferred is ... ! ;)

There's an entire subgenre of fanfic devoted to this concept, where the
characters get loose, make their way into the real world, and decide to
get some payback from the fanfic author who decided to make their lives
"more interesting." Some of the ones written by the victim authors
themselves can be downright hilarious.
However, I don't recall any versions where the fan-author wrote a story
about the characters going after the *original* author. There must be
at least one out there somewhere....

Keith F. Lynch

unread,
Mar 16, 2007, 10:07:23 PM3/16/07
to
Greg Egan <greg...@netspace.net.au> wrote:
> Dear Mr Gates,
> I have two research propositions for your foundation.
> (a) I can make you and your wife immortal

If he was uploaded onto a Windows platform, his life expectancy would
only be a few hours before he died of "blue screen of death."
--
Keith F. Lynch - http://keithlynch.net/
Please see http://keithlynch.net/email.html before emailing me.

Mike Schilling

unread,
Mar 16, 2007, 10:17:18 PM3/16/07
to
Does it occur to anybody else that the phrase "hating the creator" has a
nice rhythm? You could write a catchy tune around it, say something Robert
Preston could sing.


Greg Egan

unread,
Mar 16, 2007, 10:24:33 PM3/16/07
to
On Mar 17, 9:40 am, "dwight.thi...@gmail.com"
<dwight.thi...@gmail.com> wrote:

> "The dead know only one thing - that it is better to be alive." It
> has been my experience that people (and animals, for that matter) who
> are quite obviously, plainly suffering, immensely so, by any
> concievable criterion _still_ prefer to live rather than end their
> misery. I don't know why this is, nor do I particularly care if there
> is a evolutionary psychology just so story in there. So why not
> simply give your AI the option to shut itself off?

Most sentient beings in pain might well prefer to continue existing,
but that's no good reason for some idiot to get them stuck in that
avoidable quandary in the first place.

The only way this has to boil down to a choice *by an actual being*
between some kind of horrible existence and no existence at all is if
you assume that we *must* go ahead with this specific project. The
idea that all the particular AIs who failed to be created this way are
somehow being let down by my squeamishness, and would have preferred a
chance to exist, is like comparing contraception (not for everyone,
but for one man with a 99.99999% chance of fathering children with
agonising disabilities) to genocide. Beings that are purely
hypothetical have no moral interests, even if part of the hypothesis
is that *if* you brought them into existence, they wouldn't decide to
commit suicide. I don't know about you, but the last thing that's
going to keep me awake at night is fretting about all the conscious-
but-irretrievably-mentally-crippled AIs I didn't get around to
creating.

Life is hard to some degree for every thinking, feeling being. I'm
not suggesting that anyone can ever promise a future without
suffering. But to embark on a project that we *know* will involve an
enormous amount of suffering and death (don't forget, unless we have
exponentially growing resources we actually have to terminate most of
our evolving candidates ourselves, never mind whether they want that
or not), when there is no pressing reason for it, and when it's likely
that we can get all the real benefits by other means if we exercise
some patience and intelligence, seems grossly immoral to me.

> Your attitude
> seems to be very Buddhist, if you'll pardon the nosiness.

I'm not a Buddhist, and don't have much knowledge of Buddhist beliefs,
but if you're asking whether I'm championing "non-being" over the risk
of suffering, the answer is no. I'm just asking that we make
reasonable efforts not to commit atrocities.

Wayne Throop

unread,
Mar 16, 2007, 10:26:27 PM3/16/07
to
: "Greg Egan" <greg...@netspace.net.au>
: The way I see it, you hit *good physiological models*, not any form of

: AI, before you hit uploads.

I thought the notion was to approach it incrementally.
You're saying the last step is to leap from something dumb as a post
(or at most, smart as a chimp) to an actual duplicate of a human?
That seems... unlikely.

: To put this in terms of a computer analogy (though not quite your one,


: I think), suppose we've learned how to duplicate generic computer
: hardware and software, but we still don't have an AI application of
: our own.

True, but remember your notion that we're depending on the
self-assembling features of human consciousness; i.e., that you soak it in
alcohol, tinker with serotonin uptake, zap a few gazillion neurons, and
it reorganizes and reassembles, adapts, adopts, and keeps on ticking.
Now, you are approaching something like that incrementally, and you don't
expect it to self-assemble before you have it ready to pour in a human
consciousness? Again, that seems unlikely.

The computer metaphor, that you can have a fully working computer
but not be running an AI-like app, isn't quite right, because as you
pointed out upthread, computer software doesn't aggressively self-assemble
the way the human mind does. (I mean, geez, split-brain patients
self-assemble a single "self" even with the hemispheres completely
out of touch.) If you have a working substrate ready to load a
mind into, it'll be capable of self-assembling one. And you pretty
much necessarily reach that point before you are able to upload.

Hm. Consider Scalzi's "The Ghost Brigades", wherein we have that very
problem (or something similar); they try to upload somebody into a body
(or is that download... well, transfer, anyways), but it self-assembled
an identity before that could happen (for various interestingly justified
in-story reasons). I think that'd be a real problem in the upload biz.

Not *certain*; I can see where there's room for reasonable people
to disagree. But it still sure seems that way, if you think about what
a nearly-workable upload substrate would most likely have to be like.

Wayne Throop

unread,
Mar 16, 2007, 10:37:15 PM3/16/07
to
: "Keith F. Lynch" <k...@KeithLynch.net>
: If he was uploaded onto a Windows platform, his life expectancy would

: only be a few hours before he died of "blue screen of death."

So? Just like windows, you retry, reboot, reinstall.

Sean O'Hara

unread,
Mar 16, 2007, 10:51:33 PM3/16/07
to
In the Year of the Golden Pig, the Great and Powerful Walter Bushell
declared:

>
> Dogs may have modified humans as much as humans changed dogs. Humans
> without dogs fared poorly when faced with humans with dogs. Anyways it
> was not a planed breeding program. Dogs to hold prey or potential
> attackers at bay while the humans threw spears, rocks and whatever.

That was only the beginning of humanity's dog-breeding program. Most
of the stuff done since has been intentional.

--
Sean O'Hara <http://diogenes-sinope.blogspot.com>
I am not in love, but I am open to persuasion.
-Joan Armatrading

Keith F. Lynch

unread,
Mar 16, 2007, 10:56:38 PM3/16/07
to
Greg Egan <greg...@netspace.net.au> wrote:
> Jo Walton <j...@localhost.localdomain> wrote:
>> Ah, but _Diaspora_'s the one that could do with a FAQ!

> Really? Offhand, I can only think of two issues that readers
> have raised:

I have a third issue. If I recall correctly (it's been nearly a
decade since I read it), each electron was a gateway into a new
universe.

If these universes are different, that conflicts with experimental
results that show that in a system where it's not possible to track
individual electrons, the probabilities of finding an electron at
given locations imply that each electron that comes out is a *mixture*
of the electrons that went in. In other words, electrons don't have
serial numbers. Similarly with other kinds of particles. If you mix
the electron that leads to universe A with the electron that leads to
universe B, what universe does the resulting electron lead to?
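
[For concreteness, the standard textbook statement of "no serial
numbers", independent of anything in the novel: for two electrons in
the same spin state, the spatial wavefunction must be antisymmetrised,

    \Psi(x_1, x_2) = \frac{1}{\sqrt{2}} [ \phi_A(x_1)\phi_B(x_2) - \phi_B(x_1)\phi_A(x_2) ]

and the detection probability |\Psi|^2 picks up interference terms that
would not be there if each electron carried a private label telling you
which universe it led to.]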

If these universes are the same, that implies that one could duck into
one electron then immediately come out of another electron a billion
light years away. But "immediately" in one frame of reference is
reverse causality in another. In other words, it can be used as a
time machine.

Johan Larson

unread,
Mar 16, 2007, 11:00:19 PM3/16/07
to
On Mar 16, 7:07 pm, "Keith F. Lynch" <k...@KeithLynch.net> wrote:

> Greg Egan <grege...@netspace.net.au> wrote:
> > Dear Mr Gates,
> > I have two research propositions for your foundation.
> > (a) I can make you and your wife immortal
>
> If he was uploaded onto a Windows platform, his life expectancy would
> only be a few hours before he died of "blue screen of death."

Check your blind spots, Keith. You're nearly ten years out of date about
that right now. The Windows version that had really serious stability
problems was '98, just before the switch to the NT codebase. If you
don't want to sound quite so quaint, you might want to switch to
complaining about security problems. Consider using a VD analogy;
that'd be classy.

Johan Larson

Johan Larson

unread,
Mar 16, 2007, 11:06:26 PM3/16/07
to
On Mar 16, 7:17 pm, "Mike Schilling" <mscottschill...@hotmail.com>
wrote:

> Does it occur to anybody else that the phrase "hating the creator" has a
> nice rhythm? You could write a catchy tune around it, say something Robert
> Preston could sing.

I wasn't the lyricist for Thorazine Daydream for nuthin'...

Johan Larson

Keith F. Lynch

unread,
Mar 16, 2007, 11:06:44 PM3/16/07
to
Wayne Throop <thr...@sheol.org> wrote:
> "Keith F. Lynch" <k...@KeithLynch.net> wrote:
>> If he was uploaded onto a Windows platform, his life expectancy
>> would only be a few hours before he died of "blue screen of death."

> So? Just like windows, you retry, reboot, reinstall.

Then once again he gets a few hours of experience. But since he's
restored from backup, they're the *same* few hours, over and over
again.

Also, because of Windows' gross inefficiency, it will be perceived
as much less than real time -- perhaps a few seconds rather than a
few hours.

Keith F. Lynch

unread,
Mar 16, 2007, 11:09:23 PM3/16/07
to
Johan Larson <johan....@comcast.net> wrote:
> "Keith F. Lynch" <k...@KeithLynch.net> wrote:
>> If he was uploaded onto a Windows platform, his life expectancy would
>> only be a few hours before he died of "blue screen of death."

> Check your blindspots, Keith. You're nearly ten year out of date
> about that right now. The Windows version that had really serious
> stability problems was '98, just before the switch to the NT
> codebase.

We're running the latest version of XP at work. We get crashes almost
every day, even though we use the machines very lightly.

Greg Egan

unread,
Mar 16, 2007, 11:18:36 PM3/16/07
to
On Mar 17, 10:26 am, thro...@sheol.org (Wayne Throop) wrote:
> : "Greg Egan" <grege...@netspace.net.au>

> : The way I see it, you hit *good physiological models*, not any form of
> : AI, before you hit uploads.
>
> I thought the notion was to approach it incrementally.
> You're saying the last step is to leap from something dumb as a post
> (or at most, smart as a chimp) to an actual duplicate of a human?
> That seems... unlikely.

Why? If I can *faithfully* reproduce 500 grams of chimp brain (bear
with me if these quantities are wrong, I'm too lazy to look this stuff
up) then it's a fairly trivial scale-up to faithfully reproduce a
kilogram of human brain. (Well, there might yet be some subtle
genetic difference between humans and other primates that is important
in the mature human physiology (as opposed to during brain
development, which is obviously the case), but if there is we have
ways of discovering that, through both conventional biology and the
scanning of small samples of human neural tissue.)

> Now, you are approaching something like that incrementally, and you don't
> expect it to self-assemble before you have it ready to pour in a human
> consciousness? Again, that seems unlikely.

You seem to be suggesting that we'd need to refine a series of
biological models incrementally towards our final human target, with
the very last step being personalisation. That's a conceivable
project, but it's not at all what I'm advocating, because it has most
of the disadvantages of the guided evolution approach.

When I talk about incremental improvements, starting with a handful of
individual tissue cultured neurons and working our way up to whole,
live animals, I am *not* suggesting that the software itself should
retain a record of these scans. The point of this incremental
approach is *not* to build a series of simulations which is getting
closer and closer to being human; the point is to get our simulation
technology _per se_ working ever better, by testing it out on
increasingly difficult cases.

In other words, I'm talking about refining the fidelity of the
techniques we use to capture and reproduce mammalian brains
generically. We get better at this task, and we need to retain a lot
of low-level biological knowledge as we do so, but at the level of our
scanning/simulation, we absolutely do not care what the particular
high-level computational functions of the neural tissue are. What we
*do* care about is getting the job done so well that it *doesn't
matter* what the computational function is, we will still capture it.

The degree of robustness of neural functions is not something special
to consciousness. All the things that animal brains do, they still do
very well under quite severe perturbations. I'm relying on some level
of fault-tolerance in the final human scan, but there's no reason why
we can't assess, and improve upon, our error levels with tests that
involve neural systems doing tasks very different from functioning as
conscious human minds.
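
As a rough sketch of what that kind of incremental fidelity testing
could look like (purely illustrative: the preparations, the scan and
simulate callables, and the tolerance are hypothetical placeholders,
not anything proposed above):

from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Preparation:
    name: str                          # e.g. "cultured neuron", "mouse slice"
    tissue: object                     # whatever the scanner consumes
    stimulus: Sequence[float]          # input applied to real and simulated tissue
    recorded_response: Sequence[float] # measured behaviour of the real tissue

def rms_error(simulated: Sequence[float], recorded: Sequence[float]) -> float:
    """Root-mean-square difference between two response traces."""
    n = min(len(simulated), len(recorded))
    return (sum((s - r) ** 2 for s, r in zip(simulated, recorded)) / n) ** 0.5

def validate(scan: Callable, simulate: Callable,
             preparations: List[Preparation], tolerance: float):
    """Run one generic pipeline over progressively harder preparations and
    report the ones whose simulated behaviour drifts past tolerance."""
    failures = []
    for prep in preparations:                        # easiest case first
        model = scan(prep.tissue)                    # capture structure generically
        simulated = simulate(model, prep.stimulus)   # replay the same stimulus
        err = rms_error(simulated, prep.recorded_response)
        if err > tolerance:
            failures.append((prep.name, err))
    return failures

The harness never asks what the tissue computes; it only checks whether
the generic capture reproduces whatever behaviour was measured, which is
the point being argued above.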

Wayne Throop

unread,
Mar 16, 2007, 11:21:06 PM3/16/07
to
: "Greg Egan" <greg...@netspace.net.au>
: You seem to be suggesting that we'd need to refine a series of

: biological models incrementally towards our final human target, with
: the very last step being personalisation. That's a conceivable
: project, but it's not at all what I'm advocating, because it has most
: of the disadvantages of the guided evolution approach.

Indeed, one could think of it as being guided *by* evolution; that is,
trying to figure out how evolution tweaked various systems while
following the pinball down the tree of speciation leading toward humans.
Partly via the untrue-but-having-an-interesting-gist notion of ontogeny
recapitulating phylogeny.

: When I talk about incremental improvements, starting with a handful of


: individual tissue cultured neurons and working our way up to whole,
: live animals, I am *not* suggesting that the software itself should
: retain a record of these scans. The point of this incremental
: approach is *not* to build a series of simulations which is getting
: closer and closer to being human; the point is to get our simulation
: technology _per se_ working ever better, by testing it out on
: increasingly difficult cases.

That was what I supposed you had in mind. But somewhere along the line,
you end up bringing up your simulation, and it's alive. And you reach
that point before you can faithfully reproduce a human consciousness. I
expect you will need to figure out how the brain functions, well before
you figure out how to transplant memories. That is, I expect you won't
even be able to work on the latter problem before you have in hand a
workable brain model.

: In other words, I'm talking about refining the fidelity of the


: techniques we use to capture and reproduce mammalian brains
: generically. We get better at this task, and we need to retain a lot
: of low-level biological knowledge as we do so, but at the level of our
: scanning/simulation, we absolutely do not care what the particular
: high-level computational functions of the neural tissue are. What we
: *do* care about is getting the job done so well that it *doesn't
: matter* what the computational function is, we will still capture it.

Sure. But being able to capture the computational function will,
I expect, occur well before the ability to capture the precise
computational state.
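
A toy way to make the function-versus-state distinction concrete (an
illustration only, not something either poster describes):

class LeakyAccumulator:
    """Toy system: the update rule is its 'function', the accumulated
    value is its 'state'."""

    def __init__(self, leak: float = 0.9, level: float = 0.0):
        self.leak = leak    # fixed parameter: part of the function
        self.level = level  # evolving internal variable: the state

    def step(self, x: float) -> float:
        self.level = self.leak * self.level + x
        return self.level

# Capturing the function: fit `leak` from a stretch of input/output data.
# Capturing the state: also read out `level` at one instant, so a copy
# resumes exactly where this instance left off instead of starting fresh.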

Wayne Throop

unread,
Mar 16, 2007, 11:30:47 PM3/16/07
to
: "Johan Larson" <johan....@comcast.net>
: Check your blindspots, Keith. You're nearly ten years out of date about

: that right now. The Windows version that had really serious stability
: problems was '98, just before the switch to the NT codebase.

Possibly so, but even for windows XP, the recommended troubleshooting
scenario for crashed apps and system halts, slowdowns, and such,
was a "retry, restart, reboot, reinstall" escalation pattern.
It just wasn't worth diagnosing the specific problem; you just
go through the sequence of escalation instead. System instability,
while much improved, was still fairly normal and expected, going by
the practical advice given windows users.

Now admittedly, I only see advice to windows users in passing;
my windows experience is peripheral. But as I pass by, I still
see windows users retrying, rebooting, and reinstalling, and
complaining about instability.

Presumably, that's all fixed by Vista, of course.
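
For what it's worth, that escalation pattern is just a loop over
increasingly drastic recovery actions; a toy sketch (the action names in
the comment are placeholders, not real administration commands):

def recover(task, escalation):
    """Try `task`; after each failure, apply the next recovery step
    (e.g. restart the app, reboot, reinstall) and try again."""
    for step in [None] + list(escalation):
        if step is not None:
            step()
        try:
            return task()
        except Exception:
            continue
    raise RuntimeError("escalation exhausted; task still failing")

# e.g. recover(run_app, [restart_app, reboot_machine, reinstall_windows])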

Johan Larson

unread,
Mar 17, 2007, 12:01:29 AM3/17/07
to
On Mar 16, 8:30 pm, thro...@sheol.org (Wayne Throop) wrote:
> : "Johan Larson" <johan.lar...@comcast.net>

> : Check your blindspots, Keith. You're nearly ten years out of date about
> : that right now. The Windows version that had really serious stability
> : problems was '98, just before the switch to the NT codebase.
>
> Possibly so, but even for windows XP, the recommended troubleshooting
> scenario for crashed apps and system halts, slowdowns, and such,
> was a "retry, restart, reboot, reinstall" escalation pattern.

That actually sounds quite reasonable.

> It just wasn't worth diagnosing the specific problem; you just
> go through the sequence of escalation instead. System instability,
> while much improved, was still fairly normal and expected, going by
> the practical advice given windows users.
>
> Now admittedly, I only see advice to windows users in passing;
> my windows experience is peripheral. But as I pass by, I still
> see windows users retrying, rebooting, and reinstalling, and
> complaining about instability.

I don't want to deny your experience, but it doesn't match my own,
after running Windows daily on three separate personal systems (95,
2000, XP Home) and five systems at three separate employers. The only
time I experienced serious instability problems was when I got my
broadband connection and I got infected. Once I cleaned out the system
and started running a firewall, the problem went away, never to
return. Mind you, my home use is undemanding, and the work machines
tend to have plenty of capacity, since I use them for development.

I strongly suspect the instability problems you report are due to some
combination of infected systems, poor administration, and installation
on marginal hardware.

Johan Larson

Greg Egan

unread,
Mar 17, 2007, 12:05:46 AM3/17/07
to
On Mar 17, 11:21 am, thro...@sheol.org (Wayne Throop) wrote:
[snip]

>I
> expect you will need to figure out how the brain functions, well before
> you figure out how to transplant memories. That is, I expect you won't
> even be able to work on the latter problem before you have in hand a
> workable brain model.

OK, I think I can finally see the core of what we disagree on. You're
suggesting that we'll need some kind of detailed-but-generic, global
model of specifically *human* brains, before we can go about scanning
an individual human ... and you think that model would have to be *so*
detailed *and* so human-specific that it would virtually be a
conscious instance of human-level AI itself.

But I can't see why that's necessary (although sure, it might be
*useful*). We'll need to know our way around all mammalian brains,
humans included, but not in so much detail that we could make a few
arbitrary choices and turn our high-level map into the equivalent of a
fully scanned human brain. The skills for working with that kind of
fine detail can all be gained with other, smaller, or non-human
systems.

> [B]eing able to capture the computational function will,


> I expect, occur well before the ability to capture the precise
> computational state.

Sure, it might be the case that we become technically *able* to scan a
human brain just well enough to construct a merely generic human
simulation ... a kind of John Doe ... before we can simulate Jane
Specific.

But who's twisting our arm and making us run John Doe? We don't need
to gather this much detail in order to get our high-level orientation
map of the human brain, and we don't need to capture, let alone
instantiate, this kind of not-quite-good-enough scan in order to
refine either our scanning resolution or our simulation's fidelity.
We can make those improvements through the other routes I've already
listed.
