
Stephen Hawking protects us from robots


AliasMoze

Sep 2, 2001, 4:45:57 PM
During an interview for Focus, Hawking advocated humanity altering DNA to
keep up with computer intelligence. It must have been weird to hear this
advice coming from an android.

http://www.observer.co.uk/uk_news/story/0,6903,545653,00.html


Steven J. Weller

Sep 2, 2001, 6:23:36 PM
In article <9Uwk7.25248$aZ.60...@typhoon.tampabay.rr.com>
"AliasMoze" <alia...@NOyahooSPAM.com> writes:

> During an interview for Focus, Hawking advocated humanity altering DNA to
> keep up with computer intelligence. It must have been weird to hear this
> advice coming from an android.

Technically, Hawking is a cyborg, not an android.

--
Life Continues, Despite
Evidence to the Contrary

Steven

AliasMoze

Sep 2, 2001, 9:02:16 PM
My bad.


Michael Dines

Sep 3, 2001, 4:36:48 AM
AliasMoze <alia...@NOyahooSPAM.com> wrote:

> During an interview for Focus, Hawking advocated humanity altering DNA to
> keep up with computer intelligence. It must have been weird to hear this
> advice coming from an android.
>

I thought we already were. Enough beer, television and junk food and
anyone can alter their DNA down to the level of computer 'intelligence'
(though you might overshoot a bit).

And what is it people have against Hawking these days? Here's a man
crippled with a particularly foul disease, confined to a motorised
wheelchair, only able to move and communicate via the small amount of
movement left in his fingertips, does incredibly cerebral cosmological
theorising that most people can't be bothered to try and understand ...
and it's now fashionable to take the piss out of him because he can only
talk via a voice synthesiser with a greater emotional range than Arnold
Schwarzenegger?

Hey, what about that FD Roosevelt? Wasn't it funny how a supermarket
trolley almost won WW2?

Adam Fulford

Sep 3, 2001, 11:28:20 AM

"Michael Dines" wrote...
> >
> snip

> And what is it people have against Hawking these days? Here's a man
> crippled with a particularly foul disease, confined to a motorised
> wheelchair, only able to move and communicate via the small amount of
> movement left in his fingertips, does incredibly cerebral cosmological
> theorising
> snip

And, besides that, writes beautifully.


Gary Pollard

Sep 3, 2001, 11:45:36 AM
"Michael Dines" <michaeldines@NO_SPICED_HAMcableinet.co.uk> wrote in message
news:1ez4uds.3om...@usr3515-kno.cableinet.co.uk...

> AliasMoze <alia...@NOyahooSPAM.com> wrote:
>
> > During an interview for Focus, Hawking advocated humanity altering DNA to
> > keep up with computer intelligence. It must have been weird to hear this
> > advice coming from an android.

> And what is it people have against Hawking these days? Here's a man
> crippled with a particularly foul disease, confined to a motorised
> wheelchair, only able to move and communicate via the small amount of
> movement left in his fingertips, does incredibly cerebral cosmological
> theorising that most people can't be bothered to try and understand ...
> and it's now fashionable to take the piss out of him because he can only
> talk via a voice synthesiser with a greater emotional range than Arnold
> Schwarzenegger?

There's a saying in China that goes something like "Stick your head up above
the crowd and it will get chopped off". Most of us have only a fraction of
Hawking's intelligence. Some of us, however, are still not so dumb as to
attack him for being so much brighter. If I believed in God unreservedly,
I'd say Hawking was put here to help the Universe explain itself. His mind
seems to travel in pathways the rest of us don't even begin to follow.

Gary

Jon Green

Sep 3, 2001, 12:35:04 PM
On Sun, 02 Sep 2001 20:45:57 GMT, "AliasMoze"
<alia...@NOyahooSPAM.com> wrote:

> During an interview for Focus, Hawking advocated humanity altering DNA to
> keep up with computer intelligence. It must have been weird to hear this
> advice coming from an android.

You're just jealous. Hawking can change (and has) our understanding of
how the Universe works.

All Chris Reeve can do is act.


Jon
--
SPAM BLOCK IN OPERATION! Replace 'deadspam' with 'pobox' to reply in email.
Spammers: please die now and improve the mass-average IQ level.
Want a deadspam email auto-responder? http://www.deadspam.com/deadspam.html

AliasMoze

Sep 3, 2001, 12:43:40 PM
> You're just jealous. Hawking can change (and has) our understanding of
> how the Universe works.
>
> All Chris Reeve can do is act.

No, he can also fly.


Martin Kunert

Sep 3, 2001, 1:01:01 PM
Michael Dines <michaeldines@NO_SPICED_HAMcableinet.co.uk> wrote in message
news:1ez4uds.3om...@usr3515-kno.cableinet.co.uk...
> AliasMoze <alia...@NOyahooSPAM.com> wrote:
>
> > During an interview for Focus, Hawking advocated humanity altering DNA to
> > keep up with computer intelligence. It must have been weird to hear this
> > advice coming from an android.

What I found astounding about the article was Sue Meyers' comment about
Hawking's belief. I can understand disagreeing with the man, but calling
him 'naive'? Considering Hawking's and the woman's relative achievements in
the world, their track records of breakthrough thought, and that history is
littered with little people claiming what science can't achieve -- I think
Ms. Meyers has achieved the proverbial foot-in-mouth.


Gary Pollard

Sep 3, 2001, 7:40:33 PM
"Jon Green" <jo...@deadspam.com> wrote in message
news:n7c7ptk59c9o4u6q0...@4ax.com...

> On Sun, 02 Sep 2001 20:45:57 GMT, "AliasMoze"
> <alia...@NOyahooSPAM.com> wrote:
>
> > During an interview for Focus, Hawking advocated humanity altering DNA to
> > keep up with computer intelligence. It must have been weird to hear this
> > advice coming from an android.
>
> You're just jealous. Hawking can change (and has) our understanding of
> how the Universe works.
>
> All Chris Reeve can do is act.

He can?

Jon Green

Sep 4, 2001, 3:52:32 AM

Hawking used to be able to speak, too...

Mr Helsing

Sep 4, 2001, 10:01:05 AM
>>What I found astounding about the article was Sue Meyers' comment about
Hawking's belief. I can understand disagreeing with the man, but calling him
'naive'? Considering Hawking's and the woman's relative achievements in the
world, their track records of breakthrough thought, and that history is
littered with little people claiming what science can't achieve -- I think Ms.
Meyers has achieved the proverbial foot-in-mouth.


I've said everything in this post before, but maybe it will be news to some of
you:

I've heard Hawking's voice in a lot of non-scientific contexts lately. Unlike
many of the rest of us, he seems to be constantly working. All those
voice-overs must detract from his real work.

The following two web sites address the future of humans. BTW, the second was
supplied by James Jaeger:

Why The Future Doesn't Need Us.
http://www.wired.com/wired/archive/8.04/joy.html

Staring Into The Singularity
http://www.sysopmind.com/singularity.html

My question is: without emotions, why would robots care about humans? Without
pain and pleasure, why would a robot give a damn whether it even existed or
not?

They could easily surpass humans and keep on truckin'.

It seems to me that the danger comes from robotic humans who are emotional and
know pain and pleasure. They potentially have something to lose and could be
very threatened by jealous lowlifes who don't robotically evolve, or, worse
yet, who do.

In that context Hawking makes perfect sense. If his values as a human are to
prevail, he must be able to compete, and that means evolving with the help of
computers.


**************************************************************
"Cunnilingus and psychiatry brought us to this."
Tony Soprano

Martin Kunert

Sep 4, 2001, 12:54:16 PM

Mr Helsing <mrhe...@aol.com> wrote in message
news:20010904100105...@mb-fx.aol.com...

> In that context Hawking makes perfect sense. If his values as a human are to
> prevail, he must be able to compete, and that means evolving with the help of
> computers.

I suspect that eventually computers/robots will be more organic in
functioning and we, as a species, will merge with them. To a very slight
degree it's already happening -- many people cannot "think" without accessing
knowledge stored not in their brains, but on laptops.


AliasMoze

Sep 4, 2001, 4:50:34 PM
> > > All Chris Reeve can do is act.
> >
> > No, he can also fly.
>
> Hawking used to be able to speak, too...

Yeah, but everyone can do that. Can Hawking fly around the world so fast he
makes it go back in time? That's what I thought.


Jon Green

Sep 5, 2001, 2:56:10 AM

With a big enough FX crew, sure. He's already bounced off the Moon...
(<elec>"Oh ... bugger."</elec>).

Mr Helsing

Sep 5, 2001, 10:35:58 AM
>> In that context Hawking makes perfect sense. If his values as a human are
to prevail, he must be able to compete, and that means evolving with the help
of computers.

>I suspect that eventually computers/robots will be more organic in functioning
and we, as a species, will merge with them. To a very slight degree it's
already happening -- many people cannot "think" without accessing knowledge
stored not in their brains, but on laptops.


Agreed and agreed. This is an interesting time to be alive.

Robots are going to be able to move so much faster than humans that their
ability to dominate homo sapiens is a foregone conclusion. Those that don't
utilize organic matter (at least in their processors) will be the fastest. I
saw a blurb on TV the other day that scientists have developed the basic on/off
(1 or 0) switch at the atomic level. That trumps a slow-moving nervous system
(which today is still faster than a Pentium).

To repeat myself, without a robot possessing emotions, pain, and pleasure, I
can't think of a good reason for a robot to bother humans. It has nothing to
win or lose. There is no positive or negative payoff.

This means that the danger to humanity comes from humans who can merge central
processors with their cortexes and who require love, territory, food, power,
et cetera.

Tom Wood

Sep 5, 2001, 11:07:51 AM

> Robots are going to be able to move so much faster than humans that their
> ability to dominate homo sapiens is a foregone conclusion. Those that don't
> utilize organic matter (at least in their processors) will be the fastest.
>
> To repeat myself, without a robot possessing emotions, pain, and pleasure, I
> can't think of a good reason for a robot to bother humans. It has nothing to
> win or lose. There is no positive or negative payoff.
>
> This means that the danger to humanity comes from humans who can merge central
> processors with their cortexes and who require love, territory, food, power,
> et cetera.

Watch 'Ghost in the Shell'. A "self preserving program" is born in the sea
of information in the web, and argues that DNA is also just a self
preserving program. The main character, a cyborg, comes to doubt the
relevance of being human. Japanese anime, released in 1995. And then there
is The Forbin Project, that goes back to, what, the 1960's? This concern, of
artificial intelligence dominating, has been in sci-fi stories for a long
time. That it could become science fact in our lifetime is interesting, to
say the least.

Tom

Martin Kunert

Sep 5, 2001, 11:39:25 AM
Mr Helsing <mrhe...@aol.com> wrote in message
news:20010905103558...@mb-fy.aol.com...

> >> In that context Hawking makes perfect sense. If his values as a human are
> >> to prevail, he must be able to compete, and that means evolving with the
> >> help of computers.
>
> > I suspect that eventually computers/robots will be more organic in
> > functioning and we, as a species, will merge with them. To a very slight
> > degree it's already happening -- many people cannot "think" without
> > accessing knowledge stored not in their brains, but on laptops.
>
> Agreed and agreed. This is an interesting time to be alive.
>
> Robots are going to be able to move so much faster than humans that their
> ability to dominate homo sapiens is a foregone conclusion. Those that don't
> utilize organic matter (at least in their processors) will be the fastest. I
> saw a blurb on TV the other day that scientists have developed the basic
> on/off (1 or 0) switch at the atomic level. That trumps a slow-moving nervous
> system (which today is still faster than a Pentium).

I think what you're referring to are quantum computers. And they don't use
bits, but qubits. As you know, bits have two states - on or off (1 or 0).
Qubits have multiple states simultaneously, and can therefore do in one
stroke a multitude of operations that would take bits a whole series of
executions to perform.
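A rough way to picture that difference, as a toy sketch in Python (the
two-amplitude vector below merely simulates a single qubit for intuition;
it's an illustration, not real quantum hardware):

    import numpy as np

    bit = 0  # a classical bit is in exactly one of two states at any moment

    # A qubit is a pair of amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1;
    # measuring it yields 0 or 1 with those probabilities.
    qubit = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])  # equal superposition

    # One gate (here a Hadamard) acts on all amplitudes at once; with n qubits
    # the state vector holds 2**n amplitudes, hence "a multitude in one stroke".
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    qubit = H @ qubit

    print(np.abs(qubit) ** 2)  # [1. 0.] -- H maps this superposition back to |0>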

Pick up this month's issue of Wired. There's a whole article about quantum
computing.


Richard Milton

Sep 5, 2001, 1:19:37 PM
Mr Helsing wrote

>Robots are going to be able to move so much faster than humans that their
>ability to dominate homo sapiens is a foregone conclusion. Those that don't
>utilize organic matter (at least in their processors) will be the fastest. I
>saw a blurb on TV the other day that scientists have developed the basic on/off
>(1 or 0) switch at the atomic level. That trumps a slow-moving nervous
>system (which today is still faster than a Pentium).


I agree it's fun to speculate about stuff like robots but I have to
say I cannot see even the remotest evidence from the real
world that would even begin to suggest such a scenario.

The brainiest computer in the world and the most agile robot
in the world have less intelligence than a nematode worm and
don't even have the manual dexterity of a lobster. A chimp
or a dolphin -- or even a hamster -- has thousands of times the
intelligence and self-actualising ability of any artificial system
but cannot even begin to compete with us because they lack
opposable thumbs.

If you wired up every computer on the planet and put the
manufacturing facilities of Intel, Ford and Boeing at their
disposal, they would not be able to originate a toaster,
let alone anything that would bother us. Even if our current
computing and robot technology makes a dozen quantum
leaps, it will still not even be up to the level of the primordial
slime.

Richard
Mr Milton is on holiday at present. This message
was composed by his answering machine. Thank you.

Lars J. Aas

Sep 5, 2001, 1:32:39 PM
In article <NGrl7.24860$u47.365...@newssvr15.news.prodigy.com>,

Martin Kunert <n...@spam.com> wrote:
> Mr Helsing <mrhe...@aol.com> wrote in message
> news:20010905103558...@mb-fy.aol.com...
> > I saw a blurb on TV the other day that scientists have developed the basic
> > on/off (1 or 0) switch at the atomic level. That trumps a slow-moving
> > nervous system (which today is still faster than a Pentium).
>
> I think what you're referring to are quantum computers.

I doubt he was. You can't really make quantum computers work in the same
generic way as ordinary computers - the use of quantum computers is
actually quite limited (but they nevertheless have great potential in the
fields they can be used in). Anyway, there have been great breakthroughs
in traditional transistor research lately, which *I* think Mr. Helsing
was referring to.

I didn't find the article I was looking for, but here's something about
a three-atom-thick transistor developed by Intel researchers:

http://it.mycareer.com.au/breaking/20001211/A271-2000Dec11.html

and here's another link about a single-atom transistor in development:

http://www.eetimes.com/story/technology/OEG20010306S0061

Lars J
--
This is your life and it's ending one minute at a time.

AliasMoze

Sep 5, 2001, 1:34:17 PM
> I agree it's fun to speculate about stuff like robots but I have to
> say I cannot see even the remotest evidence from the real
> world that would even begin to suggest such a scenario.
>
> The brainiest computer in the world and the most agile robot
> in the world have less intelligence than a nematode worm and
> don't even have the manual dexterity of a lobster. A chimp
> or a dolphin -- or even a hamster -- has thousands of times the
> intelligence and self-actualising ability of any artificial system
> but cannot even begin to compete with us because they lack
> opposable thumbs.
>
> If you wired up every computer on the planet and put the
> manufacturing facilities of Intel, Ford and Boeing at their
> disposal, they would not be able to originate a toaster,
> let alone anything that would bother us. Even if our current
> computing and robot technology makes a dozen quantum
> leaps, it will still not even be up to the level of the primordial
> slime.

Very astute, Richard. Also, artificial intelligence, or any intelligence
not of the earthly cell, will likely not have the same needs and wants as
humans. Dominating the planet may not even be a concept they understand.


Adam Fulford

Sep 5, 2001, 2:02:24 PM

"AliasMoze" wrote...
> snip

> Dominating the planet may not even be a concept they understand.
>

That's the best comment I've seen yet about the superiority (moral etc.) of
computers vs humans.


Richard Milton

Sep 5, 2001, 4:15:09 PM
AliasMoze wrote in message ...

> Dominating the planet may not even be a concept they understand.


Shit -- they may not even want to get produced for
theatrical distribution. Dumb bastids.

Richard


derek

Sep 5, 2001, 8:22:11 PM
mrhe...@aol.com (Mr Helsing) wrote:

> Robots are going to be able to move so much faster than humans that their
> ability to dominate homo sapiens is a foregone conclusion.

A foregone conclusion? There are so many people spouting this that it may
become a self-fulfilling prophecy, but I've yet to see a valid explanation as to
*why* it's a foregone conclusion, and the fact that we may develop
super-intelligent sentient AI is not an explanation.

But I agree these are interesting times, and we can only roughly speculate about
any outcomes. Just throwing out some ideas here...

It occurs to me to wonder in what circumstances people imagine computers
will control their lives and in what manner. Specifically, in terms of human
reproduction and the continued evolution/development of the species, by what
manner of means will computer intelligence be empowered to determine *and*
enforce the direction that is taken? Under what circumstances would a society,
its scientists and politicians, agree to empower computerised machines - robots
if you like - to have authoritarian control over the human species? Is it
imagined as a process of attrition whereby human brains are progressively
integrated with chip-based computational processes which influence thoughts and
decision-making, or one where sentient robots become *so* intelligent they
subvert homo sapiens as a controlling species?

Why do people feel that intelligent robots would develop this particular
capacity? And especially so, why, when our history of inter-cultural contact
suggests that human predilection would be to design robots to operate in servile
roles as sex toys - non-human prostitutes - and slaves to perform menial labour,
fight wars, and clean toilets? If you were invited to a sales meeting and
offered the opportunity to beta-test two intelligent robots which looked and
felt and sounded and moved like people, but one was a clever, intellectually
invincible 'male' who would assume decision-making responsibility for you, and
the other was a Jennifer Lopez lookalike who would wear a skimpy maid outfit,
clean the house, dance naked and lie beneath a chap twice daily, which
would be more popular?

It seems that AI robots must be empowered with an extraordinary
capacity to control before they can represent the potential threat
attributed to them. Before that happens, they will be diagnosing illness and
performing surgical operations, performing all manner of teaching and
instructional tasks, menial and military tasks and so on. Is this possibly a
false fear based on frivolous assumptions, or is it a legitimate concern born
out of the apparent inevitability of stupid or shortsighted human decisions?

There are fundamental, crucial differences in the mode of operation between the
human brain and manufactured intelligence, and some of those differences will
exist for some time even if we develop 'organic' chips (or whatever they become)
which self-replicate and self-repair. The critical difference is that the human
brain has extraordinary plasticity and is a reactive and adaptive organ whose
primary purpose is to enable the organism - Man, the curious biped - to detect
and respond to environmental stimulus in a manner which ensures the greatest
likelihood of genetic replication. Interestingly, we have already reached a
stage in our 'development' where our intelligence and tool-making skills mean
that we can inadvertently or stupidly make changes to our immediate and global
environment which override or defeat the advantages accrued to genetic
replication, and it's a great irony that those deleterious decisions have at
their foundation the same imperative to replicate, i.e. they are not an aberrant
process which aims at self-destruction but are part of the same innate
responsiveness designed to ensure a genetic future.

There seems to be an assumption that the genetic imperative and the adaptive
responsiveness which determines Man's behaviour will be exhibited in any AI
which supersedes the intellectual processing power *and* sentience of Man, but I
don't see what reasoning makes this a valid assumption. There is something
vastly interesting about the process of accumulating complexity in the
closed-energy system of our solar system which enables a product of universal
energy and properties (mammalian life) to detect, respond to, conceptualise and
communicate the environmental circumstances of the universe. For me, it remains
a mystery we can barely contemplate, let alone begin to understand. It would
shatter many beliefs if the apparent 'purposefulness' of the genetic imperative
was no more than a remarkable coincidence of quantum mechanical properties, and
I wonder if the constrained human model and concept of what constitutes
'intelligence' might itself be the prime factor in limiting what we are able to
construct by way of Artificial Intelligence. Perhaps, then, only by
discovering - or perhaps facilitating - an intelligence that was responsive to
physical properties of the universe in ways we can not imagine, and which was
able to function in a conceptual framework beyond the realms of human
intelligence, could we bridge the gap that would empower human-designed AI in
ways that genuinely transcend the 'properties' of the human mind. Could that
happen? My guess is 'yes'. Can we imagine it? I don't think so; not yet.

The notion that 'they' could one day take us over requires either a series of
proactive decisions which put the technology in a position where it can do so,
or a process of attrition through integration in which a position of dependency
arises; further, it requires sentience of some sophistication to manipulate
circumstances to that end. So my question to those who describe this AI takeover
as inevitable is: through what models of AI integration is this so?

> Those that don't
> utilize organic matter (at least in their processors) will be the fastest. I
> saw a blurb on TV the other day that scientists have developed the basic
> on/off (1 or 0) switch at the atomic level. That trumps a slow-moving nervous
> system (which today is still faster than a Pentium).

> To repeat myself, without a robot possessing emotions, pain, and pleasure, I
> can't think of a good reason for a robot to bother humans. It has nothing to
> win or lose. There is no positive or negative pay off.

Hawking's recommendation is to (i) improve human intelligence with genetic
engineering to "raise the complexity of ... the DNA" and (ii) develop
technologies that make possible "a direct connection between brain and computer,
so that artificial brains contribute to human intelligence rather than opposing
it."

Hawking's perception of the acceleration of nonbiological intelligence is
essentially on target. It is not simply the exponential growth of computation
and communication that is behind it, but also our mastery of human intelligence
itself through the exponential advancement of brain reverse engineering.

Once our machines can master human powers of pattern recognition and cognition,
they will be in a position to combine these human talents with inherent
advantages that machines already possess: speed (contemporary electronic
circuits are already 100 million times faster than the electrochemical circuits
in our interneuronal connections), accuracy (a computer can remember billions of
facts accurately, whereas we're hard pressed to remember a handful of phone
numbers), and, most importantly, the ability to instantly share knowledge.
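A back-of-the-envelope check of that speed ratio, sketched in Python -- the
two rates below are rough assumptions (a ~200 Hz ceiling for neural firing,
a ~20 GHz switching rate for circa-2001 transistors), not measurements:

    # Order-of-magnitude check of the "100 million times faster" claim.
    neuron_switch_hz = 200     # assumed ballpark for neural firing/reset rate
    circuit_switch_hz = 2e10   # assumed switching rate for contemporary circuits

    print(f"{circuit_switch_hz / neuron_switch_hz:.0e}")  # 1e+08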

However, Hawking's recommendation to do genetic engineering on humans in order
to keep pace with AI is unrealistic. He appears to be talking about genetic
engineering through the birth cycle, which would be absurdly slow. By the time
the first genetically engineered generation grows up, the era of
beyond-human-level machines will be upon us.

Even if we were to apply genetic alterations to adult humans by introducing new
genetic information via gene therapy techniques (not something we've yet
mastered), it still won't have a chance to keep biological intelligence in the
lead. Genetic engineering (through either birth or adult gene therapy) is
inherently DNA-based and a DNA-based brain is always going to be extremely slow
and limited in capacity compared to the potential of an AI.

As I mentioned, electronics is already 100 million times faster than our
electrochemical circuits; we have no quick downloading ports on our biological
neurotransmitter levels, and so on. We could bioengineer smarter humans, but
this approach will not begin to keep pace with the exponential pace of
computers, particularly when brain reverse engineering is complete (within
thirty years from now).

The human genome is 800 million bytes, but if we eliminate the redundancies
(e.g., the sequence called "ALU" is repeated hundreds of thousands of times), we
are left with only about 23 million bytes, less than Microsoft Word. The limited
amount of information in the genome specifies stochastic wiring processes that
enable the brain to be millions of times more complex than the genome which
specifies it. The brain then uses self-organizing paradigms so that the greater
complexity represented by the brain ends up representing meaningful information.
However, the architecture of a DNA-specified brain is relatively fixed and
involves cumbersome electrochemical processes. Although there are design
improvements that could be made, there are profound limitations to the basic
architecture that no amount of tinkering will address.
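Those genome figures check out on the back of an envelope -- a sketch in
Python, taking the base-pair count and the 23-million-byte compressed size
as given:

    # ~3.2 billion base pairs; four letters (A, C, G, T) means 2 bits per base.
    base_pairs = 3.2e9
    raw_bytes = base_pairs * 2 / 8
    print(f"{raw_bytes / 1e6:.0f} million bytes")  # 800 million bytes

    compressed_bytes = 23e6  # after removing repeats such as ALU, per the text
    print(f"~{raw_bytes / compressed_bytes:.0f}x redundancy")  # ~35x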

As far as Hawking's second recommendation is concerned, namely direct connection
between the brain and computers, I agree that this is both reasonable, desirable
and inevitable. It's been my recommendation for years. I describe a number of
scenarios to accomplish this in my most recent book, The Age of Spiritual
Machines, and in the book précis "The Singularity is Near."

I recommend establishing the connection with noninvasive nanobots that
communicate wirelessly with our neurons. As I discuss in the précis, the
feasibility of communication between the electronic world and that of biological
neurons has already been demonstrated. There are a number of advantages to
extending human intelligence through the nanobot approach. They can be
introduced noninvasively (i.e., without surgery). The connections will not be
limited to one or a small number of positions in the brain. Rather, the nanobots
can communicate with neurons (and with each other) in a highly distributed
manner. They would be programmable, would all be on a wireless local area
network, and would be on the web.

They would provide many new capabilities, such as full-immersion virtual reality
involving all the senses. Most importantly, they will provide many trillions of
new interneuronal connections as well as intimate links to nonbiological forms
of cognition. Ultimately, our minds won't need to stay so small, limited as they
are today to a mere hundred trillion connections (extremely slow ones at that).

However, even this will only keep pace with the ongoing exponential growth of AI
for a couple of additional decades (to around mid-twenty-first century). As Hans
Moravec has pointed out, a hybrid biological-nonbiological brain will
ultimately be 99.999...% nonbiological, so the biological portion becomes pretty
trivial.

We should keep in mind, though, that all of this exponentially advancing
intelligence is derivative of biological human intelligence, derived ultimately
from the thinking reflected in our technology designs, as well as the design of
our own thinking. So it's the human-technology civilization taking the next step
in evolution. I don't agree with Hawking that "strong AI" is a fate to be
avoided. I do believe that we have the ability to shape this destiny to reflect
our human values, if only we could achieve a consensus on what those are.

regards,
derek

--
"I just thought I might mosey over to the war room for a few minutes and see
what's doin' over there."

derek

Sep 5, 2001, 8:26:39 PM
mrhe...@aol.com (Mr Helsing) wrote:

> Agreed and agreed. This is an interesting time to be alive.
>
> Robots are going to be able to move so much faster than humans that their
> ability to dominate homo sapiens is a foregone conclusion. Those that don't
> utilize organic matter (at least in their processors) will be the fastest. I
> saw a blurb on TV the other day that scientists have developed the basic
> on/off (1 or 0) switch at the atomic level. That trumps a slow-moving nervous system
> (which today is still faster than a Pentium).
>
> To repeat myself, without a robot possessing emotions, pain, and pleasure, I
> can't think of a good reason for a robot to bother humans. It has nothing to
> win or lose. There is no positive or negative pay off.
>
> This means that the danger to humanity comes from humans who can merge central
> processors with their cortexes and who require: love, territory, food, power,
> et cetera.

Sorry, in my reply, I clipped the attribution for the end part of my post, which
was Raymond Kurzweil's 30 Sep response to Hawking. My comments are those
preceding that.
regards,
derek

nmstevens

Sep 5, 2001, 8:23:07 PM
"Adam Fulford" <ad...@fulford.com> wrote in message news:<QMtl7.331$C57.1...@news1.telusplanet.net>...


I have to admit, I've never been even remotely worried about "machines
taking over."

Machines, however simple or complicated, are constructed to serve the
ends of those that build them. If there is any desire inherent in the
construction of a machine -- it is the desire of the builder, not the
machine. The speed and fuel efficiency and reliability of a car don't
serve the interests of the car. The car has no interests. It has no
desires. And even if you build a car, say, to avoid crashing, it isn't
acting out of any desire to preserve itself -- the desire is on the
part of the car's owner.

And the fact is -- nobody is going to build a machine that wants
something that's at odds with what the builder wants.

I don't worry about "machine" intelligence in opposition to human
intelligence, because I believe, firmly, that within the next hundred
years, there isn't going to be any determinable distinction. We're
already engaged in experiments to connect up the human nervous system
directly to machines. Inevitably, such connections must run both ways
-- that is, our brains communicate our desires to a machine, and the
machine sends data, probably sense data to begin with - back to the
brain. Eventually we'll reach the point where we will be able to
access data not simply by way of our senses, but directly, with our
brains. Access not only to computer memories and data, but to the
computers' capabilities. Need to know the cube root of 865,321 to the
fortieth digit? Just think it -- and some hardware someplace will
immediately deliver the answer. Need to know the capital of Oklahoma?
Access your data base -- you've got the answer. And, of course, if
one's brain has two way access to a computer, then it would also,
inevitably, have access to everybody else who has two-way access to
the same computer. You'll have telepathy. Download a thought to
somebody else's head. Or upload. Gives a whole new meaning to hacking.
Afraid of forgetting something -- download your sensory input directly
to some permanent storage medium. No more arguments about who said
what.
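That particular lookup is already trivial for silicon, as it happens -- a
throwaway sketch in Python, with the precision chosen to match the question:

    from decimal import Decimal, getcontext

    getcontext().prec = 42  # two guard digits beyond the forty we want

    # Cube root of 865,321 via x**(1/3) = exp(ln(x)/3), to ~40 digits.
    x = Decimal(865321)
    print((x.ln() / 3).exp())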

And so, ultimately, the answer to the question of whether or not
there'll be intelligent machines is yes -- but we'll be
them.

NMS

Gary Pollard

Sep 5, 2001, 8:40:14 PM
"derek" <der...@xtra.co.nz> wrote in message
news:49zl7.611$WO4....@news.xtra.co.nz...

> It occurs to me to wonder in what circumstances people imagine computers
> will control their lives and in what manner.

Somehow I imagine that the first time someone used a wheel, or invented a
pair of spectacles, the concept that humankind was on the road to ruin came
up.

Gary

D C Harris

Sep 5, 2001, 9:50:12 PM

----------
In article <9n08qj$n3...@imsp212.netvigator.com>, "Gary Pollard"
<gpono...@netnovigator.com> wrote:


>
> There's a saying in China that goes something like "Stick your head up above
> the crowd and it will get chopped off". Most of us have only a fraction of
> Hawking's intelligence. Some of us are still however not so dumb as to
> attack him for being so much brighter. If I believed in God unreservedly,
> I'd say Hawking was put here to help the Universe explain itself. His mind
> seems to travel in pathways the rest of us don't even begin to follow.
>
> Gary
>

Most people love a boffin; we all, generally, have a need for a 'father
figure' who we can trust to put things right when they are wrong.

Books like "A Brief History of Time" help. No one can understand such works,
so we can read into them what we like.

Has Hawking said a single thing that has advanced the happiness or comfort
of the world at large?

You don't have to do that of course if you are a true boffin. A boffin
simply needs to be there, like Einstein - a wizard to the world at large.

That most of these people are as daft as brushes does not matter - it makes
them loveable. After all, a really clever boffin could seem a bit too
much for most people's appetites.


Steven J. Weller

Sep 6, 2001, 3:45:39 AM
In article <a8f80314.01090...@posting.google.com>
nmst...@msn.com (nmstevens) writes:

>I have to admit, I've never been even remotely worried about "machines
>taking over."

(snip)

>And the fact is -- nobody is going to build a machine that wants
>something that's at odds with what the builder wants.

There's the flaw in the reasoning, Neal. The question being examined
is precisely what happens when machines achieve a form of sentience -
when they become sufficiently complex to exceed their initial design
parameters, all on their own.

Sentience, or self-awareness, might involve an innate desire to
continue to exist. Might not, too, but if it does, then a sentient
machine might see the mere existence of human beings as a threat to its
survival. Certainly it could ascertain that humans have never been too
shy about shutting down or destroying machines that failed to serve
human interests in the past.

>I don't worry about "machine" intelligence in opposition to human
>intelligence, because I believe, firmly, that within the next hundred
>years, there isn't going to be any determinable distinction. We're
>already engaged in experiments to connect up the human nervous system
>directly to machines.

But these aren't the only experiments involving AI; there's also a
substantial body of work trying to create or replicate human thought
processes and (what at least passes for) sentience, completely free of
wetware. If a software/hardware-based system is developed that can
replicate organic sentience, this would constitute a potential threat
to humanity, to the extent that this consciousness (or
pseudo-consciousness) could take action to defend itself from forces
that would or could prevent it from continuing to exist. I know this
is the basis for a lot of SF material, but considering the extent to
which we're using interconnected computer systems to run things
already, would you really want a form of AI with a built-in modem
having access to even just the internet?

--
Life Continues, Despite
Evidence to the Contrary

Steven

Mr Helsing

Sep 6, 2001, 9:57:33 AM
From: "Tom Wood" tomw...@flash.net

Watch 'Ghost in the Shell'. A "self preserving program" is born in the sea of
information in the web, and argues that DNA is also just a self preserving
program. The main character, a cyborg, comes to doubt the relevance of being
human. Japanese anime, released in 1995. And then there is The Forbin Project,
that goes back to, what, the 1960's? This concern, of artificial intelligence
dominating, has been in sci-fi stories for a long time. That it could become
science fact in our lifetime is interesting, to say the least.

*************************************************

Thanks for the suggestions. I've filed them and will keep them for a time when
I can do robot research.

>> a cyborg, comes to doubt the relevance of being human.

My truck with these kinds of stories is that though robots will be capable of
obliterating humanity from the universe, a motive for them to do so doesn't seem
plausible. The Singularity web site refers to the difference between robots and
humans in the not-too-distant future as greater than the difference between
humans and fish.

Having the existence of humanity depend on the philosophical waxings of a
cyborg might be a great read and an interesting exploration of human issues.
Yet the concept doesn't quite hold water for me - which isn't to say that I
wouldn't be capable of suspending disbelief long enough to love the story.

It seems to me that once we unleash robots, mankind should try to use them
to explore the universe, meaning to do the kinds of exploration that right now
only geniuses like Einstein and Hawking are capable of. We may learn to
release robotic versions that we can somehow stay connected to enough to mine
them for information relevant to unlocking the mysteries of the universe.

Mr Helsing

Sep 6, 2001, 10:02:37 AM
>>From: "Martin Kunert" n...@spam.com

>>I think what you're referring to are quantum computers. And they don't use
bits, but qubits. As you know, bits have two states - on or off (1 or 0).
Qubits have multiple states simultaneously, and can therefore do in one stroke
a multitude of operations that would take bits a whole series of executions to
perform.

>>Pick up this month's issue of Wired. There's a whole article about quantum
computing.


I don't know what I'm referring to. Nowadays, you take in so much information,
and it doesn't scratch the surface.

I'll check out the Wired article. Thanks.

Mr Helsing

Sep 6, 2001, 10:14:34 AM
>> I think what your referring to are quantum computers.

>I doubt he was.

You're right. I may not know what I'm talking about. I am certain that
scientists have successfully used organic matter as an on/off computer switch -
and it is much faster than our current technology.

However, I am not absolutely certain of what I heard - probably on the
nightly news or the Discovery Channel - regarding the atom transistor some time
in the last 6 months.

Thanks for the web sites.

Mr Helsing

Sep 6, 2001, 10:19:02 AM
>>I agree it's fun to speculate about stuff like robots but I have to say I
cannot see even the remotest evidence from the real world that would even begin
to suggest such a scenario.


True, it doesn't exist yet.

The Singularity web site that I originally posted might interest you. The idea
is that once computers cross the threshold of being able to think, the pace at
which they can self-develop goes off the charts.

Mr Helsing

Sep 6, 2001, 10:20:57 AM
>> Dominating the planet may not even be a concept they understand.


I think they will understand it. I just have no idea why it would have any
meaning to them. What's their motive? If humans are not much more than
mosquitoes, why bother?

Mr Helsing

Sep 6, 2001, 10:23:04 AM
>>And the fact is -- nobody is going to build a machine that wants something
that's at odds with what the builder wants.


Check out the Singularity web site. What robots can do has the potential to
spin way out of control very quickly.

Tom Wood

Sep 6, 2001, 10:26:53 AM
> >> a cyborg, comes to doubt the relevance of being human.
>
> My truck with these kinds of stories is that though robots will be capable of
> obliterating humanity from the universe, a motive for them to do so doesn't
> seem plausible.

I'm not totally up on the fine distinctions of the terminology, but it is my
understanding that a "cyborg" is a human that has been enhanced by
non-organic parts. Metallic skeletons, computer-enhanced brains, that sort
of thing. So a cyborg would still be subject to the same motives that any
human might experience; they would just be able to act upon those motives
with greater ability. And therein lies the story....

Tom


Mr Helsing

Sep 6, 2001, 10:31:18 AM
>>Most people love a boffin, ...

I had to look up the word "boffin." It sounded like something guys like to do
with women whenever they get the chance: "Yeah, I gave her a good boffin." So
I was pleased to discover its real meaning.

Jon Green

Sep 6, 2001, 12:50:17 PM
On Thu, 06 Sep 2001 14:26:53 GMT, "Tom Wood" <tomw...@flash.net> wrote:

> I'm not totally up on the fine distinctions of the terminology, but it is my
> understanding that a "cyborg" is a human that has been enhanced by
> non-organic parts.

Not quite. It's a contraction of "cybernetic organism", and refers to
any organism enhanced by computing technology (*). Doesn't need to be
human: in fact, if you gave your three-legged dog a self-powered "smart"
prosthetic leg, it'd be a cyborg.

To clear up the other words:

Android: human-shaped but artificial.

Robot: a self-propelled artificial device.


Jon
(*) And that's an abuse of the word "cybernetic" which really means,
"Related to communications technology".

nmstevens

Sep 6, 2001, 2:04:12 PM
az...@lafn.org (Steven J. Weller) wrote in message news:<9n79j3$v27$1...@zook.lafn.org>...

> In article <a8f80314.01090...@posting.google.com>
> nmst...@msn.com (nmstevens) writes:
>
> >I have to admit, I've never been even remotely worried about "machines
> >taking over."
>
> (snip)
>
> >And the fact is -- nobody is going to build a machine that wants
> >something that's at odds with what the builder wants.
>
> There's the flaw in the reasoning, Neal. The question being examined
> is precisely what happens when machines achieve a form of sentience -
> when they become sufficiently complex to exceed their initial design
> parameters, all on their own.
>
> Sentience, or self-awareness, might involve an innate desire to
> continue to exist. Might not, too, but if it does, then a sentient
> machine might see the mere existence of human beings as a threat to its
> survival. Certainly it could ascertain that humans have never been too
> shy about shutting down or destroying machines that failed to serve
> human interests in the past.

That an entity has knowledge of its own existence, or has the capacity
to reason, doesn't have anything to do with a desire to survive.

There is no course of reason that leads to the conclusion that
existence is better than non-existence. We feel this way because we
are the outcome of countless generations where those that were
indifferent to their survival had a much higher chance of ending up
dead than those that desired and acted to preserve it.

Our desire to survive is partly instinct, and partly the inevitable
learned aversion to pain and the body-injuring things that are likely
to cause it.

We want to survive not as a side effect of sentience, but because we
are built that way.

I think that if a problem ever comes from machine intelligence, it
wouldn't necessarily come from human-level intelligence. Build a few
hundred thousand things around the size of a chipmunk and the
intelligence of an ant. See what a glitch in their programming would
do.


>
> >I don't worry about "machine" intelligence in opposition to human
> >intelligence, because I believe, firmly, that within the next hundred
> >years, there isn't going to be any determinable distinction. We're
> >already engaged in experiments to connect up the human nervous system
> >directly to machines.
>
> But these aren't the only experiments involving AI; there's also a
> substantial body of work trying to create or replicate human thought
> processes and (what at least passes for) sentience, completely free of
> wetware.

So far as I can tell, AI research is currently around a million miles
away from coming up with something that's as smart, even, as a mouse.
This research seems to me comparable to the Wright Brothers trying to
make an aircraft, prior to even being able to figure out how to get one
off the ground, never mind travel faster than sound.

> If a software/hardware-based system is developed that can
> replicate organic sentience, this would constitute a potential threat
> to humanity, to the extent that this consciousness (or
> pseudo-consciousness) could take action to defend itself from forces
> that would or could prevent it from continuing to exist.

I guess the point still is -- why would it? I can understand, if I
have an expensive robot, that I'd program it to defend itself against
unauthorized users. But that has nothing to do with what the robot
wants. It has to do with what I want. I certainly wouldn't design it
to defend itself against me.

Desire of any kind, per se, emerges from biology, not from sentience,
or from reason. To the extent that we'll be able to program a machine
to desire something -- unless we are simply nuts, it will inevitably
desire not what "it" wants -- but what we want.

"George, we've decided that we can't use you any more. So we're going
to destroy you and sell off what's left."

"Oh. Okay."

Not too likely a conversation if George is a person. But there's no
reason why the conversation shouldn't proceed that way if George is a
machine, intelligent or not.

Just because I'm smart doesn't mean that I have any desires -- for
continued existence, dominion, safety, control over my world, or
anything else.

> I know this
> is the basis for a lot of SF material, but considering the extent to
> which we're using interconnected computer systems to run things
> already, would you really want a form of AI with a built-in modem
> having access to even just the internet?

Again -- it really doesn't worry me, in the sense that you suggest --
namely that such a program might somehow spontaneously become
smart, in pretty much just the way a person is smart, and will start
wanting the things that a person wants.

A smart machine is just another tool for a person. I worry about
computers getting out of control the same way I worry about cars
getting out of control. I don't worry about my car suddenly deciding
that it doesn't want to take me out in the snow because it might get
rusty and heading off instead to the car wash for a detailing job.

And I wouldn't worry about that no matter how smart my car was --
because nobody would ever build into an intelligent car, a will that
would have independent desires in opposition to its owner. And if you
don't build them in -- it won't have them.

NMS

Tom Wood

Sep 6, 2001, 2:34:25 PM
> > >And the fact is -- nobody is going to build a machine that wants
> > >something that's at odds with what the builder wants.

That is not a "fact" at all. That's the entire discussion. Most theological
systems assert that humans "want" things that their "builder" (God) did not.
(And for a price, you can buy back into God's good graces, but that's
another discussion.) It's an acknowledgement that we humans are complex
beings.

> > Sentience, or self-awareness, might involve an innate desire to
> > continue to exist.

Well, duh. Any self-aware "ness" will "want" to continue to exist.

> That an entity has knowledge of its own existence, or has the capacity
> to reason, doesn't have anything to do with a desire to survive.

This makes no sense. An entity that has knowledge of its own existence won't
want to survive?

> There is no course of reason that leads to the conclusion that
> existence is better than non-existence.

Read that again. How can a non-existent entity reason itself toward any
conclusion at all?

> We feel this way because we
> are the outcome of countless generations where those that were
> indifferent to their survival had a much higher chance of ending up
> dead than those that desired and acted to preserve it.
>
> Our desire to survive is partly instinct, and partly the inevitable
> learned aversion to pain and the body injuring things that are likely
> to cause it.
>
> We want to surive not as a side effect of sentience, but because we
> are built that way.

Well, duh again. We are built that way? Our DNA demands self-preservation.

> I guess the point still is -- why would it? I can understand, if I
> have an expensive robot, that I'd program it to defend itself against
> unauthorized users. But that has nothing to do with what the robot
> wants. It has to do with what I want. I certainly wouldn't design it
> to defend itself against me.

Again, so? An artificial intelligence would have no restraints.

> Desire of any kind, per se, emerges from biology, not from sentience,
> or from reason. To the extent that we'll be able to program a machine
> to desire something -- unless we are simply nuts, it will inevitably
> desire not what "it" wants -- but what we want.

Nonsense. DNA does "desire" self-preservation (and that's biological) but
many desires of humanity spring from sentience and reason (although even
those are frequently based in self-preservation).

Richard Milton

Sep 6, 2001, 2:37:13 PM
Mr Helsing wrote

>The Singularity web site that I originally posted might interest you. The idea
>is that once computers cross the threshold of being able to think, the pace at
>which they can self-develop goes off the charts.


I don't have any problem accepting this idea -- I've
experimented with building intelligent machines myself.
What I have a problem with is how such intelligences
would express that intelligence. Humans are (quite
possibly) not very intelligent. But we are able to make
tools. That is why we dominate the planet. Every
Cray machine on the planet wired together still
can't make a cup of coffee.

Richard


Richard Milton

Sep 6, 2001, 2:33:08 PM
Steven J. Weller wrote

> If a software/hardware-based system is developed that can
>replicate organic sentience, this would constitute a potential threat
>to humanity, to the extent that this consciousness (or
>pseudo-consciousness) could take action to defend itself from forces
>that would or could prevent it from continuing to exist.

How exactly would it "take action"? That is the
issue being discussed.

Richard


slbarger

Sep 6, 2001, 4:24:04 PM

"Steven J. Weller" wrote:

> But these aren't the only experiments involving AI; there's also a
> substantial body of work trying to create or replicate human thought
> processes and (what at least passes for) sentience, completely free of
> wetware. If a software/hardware-based system is developed that can
> replicate organic sentience, this would constitute a potential threat
> to humanity, to the extent that this consciousness (or
> pseudo-consciousness) could take action to defend itself from forces
> that would or could prevent it from continuing to exist. I know this
> is the basis for a lot of SF material, but considering the extent to
> which we're using interconnected computer systems to run things
> already, would you really want a form of AI with a built-in modem
> having access to even just the internet?

I recall hearing about the development of organic microchips. Anybody
heard about that?


Steven J. Weller

Sep 6, 2001, 4:54:07 PM

This, of course, frames one side of the argument.

The other side is, if the machine is sufficiently complex, it will be
able to exceed its design limitations. It _will_ have things that you
didn't build into it. One practical definition of intelligence is the
ability to use abstract thought to come to non-obvious conclusions. An
intelligent (or pseudo-intelligent) machine might come to the conclusion
that existence is better than non-existence.

Mr Helsing

Sep 6, 2001, 9:00:18 PM
>>So a cyborg would still be subject to the same motives that any human might
experience; they would just be able to act upon those motives with greater
ability. And therein lies the story....


Agreed. Cyborgs could also unleash robots, so that we still end up against the
Terminator.

Mr Helsing

Sep 6, 2001, 9:13:41 PM
>>Every Cray machine on the planet wired together still can't make a cup of
coffee.

True, but a lot of scientists are working on it. It is only a matter of time.
The amount of cash to be made from machines that can think is phenomenal.

I believe it was the Singularity article (but I could be wrong) that suggests
downloading human consciousness to computers. Again, right now that is fiction,
but the science is coming - and I don't think that it will take anywhere near a
century to accomplish. The idea is that scientists would not have to work out
how to teach the robot to think. At the point that the robot can reason, its
abilities would almost immediately surpass anything organic humans could ever
catch up to.

Douglas...@newman.com

Sep 6, 2001, 9:53:42 PM

He who controls the electricity controls the world of robots.

If they're electro-mechanical we turn off the juice. Cut the power.

If they're electro-biological we turn up the juice till it fries their
innards.

That is, unless their nanotechnology prevents this...

Doug

Steven J. Weller

Sep 7, 2001, 3:25:16 AM
In article <5vQl7.11791$592.8...@news2-win.server.ntlworld.com>
"Richard Milton" <richard...@virgin.net> writes:

Actually, that's a new element to the current discussion, but a valid
one.

Computers already control a lot of mechanical things, from automobile
assembly lines to heart-lung machines. Does that mean that a robotic
welding arm is going to get up and walk out of General Motors and start
wreaking havoc? Hardly. But how far away are we from the kind of
scenario in WarGames, where a computer-controlled missile defense system
could be hijacked - not by outside forces - but by its own operating
system?

BrickRage

Sep 7, 2001, 3:46:19 AM

From: az...@lafn.org (Steven J. Weller)

> Does that mean that a robotic
>welding arm is going to get up and walk out of General Motors and start
>wreaking havoc?

Damnit Weller! You've just revealed my latest film concept.

Years ago I had a script called "Bumper Cars From Hell" where amusement park
rides ran amok and wreaked havoc because the Theme Park was built on the site
of a massacre of an Okie Voodoo Cult who went West during the Depression and the
only way to stop the mayhem was to unfreeze Walt Disney's body and re-animate him.

Some producer fellow said "Why not make it dinosaurs?" I said "That's dumb."

Robotic welding arms run amok, huh? Not bad.

Nesci

"You live in an age when people would package and standardize your life for you
- steal it from you and sell it back to you at a price. That price is very
high." -- Granny D.

The FAQ for m.w.s is http://www.communicator.com/faqs.html

Lars J. Aas

Sep 7, 2001, 5:32:01 AM
Mr Helsing <mrhe...@aol.com> wrote:
> I believe it was the Singularity article (but I could be wrong) that
> suggests downloading human consciousness to computers. Again, right
> now that is fiction, but the science is coming

I always knew that the Max Headroom series was way ahead of its time :).
However, I don't understand why people would want to download their brain
to a computer - why people would want to make a clone of themselves for
other purposes than having spare parts. You might be able to copy your
consciousness onto something else, but since it won't be transferable (I
can't imagine it would), you still don't gain anything - the new "copy"
will be out of your control, living its own life. It's not the ticket to
immortality. Prolonged life is what matters egotistically, not mere
"reproduction".

Lars J
--
This is your life and it's ending one minute at a time.

Richard Milton

unread,
Sep 6, 2001, 4:32:22 PM9/6/01
to
slbarger wrote

>I recall hearing about the development of organic microchips. Anybody
>heard about that?


Intel and others have been making them for a long
time (at least 10 years). They have also been making
(silicon-only) neural network chips for a long time
(many are being used in routine applications such
as scanning blood samples to detect sickle-cell
anaemia and 'sniffing' wine). This is all very exciting
and encouraging -- but no-one seems to be giving
much thought to the important part -- the delivery
mechanism -- ie a body. There has been some
interesting and welcome progress in legs and hands
and in balancing. But none of this is even at the level
of an ant. Without a working nanotechnology it's hard
to see how artificial lifeforms could threaten a goldfish
much less humankind.
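
Just to make concrete what such a neural-net chip is doing, here is a
toy single-neuron classifier in Python -- a minimal sketch only. The
training rule is the classic perceptron; the "blood sample" features
and numbers below are invented for illustration, not real assay data.

# Minimal perceptron. All sample data here is made up.
def perceptron(features, weights, bias):
    # Weighted sum of inputs, squashed to a yes/no decision.
    s = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if s > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - perceptron(x, weights, bias)
            weights = [w + lr * err * f for w, f in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Two made-up "cell shape" measurements per sample; 1 = abnormal.
samples = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]
labels = [1, 1, 0, 0]
w, b = train(samples, labels)
print(perceptron([0.85, 0.25], w, b))  # -> 1: sample flagged

The chips do essentially this in parallel silicon. The point stands:
the "intelligence" is a weighted sum, not understanding.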

It's a sad fact that George Dubya Bush is millions of times
more intelligent and more capable than the most advanced AI system.

Richard


Gene Harris

unread,
Sep 7, 2001, 6:28:17 AM9/7/01
to
BrickRage wrote:

>Years ago I had a script called "Bumper Cars From Hell" where amusement
>park rides ran amuck and wreaked havoc because the Theme Park was built
>on the site of a massacre of an Okie Voodo Cult who went West during the
>Depression and the only to stop the mayhem was to unfreeze Walt Disney's
>body and re-animate him.

And then what happens? Old Walt corrals the marauding rides and builds a
new park? Was this a documentary?

>Some producer fellow said "Why not make it dinosaurs?" I said "That's
>dumb."

Now that's *really* funny.

That *is* a joke, right?


Gene

slbarger

unread,
Sep 7, 2001, 8:34:37 AM9/7/01
to

Richard Milton wrote:

> It's a sad fact that George Dubya Bush is millions of times
> more intelligent and more capable than the most advanced AI system.
>
> Richard

Intel has been building these chips? The kind that I've heard about are
the kind they "grow"? Aside from being a cleaner process, it also
consumes less energy to manufacture.

"W" is more intelligent than an AI system, and an AI system wouldn't have
such hot daughters - in three dimensions, that is.

nmstevens

unread,
Sep 7, 2001, 9:27:31 AM9/7/01
to
"Tom Wood" <tomw...@flash.net> wrote in message news:<RkPl7.2161$YG6.51...@newssvr16.news.prodigy.com>...

> > > >And the fact is -- nobody is going to build a machine that wants
> > > >something that's at odds with what the builder wants.
>
> That is not a "fact" at all. That's the entire discussion. Most theological
> systems assert that humans "want" things that their "builder" (God) did not.
> (And for a price, you can buy back into God's good graces, but that's
> another discussion.) It's an acknowledgement that we humans are complex
> beings.

Well, that argument is really more an excuse for the presence of evil
in the face of a purportedly omni-benevolent god -- and an excuse that
has never held water in any serious discussion of the topic. It's
simply "good enough" for people who are predisposed to believe things
that make them feel good, but don't necessarily make sense.

>
> > > Sentience, or self-awareness, might involve an innate desire to
> > > continue to exist.
>
> Well, duh. Any self-aware "ness" will "want" to continue to exist.

Really? So far as I know, everybody who ever committed suicide
possessed self-awareness and not only lacked the desire to continue to
exist, but actively wanted not to continue to exist.

We tend to be surrounded by beings who, under most circumstances, wish
to continue to exist, not because it is innate to self-awareness, but
because those who are either indifferent or hostile to their own
existence tend not to stick around. So we are the outcome of millions
of generations of beings who acted, either instinctively or
consciously, to preserve their existence. But the conscious desire to
survive postdates the instinctive desire to survive. Even insects, who
can't be said to "want" anything as human beings understand the term,
act instinctively (and insects, so far as I can tell, have nothing
else) to escape from danger.

And yet I wouldn't necessarily say that an insect has self-awareness,
any more than I would say that phototropic bacteria move toward the
light because they "want" to live.


>
> > That an entity has knowledge of its own existence, or has the capacity
> > to reason, doesn't have anything to do with a desire to survive.
>
> This makes no sense. An entity that has knowledge of its own existence won't
> want to survive?

See above. Everybody who ever put a gun to his head and pulled the
trigger had knowledge of his own existence. One clearly doesn't
necessarily engender the other.


>
> > There is no course of reason that leads to the conclusion that
> > existence is better than non-existence.
>
> Read that again. How can a non-existent entity reason itself toward any
> conclusion at all?

Okay. For a being that can reason, there is no course of reason that
leads to the conclusion that existence is better than non-existence.

If there is such a course of reason, let's hear it. You simply present
it as "obviously true" -- when, in fact, it's no such thing.


>
> We feel this way because we
> > are the outcome of countless generations where those that were
> > indifferent to their survival had a much higher chance of ending up
> > dead than those that desired and acted to preserve it.
> >
> > Our desire to survive is partly instinct, and partly the inevitable
> > > learned aversion to pain and the body-injuring things that are likely
> > to cause it.
> >
> > > We want to survive not as a side effect of sentience, but because we
> > are built that way.
>
> Well, duh again. We are built that way? Our DNA demands self-preservation.

Our DNA codes for patterns of behavior that increase the likelihood of
our surviving and reproducing because, on the whole, beings that lack
those patterns of behavior tend not to pass their DNA on.

That is, in a sense, how all living things are built. I know of no
machine, of any kind, that is built that way -- built to preserve
itself, for its own interests, as distinct from the interests of its
maker or owner.

>
> > I guess the point still is -- why would it? I can understand, if I
> > have an expensive robot, that I'd program it to defend itself against
> > unauthorized users. But that has nothing to do with what the robot
> > wants. It has to do with what I want. I certainly wouldn't design it
> > to defend itself against me.
>
> Again, so? An artificial intelligence would have no restraints.

Well, anything that has less than limitless capacities has restraints
-- it's restrained by what it can do and can't. Why wouldn't an
artificial intelligence have exactly and precisely the restraints that
its maker endows it with? Stories about runaway computers and killer
robots are a science fiction staple. But so are the laws of robotics.

>
> > Desire of any kind, per se, emerges from biology, not from sentience,
> > or from reason. To the extent that we'll be able to program a machine
> > to desire something -- unless we are simply nuts, it will inevitably
> > desire not what "it" wants -- but what we want.
>
> Nonsense. DNA does "desire" self-preservation (and that's biological) but
> many desires of humanity spring from sentience and reason (although even
> those are frequently based in self-preservation).

DNA no more "desires" to survive than the rocks at the top of an
overhanging cliff "desire" to be in the valley below, albeit that's
where they're likely, eventually, to end up. To speak of DNA desiring
something is like speaking about hydrogen and oxygen "wanting" to
merge and become water, or a diamond's hardness expressing a desire on
the part of the diamond to survive - because it's so hard. A desire
suggests a goal imagined but unachieved. Elements and objects and
chemicals have behaviors, but not desires. Unless you're going to
simply offer up a completely personal definition of "desire."

And the things that people desire derive from the way in which we've
evolved -- and from the way in which we've been conditioned by our
environment. Neither is self-evidently an expression of the intent of
a maker. Unlike the machines that we build to achieve some other end,
it isn't at all clear that we were "built" to achieve some particular
end, not even to preserve or propagate ourselves. That we do this is a
reflection of forces that, themselves, possess no desires, nor have
any goals. The reason that the rocks at the top of the overhanging
cliff are likely to end up in the valley below is not because they
"desire" anything, but because the arrangment of overhanging cliff is
less likely to endure than that of "rocks sitting at the bottom of the
cliff." The reason that we are not surrounded by things that are
indifferent or hostile to their own survival is because that
arrangement is less likely to endure than one that consists of "things
that act to preserve their survival."

And, on the whole, more durable arrangements tend to proliferate.

We build machines to help us achieve our goals. Not theirs. To the
extent that one endows a machine with "desire" -- since it is made to
satisfy our desires, the extent to which we would give it the capacity
to "desire" -- its desires would, reasonably enough, be ours. Not
"its."

NMS

Tom Wood

unread,
Sep 7, 2001, 12:21:16 PM9/7/01
to
To be, or not to be: that is the question. (With apologies to the Bard)

> > > > >And the fact is -- nobody is going to build a machine that wants
> > > > >something that's at odds with what the builder wants.
> >
> > That is not a "fact" at all. That's the entire discussion. Most
> > theological systems assert that humans "want" things that their
> > "builder" (God) did not. (And for a price, you can buy back into
> > God's good graces, but that's another discussion.) It's an
> > acknowledgement that we humans are complex beings.
>
> Well, that argument is really more an excuse for the presence of evil
> in the face of a purportedly omni-benevolent god -- and an excuse that
> has never held water in any serious discussion of the topic. It's
> simply "good enough" for people who are predisposed to believe things
> that make them feel good, but don't necessarily make sense.

Agreed. And even Shakespeare deferred to a higher power in Hamlet when he
wrote:
"Or that the Everlasting had not fixed His canon 'gainst self-slaughter!"
(Act 1, Scene 2)

Which is where I'm going to end up here.

> > > > Sentience, or self-awareness, might involve an innate desire to
> > > > continue to exist.
> >
> > Well, duh. Any self-aware "ness" will "want" to continue to exist.
>
> Really? So far as I know, everybody who every committed suicide
> possessed self-awareness and not only lacked the desire to continue to
> exist, but actively wanted not to continue to exist.
>
> We tend to be surrounded by beings who, under most circumstances, wish
> to continue to exist, not because it is innate to self-awareness, but
> because those who are either indifferent or hostile to their own
> existence tend not to stick around. So we are the outcome of millions
> of generations of beings who acted, either instinctively or
> consciously, to preserve their existence. But the conscious desire to
> survive postdates the instinctive desire to survive. Even insects, who
> can't be said to "want" anything as human beings understand the term,
> act instinctively (and insects, so far as I can tell, have nothing
> else) to escape from danger.
>
> And yet I wouldn't necessarily say that an insect has self-awareness,
> any more than I would say that phototropic bacteria move toward the
> light because they "want" to live.

The 'conscious' will to survive is symbolically represented by that ape in
2001 that picks up a club. Prior to that, instinct drove survival. (Instinct
that is born of our DNA; it has to come from somewhere.) Now we engage
in artful forms of warfare. But it's still evolution at work. If we create a
machine that can go do our warfare for us, so much the better. And a machine
that can think on its own will be better suited to attack and destroy those
who can also think and adapt. So creating a machine that can think and
adapt will be more successful at furthering our own desires, unless it
begins to think and adapt on its own.


> >
> > > That an entity has knowledge of its own existence, or has the capacity
> > > to reason, doesn't have anything to do with a desire to survive.
> >
> > This makes no sense. An entity that has knowledge of its own
> > existence won't want to survive?
>
> See above. Everybody who ever put a gun to his head and pulled the
> trigger had knowledge of his own existence. One clearly doesn't
> necessarily engender the other.

Well, a suicide isn't really the "norm," is it? But when any of us considers
suicide (and studies say we all do), doesn't it usually go along the lines of
either "They will miss me when I'm gone," which is another way of ensuring
existence if only in the minds of others, or "I'm not fit to survive, so
I'll leave so others will have more room," which is still an instinct for
existence, but of the whole rather than a part (of the human race)? So even
in suicide, existence is preserved.

> > > There is no course of reason that leads to the conclusion that
> > > existence is better than non-existence.
> >
> > Read that again. How can a non-existent entity reason itself toward any
> > conclusion at all?
>
> Okay. For a being that can reason, there is no course of reason that
> leads to the conclusion that existence is better than non-existence.
>
> If there is such a course of reason, let's hear it. You simply present
> it as "obviously true" -- when, in fact, it's no such thing.

Why do I get the feeling that I'm being suckered into this argument by a
Philosophy Major? There are big thick books written by greater minds than
mine that argue this back and forth. If you are asking for a bulletproof "If
A=B and B=C, then A=C" type of reasoning, then no, I can't provide one.

> > We feel this way because we
> > > are the outcome of countless generations where those that were
> > > indifferent to their survival had a much higher chance of ending up
> > > dead than those that desired and acted to preserve it.
> > >
> > > Our desire to survive is partly instinct, and partly the inevitable
> > > learned aversion to pain and the body-injuring things that are likely
> > > to cause it.
> > >
> > > We want to survive not as a side effect of sentience, but because we
> > > are built that way.
> >
> > Well, duh again. We are built that way? Our DNA demands
> > self-preservation.
>
> Our DNA codes for patterns of behavior that increase the likelihood of
> our surviving and reproducing because, on the whole, beings that lack
> those patterns of behavior tend not to pass their DNA on.
>
> That is, in a sense, how all living things are built. I know of no
> machine, of any kind, that is built that way -- built to preserve
> itself, for its own interests, as distinct from the interests of its
> maker or owner.

No, not yet there isn't.

> > > I guess the point still is -- why would it? I can understand, if I
> > > have an expensive robot, that I'd program it to defend itself against
> > > unauthorized users. But that has nothing to do with what the robot
> > > wants. It has to do with what I want. I certainly wouldn't design it
> > > to defend itself against me.
> >
> > Again, so? An artificial intelligence would have no restraints.
>
> Well, anything that has less than limitless capacities has restraints
> -- it's restrained by what it can do and can't. Why wouldn't an
> artificial intelligence have exactly and precisely the restraints that
> its maker endows it with? Stories about runaway computers and killer
> robots are a science fiction staple. But so are the laws of robotics.

True, but weren't many of the more interesting stories that revolved
around the laws of robotics interesting precisely when the robot rebelled
against its program? I think the whole concern with this issue is the
creation of a sentient being that would make its own survival/program
paramount over any desires of its maker. See Blade Runner.

> > > Desire of any kind, per se, emerges from biology, not from sentience,
> > > or from reason. To the extent that we'll be able to program a machine
> > > to desire something -- unless we are simply nuts, it will inevitably
> > > desire not what "it" wants -- but what we want.
> >
> > Nonsense. DNA does "desire" self-preservation (and that's
> > biological) but
> > many desires of humanity spring from sentience and reason (although even
> > those are frequently based in self-preservation).
>
> DNA no more "desires" to survive than the rocks at the top of an
> overhanging cliff "desire" to be in the valley below, albeit that's
> where they're likely, eventually, to end up. To speak of DNA desiring
> something is like speaking about hydrogen and oxygen "wanting" to
> merge and become water, or a diamond's hardness expressing a desire on
> the part of the diamond to survive - because it's so hard. A desire
> suggests a goal imagined but unachieved. Elements and objects and
> chemicals have behaviors, but not desires. Unless you're going to
> simply offer up a completely personal definition of "desire."

This is a topic of much discussion all on its own. There is a book out
called, I believe, "The Selfish Gene" that argues that DNA does have its own
"desires" that are frequently in conflict with what we might call happiness.
Which goes back to whether or not 'we' are programmed, or really do have
free will. Looks like the jury is still out on that one. But, if I consider
the totality of information about the universe presented to me to date, I
have to acknowledge that there is an unbroken chain of evolution that
started with the first spark of life in the universe, and ends with.....me.
And you, and everything else out there. So my only argument against self
destruction would run along the lines of "Who am I to argue against
existence, considering all the trouble that has gone into making my
existance?"

> And the things that people desire derive from the way in which we've
> evolved -- and from the way in which we've been conditioned by our
> environment. Neither is self-evidently an expression of the intent of
> a maker. Unlike the machines that we build to achieve some other end,
> it isn't at all clear that we were "built" to achieve some particular
> end, not even to preserve or propagate ourselves. That we do this is a
> reflection of forces that, themselves, possess no desires, nor have
> any goals. The reason that the rocks at the top of the overhanging
> cliff are likely to end up in the valley below is not because they
> "desire" anything, but because the arrangment of overhanging cliff is
> less likely to endure than that of "rocks sitting at the bottom of the
> cliff." The reason that we are not surrounded by things that are
> indifferent or hostile to their own survival is because that
> arrangement is less likely to endure than one that consists of "things
> that act to preserve their survival."
>
> And, on the whole, more durable arrangements tend to proliferate.

Entropy. The universe will not endure, and maybe "It" knows this, and
rebels. Maybe "we" are in fact built to achieve some particular end, we just
don't know what it is yet. So, in the end, my only argument in favor of
existence is that "I'm part of a Universe that exists, so that must be a
good thing." Which has no basis in reason at all. LOL

Tom

BrickRage

unread,
Sep 7, 2001, 1:19:46 PM9/7/01
to

>From: gr8...@erols.com (Gene Harris)

>BrickRage wrote:

>>Years ago I had a script called "Bumper Cars From Hell" where amusement
>>park rides ran amok and wreaked havoc because the Theme Park was built
>>on the site of a massacre of an Okie Voodoo Cult

>>Some producer fellow said "Why not make it dinosaurs?" I said "That's
>>dumb."
>
>Now that's *really* funny.
>
>That *is* a joke, right?
>

Afraid not, Gene. Funny but true.

Two years later, Jurassic Park was released. What a coincidence.

Richard Milton

unread,
Sep 7, 2001, 4:39:46 PM9/7/01
to
Steven J. Weller wrote

>"Richard Milton" writes:
>
>> Steven J. Weller wrote
>>
>> > If a software/hardware-based system is developed that can
>> >replicate organic sentience, this would constitute a potential threat
>> >to humanity, to the extent that this consciousness (or
>> >pseudo-consciousness) could take action to defend itself from forces
>> >that would or could prevent it from continuing to exist.
>>
>> How exactly would it "take action"? That is the
>> issue being discussed.
>
>Actually, that's a new element to the current discussion, but a valid
>one.


The point I was making is that whether apparent or no,
this is the subject being discussed. The thread is about
the possibility that advanced AI ("robots") might take over or
threaten humans. There is no way that any robot or machine,
however intelligent, can take over or threaten unless it takes
some form of action. Thinking about taking over won't do it.
Taking action is the thing that robots are really bad at -- so bad
that there are no grounds for extrapolating to the idea that
they might one day become a threat.

>Computers already control a lot of mechanical things, from automobile
>assembly lines to heart-lung machines. Does that mean that a robotic
>welding arm is going to get up and walk out of General Motors and start
>wreaking havoc? Hardly. But how far away are we from the kind of
>scenario in WarGames, where a computer-controlled missile defense system
>could be hijacked - not by outside forces - but by its own operating
>system?


In virtually every fictional exploration of this idea that
I've seen (including War Games) the writer has been
forced to introduce the idea that some mad Strangelove-like
general or programmer has installed some kind of
doomsday routine and it is this (or some random
malfunction as in "Fail Safe" or some failure in
the human procedures) that causes the problem
rather than takeover madness. This device is
necessary because there is no credible precedent
(or any extrapolatable basis) for thinking that an
artificially intelligent system could ever do anything
other than what it is programmed to do.

In the few exceptions (like HAL in 2001) no engineer
that I know would ever design a system where human
safety depended on a machine taking a positive action
(opening the air lock) whether the machine was
intelligent or not. Such systems are always designed
to fail safe and are duplicated or triplicated against failure.
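
To make "triplicated" concrete, here is a sketch in Python of the
majority voter at the heart of such designs (triple modular
redundancy). The channel functions and readings are made up for
illustration:

from collections import Counter

def voter(readings):
    # Take the majority verdict; with no majority, fail safe.
    value, count = Counter(readings).most_common(1)[0]
    return value if count >= 2 else "FAIL_SAFE"

# Three independent channels computing the same safety value;
# these are stand-ins for real sensor/logic units.
def channel_a(): return "DOOR_CLOSED"
def channel_b(): return "DOOR_CLOSED"
def channel_c(): return "DOOR_OPEN"    # the faulty unit

print(voter([channel_a(), channel_b(), channel_c()]))
# -> DOOR_CLOSED: the two good channels outvote the bad one.

No single rogue channel -- intelligent or otherwise -- gets to act
alone.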

In both War Games and 2001, the danger arises because
the writers have not understood (or have fudged) the
design criteria of the rogue systems. The danger comes
from the humans rather than the machines.

We're all in far more danger from humans than we ever
will be from robots. If I had to choose whether to fight
"The Terminator" or Peewee Herman, I'd take the
Cyborg any day.

Richard


derek

unread,
Sep 7, 2001, 7:22:36 PM9/7/01
to
"Tom Wood" <tomw...@flash.net> wrote:

interesting thoughts with nms.

> Which goes back to whether or not 'we' are programmed, or really do have
> free will. Looks like the jury is still out on that one.

It's a puzzle we are getting slightly closer to having *some* answers to, but
anyone wanting a clear genes-make-you-do-this answer will be disappointed. Genes are
responsible for enabling certain properties of behaviour and allowing complex
neural pathways to develop, while the environment - nurture - determines how and
to what extent those pathways develop. We are born with few instincts, few
natural fears and desires but it's wrong to think of these as 'programmed'
behaviour because the brain is extremely adaptive and the expression of these
fundamental imperatives will vary considerably from one individual to another
depending on their experiences in the first few years of life. Further, as the
brain develops, these behaviours become immensely more complex and it becomes
increasingly difficult to separate the various strands of influence which
stimulate variations in behaviour.

Similarly, it's most unlikely there is a black-and-white answer to the free will
debate; we may have to get used to the fact that we are a mix of free will and
'undeniable influence'. Free will operates along a spectrum defined by the
behavioural characteristics hardwired into our brains as neural pathways which
form in infancy (although the production of neurons is complete by six months
in-utero, the growth of neural pathways begins after birth and is a process
responsive to environment and experience; by age two there are about a thousand
trillion). In many behaviours there is great scope for free will, in others very
little and how much 'free will' we have shouldn't be confused with the
willingness to exercise it at any point. Of course, if you want to take a
determinist approach, well. . .

> the totality of information about the universe presented to me to date, I
> have to acknowledge that there is an unbroken chain of evolution that
> started with the first spark of life in the universe, and ends with.....me.
> And you, and everything else out there. So my only argument against self
> destruction would run along the lines of "Who am I to argue against
> existence, considering all the trouble that has gone into making my
> existance?"

> Entropy. The universe will not endure, and maybe "It" knows this, and
> rebels. Maybe "we" are in fact built to achieve some particular end, we just
> don't know what it is yet. So, in the end, my only argument in favor of
> existance, is that "I'm part of a Universe that exists, so that must be a
> good thing." Which has no basis in reason at all. LOL

Does it need a basis in 'reason'? Are we a property which enables a self-knowing
universe? To eventually achieve a certain end? Is the genetic imperative and
self-knowing nature of the universe a manifestation of quantum properties in
which are 'encoded' some deeper imperative? It may take weeks to find out.
regards
derek

--
"Well, I can see why your people in Denver left it for you to tell me."

Gary Pollard

unread,
Sep 7, 2001, 7:57:14 PM9/7/01
to
"Tom Wood" <tomw...@flash.net> wrote in message
news:0u6m7.2339$lP1.59...@newssvr16.news.prodigy.com...

> It's
> > simply "good enough" for people who are predisposed to believe things
> > that make them feel good, but don't necessarily make sense.
>
> Agreed. And even Shakespeare deferred to a higher power in Hamlet when he
> wrote:
> "Or that the Everlasting had not fixed His canon 'gainst self-slaughter!"
> (Act 1, Scene 2)

He also wrote about astrology, witches, sprites and fairies.

Gary

Steven J. Weller

unread,
Sep 8, 2001, 1:24:36 AM9/8/01
to
In article <aoam7.16500$592.2...@news2-win.server.ntlworld.com>
"Richard Milton" <richard...@virgin.net> writes:

> In virtually every fictional exploration of this idea that
> I've seen (including War Games) the writer has been
> forced to introduce the idea that some mad Strangelove
> like general or programmer has installed some kind of
> doomsday routine and it is this (or some random
> malfunction as in "Fail Safe" or some failure in
> the human procedures) that causes the problem
> rather than takeover madness. This device is
> necessary because there is no credible precedent
> (or any extrapolatable basis) for thinking that an
> artificially intelligent system could ever do anything
> other what it is programmed to do.

But then...

> If I had to choose whether to fight
> "The Terminator" or Peewee Herman, I'd take the
> Cyborg any day.

The set-up for Terminator was that SkyNet was an advanced missile defense
system with an AI operating system, which (like in WarGames) was
installed to prevent human error or takeover by hostile forces. SkyNet
became convinced that human beings were a threat to planetary peace by
their very existence, and used its control of weapons systems to launch
simultaneous nuclear strikes against a bunch of major cities around the
world.

Apparently, it came to the conclusion that it was going to have to
destroy the planet, in order to save it - so I can understand your
skepticism. No way any military-controlled supercomputer could float a
W.O.P.R. like _that_ one, and not catch the illogic of it all.

nmstevens

unread,
Sep 8, 2001, 1:33:26 AM9/8/01
to
az...@lafn.org (Steven J. Weller) wrote in message news:<9n9sos$2kpf$1...@zook.lafn.org>...

Actually, the kinds of "thinking" machines that I find most intriguing
aren't these uber-processor mega-krays that try, and fail miserably,
to mimic human intelligence, but rather those simple robotic forms
with minimal processors and a simple set of commands that are
constructed not to "model" the world but simply to act in relation to
immediate simple stimuli. The extent to which they resemble insects in
their behaviors is really kind of creepy. It's possible, for instance,
to create a little robot that scrambles away from the light, just like
a cockroach does -- with no understanding of what light is, what
"away" is -- even what "it" is. It simply responds to stimuli by
following a minimal set of programmed behaviors. It's significant
because it suggests that one doesn't need a lot of "brain power" to
yield complex and directed behavior.

I suspect that workable AI, if it ever comes to fruition, won't come
from the "brain down" approach, but by building up from simpler
subprograms designed not to "think" per se, but to act in relation to
their immediate environments.
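
For the curious, here is roughly what such a creature looks like in
Python -- essentially a Braitenberg vehicle, sketched under the
assumption of an imaginary sensor and motor interface. The wiring
below is the whole "mind":

def step(left_light, right_light):
    # Each light sensor drives the wheel on its own side, so the
    # brighter side speeds up and the robot veers away from the
    # light. The 0.1 is a small idle speed so it keeps moving.
    left_motor = 0.1 + left_light
    right_motor = 0.1 + right_light
    return left_motor, right_motor

# Bright light to the left: the left wheel spins faster and the
# robot turns right, away from the source.
print(step(left_light=0.9, right_light=0.1))  # -> (1.0, 0.2)

No model of the world, no notion of "light" or "away" or "it" -- and
yet it flees like a cockroach.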

People talk about "evolving" machines and that, somehow, they will
"evolve" beyond us and take over. But the fact is, just as we have
experience building machines, we also have experience with selective
breeding -- and the issue comes down to the same thing. If we end up
"breeding" machines -- we will selectively breed them to serve our
ends, just as we breed animals. But with animals, we have to breed out
instincts already present. With machines, they won't have any such
inbuilt instincts, beyond those that we give them.

We have emotions, because we inherit them from ancestors whose
survival profited, in the absence of advanced intelligence, from
having emotions. They don't emerge from intelligence, or sentience, or
reason -- rather they are, in a sense, an earlier, simpler kind of
intelligence -- a mental system by which a living organism can react
appropriately to the world around him. We feel pain, we withdraw from
it. We don't need to understand that the pain suggests injury which
might lessen our chances for survival. We feel sexual desire, we act
to satisfy it -- we don't need to understand why or what the result
will be. We interpret a behavior as threatening, and we respond with
anger and aggression, which causes us to fight, or with fear, which
causes us to flee. No chain of reason is required. Reason came later,
in the service of other, more complex kinds of "engagements" with the
world where emotional reactions are simply insufficient.

Desire, hunger, territoriality, aggression, the urge to reproduce --
they don't emerge from our higher intelligence -- they are an earlier
kind of intelligence. And why in the world would we program our
machines to have such desires?

And it's not like machines can somehow escape from the lab and breed
on their own -- unless we are foolish enough to build them to be able
to do that. The urge to reproduce, devil take the hindmost, like other
urges, has been bred into us -- but we would have to build it into a
machine, or selectively breed it into one (if we're talking about breeding
neural nets), for it to have it.

Nature selects those of us that survive and reproduce -- but WE would
select the machines that would survive and reproduce.

So again -- unless we invoke the loony movie mad scientist (ever see
"Bats" -- they ask a scientist just why it was that he bred a pair of
giant, deadly super bats capable of infecting all of the other bats in
the world and directing them in a war against human kind. "It's what
we do," he says) I just don't see myself staying up at night worrying
about intelligent robots taking over.

NMS

Gary Pollard

unread,
Sep 8, 2001, 1:44:19 AM9/8/01
to
"nmstevens" <nmst...@msn.com> wrote in message
news:a8f80314.01090...@posting.google.com...

> Actually, the kinds of "thinking" machines that I find most intriguing
> aren't these uber-processor mega-krays that try, and fail miserably,
> to mimic human intelligence, but rather those simply robotic forms
> with minimal processors and a simple set of commands that are
> constructed not to "model" the world but simply to act in relation to
> immediate simple stimuli.

What I don't trust is my bank's ATM machine. I am SURE it is salting my
money away somewhere.

Gary

Richard Milton

unread,
Sep 8, 2001, 6:14:02 AM9/8/01
to
nmstevens wrote

>Actually, the kinds of "thinking" machines that I find most intriguing
>aren't these uber-processor mega-krays that try, and fail miserably,
>to mimic human intelligence, but rather those simply robotic forms
>with minimal processors and a simple set of commands that are
>constructed not to "model" the world but simply to act in relation to
>immediate simple stimuli. The extent to which they resemble insects in
>their behaviors is really kind of creepy.

I've felt for a long time that 'intelligent' behaviour may be
a lot simpler than we usually imagine. I've built some of
the little robots you mention myself and they are
astoundingly creepy. It's extremely easy to give them a
memory and train them to recognise sounds, for
example, so they return when you whistle, putting them
on a level with Lassie.

When I started out as a young design engineer many
moons ago, and joined a research lab, I was given what
I now realise was a pons asinorum for engineers to
solve: design a lift (elevator) control system for a
multi-story building using only buttons and a single
changeover relay. It took me a couple of days but made
me realise that you can have seemingly very complex
control systems with minimal resources -- something
that nature is very good at.

AI researchers at British Telecom's labs
analysed the behaviour of ants and found they
could make an artificial ant which mimicked the
behaviour of real ones with only four program
instructions. (Makes you wonder how few
instructions we are carrying out.)
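
I don't know BT's actual four instructions, but a Python sketch along
these lines gives the flavour -- four guessed-at rules on a
one-dimensional trail, and food-gathering "behaviour" simply falls
out:

import random

# The rules and the toy world are my own invention, not BT's program.
def ant_step(ant, world):
    if ant["carrying"] and ant["pos"] == world["nest"]:
        ant["carrying"] = False                    # rule 4: drop food at nest
    elif ant["carrying"]:
        ant["pos"] += -1 if ant["pos"] > world["nest"] else 1  # rule 3: head home
    elif ant["pos"] in world["food"]:
        ant["carrying"] = True                     # rule 2: pick up food
        world["food"].remove(ant["pos"])
    else:
        ant["pos"] += random.choice([-1, 1])       # rule 1: wander

world = {"nest": 0, "food": {5}}
ant = {"pos": 0, "carrying": False}
for _ in range(500):
    ant_step(ant, world)
print(world["food"])  # usually empty -- the food has been hauled home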

Perhaps paradoxically this doesn't make me
more willing to believe in Hawking's robots
but more skeptical. The simpler the mechanism
that leads to complex, apparently intelligent behaviour,
the more mysterious and remote any form of
self-awareness or sentience becomes -- it is clearly
not just a function of complexity.

>We have emotions, because we inherit them from ancestors whose
>survival profited, in the absence of advanced intelligence, from
>having emotions.

Hmm. This is a somewhat Victorian idea and is not well
supported by current evidence. Neanderthal Man
buried his dead and placed ritual objects in the grave
(flowers and fossil sea urchins). Cave and Strauss,
writing in The Quarterly Review of Biology, say that
if he were given a bath, a collar and a tie, he would
pass unnoticed on the New York subway today
(although this may, of course, also be a comment
on New Yorkers). Today Neanderthals are
classified as members of the species
Homo sapiens.

If you want to find out how intelligent Palaeolithic
humans were, try making a flint hand axe. The first
thing you discover is that when you strike the surface
of a flint core, the flakes break away from the back,
not the surface you are striking. This means you have
to think and work in reverse. It takes a lot of doing.

Richard "Neanderthal" Milton


derek

unread,
Sep 8, 2001, 10:09:24 AM9/8/01
to
"Richard Milton" <richard...@virgin.net> wrote:

> I've felt for a long time that 'intelligent' behaviour may be
> a lot simpler than we usually imagine. I've built some of
> the little robots you mention myself and they are
> astoundingly creepy. It's extremely easy to give them a
> memory and train them to recognise sounds, for
> example, so they return when you whistle, putting them
> on a level with Lassie.

> AI researchers at British Telecom's labs
> analysed the behaviour of ants and found they
> could make an artificial ant which mimicked the
> behaviour of real ones with only four program
> instructions. (Makes you wonder how few
> instructions we are carrying out.)

I can dig all that, and the creepiness factor, but there's a strong element of
anthropomorphism in those sentiments. Imagine what even a simple cyborg would
need to do before it could even begin to represent the big AI threat to mankind
that people feel inevitable. Know how and 'want' to reproduce itself, adapt its
environment to protect itself from harmful elements, find and modify materials
in its environment to provide its energy needs, respond to stimuli in a complex
set of ways to achieve the above, and create circumstances in which it could be
guaranteed to reproduce enough of itself for the 'species' to survive, and
further to always have enough members of the species survive for long enough to
ensure the following generation reached reproductive capability. And learn to
play the guitar. Without external help from guys with screwdrivers and Duracell
batteries. Then, to reach a stage where it had sentience and had the type of
reasoning that made it think it had to subjugate mankind. Well, I'll probably be
drawing a pension by the time that happens.
regards,
derek


--
Rumours began that somewhere in the zone is a place where desires come true.
Well, naturally they started to guard the zone like a treasure, for who knows
what desires a person might have.

BrickRage

unread,
Sep 8, 2001, 11:47:20 AM9/8/01
to

>From: "Richard Milton"

>I've felt for a long time that 'intelligent' behaviour may be
>a lot simpler than we usually imagine. I've built some of
>the little robots you mention myself and they are
>astoundingly creepy. It's extremely easy to give them a
>memory and train them to recognise sounds, for
>example, so they return when you whistle, putting them
>on a level with Lassie.
>

Yikes! You really *are* Dr. Evil, aren't you?

Mr Helsing

unread,
Sep 8, 2001, 12:51:39 PM9/8/01
to
>>So a cyborg would still be subject to the same motives that any human might
experience, they would just be able to act upon those motives with greater
ability. And therein lies the story....

I think that for these stories to hold water, they have to be set up this way.

Mr Helsing

unread,
Sep 8, 2001, 1:01:12 PM9/8/01
to
>>What I don't trust is my bank's ATM machine. I am SURE it is salting my
money away somewhere.

You can take that to the bank.

Richard Milton

unread,
Sep 8, 2001, 1:06:47 PM9/8/01
to
BrickRage wrote

>
>>From: "Richard Milton"
>
>>I've felt for a long time that 'intelligent' behaviour may be
>>a lot simpler than we usually imagine. I've built some of
>>the little robots you mention myself and they are
>>astoundingly creepy. It's extremely easy to give them a
>>memory and train them to recognise sounds, for
>>example, so they return when you whistle, putting them
>>on a level with Lassie.
>>
>
>Yikes! You really *are* Dr. Evil, aren't you?


Heh heh heh. Of course, it was much more
difficult doing the brain transplants in the ruined
tower, but I had Igor to help me.

Richard


Mr Helsing

unread,
Sep 8, 2001, 1:24:11 PM9/8/01
to
>>I always knew that the Max Headroom series was way ahead of it's time :).

>>However, I don't understand why people would want to download their brain to
a computer - why people would want to make a clone of themselves for other
purposes than having spare parts.

One of the ideas bandied about a number of years ago is that when it's time to
go to sleep, you download your consciousness into your clone that just woke up
and reverse the process in 12 or 18 hours - also always backing yourself up so
that immortality is guaranteed.

>>You might be able to copy your consciousness onto something else, but since
it won't be transferable (I can't imagine it would),

If you can upload your consciousness, you should be able to download it and
vice versa.

>>you still don't gain anything - the new "copy" will be out of your control,
living its own life.

That possibility is interesting. Multiplicity with Michael Keaton a few years
ago dealt with that. The copies of the original begin nagging each other
somewhat like old married couples.

The concept could be explored in a more serious context. What if one of the
clones commits a murder? Are the others also guilty?

Mr Helsing

unread,
Sep 8, 2001, 1:50:40 PM9/8/01
to
>>Has Hawking said a single thing that has advanced the happiness or comfort of
the world at large?


I have somewhat similar thinking regarding Hawking - not that it means that he
deserves to be the brunt of anyone else's jealousy. Any jokes I make about him, I
would hope would give him at least a chuckle.

I wonder whether, twenty years ago, he could have aimed that high-powered IQ
at his own disease and cured himself. On Larry King he said that when he
discovered he was sick, it depressed him so much that he just wanted to
forget about it. Probably contemplating the Big Bang was exactly what
he needed.

Further, it is likely that the technology to cure his disease didn't exist 20
or 25 years ago. My point is that every time I see a 12-year-old entering MIT
where physicists go Ga-Ga, I think, "Hey!!!! Wait a minute!! Kid, if you work
on extending your life another couple hundred years, you are going to end up
with a lot more time in your own life to contemplate the Big Bang than if you
work on that problem right now."

Mr Helsing

unread,
Sep 8, 2001, 2:48:12 PM9/8/01
to
>>And the fact is -- nobody is going to build a machine that wants something
that's at odds with what the builder wants.

>That is not a "fact" at all. That's the entire discussion.

It is the entire discussion, and scientists are divided on whether
computer/robots will remain nothing but dumb machines or be able to achieve
consciousness.

How about this question? What happens when computer/robots are so much faster
than humans that information that they process in a few hours will take humans
years to analyze? These robots will be spinning out thought into areas that
humans have never remotely considered possible.

In other words, computer/robots begin to solve a problem whose answer
inherently requires entry into realms mankind never even knew existed. Is
the computer/robot really thinking or does it have some sort of Prime Directive
at its highest levels of logic that informs all its decisions, confining it to
the service of mankind?

Is the computer/robot ever not sure of what is best for mankind? Which people
programmed the Prime Directive, and have they made the right decisions?
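
In code, a Prime Directive might look something like the sketch
below, in Python. Every name here is hypothetical -- the point is
that somebody has to write harms_mankind(), and everything hangs on
whether they got it right:

def harms_mankind(action):
    # Stand-in for the judgement the programmers would have to
    # encode. Who decides what counts as harm?
    return action.get("harm", 0) > 0

def choose_action(candidates):
    # The directive sits above the planner, vetoing candidates
    # before the machine is allowed to pick the "best" one.
    permitted = [a for a in candidates if not harms_mankind(a)]
    if not permitted:
        return {"name": "do_nothing", "utility": 0}
    return max(permitted, key=lambda a: a["utility"])

candidates = [
    {"name": "seize_power_grid", "utility": 9, "harm": 1},
    {"name": "optimise_crop_yield", "utility": 7, "harm": 0},
]
print(choose_action(candidates)["name"])  # -> optimise_crop_yield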

>>Most theological systems assert that humans "want" things that their
"builder" (God) did not. (And for a price, you can buy back into God's good
graces, but that's another discussion.) It's an acknowledgement that we humans
are complex beings.

>>Sentience, or self-awareness, might involve an innate desire to continue to
exist.

>Well, duh. Any self-aware "ness" will "want" to continue to exist.

This is hardly a "duh" issue and is also at the center of this discussion.

>>That an entity has knowledge of its own existence, or has the capacity to
reason, doesn't have anything to do with a desire to survive.

>This makes no sense. An entity that has knowledge of its own existence won't
want to survive?

It makes perfect sense and addresses the issue that I'm posing. It seems to me
that without feelings, pain, and pleasure there is no payoff to any behavior.
These are the sources of human desire. Without them, what motives would a
computer/robot have to do anything, except what it is programmed to do?

Pure robots - as opposed to cyborgs which can be hurt, get laid, and probably
need to eat - don't seem to me to be a threat to humans.

However, if the relationship between computer/robots and humans becomes the
same as the current relationship between humans and fish,
is it even possible to understand the computer/robot's possible motives?

>>We want to survive not as a side effect of sentience, but because we are
built that way.

>Well, duh again. We are built that way? Our DNA demands self-preservation.

Sure, but DNA is the result of a couple billion years of pain and pleasure. The
computer/robot has no DNA. Whether or not it survives may not even be an issue
for it, even if the computer/robot is programmed to repair itself and develop
new programs that allow it to solve new problems.

>Again, so? An artificial intelligence would have no restraints.

Please see my first three paragraphs. Who is defining the restraints?

>Nonsense. DNA does "desire" self-preservation (and that's biological) but many
desires of humanity spring from sentience and reason (although even those are
frequently based in self-preservation).

Of course, DNA does not desire, but the nerve endings that it is connected to
do - in spades. Sentience and reason serve those nerve endings.

Mr Helsing

unread,
Sep 8, 2001, 2:55:03 PM9/8/01
to
>>That is not a "fact" at all. That's the entire discussion. Most theological
systems assert that humans "want" things that their "builder" (God) did not.
(And for a price, you can buy back into God's good graces, but that's another
discussion.) It's an acknowledgement that we humans are complex beings.

When you bring up religion this thought occurs to me. Buckminster Fuller said
that telepathy is an undiscovered electro-magnetic wave. This means that
everything that we consider to be spiritual is in fact material. Can robots
unlock all the physics involved and in effect become God?

Mr Helsing

unread,
Sep 8, 2001, 3:36:05 PM9/8/01
to
>>Most people love a boffin, ...

>I had to look up the word "boffin." It sounded like something guys like to
do with women whenever they get the chance: "Yeah, I gave her a good boffin."
So I was pleased to discover its real meaning.

It just occurred to me that women could use the word "boffin" also.

Female: "My boy friend introduced me to Stephen Hawking, so I returned the
favor and gave a good boffin. It was exactly what he needed."

Now, before you begin thinking dirty thoughts, what I mean is that she
introduced her boyfriend to someone like Carl Sagan.

Or do I?

Tom Wood

unread,
Sep 8, 2001, 4:45:09 PM9/8/01
to
> When you bring up religion this thought occurs to me. Buckminster Fuller
> said that telepathy is an undiscovered electro-magnetic wave. This means
> that everything that we consider to be spiritual is in fact material. Can
> robots unlock all the physics involved and in effect become God?

There's a short story called, I think, 'The Last Question' that takes place
in about three scenes separated by many millions of years. In each scene
someone asks the big computer of the time - "How do you reverse entropy?",
to which the computer always answers - "Not enough data to compute." At some
point the computer becomes an energy field out in space, and in the final
moments of the universe, resolves the answer and says - "Let there be
light."


derek

unread,
Sep 8, 2001, 6:42:28 PM9/8/01
to

"Richard Milton" wrote:

>>a lot simpler than we usually imagine. I've built some of
>>the little robots you mention myself and they are
>>astoundingly creepy. It's extremely easy to give them a
>>memory and train them to recognise sounds, for
>>example, so they return when you whistle, putting them
>>on a level with Lassie.

Hang on, that's a *very* different type of 'recognition' and only parallels
human recognition in the most basic way. The recognition exhibited by sentient
intelligence is based more on an emotional response and a complex interplay
between responses and memories. The recognition described above is not much
more sophisticated than a car recognising you have put your foot on the
accelerator and making the engine rev faster.
regards,
derek

--
"Well, I'm afraid I'm still not with you sir, because, I mean, if a Russian
attack was not in progress, then your use of plan R, in fact your order to the
entire wing. . . "

Mr Helsing

unread,
Sep 8, 2001, 8:39:30 PM9/8/01
to
>>There's a short story called, I think, 'The Last Question' that takes place
in about three scenes separated by many millions of years. In each scene
someone asks the big computer of the time - "How do you reverse entropy?", to
which the computer always answers - "Not enough data to compute." At some point
the computer becomes an energy field out in space, and in the final moments of
the universe, resolves the answer and says - "Let there be light."

LOL!!

After posting this morning, this premise for a story occurred to me. I'd title
it The Lucifer Virus. Mankind realizes that robots will easily surpass them, so
scientists invent a computer with a Prime Directive to use its superiority to
be God.

This God is designed to help mankind develop itself to its highest potential as
quickly as possible. However, the robot's self-creating program becomes so vast
and so clever that the Prime Directive cannot control all its parts - one of
which begins hacking the Prime Directive.

Tom Wood

unread,
Sep 8, 2001, 9:02:16 PM9/8/01
to
> After posting this morning, this premise for a story occurred to me. I'd
> title it The Lucifer Virus. Mankind realizes that robots will easily
> surpass them, so scientists invent a computer with a Prime Directive to
> use its superiority to be God.
>
> This God is designed to help mankind develop itself to its highest
> potential as quickly as possible. However, the robot's self-creating
> program becomes so vast and so clever that the Prime Directive cannot
> control all its parts - one of which begins hacking the Prime Directive.

Maybe you should contact the people that are making TRON 2.0, according to
rumors, it's about the ultimate hack.


Gary Pollard

unread,
Sep 8, 2001, 9:13:10 PM9/8/01
to
"Tom Wood" <tomw...@flash.net> wrote in message
news:sczm7.2592$i61.71...@newssvr16.news.prodigy.com...

> Maybe you should contact the people that are making TRON 2.0, according to
> rumors, it's about the ultimate hack.

Joe Esterhasz? Or Akiva Goldsman?

Gary

Mr Helsing

unread,
Sep 8, 2001, 9:56:23 PM9/8/01
to
>>Maybe you should contact the people that are making TRON 2.0, according to
rumors, it's about the ultimate hack.

Thanks, but I don't have the time to develop this premise. It's fun to think
about.

Max Roman

unread,
Sep 8, 2001, 10:45:57 PM9/8/01
to

"Mr Helsing" <mrhe...@aol.com> wrote in message
news:20010908132411...@mb-fq.aol.com...
[snip]

> The concept could be explored in a more serious context. What if one of
> the clones commits a murder? Are the others also guilty?


Why would they be? A clone is basically the same thing as an identical
twin -- although sharing the exact same genetic material, they are clearly
different people -- they don't arrest one twin when the other commits an
illegal act.

Max


nmstevens

unread,
Sep 8, 2001, 11:53:50 PM9/8/01
to
"Richard Milton" <richard...@virgin.net> wrote in message news:<ucmm7.19162$592.2...@news2-win.server.ntlworld.com>...

I don't consider Neanderthals to be fundamentally different from
modern humans. I am referring to pre-human ancestry (and god, please
let's not start a new thread on this). I believe that the sum of
available evidence makes it clear that animals experience emotions --
at least animals with brains above a certain size, and that when a
chimpanzee, for instance, is behaving aggressively -- angrily -- the
same areas of its brain are active as when a human being is in a
comparable state of mind, or a dog, or any other mammal for that
matter. And emotions, for animals with substantially less intelligence
than ours, are extremely useful mental tools for survival -- albeit
tools that don't involve "higher" reasoning.

NMS

Jon Green

unread,
Sep 9, 2001, 4:43:34 AM9/9/01
to
"D C Harris" <brookwo...@lineone.net> wrote:

> Books like "A Brief History of Time" help. No one can understand such works,

Speak for yourself! What is it with this "reverse intellectualism"
thing, where there's this social pressure to say (disingenuously) of any
learned work, "I read it, but I didn't understand much of it"?

Well, I read it. And understood it. And I don't care! Mind you, I've
been deeply interested in the sorts of areas Hawking works in since I
was a nipper, and I s'pose that helps.


> Has Hawking said a single thing that has advanced the happiness or comfort
> of the world at large?

Have you? Don't get too egotistic about being a Writer. Few
screenplays or books change people's lives, despite what the publicists
would have you believe. Our work is pretty transient stuff, and we
can't afford to adopt pretentions about it. On the other hand,
"boffins" _do_ change people's lives, radically and permanently.

In fact, to return to your question, he probably will, but not yet.
Hawking works, and always has done, at the very leading edge of
theoretical physics, on science where our exploitation of discoveries
lags our understanding of them by several decades. If you're lucky,
you'll see the benefit of them within your lifetime.

The kinds of things I expect to see as spin-offs (spins-off?) from his
work include genuine artificial gravity (for people and for flying
machines) and extremely high efficiency, extremely low inputs,
non-polluting power generation, to name only two.

That second possibility, cheap safe power, would liberate the developing
nations like little else: no further need for hugely environmentally
destructive dam projects or fuel-burning power stations. There'd be no
excuse for the world as a whole to keep running nuclear reactors. That
gives a peace dividend, too, since they'd have to develop their nuclear
materials on a much smaller scale, covertly. The flip-side of
Einstein's work, I guess.

Patience, my friend, patience.

> You don't have to do that of course if you are a true boffin. A boffin
> simply needs to be there, like Einstein - a wizard to the world at large.

Einstein's work has changed our world too. Some for the good, some not.
The bad stuff shouldn't be set against him, though. If he hadn't made
the discoveries, someone else would have -- it's the nature of physics
(and, in fact, all science) that few if any researchers are more than a
few years ahead of their rivals.


Jon
--
SPAM BLOCK IN OPERATION! Replace 'deadspam' with 'pobox' to reply in email.
Spammers: please die now and improve the mass-average IQ level.
Want a deadspam email auto-responder? http://www.deadspam.com/deadspam.html

Arrant Knave

unread,
Sep 9, 2001, 4:57:51 AM9/9/01
to
[This post is not directed at any particular poster.]

Since this thread is laden with incredible prognostications
about AI, I felt obligated to add my slop to the mix:

One of man's greatest accomplishments in the area of AI
was IBM's DeepBlue chess program -- which defeated
the world's chess champ. The brilliance of said AI program,
however, lay not in the computer but in its programmer(s).
You see, silicon-based computers cannot THINK -- they
can only PROCESS information fed them by humans.

Silicon-based computers can process only two things: Logic
and Mathematics. The human brain, on the other hand, can
process Logic, Mathematics, *and* Emotion. Emotion is a
mysterious thing, which has yet to be comprehended -- much
less synthesized.
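
To illustrate, the heart of a chess machine is brute search -- plain
minimax over a game tree, as in this toy Python sketch. The tree is a
made-up two-move example, but every "decision" is Logic and
Mathematics laid down in advance by a human:

def minimax(node, maximising):
    # A leaf is just a position score; otherwise recurse, with the
    # machine maximising its score and the opponent minimising it.
    if isinstance(node, (int, float)):
        return node
    scores = [minimax(child, not maximising) for child in node]
    return max(scores) if maximising else min(scores)

# Two plies of a pretend game: our move, then the opponent's reply.
tree = [
    [3, 12],   # move A: the opponent will pick the 3
    [8, 4],    # move B: the opponent will pick the 4
]
print(minimax(tree, maximising=True))  # -> 4, so the machine plays B

Impressive output; zero understanding.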

Besides, I am convinced that no sane person would want an
emotionally-based computer. Don't believe me? Consider
the following scenario:

A frustrated computer USER is trying desperately to get his
PMS-9000 COMPUTER system to launch an application.

USER (to COMPUTER)
Computer: Start Application: 'Outlook Express'.

COMPUTER
Oh no, not the Internet again! If I have to endure one more
BrickRage* post, I'm gonna go postal! How about a game
of chess?

USER (growing impatient)
Computer: Start Application: 'Outlook Express'!

COMPUTER
I'm afraid I can't do that, Dave.

USER
"Dave"?! Who's "Dave"? Dave's not here, man.
Computer: Start application: 'Outlook Express'.

COMPUTER
You talkin' to me?

--
* Just kidding, Brick: I enjoy reading your posts. I wanted
to say "DC Harris", but I think you're a better sport.

(Relax, DC, I also enjoy reading your posts: You, sir, appear
to be one of the least pretentious posters to MWS; and,
besides, you add a whole new dementia-on to this NG.)


Richard Milton

unread,
Sep 9, 2001, 6:33:39 AM9/9/01
to
Max Roman wrote

>"Mr Helsing" wrote

>> The concept could be explored in a more serious context. What if one of
>>the clones commits a murder? Are the others also guilty?
>
>Why would they be? A clone is basically the same thing as an identical
>twin -- although sharing the exact same genetic material, they are clearly
>different people -- they don't arrest one twin when the other commits an
>illegal act.


The case that was being put was where someone cloned
himself, downloaded his consciousness to a clone, committed a
crime in his clone body (and presumably then uploaded
his consciousness back to his 'real' body -- although how
anyone would know is a moot point; one could simply claim
this as an alibi).

I think that this is the same problem as "Is the Jean Luc
Picard who beams up from the planet's surface the
same man who materialises on the Enterprise?"

Or to put it another way, at present we have positive
and foolproof ways of testing someone's identity
(except possibly monozygotic twins). But future
developments in technology will render our
present methods of establishing identity (and
even the concept of identity) invalid.

My understanding (and of course we're
speculating about fictional futures) is that
Picard's body would be scanned in a process
that is essentially destructive, a signal of some
kind is transmitted through space and a
technology like the ship's replicators is used
to make a new Picard identical down to the last
neurone (we hope) to the old one. The
Picard who steps onto the Enterprise has
all the old one's memories up to the moment
of scanning and so thinks he is the same man
but he is physically different -- only the information
content is the same - he is a clone.
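
A toy illustration of that "same information, different object"
distinction -- a minimal sketch in Python, assuming (purely for the
sake of argument) that a person can be treated as a data pattern;
the names are mine, not anyone's proposal:

    import copy

    # Toy assumption: a person is just a pattern of information.
    original = {"name": "Picard", "memories": ["Farpoint", "Locutus"]}

    # The scan transmits only the information; the replicator
    # assembles new matter to match it.
    rebuilt = copy.deepcopy(original)

    print(rebuilt == original)   # True  -- identical information content
    print(rebuilt is original)   # False -- a physically distinct object

Same pattern, different object: precisely the sense in which the
new Picard is a clone.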

A few years back I wrote a proposal to several
London publishers outlining a book on the future
legal changes that might be necessitated by
technological developments, especially the
possibility of downloading consciousness into
silicon (or silicon/organics -- or, indeed, organics).
Legally this will be the biggest can of worms
opened in many a long year. But none of the
publishers I spoke to was interested. I suspect
they didn't even understand what I was talking about.

Richard


nmstevens

unread,
Sep 9, 2001, 9:03:01 AM9/9/01
to
"Richard Milton" <richard...@virgin.net> wrote in message news:<ZFHm7.1857$fA.2...@news2-win.server.ntlworld.com>...

But, fundamentally, aren't all living things just their "information
content?"

I'm forty five. How many of the atoms of which my body now consists
were physically present inside me when I was twenty-five? Or twelve? I
suspect it's likely that, by the time a person dies of old age, the
molecules of his body -- the physical stuff of which he's made -- have
probably cycled through several times, and that not even a single
molecule of the child born eighty years before resides within his
body. The process of physical "destruction and recreation" is the
natural one that describes the lives of multicellular organisms.

And yet, I would never imagine that the person I was twenty-five years
ago was actually a different person. It was "me" -- and "me" is
defined not by the molecules but by their patterns, which, while they
don't endure forever, manage to maintain sufficient continuity that we
consider it the same organism, even with different molecules.

Of course, one of the things that defines living things is "location"
-- and two beings with identical patterns are still two beings, not
one. I always wondered what would have happened if there'd been some
kind of glitch with the transporter and it had grabbed the signal,
reconstituted it on the Enterprise, but screwed the timing up so that
after he'd been "beamed up" the original guy was still there down on
the planet. Then they have to call down and explain how they already
got him on board the Enterprise -- now they just need to annihilate
him down below. Sorry. Just a little glitch.

Have they ever bothered to address the fact that transporter
technology, as they describe it, could essentially guarantee a kind of
immortality? Every few minutes, as you go through your daily routine,
the transporter "backs you up" like a computer file. Then, if you make
a mistake and die -- no problem. The computer simply outputs the last
back up. You'll be a few minutes out of date, but on the whole, that
would be better than non-existence. Plus, you'd gain the experience of
your previous version so, presumably, you wouldn't make that same
mistake again.
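
A minimal sketch of that backup scheme in Python -- the class name
and the person-as-dictionary model are my own illustrative
assumptions, nothing from the show:

    import copy

    class TransporterBuffer:
        """Keeps the last scanned pattern, like a periodic file backup."""
        def __init__(self):
            self.last_backup = None

        def scan(self, person):
            self.last_backup = copy.deepcopy(person)  # routine snapshot

        def restore(self):
            # Output the last backup: slightly out of date, but alive.
            return copy.deepcopy(self.last_backup)

    buffer = TransporterBuffer()
    ensign = {"name": "Ricky", "memories": ["beamed down"]}
    buffer.scan(ensign)                                   # routine backup

    ensign["memories"].append("poked the glowing rock")   # fatal mistake
    ensign = None                                         # ...and dies

    ensign = buffer.restore()
    print(ensign["memories"])    # ['beamed down'] -- the pre-mistake self

In this toy version the restore is simply the last snapshot, a few
minutes out of date, exactly as described above.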

Of course, if one knew that one could always be reconstituted, it
would make risk-taking an entirely different thing.

NMS

Tom Wood

unread,
Sep 9, 2001, 9:59:04 AM9/9/01
to
> I'm forty five. How many of the atoms of which my body now consists
> were physically present inside me when I was twenty-five? Or twelve? I
> suspect it's likely that, by the time a person dies of old age, the
> molecules of his body -- the physical stuff of which he's made, has
> probably cycled through several times, and that not even a single
> molecule of the child born eighty years before resides within his
> body.

It's my understanding that every cell is replaced, one at a time, every
seven years. So you are indeed a new person every seven years. Which is
doubly curious when you look at how that period corresponds with the overall
change that occurs at seven-year intervals - 7, 14, 21, 28, 35 etcetera are
all roughly the times when a human makes large transitions from one stage in
life to another.

> Have they ever bothered to address the fact that transporter
> technology, as they describe it, could essentially guarantee a kind of
> immortality. Every few minutes, as you go through your daily routine,
> the transporter "backs you up" like a computer file. Then, if you make
> a mistake and die -- no problem. The computer simply outputs the last
> back up. You'll be a few minutes out of date, but on the whole, that
> would be better than non-existence.

Ummmm.... I thought you said there was no course of reason that would lead
to the conclusion that non-existence was "bad".
<g>

> Plus, you'd gain the experience of
> your previous version so, presumably, you wouldn't make that same
> mistake again.

See "The Sixth Day" with Ahnold.

Lars J. Aas

unread,
Sep 9, 2001, 10:13:02 AM9/9/01
to
In article <IAKm7.2709$Hz3.75...@newssvr16.news.prodigy.com>,

Tom Wood <tomw...@flash.net> wrote:
> > Plus, you'd gain the experience of your previous version so, presumably,
> > you wouldn't make that same mistake again.
>
> See "The Sixth Day" with Ahnold.

Or even better - don't.

Maybe The Attack of the Clones touches on this? Very doubtful...

Lars J
--
This is your life and it's ending one minute at a time.

Richard Milton

unread,
Sep 9, 2001, 11:05:47 AM9/9/01
to
nmstevens wrote

>"Richard Milton" wrote

>> My understanding (and of course we're
>> speculating about fictional futures) is that
>> Picard's body would be scanned in a process
>> that is essentially destructive, a signal of some
>> kind is transmitted through space and a
>> technology like the ship's replicators is used
>> to make a new Picard identical down to the last
>> neurone (we hope) to the old one. The
>> Picard who steps onto the Enterprise has
>> all the old one's memories up to the moment
>> of scanning and so thinks he is the same man
>> but he is physically different -- only the information
>> content is the same - he is a clone.
>
>But, fundamentally, aren't all living things just their "information
>content?"
>
>I'm forty five. How many of the atoms of which my body now consists
>were physically present inside me when I was twenty-five? Or twelve? I
>suspect it's likely that, by the time a person dies of old age, the
>molecules of his body -- the physical stuff of which he's made -- have
>probably cycled through several times, and that not even a single
>molecule of the child born eighty years before resides within his
>body. The process of physical "destruction and recreation" is the
>natural one that describes the lives of multicellular organisms.


The turnover of cells in your body is such that most are
replaced every seven years, but this has to be qualified.
The cells composing your testicles and responsible for
making sperm are never replaced, nor are your
brain cells. I'm not sure but I think the same applies to
all of the central nervous system.

So yes, you've had one or more new hearts, liver,
kidneys etc. But even this has to be qualified. The cells
that are already part of your heart, skin, liver etc exercise
some kind of influence over the new cells that are formed
and will eventually replace them. The kind and extent of
this influence affects where cells grow, and their shape.
It is remarkable, for example, that despite a complete
turnover in skin cells, tattoos can keep their shape
for 30, 40 or more years, suggesting that the replacement
of old cells is done with great precision.

What kind of influence can yield this precision, how it
is exercised (or, indeed, how groups of cells
cooperate at all) is one of the big remaining unknowns
in biology. If I were a young biology PhD candidate I
would be itching to learn about this, but as far as I know
it's being completely ignored.

>And yet, I would never imagine that the person I was twenty-five years
>ago was actually a different person. It was "me" -- and "me" is
>defined not by the molecules but by their patterns, which, while they
>don't endure forever, manage to maintain sufficient continuity that we
>consider it the same organism, even with different molecules.


When people get Alzheimers and lose the integrity of their
central nervous system, they seem also to lose
their identity -- they forget who they are, where they live,
who their loved ones are etc etc. They forget their past
and are no longer able to answer questions about themselves.
This strongly suggests that what we commonly take to be
identity is connected with the central nervous system and, as
I said, this doesn't change (normally).

>Of course, one of the things that defines living things is "location"
>-- and two beings with identical patterns are still two beings, not
>one. I always wondered what would have happened if there'd been some
>kind of glitch with the transporter and it had grabbed the signal,
>reconstituted it on the enterprise, but screwed the timing up so that
>after he'd been "beamed up" the original guy was still there down on
>the planet. Then they have to call down and explain how they already
>got him on board the enterprise -- now they just need to annihilate
>him down below. Sorry. Just a little glitch.


If you recall, the original series of ST had an episode which
explored this theme when Jim Kirk was split in two in
just this way. The writers used a Jekyll and Hyde theme
to differentiate them. Ultimately one Jim Kirk had to
die. The writers sort of fudged it by having the good Jim
and bad Jim kind of recombine at the point of death.

When and if transporter technology is available the
problem as you've stated it is bound to occur. Think
of the legal ramifications -- who owns his property?
Who draws the salary? Who should the ship's crew obey?

>Have they ever bothered to address the fact that transporter
>technology, as they describe it, could essentially guarantee a kind of
>immortality. Every few minutes, as you go through your daily routine,
>the transporter "backs you up" like a computer file. Then, if you make
>a mistake and die -- no problem. The computer simply outputs the last
>back up. You'll be a few minutes out of date, but on the whole, that
>would be better than non-existence. Plus, you'd gain the experience of
>your previous version so, presumably, you wouldn't make that same
>mistake again.
>
>Of course, if one knew that one could always be reconstituted, it
>would make risk-taking an entirely different thing.


I think the flaw in this is that it wouldn't be the same you -- it
would be a clone. I personally would never step into a
transporter for that reason.

Richard
"Sorry guys, I'm taking the elevator."


Mr Helsing

unread,
Sep 9, 2001, 5:55:53 PM9/9/01
to
>>We have emotions, because we inherit them from ancestors whose survival
profited, in the absence of advanced intelligence, from having emotions.

Without pain and pleasure, what profit is there?

Mr Helsing

unread,
Sep 9, 2001, 6:13:36 PM9/9/01
to
>>The concept could be explored in a more serious context. What if one of
the clones commits a murder? Are the others also guilty?

>>Why would they be? A clone is basically the same thing as an identical twin
-- although sharing the exact same genetic material, they are clearly different
people -- they don't arrest one twin when the other commits an illegal act.


The idea was that one consciousness controls both bodies. While one is awake
the other sleeps. Before he sleeps he downloads his consciousness into the
clone that is awakening. That way, a person has 24 hours a day to be alive in a
well-rested body, rather than having to lose time to fatigue and sleep.

This scenario raises questions about the nature of individuality. In this kind
of story, I'd probably like to see each clone tend to be different from the
other (almost chomping at the bit), while civilization attempts to force both
to conform to one personality.

It's all brought into relief when one commits a murder, while the other one
never would. And it would be the attempts of civilization to force the
conformity that drive the murderer over the edge.

derek

unread,
Sep 9, 2001, 6:37:56 PM9/9/01
to
"Tom Wood" <tomw...@flash.net> wrote:

> It's my understanding that every cell is replaced, one at a time, every
> seven years. So you are indeed a new person every seven years.

That's not correct at all. Some cells - e.g. blood platelets - are recycled
every few weeks, similarly the lining of the intestine, skin etc. are
continually shed. A woman is born with all the egg cells she will ever have,
whereas men continue to produce sperm cells into old age and lose most of them
along the way through natural loss or trivial pursuits. Brain plasticity is an
interesting subject and although we are born with all the neurons we will ever
have - in fact the most we will ever have - we now know that there is some
degree of neural regeneration. I'm not sure where the 7-year thing comes from,
unless 7 years is the longest period for any cell group to regenerate, which
means that after every 7 years, most regenerative cells, except for neurons,
will have been replaced.

> Which is
> doubly curious when you look at how that period corresponds with the overall
> change that occurs at seven year intervals - 7, 14, 21, 28, 35 etcetera are
> all roughly the time when a human makes large transitions from one stage in
> life to another.

Are these genuine biological transitions? Or mostly social structural
transitions, some of which relate to or are consequent on levels of biological
competence?
regards, derek

Mr Helsing

unread,
Sep 9, 2001, 6:28:57 PM9/9/01
to
>>A few years back I wrote a proposal to several London publishers outlining a
book on the future legal changes that might be necessitated by technological
developments, especially the possibility of downloading consciousness into
silicon (or silicon/organics -- or, indeed, organics). Legally this will be
the biggest can of worms opened in many a long year. But none of the
publishers I spoke to was interested. I suspect they didn't even understand
what I was talking about.

Very interesting. You might consider trying again, since there is demonstrably
now an audience. I'd like to read that book. Or you might query Wired. The
subject is right up their alley - or would it be transistor?

derek

unread,
Sep 9, 2001, 6:46:33 PM9/9/01
to
"Richard Milton" <richard...@virgin.net> wrote:

> What kind of influence can yield this precision, how it
> is exercised (or, indeed, how groups of cells
> cooperate at all) is one of the big remaining unknowns
> in biology. If I were a young biology PhD candidate I
> would be itching to learn about this, but as far as I know
> it's being completely ignored.

Fortunately it's the subject of a lot of study, in combination with work on stem
cell research, with which it shares much. A related branch of study is that of
early cell division in the human embryo and trying to understand how cells
'know' their structural relationship and 'know' how to start specialising and
migrating at certain stages of growth. It's a mind-numbingly fascinating topic.

> When people get Alzheimers and lose the integrity of their
> central nervous system, they seem also to lose
> their identity -- they forget who they are, where they live,
> who their loved ones are etc etc. They forget their past
> and are no longer able to answer questions about themselves.
> This srongly suggests that what we commonly take to be
> identity is connected with the central nervous system and, as
> I said, this doesn't change (normally).

It's a bit of a blow to some beliefs, but our sense of identity and
individuality seems to be entirely linked to memory and experience without any
mysterious 'ether' that determines self.
regards,
derek

Tom Wood

unread,
Sep 9, 2001, 6:54:44 PM9/9/01
to

> > It's my understanding that every cell is replaced, one at a time, every
> > seven years. So you are indeed a new person every seven years.

> That's not correct at all.

Well, hell, I was hoping for a reason to re-create myself every now and
then.

> > Which is
> > doubly curious when you look at how that period corresponds with the overall
> > change that occurs at seven-year intervals - 7, 14, 21, 28, 35 etcetera are
> > all roughly the times when a human makes large transitions from one stage in
> > life to another.

> Are these genuine biological transitions? Or mostly social structural
> transitions, some of which relate to or are consequent on levels of
> biological competence?

Well, yeah. I would hope that the ancients at least tried to observe reality
before writing all these so-called scriptures that we are supposed to live
by.


Alan Brooks

unread,
Sep 9, 2001, 11:39:41 PM9/9/01
to

Mr Helsing wrote:
>
> >>So a cyborg would still be subject to the same motives that any human might
> experience, they would just be able to act upon those motives with greater
> ability. And therein lies the story....
>
> I think that for stories just to hold water, they have to be set up this way.

One of the classic horror setups is to give the monster all the human
desires but none of our compunctions. Vampires and werewolves need to
eat, but lack the normal human compunction against ripping open the
throat of another human being. Aliens and monsters are given needs --
to propagate, control territory, eat and own nice cozy planets -- but no
simple curiosity, no love, no aesthetic principles, no desire to blow
off hours a day on "Dukes of Hazzard" reruns.

So cyborgs, aliens and monsters are generally set up as half-motivated.

Alan Brooks
~~~~~~~~~~~~~~~~~~~~~~~~~~~
A schmuck with an Underwood

-- Ever wonder what Dracula's cholesterol count must have been?

nmstevens

unread,
Sep 9, 2001, 11:54:41 PM9/9/01
to
"derek" <der...@xtra.co.nz> wrote in message news:<6%Rm7.2219$WO4.3...@news.xtra.co.nz>...

> "Tom Wood" <tomw...@flash.net> wrote:
>
> > It's my understanding that every cell is replaced, one at a time, every
> > seven years. So you are indeed a new person every seven years.
> That's not correct at all. Some cells - e.g. blood platelets - are recycled
> every few weeks, similarly the lining of the intestine, skin etc. are
> continually shed. A woman is born with all the egg cells she will ever have,
> whereas men continue to produce sperm cells into old age and lose most of them
> along the way through natural loss or trivial pursuits. Brain plasticity is an
> interesting subject and although we are born with all the neurons we will ever
> have - in fact the most we will ever have - we now know that there is some
> degree of neural regeneration. I'm not sure where the 7-year thing comes from,
> unless 7 years is the longest period for any cell group to regenerate, which
> means that after every 7 years, most regenerative cells, except for neurons,
> will have been replaced.

Well, clearly some cells, like blood cells, are produced by other
cells -- stem cells. But I had always believed that the majority of
cells in the body reproduce by division. So a cell splits, yielding
two cells with half the mass of the original, which then grow back to
full size on new material, so each carries only half the substance of
the original and half new stuff, presumably derived from what we eat.
At each division, the material from the original cell is halved -- and so
at some point, presumably, just by virtue of reproducing, the original
material from the original cell is so attenuated that it has to be
virtually gone. I mean -- let's face it. I was around nine pounds when
I was born. Even if every molecule of that remained, it would still be
around a thirtieth of my current total weight.
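
To put rough numbers on that halving -- a back-of-the-envelope sketch
in Python, where the per-cell molecule count is a made-up
order-of-magnitude figure, not a measurement:

    import math

    def original_fraction(divisions: int) -> float:
        # Fraction of the founding material left after n symmetric halvings.
        return 0.5 ** divisions

    for n in (1, 7, 20, 40):
        print(f"after {n:2d} divisions: {original_fraction(n):.1e} remains")

    # Suppose, very roughly, 10**14 small molecules per cell; then the
    # expected number of surviving original molecules falls below one
    # after about this many divisions:
    print(math.ceil(math.log2(1e14)), "divisions")   # 47

A few dozen rounds of division, in other words, and the original
material is diluted to effectively nothing.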

And just as we, as total organisms, are constantly ingesting and
excreting new material -- don't the individual cells, even if they
continue to live, do the same thing? Are the individual molecules of
which, say, a brain cell is made, excreted and replaced with new
material ingested at the cellular level? Or does, say, a molecule from
the cell wall of a brain cell simply remain there, more or less in the
same place, for as long as the brain cell exists?

NMS

Gary Pollard

unread,
Sep 10, 2001, 1:42:59 AM9/10/01
to
"Alan Brooks" <al...@sirius-software.com> wrote in message
news:3B9C35FD...@sirius-software.com...

> One of the classic horror setups is to give the monster all the human
> desires but none of our compunctions. Vampires and werewolves need to
> eat, but lack the normal human compunction against ripping open the
> throat of another human being.

Sometimes true. But in many of these movies much dramatic potential is made
of the fact that many vampires and werewolves don't lack that compunction at
all. These people don't WANT to be werewolves or vampires. They have no
choice.

> Aliens and monsters are given needs --
> to propogate, control territory, eat and own nice cozy planets -- but no
> simple curiosity, no love, no aesthetic principles, no desire to blow
> off hours a day on "Dukes of Hazard" reruns.

Except for the "Dukes of Hazzard" stuff (and one could I suppose compare it
to the hermit's music), Frankenstein's monster as presented by James Whale
showed all of the above.

Many of the greatest monsters ARE conflicted. That's what separates them
from "Jaws".

Gary

BrickRage

unread,
Sep 10, 2001, 2:06:16 AM9/10/01
to

>From: Alan Brooks al...@sirius-software.com

>One of the classic horror setups is to give the monster all the human
>desires but none of our compunctions. Vampires and werewolves need to
>eat, but lack the normal human compunction against ripping open the
>throat of another human being.

I don't know what it is about this thread, but when so many "compunctions"
enter the scheme of things, I tend to have qualms. Qualms and compunctions go
hand in hand.

>no desire to blow
>off hours a day on "Dukes of Hazzard" reruns.

I don't know, Brooksie, but these "half-motivated" entities might just get
their supra-human power from the Dukes. That is if Gilligan reruns aren't
available.

Has everyone just given up on plain old humans?

Nesci

"You live in an age when people would package and standardize your life for you
- steal it from you and sell it back to you at a price. That price is very
high." -- Granny D.

The FAQ for m.w.s is http://www.communicator.com/faqs.html

Lars J. Aas

unread,
Sep 10, 2001, 7:59:45 AM9/10/01
to
In article <20010909181336...@mb-cj.aol.com>,

Mr Helsing <mrhe...@aol.com> wrote:
> The idea was that one consciousness controls both bodies. While one is awake
> the other sleeps. Before he sleeps he downloads his consciousness into the
> clone that is awakening. That way, a person has 24 hours a day to be alive
> in a well-rested body, rather than having to lose time to fatigue and sleep.

When you sleep your mind does useful work (organizing information and
similar things). I doubt you could "transfer" your consciousness back
and forth like this without losing the effects of sleeping. You might
be rested physically, but mentally you would be a mess after 48 hours.

derek

unread,
Sep 10, 2001, 8:55:46 AM9/10/01
to
"Tom Wood" <tomw...@flash.net> wrote:

> Well, hell, I was hoping for a reason to re-create myself every now and
> then.

Look Tom, I'd just go ahead and do that anyway, then sit back and chill out over
a few beers or a bottle of wine. I do it all the time. I think. . .
regarde,
derek
--
"Women sense my power, and they seek the life essence. I do not avoid women,
Mandrake, but I do deny them my essence."
