
Is science fiction possible?


phil hunt

Mar 4, 2003, 7:01:42 PM
Something I've been thinking for some time is that science fiction
(in the strict sense, which I'll define below) isn't -- or might not
be -- possible.

By science fiction I mean fiction with realistic futuristic science.
Realistic means that it must be consistent with everything we know
about the world, including science and technology, and human social
structures. Futuristic means reasonable extrapolations of present
day abilities.

In the future, the dominant species won't be us, it'll be something
a lot more intelligent than us. This could happen many ways:

- we could construct superior artificial intelligences by
programming

- we could use genetic programming to construct them (we know this
is possible, because that is how humans evolved intelligence),

- we could augment human intelligence with artifacts (like we've
already done with writing and the Internet)

- we could upload human intelligences to computers, run them faster,
and reverse-engineer them to improve them

- we could use genetic engineering to create more intelligent
people

Most likely we will do all or most of these (arguably, we're doing
them already). The more intelligent the results of these
technologies are, the better they will be at creating the next
"new, improved" version of intelligence. And so on, until you get
the Singularity.

The problem with the Singularity is that we cannot imagine what an
intelligence vastly greater than our own would be like, any more
than a dog could imagine what it is like to be a human. So we cannot
write convincingly about post-Singularity societies.

Of course, one can use tricks (such as the post-humans leave humans
alone -- or mostly so, and have stories set in such enclaves of
backwardness), but that seems a bit like cheating to me.

--
|*|*| Philip Hunt <ph...@cabalamat.org> |*|*|
|*|*| "Memes are a hoax; pass it on" |*|*|

Erik Max Francis

Mar 4, 2003, 10:06:29 PM
phil hunt wrote:

> By science fiction I mean fiction with realistic futuristic science.
> Realistic means that it must be consistent with everything we know
> about the world, including science and technology, and human social
> structures. Futuristic means reasonable extrapolations of present
> day abilities.

Since we cannot possibly know, saying whether or not this is possible is
a judgement call. So there will necessarily be disagreement.

> In the future, the dominant species won't be us, it'll be something
> a lot more intelligent than us.

Bluh, what? You were just talking about "realistic, futuristic science"
and then you say this?

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE
/ \ And your daddy died for you / And I'll do the same
\__/ India Arie
Bosskey.net: Counter-Strike / http://www.bosskey.net/cs/
A personal guide to Counter-Strike.

David Friedman

Mar 4, 2003, 11:16:05 PM
In article <slrnb6afj2...@cabalamat.uklinux.net>,
ph...@cabalamat.org (phil hunt) wrote:

> The problem with the Singularity is that we cannot imagine what an
> intelligence vastly greater than our own would be like, any more
> than a dog could imagine what it is like to be a human. So we cannot
> write convincingly about post-Singularity societies.
>
> Of course, one can use tricks (such as the post-humans leave humans
> alone -- or mostly so, and have stories set in such enclaves of
> backwardness), but that seems a bit like cheating to me.

You are assuming that the singularity happens. There are lots of
assumptions that are consistent with what we know which might slow the
rate of progress enough to make human beings a century hence more or
less like us.

To take one obvious one, Moore's law could run out. Or it could turn out
that A.I. is a much harder problem than Kurzweil et al. think it is. Or
it could turn out that consciousness isn't an emergent property of very
smart computer programs. Building a nanotech assembler might turn out to
be much harder than Drexler thinks.

Or someone might establish world government and ban many technologies,
thus drastically slowing their development.

Or ... .

What isn't clear is whether sf is possible, more than fifty years or so
out, in a best guess future.

--
www.daviddfriedman.com

John Park

Mar 5, 2003, 12:53:45 AM
phil hunt (ph...@cabalamat.org) writes:
> Something I've been thinking for some time is that science fiction
> (in the strict sense, which I'll define below) isn't -- or might not
> be -- possible.
>[...]
> In the future, the dominant species won't be us, it'll be something
> a lot more intelligent than us. This could happen many ways:
>
> [...] The more intelligent the results of these
> technologies are, the better they will be at creating the next
> "new, improved" version of intelligence. And so on, until you get
> the Singularity.
>
> The problem with the Singularity is that we cannot imagine what an
> intelligence vastly greater than our own would be like, any more
> than a dog could imagine what it is like to be a human. So we cannot
> write convincingly about post-Singularity societies.
>
> Of course, one can use tricks (such as the post-humans leave humans
> alone -- or mostly so, and have stories set in such enclaves of
> backwardness), but that seems a bit like cheating to me.
>
On the other hand, some societies may want to remain human; and on the other
other hand, what are the chances of implementing any of your suggested
developments for several billion people, many of whom live in conditions
where they would be happy just to get reliable drinking water?

And of course, "more intelligent" doesn't necessarily equate to "more
powerful", especially when it may be a question of making the human race
obsolete. (In Gibson's novels, aren't there legal limitations on the
abilities of AIs, for instance?)

--John Park

Stan

Mar 4, 2003, 6:18:13 PM
phil hunt wrote:
>
> Something I've been thinking for some time is that science fiction
> (in the strict sense, which I'll define below) isn't -- or might not
> be -- possible.
>
> By science fiction I mean fiction with realistic futuristic science.
(snip)

If this conjecture is true today, then it should prove true for, say,
1950...so set your wayback machine for 1940, research SF of the 20s
to 1950, and see what came true.

Stan.

David Friedman

Mar 5, 2003, 2:35:23 AM
In article <3E6534...@delete-upto-the-last-dash-xprt.net>,
Stan <stan...@delete-upto-the-last-dash-xprt.net> wrote:

I don't agree. Phil's point, if I understand it, is that things are
going to be changing so rapidly during the next century that the world
of the late 21st century won't make much sense to us, hence we cannot
write plausible fiction set then or later. He isn't claiming that things
changed that rapidly in the second half of the 20th century.

--
www.daviddfriedman.com

Dave O'Neill

Mar 5, 2003, 2:47:28 AM

"Erik Max Francis" <m...@alcyone.com> wrote in message
news:3E6569B5...@alcyone.com...

> phil hunt wrote:
>
> > By science fiction I mean fiction with realistic futuristic science.
> > Realistic means that it must be consistent with everything we know
> > about the world, including science and technology, and human social
> > structures. Futuristic means reasonable extrapolations of present
> > day abilities.
>
> Since we cannot possibly know, saying whether or not this is possible is
> a judgement call. So there will necessarily be disagreement.
>
> > In the future, the dominant species won't be us, it'll be something
> > a lot more intelligent than us.
>
> Bluh, what? You were just talking about "realistic, futuristic science"
> and then you say this?

And your problem with this statement is...?

Stan

Mar 4, 2003, 9:04:22 PM
David Friedman wrote:
> Stan wrote:
> > phil hunt wrote:
> > >
> > > Something I've been thinking for some time is that science fiction
> > > (in the strict sense, which I'll define below) isn't -- or might not
> > > be -- possible.
> > >
> > > By science fiction I mean fiction with realistic futuristic science.
> > (snip)
> >
> > If this conjecture is true today, then it should prove true for, say,
> > 1950...so set your wayback machine for 1940, research SF of the 20s
> > to 1950, and see what came true.
>
> I don't agree. Phil's point, if I understand it, is that things are
> going to be changing so rapidly during the next century that the world
> of the late 21st century won't make much sense to us, hence we cannot
> write plausible fiction set then or later. He isn't claiming that things
> changed that rapidly in the second half of the 20th century.

The phrase "rapid change" could be used by someone who lived their
adult life in the first half of the 20th century, then had a "peek" at
the second half of the 20th century. Maybe the most "implausible"
things in science have been discovered (albeit at a fundamental
level). Might the Heisenberg Uncertainty Principle (and that which
derives from it) be the zenith of scientific plausibility?

Stan.

how...@brazee.net

Mar 5, 2003, 8:01:21 AM

On 4-Mar-2003, ph...@cabalamat.org (phil hunt) wrote:

> By science fiction I mean fiction with realistic futuristic science.
> Realistic means that it must be consistent with everything we know
> about the world, including science and technology, and human social
> structures. Futuristic means reasonable extrapolations of present
> day abilities.

Well, there is quite a bit of dystopian SF that fits this. Maybe with
dictatorships taking over, maybe with WW III destroying civilization.

how...@brazee.net

Mar 5, 2003, 8:03:16 AM
I think my grandmother saw more impactful change than I will. She
remembered seeing her first car and her first phone, her first airplane trip
to another country, and watching people walk on the moon.

phil hunt

Mar 4, 2003, 11:22:08 PM
On Tue, 04 Mar 2003 19:06:29 -0800, Erik Max Francis <m...@alcyone.com> wrote:
>phil hunt wrote:
>> In the future, the dominant species won't be us, it'll be something
>> a lot more intelligent than us.
>
>Bluh, what? You were just talking about "realistic, futuristic science"
>and then you say this?

Yes. It seems to me that barring some catastrophe like a major
nuclear war, this is bound to happen. At the latest by 2200, at the
earliest by 2040.

I gave some of my reasons in my original post.

Mark

Mar 5, 2003, 9:05:44 AM
ph...@cabalamat.org (phil hunt) wrote in message news:<slrnb6afj2...@cabalamat.uklinux.net>...

> Something I've been thinking for some time is that science fiction
> (in the strict sense, which I'll define below) isn't -- or might not
> be -- possible.
>

Except that science fiction isn't predictive. That's not what it
does. When it has tried to be, it has occasionally nailed something
with amazing success, but the vast majority of times has proved
remarkably wrong.

Science fiction is about the human response to changed conditions. As
such, its chief job is to tell the truth about us--people and our
responses. The conditions are secondary. The best of it depicts
conditions which arise from the human response to change, but in the
sense of blueprinting the future, no.

The easy answer to your question, then, is No, science fiction is not
possible. The question--put that way--is beside the point.

But all invention is preceded by imagination. So, curiously enough,
while it might not be possible, it may be probable. <g>

Mark
author of:
COMPASS REACH
METAL OF NIGHT
PEACE & MEMORY (forthcoming)
www.marktiedemann.com

e...@ekj.vestdata.no

Mar 5, 2003, 9:27:10 AM
On Wed, 5 Mar 2003, phil hunt wrote:

> In the future, the dominant species won't be us, it'll be something a
> lot more intelligent than us.

Strong claim. You are not saying "may not be us", you are saying "won't
be us". Strong claims require strong supporting evidence.

> - we could construct superior artificial intelligences by programming

We *could* however also experience the same advances in the next 20
years as we have experienced in the last 20 years. That is, very few
fundamental advances at all. (The few advances we do have are mainly
due to tossing vastly more computational capacity at the problem.)

> - we could use genetic programming to construct them (we know this
> is possible, because that is how humans evolved intelligence),

But we don't know how easy it is going to be.

> - we could augment human intelligence with artifacts (like we've
> already done with writing and the Internet)

But in spite of this, a play written 200 years ago often deals with
issues completely relevant today, and is still understandable today.
So such artifacts (like computers) need not fundamentally change who
we are.

> - we could use genetic engineering to create more intelligent
> people

How is this different from "using genetic programming to construct" ?

> Most likely we will do all or most of these (arguably, we're doing
> them already).

Arguably, humans today are very much the same beings they were a
millennium ago. We have vastly different *technical* capabilities, but
texts written back then still deal with issues that are relevant today.

We don't know. That is a typical property of the future. It seems to me
you are lacking the basis for your seemingly unshakable *belief* that
the singularity is coming right around the corner.

Notice that the same argument you just made could have been made just as
convincingly 50 years ago, and yet, here we are, pretty much the same
people we were back then.

Sincerely,
Eivind Kjørstad

Albert Deinbeck

Mar 5, 2003, 9:52:13 AM
"phil hunt" <ph...@cabalamat.org> schrieb im Newsbeitrag
news:slrnb6afj2...@cabalamat.uklinux.net...

> Something I've been thinking for some time is that science fiction
> (in the strict sense, which I'll define below) isn't -- or might not
> be -- possible.
>
> By science fiction I mean fiction with realistic futuristic science.
> Realistic means that it must be consistent with everything we know
> about the world, including science and technology, and human social
> structures. Futuristic means reasonable extrapolations of present
> day abilities.

It's not the task of SF to stay valid. SF isn't written for the future, but
today, reflecting the hopes and fears of contemporary people. Asimov didn't
foresee the computer or the internet in his novels, nevertheless they are
great SF.

>
> In the future, the dominant species won't be us, it'll be something
> a lot more intelligent than us.

Why? There are so many dumb people around and it seems to me they are doing
fine. Why should something more intelligent evolve? Intelligence is very
energy-consuming. There must be very good reasons for such an expense.
AI in that respect doesn't seem to be cheaper, as far as we know.

phil hunt

Mar 5, 2003, 9:39:46 AM
On Wed, 05 Mar 2003 04:16:05 GMT, David Friedman <dd...@daviddfriedman.com> wrote:
>In article <slrnb6afj2...@cabalamat.uklinux.net>,
> ph...@cabalamat.org (phil hunt) wrote:
>
>> The problem with the Singularity is that we cannot imagine what an
>> intelligence vastly greater than our own would be like, any more
>> than a dog could imagine what it is like to be a human. So we cannot
>> write convincingly about post-Singularity societies.
>>
>> Of course, one can use tricks (such as the post-humans leave humans
>> alone -- or mostly so, and have stories set in such enclaves of
>> backwardness), but that seems a bit like cheating to me.
>
>You are assuming that the singularity happens.

Yes.

>There are lots of
>assumptions that are consistent with what we know which might slow the
>rate of progress enough to make human beings a century hence more or
>less like us.
>
>To take one obvious one, Moore's law could run out.

At some point it probably will. But many people have predicted that
it would, and their predictions haven't come to pass. I think the
reason for this is that people are saying (correctly) "you can't
make computing machinery smaller than X with technology Y", but
then technology Z (which permits smaller machinery) is used instead.

Consider that a neuron consists of (at a guess) somewhere in the
region of 10^12 atoms, and we *currently* have technology that can
deal with single atoms at a time (albeit this is just a technology
demonstration at the moment). So I doubt if Moore's law will break
before we get computers with at least as much processing-density as
the brain.

> Or it could turn out
>that A.I. is a much harder problem than Kurzweil et. al. think it is.

It may well be that humans are just too stupid to write software as
intelligent as humans. If that is the case, there are other methods,
which rely on a more brute-force approach: i.e. genetic programming
to produce an intelligent program, and reverse-engineering the
circuitry of the brain.

Bear in mind that human intelligence can't be all that complicated
really: the human genome contains less information than the average
Linux distro, and the genome codes for a lot more than intelligence.
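The genome-versus-distro comparison can be made concrete with rough numbers. A sketch under stated assumptions (~3.2e9 base pairs at 2 bits each; the distro image size is picked purely for illustration):

```python
# Raw information content of the human genome, assuming ~3.2e9 base
# pairs and 2 bits per base pair (4 possible bases, uncompressed).
base_pairs = 3.2e9
genome_bytes = base_pairs * 2 / 8

# Assume a ~3 GiB distribution image for comparison.
distro_bytes = 3 * 2**30

print(f"genome: ~{genome_bytes / 2**20:.0f} MiB")
print(f"distro: ~{distro_bytes / 2**20:.0f} MiB")
print("genome is smaller:", genome_bytes < distro_bytes)
```

So the uncompressed genome comes in well under a gigabyte, which is the sense in which it "contains less information than the average Linux distro".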

>Or
>it could turn out that consciousness isn't an emergent property of very
>smart computer programs. Building a nanotech assembler might turn out to
>be much harder than Drexler thinks.

Quite possibly. However, there are many possible approaches, and we
are talking about lots of very useful technologies, which are
getting large amounts of money to fund research.

It seems to me that at least one technology that could lead to AI is
likely to succeed.

>Or someone might establish world government and ban many technologies,
>thus drastically slowing their development.

This is possible, but IMO unlikely.

phil hunt

Mar 5, 2003, 9:40:18 AM

I'm not sure what you are getting at.

phil hunt

Mar 5, 2003, 9:42:38 AM
On Wed, 05 Mar 2003 07:35:23 GMT, David Friedman <dd...@daviddfriedman.com> wrote:
>
>I don't agree. Phil's point, if I understand it, is that things are
>going to be changing so rapidly during the next century that the world
>of the late 21st century won't make much sense to us, hence we cannot
>write plausible fiction set then or later.

Yes. To be precise, the beings of the future will be so much more
intelligent than us that we will not be able to understand them.

> He isn't claiming that things
>changed that rapidly in the second half of the 20th century.

It seems to me that the rate of change is increasing. E.g. the
amount of time for processors to get twice as fast, or hard disks
twice as capacious, is less now than in the 1980s.

phil hunt

Mar 5, 2003, 9:43:33 AM

A post-apocalypse society won't have futuristic technology.

Cyrus Levesque

Mar 5, 2003, 12:09:19 PM
ph...@cabalamat.org (phil hunt) wrote in message news:<slrnb6afj2...@cabalamat.uklinux.net>...
> Something I've been thinking for some time is that science fiction
> (in the strict sense, which I'll define below) isn't -- or might not
> be -- possible.
>
> By science fiction I mean fiction with realistic futuristic science.
> Realistic means that it must be consistent with everything we know
> about the world, including science and technology, and human social
> structures. Futuristic means reasonable extrapolations of present
> day abilities.
>
> In the future, the dominant species won't be us, it'll be something
> a lot more intelligent than us. This could happen many ways:
>
> - we could construct superior artificial intelligences by
> programming
>
> - we could use genetic programming to construct them (we know this
> is possible, because that is how humans evolved intelligence),
>
> - we could augment human intelligence with artifacts (like we've
> already done with writing and the Internet)
>
> - we could upload human intelligences to computers, run them faster,
> and reverse-engineer them to improve them
>
> - we could use genetic engineering to create more intelligent
> people
>

Nitpick: You leave out the most likely way humans will be replaced: we
could be outevolved, probably but not certainly by something that
evolves from us. :)

> Most likely we will do all or most of these (arguably, we're doing
> them already). The more intelligent the results of these
> technologies are, the better they will be at creating the next
> "new, improved" version of intelligence. And so on, until you get
> the Singularity.
>
> The problem with the Singularity is that we cannot imagine what an
> intelligence vastly greater than our own would be like, any more
> than a dog could imagine what it is like to be a human. So we cannot
> write convincingly about post-Singularity societies.

Where do you get this rule that SF has to be about post-human culture?
Was there something in the news this morning I missed?



> Of course, one can use tricks (such as the post-humans leave humans
> alone -- or mostly so, and have stories set in such enclaves of
> backwardness), but that seems a bit like cheating to me.

I have to disagree - mostly just for the sake of argument :) - that we
can't write convincingly about post-Singularity societies. First, you
say that we can't imagine a greater intelligence, but all life (well,
on earth) is basically similar. A human goes through "mating dances"
very different from a bird's, but the idea is the same - to attract a
mate. Red and black ants fight and die over something as trivial as
the color of their exoskeleton. All these things we can understand. If
we can think down, why is it impossible to think up?

And in the same way, why can't we write convincingly about
"post-Singularity" societies? I think we could, simply because very
little in life changes. One hundred years ago the daily life of an
average person in Europe was almost exactly the same as it was one
thousand years ago. Even today, a majority of the world still lives
like that. A great deal has changed over the past century, but all the
stuff that makes us human has not. Monogamy (and occasional polygamy),
organized governments being more effective on average than
disorganized ones, class differences, faith systems, and the concept
of honor have been part of humanity since the dawn of man. What else
do you need to write a story about? And why would a faster-than-light
drive or virtual reality change that?

If you're saying that "no matter what we write, there will be
something wrong with it," sure, I agree. I mean, duh. No prediction
has ever been 100 percent correct, unless you're the religious type.
In the very unlikely event that we guess right on what new gadgets can
do, we'll still get their names wrong. But if you're saying "there's
no point anymore in trying to imagine what the future will be like
because it's become beyond our comprehension", I think there are so
many things wrong with that.

Michael J Ash

Mar 5, 2003, 1:05:22 PM

No, the advances are just coming in other areas. When you were born, how
many people could e-mail their relatives halfway across the world at
almost no cost? How many genetically-engineered plants were being grown?
How many people were sent home from the hospital mere days after heart
surgery?

IMO, any of these is more impactful than watching people walk on the moon.
That was a good stunt, but until we do something more permanent, that's
all it's really been.

--
"From now on, we live in a world where man has walked on the moon.
And it's not a miracle, we just decided to go." -- Jim Lovell

Mike Ash - <http://www.mikeash.com/>, <mailto:ma...@mikeash.com>

Mark Atwood

Mar 5, 2003, 1:41:03 PM
ph...@cabalamat.org (phil hunt) writes:
>
> Bear in mind that human intelligence can't be all that complicated
> really: the human genome contains less information than the average
> Linux distro, and the genome codes for a lot more than intelligence.

One measly CDROM sent back from 2103 to 2003 could give the receiver
of that CD the ability to conquer/destroy/remake the world and the species.

--
Mark Atwood | Well done is better than well said.
m...@pobox.com |
http://www.pobox.com/~mra

Mark Atwood

Mar 5, 2003, 1:46:20 PM
David Friedman <dd...@daviddfriedman.com> writes:
> Building a nanotech assembler might turn out to
> be much harder than Drexler thinks.

Building a general purpose assembler will probably be insanely difficult
and uneconomic, yes.

Fortunately there are lots of small intermediate steps into that realm
of technology, which *are* incrementally doable, financable, and
profitable.

There is also a lot more to nanotech than "carbon structures in hard
vacuum" "hard" assembly techniques. "Wet process" nanotech exists
already, today, as both an existence proof and as something to study /
base on / improve / tweak / build from.

Karl M Syring

Mar 5, 2003, 2:03:16 PM
Michael J Ash wrote on Wed, 5 Mar 2003 12:05:22 -0600:
>
> No, the advances are just coming in other areas. When you were born, how
> many people could e-mail their relatives halfway across the world at
> almost no cost? How many genetically-engineered plants were being grown?
> How many people were sent home from the hospital mere days after heart
> surgery?
>
> IMO, any of these is more impactful than watching people walk on the moon.
> That was a good stunt, but until we do something more permanent, that's
> all it's really been.

But it is still nothing compared to the most electrifying thing
of all, that is electricity in every house. Every eye witness
would confirm this, if you can find one.

Karl M. Syring

phil hunt

Mar 5, 2003, 12:44:55 PM
On Wed, 5 Mar 2003 15:52:13 +0100, Albert Deinbeck <albert....@gmx.de> wrote:
>"phil hunt" <ph...@cabalamat.org> schrieb im Newsbeitrag
>news:slrnb6afj2...@cabalamat.uklinux.net...
>> Something I've been thinking for some time is that science fiction
>> (in the strict sense, which I'll define below) isn't -- or might not
>> be -- possible.
>>
>> By science fiction I mean fiction with realistic futuristic science.
>> Realistic means that it must be consistent with everything we know
>> about the world, including science and technology, and human social
>> structures. Futuristic means reasonable extrapolations of present
>> day abilities.
>
>It's not the task of SF to stay valid. SF isn't written for the future, but
>today, reflecting the hopes and fears of contemporary people. Asimov didn't
>foresee the computer or the internet in his novels, nevertheless they are
>great SF.

Asimov's novels included robots, which implies computers. IIRC one
of his books had a computer called "Multivac".

>> In the future, the dominant species won't be us, it'll be something
>> a lot more intelligent than us.
>
>Why? There are so many dumb people around and it seems to me they are doing
>fine. Why should something more intelligent evolve?

I'm not talking about something evolving by natural selection;
please re-read my article until you understand it.

>Intelligence is very
>energy-consuming.

Indeed. It will probably be the case that future AIs will use less
energy per unit of processing than the human brain does.

phil hunt

Mar 5, 2003, 12:40:59 PM
On Wed, 5 Mar 2003 15:27:10 +0100, e...@ekj.vestdata.no <e...@ekj.vestdata.no> wrote:
>On Wed, 5 Mar 2003, phil hunt wrote:
>
>> In the future, the dominant species won't be us, it'll be something a
>> lot more intelligent than us.
>
>Strong claim. You are not saying "may not be us", you are saying "won't
>be us". Strong claims require strong supporting evidence.

Allow me to clarify. IMO the dominant species will be something much
much more intelligent than humans currently are. Having said that,
it may be "us" in the sense that some people alive today will be
transformed into ultra-intelligent personalities downloaded into a
computer.

>> - we could construct superior artificial intelligences by programming
>
>We *could* however also experience the same advances in the next 20
>years as we have experienced in the last 20 years. That is very few
>fundamental advances at all.

I don't agree that there have been very few advances in the last 20
years. I'm not an expert in that field, but my understanding is that
most of what is known today about how the brain works, how neurons
work, about neurotransmitters, etc., wasn't known 20 years ago.

>> - we could use genetic programming to construct them (we know this
>> is possible, because that is how humans evolved intelligence),
>
>But we don't know how easy it is goign to be.

True. But we do know that it is something that is fundamentally
doable if enough resources are thrown at the problem.

>> - we could augment human intelligence with artifacts (like we've
>> already done with writing and the Internet)
>
>But in spite of this, a play written 200 years ago often deals with
>issues completely relevant today, and is still understandable today.

That's true.

> So such
>artifacts (like computers) need not fundamentally change who we are.

Not in art, but it does in technology. For example, using computers
is a *lot* easier now when you are stuck at some complex task and
you can quickly google to get the answer.

>> - we could use genetic engineering to create more intelligent
>> people
>
>How is this different from "using genetic programming to construct" ?

Genetic engineering means looking at the human genome, working out
what various genes do, then changing things here and there in the
hope of producing cleverer people.

Genetic programming means doing the same sorts of things, except
everything is being done to 0s and 1s inside a computer.
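As a toy illustration of the genetic-programming side of that distinction (entirely hypothetical code; the count-the-1s fitness function is just a stand-in for whatever capability is being selected for), bitstrings can improve under selection and mutation without anyone working out what an individual bit does:

```python
import random

random.seed(0)  # deterministic for the sake of the example
GENOME_LEN, POP, GENERATIONS, MUT_RATE = 32, 20, 60, 0.05

def fitness(g):
    # Toy objective: number of 1 bits. Stands in for any measurable trait.
    return sum(g)

def mutate(g):
    # Flip each bit independently with probability MUT_RATE.
    return [b ^ (random.random() < MUT_RATE) for b in g]

# Start from random genomes; nobody assigns meaning to any bit.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]          # keep the fitter half unchanged
    pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(pop, key=fitness)
print(fitness(best))                    # approaches GENOME_LEN
```

The same loop structure applies whether the genome encodes bits, program trees, or network weights; only the fitness function and mutation operator change.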

>> Most likely we will do all or most of these (arguably, we're doing
>> them already).
>
>Arguably, humans today are very much the same beings they were a
>millennium ago.

Yes, we are. We've only just started using these technologies.

>We don't know.

True, but we can make educated guesses. My guess is the singularity
is a certainty, provided that technology continues to advance in
much the same way it has been doing. (Obviously an anti-technology
world state changes everything).

>Notice that the same argument you just made could have been made just as
>convincingly 50 years ago,

Could it have? Was it?

(Aside: it might be worthwhile collating lots of past futurology,
and see where they've got it right and wrong, and importantly, to
see if there are any *patterns* to what they've got wrong: then we
can use those patterns to make current futurology more accurate, by
not repeating past mistakes).

>and yet, here we are, pretty much the same
>people we were back then.

Indeed so.

Kathy Gallagher

Mar 5, 2003, 2:30:35 PM

<how...@brazee.net> wrote in message
news:oAm9a.3535$wJ1.3...@newsread2.prod.itd.earthlink.net...

When my son was very little he asked us what kind of computers we had as
kids. We had to explain we didn't have computers. He wanted to know how we
sent email, and I said we wrote letters and mailed them. He was quite
disgusted.

I've had some kind of computer since 1980. The first sounds he responded to
besides my voice were an adding machine and the dot matrix printer.


--
KG
Take what you need and leave the rest.


David Friedman

Mar 5, 2003, 2:42:18 PM
In article <slrnb6cdl8...@cabalamat.uklinux.net>,
ph...@cabalamat.org (phil hunt) wrote:

> Genetic engineering means looking at the human genome, working out
> what various genes do, then changing things here and there in the
> hope of producing cleverer people.
>
> Genetic programming means doing the same sorts of things, except
> everything is being done to 0s and 1s inside a computer.

I disagree. Genetic engineering requires, as you say, "working out what
various genes do." Genetic programming doesn't require working out what
various 0's and 1's do. The two processes are fundamentally different.
One is engineering, one is evolution.
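The distinction can be made concrete with a minimal genetic-programming-style loop (a sketch; the bit strings and the toy "one-max" fitness function are invented for illustration). Nothing in the loop ever works out what an individual bit does; it only mutates blindly and selects on measured fitness:

```python
import random

def fitness(bits):
    # Toy "one-max" objective: count the 1-bits. The loop below never
    # asks *why* a bit helps; it only measures aggregate performance.
    return sum(bits)

def evolve(length=32, pop_size=20, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] ^= 1  # blind, random mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the maximum of 32, with no bit ever "understood"
```

Genetic engineering, by contrast, would correspond to reading the program and deliberately setting particular bits, which requires knowing what they do.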

> True, but we can make educated guesses. My guess is the singularity
> is a certainty, provided that technology continues to advance in
> much the same way it has been doing. (Obviously an anti-technology
> world state changes everything).

But it also changes everything if the rate of advance slows for
technical reasons--if, for example, Moore's law runs out of steam.

> (Aside: it might be worthwhile collating lots of past futurology,
> and see where they've got it right and wrong, and importantly, to
>> see if there are any *patterns* to what they've got wrong: then we
> can use those patterns to make current futurology more accurate, by
> not repeating past mistakes).

Orwell did exactly that, limited to his own predictions, in one of his
late essays. He concluded that he got right the way the world was going,
wrong how fast it was going there.

In retrospect, however, it isn't clear that the first conclusion was
true.

--
www.daviddfriedman.com

GSV Three Minds in a Can

unread,
Mar 5, 2003, 2:51:15 PM3/5/03
to
Bitstring <m3bs0pb...@khem.blackfedora.com>, from the wonderful
person Mark Atwood <m...@pobox.com> said

>ph...@cabalamat.org (phil hunt) writes:
>>
>> Bear in mind that human intelligence can't be all that complicated
>> really: the human genome contains less information than the average
>> Linux distro, and the genome codes for a lot more than intelligence.
>
>One measly CDROM sent back from 2103 to 2003 could give the receiver
>of that CD the ability to conquer/destroy/remake the world and the species.

I sort of doubt it, any more than one =book= sent back from now to 1903
would have that effect. Now if they sent us a 2013-style SSMM they could
probably pack enough data in to change the world, except we'd have about
as much chance of reading it as 1903 would have with a DVD.

(The amount of information needed to convey 'world changing technology'
is growing like Topsy. "Bang the rocks together" doesn't cut it any
more. 8>.)

--
GSV Three Minds in a Can
Outgoing Msgs are Turing Tested,and indistinguishable from human typing.

Default User

unread,
Mar 5, 2003, 4:38:20 PM3/5/03
to

phil hunt wrote:

> Allow me to clarify. IMO the dominant species will be something much
> much more intelligent than humans currently are. Having said that,
> it may be "us" in the sense that some people alive today will be
> transformed into ultra-intelligent personalities downloaded into a
> computer.

Here's the problem. You make sweeping general assumptions and declare
them to be facts. We don't even come close to understanding the nature
of consciousness, let alone how to "download" such a thing into a
machine. Or even to create an ultra-intelligent conscious entity within a
machine from first principles. And we may not within the lifetime of any
current person.

Similarly, we don't have a good idea of how to interface electronic
systems with the human brain. It's possible that this will be an
extremely difficult task. So while very small, very fast, very powerful,
very high storage computers will almost certainly be a reality,
"plugging in" such a thing may not be easy at all. We may end up with a
human/machine interface more crude (aural or visual) than you talk
about. As such, the user would have data and data-crunching available,
but no increase in intelligence, any more than I'm more intelligent than
my Grandpa was because I have a computer with google.

Brian Rodenborn

Malcolm McMahon

unread,
Mar 5, 2003, 8:24:44 AM3/5/03
to

I suspect it depends how old you are.

I find the prospect of Nanotechnology quite convincing, and
nanotechnology can do to manufacturing and medicine what the integrated
circuit has done to information processing.

Scott Dubin

unread,
Mar 5, 2003, 6:23:21 PM3/5/03
to
ph...@cabalamat.org (phil hunt) wrote in message news:<slrnb6afj2...@cabalamat.uklinux.net>...
> Something I've been thinking for some time is that science fiction
> (in the strict sense, which I'll define below) isn't -- or might not
> be -- possible.
>
> By science fiction I mean fiction with realistic futuristic science.
> Realistic means that it must be consistent with everything we know
> about the world, including science and technology, and human social
> structures. Futuristic means reasonable extrapolations of present
> day abilities.
>
> In the future, the dominant species won't be us, it'll be something
> a lot more intelligent than us. This could happen many ways:
>
> - we could construct superior artificial intelligences by
> programming
>
> - we could use genetic programming to construct them (we know this
> is possible, because it is how humans evolved intelligence),
>
> - we could augment human intelligence with artifacts (like we've
> already done with writing and the Internet)
>
> - we could upload human intelligences to computers, run them faster,
> and reverse-engineer them to improve them
>
> - we could use genetic engineering to create more intelligent
> people
>
> Most likely we will do all or most of these (arguably, we're doing
> them already).

Or, maybe we will simply find out that the physical laws of our
universe do not make any of these science fiction ideas possible.
People used to largely believe that witchcraft and spirit summoning and
such were possible with the right knowledge; modern belief would seem
to suggest that the physical laws of the world do not make this
possible.

I see no evidence that the physical laws of our world allow for
cognizant machines, or "augmenting human intelligence."

Genetic engineering may be possible in the sense that you kill/neuter
everybody who isn't smart and allow the smart people to breed, but the
physical laws of the universe may not support a gene manipulation
process that works in a different way.

These things may be possible... or they may not.

phil hunt

unread,
Mar 5, 2003, 3:06:20 PM3/5/03
to
On 05 Mar 2003 10:41:03 -0800, Mark Atwood <m...@pobox.com> wrote:
>ph...@cabalamat.org (phil hunt) writes:
>>
>> Bear in mind that human intelligence can't be all that complicated
>> really: the human genome contains less information than the average
>> Linux distro, and the genome codes for a lot more than intelligence.
>
>One measly CDROM sent back from 2103 to 2003 could give the receiver
>of that CD the ability to conquer/destroy/remake the world and the species.

I doubt if they'll be using CDROMs in 2103.

OTOH, they probably will be using ASCII (in its Unicode
incarnation), several Internet protocols such as ftp and http, etc.

phil hunt

unread,
Mar 5, 2003, 3:08:19 PM3/5/03
to
On 05 Mar 2003 10:46:20 -0800, Mark Atwood <m...@pobox.com> wrote:
>David Friedman <dd...@daviddfriedman.com> writes:
>> Building a nanotech assembler might turn out to
>> be much harder than Drexler thinks.
>
>Building a general purpose assembler will probably be insanely difficult
>and uneconomic, yes.

Why? You only have to make one of them, then it can make however
many copies of itself you want.

A bit like software: the 1st copy is expensive, all the others are
free.
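The economics here is exponential doubling: each generation of self-replication doubles the count, so the number of generations needed grows only logarithmically with the number of assemblers wanted. A back-of-envelope sketch (numbers arbitrary):

```python
import math

def generations_needed(target_copies):
    # One assembler copies itself each generation: 1 -> 2 -> 4 -> 8 ...
    # After g generations there are 2**g assemblers.
    return math.ceil(math.log2(target_copies))

print(generations_needed(1_000_000))  # 20: twenty doublings exceed a million copies
```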

>Fortunately there are lots of small intermediate steps into that realm
>of technology, which *are* incrementally doable, financable, and
>profitable.

Indeed. There are plenty of reasons why small machines might be
useful.

Erik Max Francis

unread,
Mar 5, 2003, 7:16:10 PM3/5/03
to
phil hunt wrote:

> Yes. It seems to me that barring some catastrophe like a major
> nuclear war, this is bound to happen. At the latest by 2200, at the
> earliest by 2040.

But keep in mind that people who say we'll be unable to predict the
future in the future are, themselves, attempting to predict the
future. There's a certain amount of irony involved.

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE
/ \ The basis of optimism is sheer terror.
\__/ Oscar Wilde
HardScience.info / http://www.hardscience.info/
The best hard science Web sites that the Web has to offer.

James

unread,
Mar 5, 2003, 7:33:13 PM3/5/03
to
Michael J Ash <mik...@csd.uwm.edu> wrote in message news:<Pine.OSF.3.96.103030...@alpha3.csd.uwm.edu>...

> On Wed, 5 Mar 2003 how...@brazee.net wrote:
> No, the advances are just coming in other areas. When you were born, how
> many people could e-mail their relatives halfway across the world at
> almost no cost?

Nitpick: This same argument could have been used in reference to
telephones. Indeed, both telephones and the Internet were predicted
shortly after their introduction to be inventions that would change
the world. I think that in the case of the Internet we'll see a
situation broadly similar to telephones. Yes, it will be a great
tool, and it will have a major impact on many aspects of our lives.
But--it won't bring forth the drastic changes predicted by many.

> How many genetically-engineered plants were being grown?

I dare say a very large portion of all plants being grown were,
technically, products of genetic engineering.


Now, as for the Singularity, it's another idea that has many
proponents but a shaky foundation. If one is to assume that a
Singularity of technology is upon us because technology has been
advancing with increasing rapidity, then one has to also take into
account that one hundred years ago a similar argument could have been
made. Technology advanced more rapidly in the eighteenth and nineteenth
centuries than in the previous millennium. But we saw no Singularity
in the twentieth century. So what makes the twenty-first any
different? Who's to say we don't just get increasingly sophisticated
technology, but no "Spike" or other world-changing Singularity
event?

Remember, just because something is *possible* does *not* guarantee
that it will happen.

We don't even understand our own intelligence, our own minds yet. One
could argue, then, that it would be rather difficult to successfully
create a true artificial intelligence. Personally, I think AI has a
50-50 chance. It may be that we can't come up with anything more
intelligent than a slick and sophisticated talking-computer user
interface. Or we could create AI.

Now, as for writing SF set after a Singularity, I'm afraid you're,
well, wrong. A published example that I know of would be Sean
Williams and Shane Dix's "Echoes of Earth" (IIRC), although one
might argue against it, since that Singularity didn't end well. A
much better example, though one which hasn't yet to my knowledge
produced any printed commercial fiction, is the Orion's Arm setting
being created online at http://www.orionsarm.com
This setting succeeds in telling post-Singularity stories largely
because the authors realize something I think you've overlooked--just
because we may be superseded by AI superintelligences, doesn't mean
that we ourselves become extinct. There'll still be a number of
sub-Singularity intelligences around, just as there are several
sub-sentient species on Earth at this time (dolphins, chimps,
etc.--and I apologize if I offend anyone by referring to them as
"sub-sentient").

Actually, among the many projects I'm toying around with right now are
a couple involving the Singularity and various possibilities
surrounding it. One is a sort of cross-over story, the Singularity
and the Apocalypse of Revelation. Another is a post-apocalyptic type
thing, based on the Singularity being ultimately self-destructive.


--
James
--
So close your eyes and swallow
Whatever gets you through the night
If I could ask this one thing
Please tell me everything's alright

how...@brazee.net

unread,
Mar 5, 2003, 7:39:38 PM3/5/03
to

On 5-Mar-2003, Michael J Ash <mik...@csd.uwm.edu> wrote:

> > I think my grandmother saw more impactful change than I will. She
> > remembered seeing her first car and her first phone, her first airplane
> > trip
> > to another country, and watching people walk on the moon.
>
> No, the advances are just coming in other areas. When you were born, how
> many people could e-mail their relatives halfway across the world at
> almost no cost? How many genetically-engineered plants were being grown?
> How many people were sent home from the hospital mere days after heart
> surgery?
>
> IMO, any of these is more impactful than watching people walk on the moon.
> That was a good stunt, but until we do something more permanent, that's
> all it's really been.

But none of those are nearly as impactful as the automobile and phone.

Mike Schilling

unread,
Mar 5, 2003, 7:49:50 PM3/5/03
to

<how...@brazee.net> wrote in message
news:eNw9a.4445$gF3.4...@newsread1.prod.itd.earthlink.net...

Or as impactful as what often occurs when using both at the same time.


Richard James

unread,
Mar 5, 2003, 8:35:01 PM3/5/03
to
Default User wrote:

>
>
> phil hunt wrote:
>
>> Allow me to clarify. IMO the dominant species will be something much
>> much more intelligent than humans currently are. Having said that,
>> it may be "us" in the sense that some people alive today will be
>> transformed into ultra-intelligent personalities downloaded into a
>> computer.
>
> Here's the problem. You make sweeping general assumptions and declare
> them to be facts. We don't even come close to understanding the nature
> of consciousness, let alone how to "download" such a thing into a
> machine. Or even to create an ultra-intelligent conscious entity within a
> machine from first principles. And we may not within the lifetime of any
> current person.

It's like saying in the future we will all have flying cars and then
assuming that all science fiction must have flying cars in it. It ignores
the possibility that there will never be any flying cars. Just as there may
never be AI, ultra-intelligent conscious entities, or even the singularity
itself.

The singularity is itself science fiction.

Richard :)

--
Will kill for Documentation.
A Vic 20 is faster than a C64: 8bit roxs
http://dogmilk.homelinux.com/

Keith F. Lynch

unread,
Mar 5, 2003, 9:45:27 PM3/5/03
to
phil hunt <ph...@cabalamat.org> wrote:
> A post-apocalypse society won't have futuristic technology.

Maybe, maybe not. A nuclear war, asteroid strike, or bio-engineered
plague could result in billions of deaths, but it wouldn't necessarily
end civilization, or even slow down technological progress very much.
--
Keith F. Lynch - k...@keithlynch.net - http://keithlynch.net/
I always welcome replies to my e-mail, postings, and web pages, but
unsolicited bulk e-mail (spam) is not acceptable. Please do not send me
HTML, "rich text," or attachments, as all such email is discarded unread.

Keith F. Lynch

unread,
Mar 5, 2003, 10:26:59 PM3/5/03
to
GSV Three Minds in a Can <G...@quik.clara.co.uk> wrote:
> ... we'd have about as much chance of reading it as 1903 would have
> with a DVD.

Interesting concept. Just how difficult would it have been to read
a DVD in 1903? Let's say they were aware that it somehow contained
important information, and were willing to put unlimited resources
into figuring it out. Presumably one of the first things they'd do is
look at it under a microscope. And that should pretty much suffice,
except that they'd have to do a lot of it.

That will get them the bit pattern. Just how hairy is the encoding?
If it were straight ASCII, they should be able to figure out the ASCII
code using basic cryptographic techniques that have been known for
centuries.
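Those "basic cryptographic techniques" amount to frequency analysis: in English text encoded one byte per character, the space byte dominates, followed by common letters, which would quickly pin the code down. A sketch (the sample text is made up):

```python
from collections import Counter

sample = ("it was a bright cold day in april and the clocks were "
          "striking thirteen")
byte_counts = Counter(sample.encode("ascii"))
# Rank the byte values by frequency, exactly as a 1903 cryptanalyst
# would rank repeated bit patterns by how often they occur.
ranked = [chr(b) for b, _ in byte_counts.most_common()]
print(ranked[0] == " ")  # True: the space byte is the most frequent
```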

Are data DVDs encoded with CSS, or just video DVDs? Even if so, if
CSS could be broken by a teenager, I think the best minds of 1903
could do it.

Once they know the encoding, how long would it take them to photograph
all of the disc through a microscope, so that the thousands of
photographic plates could be farmed out to separate teams to decode
in parallel?

Instead of using teams of people with pen and paper, they might make
copperplate etchings from the photographic plates, and read them with
an electromechanical machine. They did have punched cards and punched
paper tape in 1903, and machines to produce and read them.

1803 would have been more challenging. Microscopes, but no
photography, and far less familiarity with binary codes.

> (The amount of information needed to convey 'world changing
> technology' is growing like topsy. "Bang the rocks together"
> doesn't cut it any more. 8>.)

I'm not so sure. One page is all it would have taken to convey to
Nazi Germany enough about the atom bomb that they could build one
within a year. And how complicated are the formulas for high
temperature superconductors? How about the genes for making morphine
and cocaine? Splice those genes into common yard grass, and you'll
change the world.

One bit can convey a lot of information. In principle it can halve
the cost of a research program, by specifying which of two equally
plausible lines of development is the more productive. And a single
DVD can hold nearly forty billion bits.

phil hunt

unread,
Mar 5, 2003, 9:54:36 PM3/5/03
to
On Wed, 05 Mar 2003 19:42:18 GMT, David Friedman <dd...@daviddfriedman.com> wrote:
>In article <slrnb6cdl8...@cabalamat.uklinux.net>,
> ph...@cabalamat.org (phil hunt) wrote:
>
>> Genetic engineering means looking at the human genome, working out
>> what various genes do, then changing things here and there in the
>> hope of producing cleverer people.
>>
>> Genetic programming means doing the same sorts of things, except
>> everything is being done to 0s and 1s inside a computer.
>
>I disagree. Genetic engineering requires, as you say, "working out what
>various genes do." Genetic programming doesn't require working out what
>various 0's and 1's do. The two processes are fundamentally different.
>One is engineering, one is evolution.

You're right; I should have phrased my comment better.

>> True, but we can make educated guesses. My guess is the singularity
>> is a certainty, provided that technology continues to advance in
>> much the same way it has been doing. (Obviously an anti-technology
>> world state changes everything).
>
>But it also changes everything if the rate of advance slows for
>technical reasons--if, for example, Moore's law runs out of steam.

This isn't happening; in fact, if anything it is speeding up. Nor is
there any reason in principle why it should happen. The reason for
Moore's law is that information is just 0s and 1s and is independent
of any particular physical carrier: it can, in principle, be
implemented as mechanical rods, or light, or electricity, or water
moving in pipes, etc. So there will always be a new technology along
that can process information more quickly.

phil hunt

unread,
Mar 5, 2003, 9:55:14 PM3/5/03
to
On Wed, 5 Mar 2003 19:51:15 +0000, GSV Three Minds in a Can <GSV@[127.0.0.1]> wrote:
>Bitstring <m3bs0pb...@khem.blackfedora.com>, from the wonderful
>person Mark Atwood <m...@pobox.com> said
>>ph...@cabalamat.org (phil hunt) writes:
>>>
>>> Bear in mind that human intelligence can't be all that complicated
>>> really: the human genome contains less information than the average
>>> Linux distro, and the genome codes for a lot more than intelligence.
>>
>>One measly CDROM sent back from 2103 to 2003 could give the receiver
>>of that CD the ability to conquer/destroy/remake the world and the species.
>
>I sort of doubt it, any more than one =book= sent back from now to 1903
>would have that effect.

If you wrote a special book for the purpose, it might be useful.

phil hunt

unread,
Mar 5, 2003, 10:03:40 PM3/5/03
to
On Wed, 5 Mar 2003 21:38:20 GMT, Default User <first...@company.com> wrote:
>
>
>phil hunt wrote:
>
>> Allow me to clarify. IMO the dominant species will be something much
>> much more intelligent than humans currently are. Having said that,
>> it may be "us" in the sense that some people alive today will be
>> transformed into ultra-intelligent personalities downloaded into a
>> computer.
>
>Here's the problem. You make sweeping general assumptions and declare
>them to be facts.

I'd call them "highly probable conjectures".

> We don't even come close to understanding the nature
>of consciousness,

We do know that what goes on in the human brain is information
processing, and we do know that other lumps of atoms can process
information too.

>let alone how to "download" such a thing into a
>machine.

In principle, you'd model the brain's neural network. This is a
non-trivial problem of course.

>Or even to create an ultra-intelligent conscious entity within a
>machine from first principles.


> And we may not within the lifetime of any
>current person.

True, but unlikely. With current technology, some people can live
120 years; therefore some people alive today ought to be alive in
2123, by which time all these technologies will be much more
advanced. (And medical technologies will also be more advanced,
possibly leading to life-extension technologies -- it's a sobering
thought that the first immortal people are probably alive today).

>Similarly, we don't have a good idea of how to interface electronic
>systems with the human brain. It's possible that this will be an
>extremely difficult task.

True. The Internet was difficult; sequencing the human genome was
difficult. Somehow they got done.

>So while very small, very fast, very powerful,
>very high storage computers will almost certainly be a reality,
>"plugging in" such a thing may not be easy at all.

Actually, there are people *today* walking round with electronics
attached to their brains. (To alleviate hearing loss; but the general
principle holds.)

>We may end with a
>human/machine interface more crude (aural or visual) than you talk
>about.

No. We'll *start* with crude systems, which will be improved on
until they are not crude.

>As such, the user would have data and data-crunching available,
>but no increase in intelligence, any more than I'm more intelligent than
>my Grandpa was because I have a computer with google.

I disagree with your implied definition of intelligence.

rmtodd

unread,
Mar 5, 2003, 11:13:55 PM3/5/03
to
"Keith F. Lynch" <k...@KeithLynch.net> writes:

> Interesting concept. Just how difficult would it have been to read
> a DVD in 1903? Lets say they were aware that it somehow contained
> important information, and were willing to put unlimited resources
> into figuring it out. Presumably one of the first things they'd do is
> look at it under a microscope. And that should pretty much suffice,
> except that they'd have to do a lot of it.

> That will get them the bit pattern. Just how hairy is the encoding?
> If it were straight ASCII, they should be able to figure out the ASCII
> code using basic cryptographic techniques that have been known for
> centuries.

I'm pretty sure it's not straight ASCII (or straight rendition of any
other data you put on there). I don't know the exact details for DVD,
so I'll just assume that DVD works like CD, as I am somewhat familiar
with the details for CD, and the design constraints are similar. The
encoding procedure is basically as follows: you take your raw data, in
chunks, and do one pass of Reed-Solomon error-correcting coding,
which gives you the original bits plus some added parity bits. Then you
rearrange the bits in a specified way (the "interleaver") and do another
round of RS encoding, which gives you some more parity bits. Then you do
it *again*. (Audio disks only do two rounds of RS encoding, since you can live
with dropouts in your music more readily than you can in your datafiles.)
Then you add some header bits which tell what block number this is etc., and
then each set of 8 bits goes through an "8/14 modulation encoder", which maps
each of the 256 possible bytes into a sequence of 14 bits in such a way as
to avoid sequences that are hard for the optics to read (sequences of
adjacent zeros or ones that are too short or too long).

Now imagine you're trying to read this disk, and all you have is the final
modulated bit sequence. You don't know the modulation code, and you have
no idea what the interleaver layout looks like. Even if your original
data sequence was just the ASCII code for "A" repeated a thousand times,
identifying the original data is liable to be impossible for anyone less
bright than Mentor of Arisia.
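The argument can be illustrated with a toy version of this pipeline: an invented 64-bit interleaver and an invented random 8-bit-to-14-bit table stand in for the real CIRC and EFM stages (a sketch of the *kind* of obstacle, not the actual DVD coding):

```python
import random

rng = random.Random(42)

# Invented stand-ins: a secret permutation applied within each 64-bit
# block, and a secret 8-bit-to-14-bit modulation table.
perm = list(range(64))
rng.shuffle(perm)
table = {b: format(rng.getrandbits(14), "014b") for b in range(256)}

def channel_encode(data: bytes) -> str:
    bits = "".join(format(b, "08b") for b in data)
    interleaved = "".join(bits[perm[i % 64] + 64 * (i // 64)]
                          for i in range(len(bits)))
    chunks = [interleaved[i:i + 8] for i in range(0, len(interleaved), 8)]
    return "".join(table[int(c, 2)] for c in chunks)

plain = b"A" * 64                       # maximally repetitive input
channel = channel_encode(plain)
# In the input, the 8-bit pattern for "A" occurs 64 times in a row; after
# interleaving and modulation, only chance matches remain:
print(channel.count(format(ord("A"), "08b")))
```

Without knowing `perm` and `table`, even this toy output gives no visible hint that the payload was a single repeated byte.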

> Are data DVDs encoded with CSS, or just video DVDs? Even if so, if
> CSS could be broken by a teenager, I think the best minds of 1903
> could do it.

Data DVDs don't use CSS, I believe, but as to the "broken by a
teenager" bit: Yeah, it was broken by a teenager with access to a
working DVD player, working DVD software he could study, and either a
working MPEG-2 player or enough knowledge of the standard to know when
he'd gotten a valid decode.

Erik Max Francis

unread,
Mar 6, 2003, 12:13:38 AM3/6/03
to
Scott Dubin wrote:

> Or, maybe we will simply find out that the physical laws of our
> universe do not make any of these science fiction ideas possible.
> People used to largly believe that witchcraft and spirit summoning and
> such was possible with the right knowledge, modern belief would seem
> to suggest that the physical laws of the world do not make this
> possible.
>
> I see no evidence that the psysical laws of our world allow for
> cognizant machines, or "augmenting human intelligence."

Well, humans are just cognizant machines, they're just biological
machines -- we are an existence proof that it is not prohibited by the
laws of physics. Suggesting otherwise, in fact, is invoking some sort
of divine prohibition -- that we're really special and transcend
physical law. Augmenting human intelligence may or may not be possible,
depending on exactly what you mean, but there's certainly no reason to
think that humans are optimal thinking machines in any regard, so it
hardly seems impossible.

It may well be that thinking machines are beyond our capabilities to
construct -- for a long time or maybe indefinitely -- but that's an
engineering limitation, not a physical one. There's a big difference
between something that's prohibited by physical law and something which
you just don't know how to do.

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE

/ \ People are taught to be racists.
\__/ Jose Abad
Max Pandaemonium / http://www.maxpandaemonium.com/
A sampling of Max Pandameonium's music.

David Friedman

unread,
Mar 6, 2003, 12:39:46 AM3/6/03
to
In article <887734a2.03030...@posting.google.com>,
scott...@yahoo.com (Scott Dubin) wrote:

> Genetic engineering may be possible in the sence that you kill/neuter
> everybody who isn't smart and allow the smart people to breed, but the
> physical laws of the universe may not support a gene manipulation
> process that works in a different way.

Unless you are suggesting that human beings are biologically different
from other terrestrial species, that possibility has already been
eliminated, since we have done genetic engineering, not just selective
breeding, on other species.

--
www.daviddfriedman.com

Michael Ash

unread,
Mar 6, 2003, 12:49:35 AM3/6/03
to
In article <86k7fd1...@amonduul.ecn.ou.edu>,
rmtodd <rmt...@amonduul.ecn.ou.edu> wrote:

> > Are data DVDs encoded with CSS, or just video DVDs? Even if so, if
> > CSS could be broken by a teenager, I think the best minds of 1903
> > could do it.
>
> Data DVDs don't use CSS, I believe, but as to the "broken by a
> teenager" bit: Yeah, it was broken by a teenager with access to a
> working DVD player, working DVD software he could study, and either a
> working MPEG4 player or enough knowledge of the standard to know when
> he'd gotten a valid decode.

Also note the weakness in CSS is not the encryption scheme, which is a
widely-available, proven encryption. (I think they use 3DES or something
like that, but i don't remember.) The weakness is in the way they choose
their keys. They screwed it up so that instead of the 10^big
possibilities, there were only a few hundred or thousand possible keys,
which are easily checked in quick succession. Without computers to check
them in quick succession, and without knowing about the encryption
scheme beforehand, the task becomes many orders of magnitude more
difficult.

(On a side track: data DVDs definitely don't use CSS, and in fact it's
not mandatory for video DVDs to use it either.)
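The contrast is easy to see with a toy cipher: XOR with a 16-bit key stands in for the real (and quite different) CSS cipher, with the keyspace deliberately made tiny. Exhaustive search over 65,536 keys is trivial for a machine checking them in quick succession, and hopeless by hand:

```python
def toy_encrypt(data: bytes, key: int) -> bytes:
    # Repeating two-byte XOR stream; encryption and decryption are identical.
    return bytes(b ^ ((key >> (8 * (i % 2))) & 0xFF) for i, b in enumerate(data))

def brute_force(ciphertext: bytes, known_prefix: bytes) -> int:
    # Check every key in the (tiny) keyspace against a known plaintext prefix.
    for key in range(1 << 16):
        if toy_encrypt(ciphertext, key)[:len(known_prefix)] == known_prefix:
            return key
    raise ValueError("key not found")

secret = toy_encrypt(b"DVD sector data", key=0xBEEF)
print(hex(brute_force(secret, b"DVD")))  # 0xbeef
```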

Cyrus Levesque

unread,
Mar 6, 2003, 12:53:42 AM3/6/03
to
ph...@cabalamat.org (phil hunt) wrote in message news:<slrnb6dekc...@cabalamat.uklinux.net>...

> On Wed, 5 Mar 2003 21:38:20 GMT, Default User <first...@company.com> wrote:
> >
> >
> >phil hunt wrote:
> >
> >> Allow me to clarify. IMO the dominant species will be something much
> >> much more intelligent than humans currently are. Having said that,
> >> it may be "us" in the sense that some people alive today will be
> >> transformed into ultra-intelligent personalities downloaded into a
> >> computer.
> >
> >Here's the problem. You make sweeping general assumptions and declare
> >them to be facts.
>
> I'd call them "highly probable conjectures".

Why? Is there any reason your conjectures are more probable than those
of the people who are disagreeing with you? I think that's what's
getting to me the most about all this. I'm just not seeing that.

> [snip]


> > And we may not within the lifetime of any
> >current person.
>
> True, but unlikely. With current technology, some people can live
> 120 years; therefore some people alive today ought to be alive in
> 2123, by which time all thgese technologies will be much more
> advanced. (And medical technologies will also be more advanced,
> possibly leading to life-extension technologies -- it's a sobering
> thought that the first immortal people are probsably alive today).
>

"Some people [today] can live 120 years." Well sure, it's literally
true, but less than one in a million people have. You use the word
"some" as if it means "most" or "half" or "a quarter", not "too few to
be statistically significant."

And "the first immortal people are probably alive today"? Where do you
get that "probably" from? How can you possibly have any idea about the
likelihood - not possibility but LIKELIHOOD - of something like that?

If any more than 1% of humanity has a life expectancy of 120 years by
2100, I hope I make it that long. Just so I last long enough to see
you eat your words about all the rest you will have got wrong. All
I've seen so far in this thread is you talking about ideas from
speculative fiction as if they were historical fact. Then you ignore
so many other possibilities from that same branch of fiction, with no
distinction between them that I can see except that they don't lead to
the conclusions you've already arrived at.

Michael Ash

unread,
Mar 6, 2003, 12:55:51 AM3/6/03
to
In article <614897b5.03030...@posting.google.com>,
JW4...@cp.appstate.edu (James) wrote:

> Michael J Ash <mik...@csd.uwm.edu> wrote in message
> news:<Pine.OSF.3.96.103030...@alpha3.csd.uwm.edu>...
> > On Wed, 5 Mar 2003 how...@brazee.net wrote:
> > No, the advances are just coming in other areas. When you were born, how
> > many people could e-mail their relatives halfway across the world at
> > almost no cost?
>
> Nitpick: This same argument could have been used in reference to
> telephones. Indeed, both telephones and the Internet were predicted
> shortly after their introduction to be inventions that would change
> the world. I think that in the case of the Internet we'll see a
> situation greatly similar to telephones. Yes, it will be a great
> tool, and it will have a major impact on many aspects of our lives.
> But--it won't bring forth the drastic changes predicted by many.

I'll refrain from the "what color is the sky on your planet?" remarks,
but I think you and I see this very differently. The ability to talk to
nearly anyone on the planet, nearly instantly, at nearly no cost, is
without parallel in history, and is actually useful in a whole lot of
different ways. It's useful on a personal level. (I make several
international calls a week for that, send e-mails, and otherwise live a
life that would be impossible without the internet and international
phone networks.) It's immensely useful for research. Phones and internet
don't get the spotlight because they're enabling technologies.
Technological advance would be a lot slower without them, to say nothing
of the personal changes.

Imagine Apollo without phones.

[other stuff snipped, no comment there]

Samuel Barber

unread,
Mar 6, 2003, 1:06:20 AM3/6/03
to
ph...@cabalamat.org (phil hunt) wrote in message news:<slrnb6de3c...@cabalamat.uklinux.net>...

> On Wed, 05 Mar 2003 19:42:18 GMT, David Friedman <dd...@daviddfriedman.com> wrote:
> >But it also changes everything if the rate of advance slows for
> >technical reasons--if, for example, Moore's law runs out of steam.
>
> This isn't happening, in fact if anything it is speeding up. Nor is
> there any reason in principle why it should happen.

Also no reason in principle why it shouldn't.

> The reason for
> Moore's law is that information is just 0s and 1s and is independent
> of any particular physical carrier: it can, in principle, be
> implemented as mechanical rods, or light, or electricity, or water
> moving in pipes, etc. So there will always be a new technology along
> that can process information more quickly.

Uh, no. Not even close. Moore's Law is an empirical observation about
the rate of progress in a particular technology (integrated circuits).

Sam

Malcolm McMahon

unread,
Mar 5, 2003, 5:25:26 PM3/5/03
to
On Wed, 5 Mar 2003 14:39:46 +0000, ph...@cabalamat.org (phil hunt)
wrote:

>>You are assuming that the singularity happens.
>
>Yes.

To my mind the singularity is just the apocalypse for atheists. There's
the same sense of the "end times" which a significant minority of the
human race always have believed they were living in. It's an essentially
religious sentiment.

Even if we develop AI, something which has been promised "real soon now"
for about 30 years, those AIs will get their motivations from humans
because motivations _always_ come from outside. Ours are from two
strands of evolution. Those of the AIs will come from their designers.

The only way they might come to rule us is if we tell them to do so.

Malcolm McMahon

unread,
Mar 5, 2003, 5:26:44 PM3/5/03
to
On 05 Mar 2003 10:46:20 -0800, Mark Atwood <m...@pobox.com> wrote:

>Building a general purpose assembler will probably be insanely difficult
>and uneconomic, yes.

It will be difficult, but it can hardly be uneconomic when the economic
rewards are effectively infinite.

George William Herbert

unread,
Mar 6, 2003, 3:48:37 AM3/6/03
to
GSV Three Minds in a Can <G...@quik.clara.co.uk> wrote:
>>One measly CDROM sent back from 2103 to 2003 could give the receiver
>>of that CD the ability to conquer/destroy/remake the world and the species.
>
>I sort of doubt it, any more than one =book= sent back from now to 1903
>would have that effect.

Oh, I really don't know about that. What you could fit in a book
is pretty significant...

The key is, you don't have to include every detail, just enough
to explain the theory and how to test and things we've done which
exploit it. The existing technical context in 1903 was fairly
sophisticated. You could fit full special and general relativity
on a page each, the basics of quantum mechanics on a couple of pages.
In biology... there's a flowchart I have seen that has about 1,000
biochemical reactions in it. The structure of DNA and how to look
at it and manipulate it, a few pages. The chemical formulas and
basics of producing the important medicines in the last century.
Electronics... how to make various transistors, integrated circuits,
the fundamentals of logic design, functional circuits, computer
architecture and programming, networking, etc. Nuclear physics,
how to build a nuclear reactor, produce the fuels, how to build
a nuclear bomb. The basics of rocketry and spaceflight, and the
key component technologies.

This might be an amusing thought experiment, to produce a set
or subset of such pages for a hypothetical book going back
100 years.


-george william herbert
gher...@retro.com

Nancy Lebovitz

unread,
Mar 6, 2003, 5:20:42 AM3/6/03
to
In article <614897b5.03030...@posting.google.com>,

James <JW4...@cp.appstate.edu> wrote:
>
>We don't even understand our own intelligence, our own minds yet. One
>could argue, then, that it would be rather difficult to successfully
>create a true artificial intelligence. Personally, I think AI has a
>50-50 chance. It may be that we can't come up with anything more
>intelligent than a slick and sophisticated talking-computer user
>interface. Or we could create AI.

Imho, any significant increase in intelligence will probably be
able to build on itself.

Maybe we're at some sort of limit, either for how much intelligence
is possible or for how much we can do, but I wonder what would happen
if the average intelligence were brought closer to the current high
end.

I have a suspicion that bio-feedback combined with brain-scanning has
a lot of possibilities. It could be possible to show people how
someone who's good at math uses their mind. Just teaching people
to look at problems without panicking would probably help.
--
Nancy Lebovitz na...@netaxs.com www.nancybuttons.com
Now, with bumper stickers

Using your turn signal is not "giving information to the enemy"

Nancy Lebovitz

unread,
Mar 6, 2003, 6:11:49 AM3/6/03
to
In article <f6f6a77b.03030...@posting.google.com>,
Cyrus Levesque <cybis...@hotmail.com> wrote:
>
>I have to disagree - mostly just for the sake of argument :) - that we
>can't write convincingly about post-Singularity societies. First, you
>say that we can't imagine a greater intelligence, but all life (well,
>on earth) is basically similar. A human goes through "mating dances"
>very different from a bird's, but the idea is the same - to attract a
>mate. Red and black ants fight and die over something as trivial as
>the color of their exoskeleton. All these things we can understand. If
>we can think down, why is it impossible to think up?

Because you get new emergent qualities as living things get more
complex. For example, after a certain level of complexity, mating
dances shade over into individually created art instead of
genetically determined display.

All animals eat, but humans also have an economy.

GSV Three Minds in a Can

unread,
Mar 6, 2003, 6:30:35 AM3/6/03
to
Bitstring <slrnb6de4i...@cabalamat.uklinux.net>, from the
wonderful person phil hunt <ph...@cabalamat.org> said

>On Wed, 5 Mar 2003 19:51:15 +0000, GSV Three Minds in a Can
><GSV@[127.0.0.1]> wrote:
>>Bitstring <m3bs0pb...@khem.blackfedora.com>, from the wonderful
>>person Mark Atwood <m...@pobox.com> said
>>>ph...@cabalamat.org (phil hunt) writes:
>>>>
>>>> Bear in mind that human intelligence can't be all that complicated
>>>> really: the human genome contains less information than the average
>>>> Linux distro, and the genome codes for a lot more than intelligence.
>>>
>>>One measly CDROM sent back from 2103 to 2003 could give the receiver
>>>of that CD the ability to conquer/destroy/remake the world and the species.
>>
>>I sort of doubt it, any more than one =book= sent back from now to 1903
>>would have that effect.
>
>If you wrote a special book for the purpose, it might be useful.

'Useful' I could certainly accept, 'World changing'
(conquer/destroy/remake) would require rather a large library of books
(maybe even more than a DVD's worth). As folks point out down-thread,
conveying the science (or enough pointers that someone could work it
out) would be semi-plausible (assuming drastic pruning of what science
you bothered about), but conveying enough of the engineering technology
would be a real challenge .. many of the most useful things can't be
made with 1903 materials technology, and even the machines to make the
materials require technology .. etc. etc..

--

GSV Three Minds in a Can

Outgoing Msgs are Turing Tested,and indistinguishable from human typing.

Bertil Jonell

unread,
Mar 6, 2003, 9:34:11 AM3/6/03
to
In article <slrnb6c31i...@cabalamat.uklinux.net>,

phil hunt <ph...@cabalamat.org> wrote:
>Bear in mind that human intelligence can't be all that complicated
>really: the human genome contains less information than the average
>Linux distro, and the genome codes for a lot more than intelligence.

You'd probably have to include the upbringing and interaction with
environment too. Just video-but-as-good-as-the-eye-can-do-it is a lot of bits
per second. Multiply by a few years.

>|*|*| Philip Hunt <ph...@cabalamat.org> |*|*|

-bertil-
--
"It can be shown that for any nutty theory, beyond-the-fringe political view or
strange religion there exists a proponent on the Net. The proof is left as an
exercise for your kill-file."

phil hunt

unread,
Mar 5, 2003, 10:55:30 PM3/5/03
to
On 5 Mar 2003 22:26:59 -0500, Keith F. Lynch <k...@KeithLynch.net> wrote:
>GSV Three Minds in a Can <G...@quik.clara.co.uk> wrote:
>> ... we'd have about as much chance of reading it as 1903 would have
>> with a DVD.
>
>Interesting concept. Just how difficult would it have been to read
>a DVD in 1903? Lets say they were aware that it somehow contained
>important information, and were willing to put unlimited resources
>into figuring it out. Presumably one of the first things they'd do is
>look at it under a microscope. And that should pretty much suffice,
>except that they'd have to do a lot of it.
>
>That will get them the bit pattern. Just how hairy is the encoding?
>If it were straight ASCII, they should be able to figure out the ASCII
>code using basic cryptographic techniques that have been known for
>centuries.
>
>Are data DVDs encoded with CSS, or just video DVDs? Even if so, if
>CSS could be broken by a teenager, I think the best minds of 1903
>could do it.

No, they didn't have the processing power. Cracking CSS takes (IIRC)
a brute-force search of a 32-bit key space.

And once they have cracked it, they can't watch the pictures anyway!

>Once they know the encoding, how long would it take them to photograph
>all of the disc through a microscope, so that the thousands of
>photographic plates could be farmed out to separate teams to decode
>in parallel?

If it is ascii, they can do this. If a movie, forget about it.

>I'm not so sure. One page is all it would have taken to convey to
>Nazi Germany enough about the atom bomb that they could build one
>within a year. And how complicated are the formulas for high
>temperature superconductors? How about the genes for making morphine
>and cocaine?

What do they need genes for? These are naturally-occurring plant
extracts.

>Splice those genes into common yard grass, and you'll
>change the world.

Why would anyone want to do that?

--

|*|*| Philip Hunt <ph...@cabalamat.org> |*|*|

Karl M Syring

unread,
Mar 6, 2003, 10:23:05 AM3/6/03
to
Scott Dubin wrote on 5 Mar 2003 15:23:21 -0800:
<snip>

> Genetic engineering may be possible in the sense that you kill/neuter
> everybody who isn't smart and allow the smart people to breed, but the
> physical laws of the universe may not support a gene manipulation
> process that works in a different way.
>
> These things may be possible... or they may not.

Uhm, genetic engineering with retroviral vectors is a reality today.
In which universe do you live?

Karl M. Syring

Karl M Syring

unread,
Mar 6, 2003, 10:23:06 AM3/6/03
to
phil hunt wrote on Wed, 5 Mar 2003 20:06:20 +0000:
>
> OTOH, they probably will be using ascii (in its Unicode
> incarnation), several Internet protocols such as ftp and http, etc.

Uhm, ftp is dead *now*. Too many security problems.

Karl M. Syring

Mark Fergerson

unread,
Mar 6, 2003, 11:55:36 AM3/6/03
to
phil hunt wrote:
> On 5 Mar 2003 22:26:59 -0500, Keith F. Lynch <k...@KeithLynch.net> wrote:
>
>>GSV Three Minds in a Can <G...@quik.clara.co.uk> wrote:

>>... How about the genes for making morphine


>>and cocaine?
>
> What do they need genes for? These are naturally-occuring plant
> extracts.

In biome-specific, hence controllable plants.

>>Splice those genes into common yard grass, and you'll
>>change the world.
>
> Why would anyone want to do that?

Consider the influence of the Opium Wars on world history
(and the current "cocaine wars"). Now consider what happens
if the horribly addictive substance of interest can grow
_anywhere_, like say marijuana.

The world would be a very different place if a
Victorian-era Bolton housewife could have grown her own
opium. That assumes the British Empire could have held
together, which is arguable.

Mark L. Fergerson

Dr John Stockton

unread,
Mar 6, 2003, 12:22:44 PM3/6/03
to
JRS: In article <3E66934A...@alcyone.com>, seen in
news:rec.arts.sf.science, Erik Max Francis <m...@alcyone.com> posted at
Wed, 5 Mar 2003 16:16:10 :-

>But keep in mind that people who say we'll be unable to predict the
>future in the future are, themselves, attempting to predicting the
>future. There's a certain amount of irony involved.

Seen in someone's signature :

"I don't make predictions. I never have and I never will." - Tony Blair

--
© John Stockton, Surrey, UK. j...@merlyn.demon.co.uk Turnpike v4.00 MIME. ©
Web <URL:http://www.merlyn.demon.co.uk/> - FAQish topics, acronyms, & links.

In MS OE, choose Tools, Options, Send; select Plain Text for News and E-mail.

Al Montestruc

unread,
Mar 6, 2003, 2:00:59 PM3/6/03
to
GSV Three Minds in a Can <GSV@[127.0.0.1]> wrote in message news:<8IaoQSDb...@from.is.invalid>...

In world conquest I do think it would be far more effective to send a
tailored book on military technology, strategy and tactics. The
object would not be to raise the technology level to that of 2003, but
only 30 years or so. Such a book given to the king or dictator of
a proto-industrial state in 1903, say the German Kaiser, could
assure him world conquest.

What I would see is no fundamental changes in technology, the same basic
technologies as before, but Germany entering WWI in 1914 with arms
comparable to what she had in ~1935: perhaps 1918-level aircraft as
in good fast fighters, 1930s light tanks, and good 1930s-ish U-boats
with snorkels and plenty of all, plus the knowledge of how to use them
and a better appreciation of just how dangerous British Intelligence
was.

In that case Germany would have walked all over the Allies, imposed
a peace on her terms, and controlled Europe. As long as they kept this
technical edge, the world would eventually fall to them.

George William Herbert

unread,
Mar 6, 2003, 2:09:32 PM3/6/03
to
GSV Three Minds in a Can <G...@quik.clara.co.uk> wrote:
>[...]

>>>>One measly CDROM sent back from 2103 to 2003 could give the receiver
>>>>of that CD the ability to conquer/destroy/remake the world and the species.
>>>
>>>I sort of doubt it, any more than one =book= sent back from now to 1903
>>>would have that effect.
>>
>>If you wrote a special book for the purpose, it might be useful.
>
>'Useful' I could certainly accept, 'World changing'
>(conquer/destroy/remake) would require rather a large library of books
>(maybe even more than a DVD's worth). As folks point out down-thread,
>conveying the science (or enough pointers that someone could work it
>out) would be semi-plausible (assuming drastic pruning of what science
>you bothered about), but conveying enough of the engineering technology
>would be a real challenge .. many of the most useful things can't be
>made with 1903 materials technology, and even the machines to make the
>materials require technology .. etc. etc..

I don't know about that.

There's nothing about a basic nuclear reactor, for example,
which can't be done with 1903 technology. The fundamentals
of radioactivity theory and a few cross sections would be
pretty close to it (well, and the ability to make relatively
pure carbon for the moderator, or separate out industrial
quantities of heavy water). The processing to get plutonium
out of those reactors' output fuel streams is mostly knowing
what chemical processes are needed, and how to remotely process
the materials. The basics of nuclear bomb design technology
are going to be more than a few pages, but the equations and
general descriptions and a dimensioned sketch of a simple and
tested bomb concept or two would be very doable.

Orbital rocketry really wants some materials advantages
over what was available in 1903, but general concepts and
intermediate range missiles would be possible.

There's nothing about basic semiconductor processing and
low end IC production which is impractical in 1903 tech.
The fundamentals of digital design, computer architecture,
computer programming are all going to be easy to explain.

There's nothing about the design concept of a modern tank
which is beyond 1903 technology, though the power to weight
ratios will be poor.

There's nothing about the design concept of discarding sabot
anti-tank gun rounds or naval gun rounds which is going to be
beyond 1903 technology.

There's nothing about say the design of pulsejets which is
beyond 1903 technology, and the fundamental equations of
aerodynamics and fluid flow and aircraft design would all
be easy to transmit.

There's nothing about many new medicines which is beyond
1903 chemistry. Knowing what the compounds are, a summary
of the synthesis process, and what they are useful for is
all you really need.

There's nothing about many new explosives which is
beyond 1903 chemistry. Volume production of RDX would
be trivial with 1903 industrial chemistry, and explaining
the reactions can be done in a paragraph or two.

There's nothing about semi or fully automatic assault
rifles like the AK-47 say which is beyond 1903 technology.

If the prospect of any one of the powers in WW 1 entering
the war with intermediate range nuclear ballistic missiles,
nuclear submarines, early 1960s grade computers, telecommunications
and handheld reliable transistor radios, WW 2 grade aircraft and
tanks, relatively modern assault rifles and artillery, and the
medicines to mitigate the vast majority of illnesses their troops
suffer in the field doesn't qualify as "remaking the world" then I
don't know what does. Any modern armored division could sweep
almost any number of WW 2 technology units from the battlefield,
subject only to ammunition supply and other logistics restrictions.
Roughly that same gap in capabilities is what we're talking about
being available 10 years after "the book" arrives in 1903.


-george william herbert
gher...@retro.com

GSV Three Minds in a Can

unread,
Mar 6, 2003, 3:16:26 PM3/6/03
to
Bitstring <b486dc$8ia$1...@gw.retro.com>, from the wonderful person George
William Herbert <gher...@gw.retro.com> said

<snip more examples>

You haven't convinced me, unless you think this 'book' is going to be
dozens of kilo-pages thick. Even today 'basics of a digital computer' is
a thick book .. 'basics of programming one' is another thick book.
Building a missile is several thick books. Maybe telling them 'it can be
done', or 'this is what you should do' helps some (assuming you are
talking to a genius at the other end), but I still doubt you can get
your 'conquer/destroy/remake' into a single volume. 8>.

--

GSV Three Minds in a Can

Bill Snyder

unread,
Mar 6, 2003, 3:54:48 PM3/6/03
to
On 6 Mar 2003 11:09:32 -0800, gher...@gw.retro.com (George William
Herbert) wrote:

[etc.]

It really seems to me that for several of these you're underestimating
both the materials science involved and the knowhow problem -- the
difference between what a spec says and what you actually need to know
to build to that spec. Maybe you can build a reactor in 1903, but
where do you get the enriched uranium to fuel it? Maybe you can in
theory build a transistor production line quickly, but how long does
it take you to produce the ultra-pure silicon with next-door-to-zero
dislocations, the photoresist, etc., etc.


--
Bill Snyder [This space unintentionally left blank.]

Sean O'Hara

unread,
Mar 6, 2003, 5:30:21 PM3/6/03
to
In the Year of the Goat, the Great and Powerful how...@brazee.net
declared...

>
> On 5-Mar-2003, Michael J Ash <mik...@csd.uwm.edu> wrote:
>
> > No, the advances are just coming in other areas. When you were born, how
> > many people could e-mail their relatives halfway across the world at
> > almost no cost? How many genetically-engineered plants were being grown?
> > How many people were sent home from the hospital mere days after heart
> > surgery?
> >
> > IMO, any of these is more impactful than watching people walk on the moon.
> > That was a good stunt, but until we do something more permanent, that's
> > all it's really been.
>
> But none of those are nearly as impactful as the automobile and phone.
>
ITYM cheap, mass-produced automobiles and phones. I can imagine
an enterprising genius during the Renaissance or Enlightenment
figuring out how to make a steam-driven car, but without any
industrial infrastructure it wouldn't become anything more than
a play-thing for the wealthy. Likewise, if research into
electricity had started earlier, I could imagine the telephone
appearing before it did.

--
Sean O'Hara
Sparks: Under martial law, you could suspend habeas corpus, empower
a Posse Comitatus, an--
Murphy: That's crap. Mars is wild, untamed! I'm forming a cadre
of Martian Knights, charged with enforcing Martian law!

Scott Dubin

unread,
Mar 6, 2003, 5:35:30 PM3/6/03
to
Karl M Syring <syr...@email.com> wrote in message news:<b47p4p$1rn03l$1...@ID-7529.news.dfncis.de>...

I suppose what I meant was genetic engineering with extreme finesse.
Just because I can throw a baseball with my bare hands ten feet
doesn't mean I have the capability to throw it one thousand.

Scott Dubin

unread,
Mar 6, 2003, 5:38:37 PM3/6/03
to
Erik Max Francis <m...@alcyone.com> wrote in message news:<3E66D902...@alcyone.com>...

> Scott Dubin wrote:
>
> > Or, maybe we will simply find out that the physical laws of our
> > universe do not make any of these science fiction ideas possible.
> > People used to largly believe that witchcraft and spirit summoning and
> > such was possible with the right knowledge, modern belief would seem
> > to suggest that the physical laws of the world do not make this
> > possible.
> >
> > I see no evidence that the psysical laws of our world allow for
> > cognizant machines, or "augmenting human intelligence."
>
> Well, humans are just cognizant machines, they're just biological
> machines -- we are an existence proof that it is not prohibited by the
> laws of physics. Suggesting otherwise, in fact, is invoking some sort
> of divine prohibition -- that we're really special and transcend
> physical law.

We don't know WHY we are cognizant; science has no explanation for
consciousness. There's no reason to assume that non-biological machines
have that special potential, since we have no clue what that special
something is. I don't think any divine prohibition is necessary.

Sean O'Hara

unread,
Mar 6, 2003, 5:46:35 PM3/6/03
to
In the Year of the Goat, the Great and Powerful Malcolm McMahon
declared...

>
> Even if we develop AI, something which has been promised "real soon now"
> for about 30 years, those AIs will get their motivations from humans
> because motivations _always_ come from outside. Ours are from two
> strands of evolution. Those of the AIs will come from their designers.
>
> The only way they might come to rule us is if we tell them to do so.
>
The Singularity doesn't require that AIs rule us, only that they're
beyond our ability to understand -- or, to look at it another way,
that we're particularly slow children as compared to them and not
worth their time.

You're also making the assumption that the Singularity would come
from artificial intelligence and not intelligence augmentation.

Karl M Syring

unread,
Mar 6, 2003, 6:08:43 PM3/6/03
to

The baseball-bat stage seems long over. There
were recently some problems with leukemia cases in clinical
trials, which led to an adjustment of procedures, but I think
the technology is well on the way to treating genetic diseases.

Karl M. Syring

David Friedman

unread,
Mar 6, 2003, 6:20:03 PM3/6/03
to
In article <MPG.18d1960bd...@news.cis.dfn.de>,

Sean O'Hara <darkerthenightth...@myrealbox.com> wrote:

> ITYM cheap, mass-produced automobiles and phones. I can imagine
> an enterprising genius during the Renaissance or Enlightenment
> figuring out how to make a steam-driven car, but without any
> industrial infrastructure it wouldn't become anything more than
> a play-thing for the wealthy.

There is a historical equivalent. Someone made a magazine repeating
rifle--with powder and ball, not cartridges--very early. I'm afraid I
don't remember the date. But it must have been horrendously expensive
and I don't know how well it worked.

--
www.daviddfriedman.com

John Schilling

unread,
Mar 6, 2003, 6:29:40 PM3/6/03
to
Bill Snyder <bsn...@iadfw.net> writes:

>On 6 Mar 2003 11:09:32 -0800, gher...@gw.retro.com (George William
>Herbert) wrote:

[one modern book sent back to 1903 enabling world conquest]

>>I don't know about that.

>>There's nothing about a basic nuclear reactor, for example,
>>which can't be done with 1903 technology. The fundamentals
>>of radioactivity theory and a few cross sections would be
>>pretty close to it (well, and the ability to make relatively
>>pure carbon for the moderator, or separate out industrial
>>quantities of heavy water). The processing to get plutonium
>>out of those reactors' output fuel streams is mostly knowing
>>what chemical processes are needed, and how to remotely process
>>the materials. The basics of nuclear bomb design technology
>>are going to be more than a few pages, but the equations and
>>general descriptions and a dimensioned sketch of a simple and
>>tested bomb concept or two would be very doable.

>[etc.]

>It really seems to me that for several of these you're underestimating
>both the materials science involved and the knowhow problem -- the
>difference between what a spec says and what you actually need to know
>to build to that spec.

George Herbert is approximately the last person on usenet I would
accuse of that failing.

>Maybe you can build a reactor in 1903, but where do you get the
>enriched uranium to fuel it?

You don't need enriched uranium to fuel a nuclear reactor. You need
enriched uranium *or* very pure graphite *or* heavy water, and the
latter two are easily within reach of 1903 industry.

George and I have both extensively studied the issue of nuclear weapons
production by third-world nations and terrorist groups today; while this
is not exactly the same as a first-world nation of a century ago and a
modern text, it does require a comparable understanding of the sort of
details you talk about. Frankly, the Great Powers of 1903 would have
an *easier* time of it than North Korea or Iraq would now - the big
obstacle is not the technology, but the scale of the industry and the
need for secrecy. A 1903 Great Power has *lots* of industry, and if
they have the only copy of The Book they can do most of the work in
plain sight.

As for the book needing to be many thousands of pages long, no. Just
for example, I can probably shave six months off their bomb program
with one *sentence*.

"When it occurs to you to build a carbon-moderated breeder reactor,
and it will, it is essential that the carbon be very pure and free
of boron".

I don't need to send them a textbook on carbon refining; they know
how to do that already and have lots of sharp people to work on
scaling it up. I don't need to tell them *why* the carbon needs
to be ultra-pure, though I did give them the "boron" hint. They
don't need to know, and their sharp people will figure it out
anyhow. They just need to know the problem exists before they
head down a blind alley.


--
*John Schilling * "Anything worth doing, *
*Member:AIAA,NRA,ACLU,SAS,LP * is worth doing for money" *
*Chief Scientist & General Partner * -13th Rule of Acquisition *
*White Elephant Research, LLC * "There is no substitute *
*schi...@spock.usc.edu * for success" *
*661-951-9107 or 661-275-6795 * -58th Rule of Acquisition *

Erik Max Francis

unread,
Mar 6, 2003, 6:39:18 PM3/6/03
to
Bill Snyder wrote:

> It really seems to me that for several of these you're underestimating
> both the materials science involved and the knowhow problem -- the
> difference between what a spec says and what you actually need to know
> to build to that spec.

And, furthermore, what might be feasible could be horrendously (and thus
prohibitively) expensive without the technological advances we have today
in computer modelling, plastics, and the like. Even if it was
technically possible, the technological infrastructure just wouldn't
make it practical enough to have any significant impact.

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE
/ \ He who laughs has not yet heard the bad news.
\__/ Bertolt Brecht
Bosskey.net: Counter-Strike / http://www.bosskey.net/cs/
A personal guide to Counter-Strike.

Erik Max Francis

unread,
Mar 6, 2003, 6:46:40 PM3/6/03
to
Scott Dubin wrote:

> We don't know WHY we are cognizant, science has no explanation for
> conscousness. Theres no reason to assume that non biological machines
> have that special potential, since we have no clue what that special
> something is. I don't think any divine prohibition is necessary.

Biological machines are just machines. What physics tells you is that
the matter that makes up the Earth, and you, is no different from the
matter everywhere else in the Universe, and the rules that govern all
the matter are the same. The fact that we exist is proof that the laws
of physics allow cognizant machines -- we just happen to be biologically
based ones.

It may well be possible that it is practically impossible to build a
cognizant machine, but it's far from prohibited by the laws of physics.
Indeed, the laws of physics make it _obvious_ that it is possible: We
exist.

Besides, who says that artificial intelligence must be purely
"non-biological"? Biology isn't special, it's just chemistry.

GSV Three Minds in a Can

unread,
Mar 6, 2003, 7:15:08 PM3/6/03
to
Bitstring <b48ll4$gm3$1...@spock.usc.edu>, from the wonderful person John
Schilling <schi...@spock.usc.edu> said

I thought D2O was produced mostly by electrolysis .. so all you need is
a pretty big nuclear / hydro / whatever power plant .. which, afaicr,
1903 ain't got .. so we better teach them about that (did they even have
reinforced concrete to construct an appropriate dam for a hydro plant?
If not, that's another chapter. 8>.)
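Whatever the power source, there is a hard floor on the feed water an electrolytic cascade must process. A minimal sketch, assuming the commonly quoted deuterium abundance of roughly 1 D per 6,400 hydrogen atoms and perfect recovery (real plants do far worse on both counts):

```python
# Minimum feed water to extract one tonne of heavy water (idealised:
# 100% deuterium recovery, which no real electrolytic cascade achieves).

D_ABUNDANCE = 1 / 6400   # assumed atom fraction of D among hydrogen
M_D2O = 20.0             # g/mol
M_H2O = 18.0             # g/mol

d2o_target_g = 1e6                             # one tonne of D2O
mol_D_needed = 2 * d2o_target_g / M_D2O        # 2 D atoms per molecule
mol_H_feed = mol_D_needed / D_ABUNDANCE        # hydrogen atoms to process
feed_tonnes = (mol_H_feed / 2) * M_H2O / 1e6   # water carries 2 H per molecule

print(f"feed water ≥ {feed_tonnes:,.0f} tonnes per tonne of D2O")
```

Several thousand tonnes of water electrolysed per tonne of product is the best case, which is why the question of where 1903 gets the electricity matters.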

--

GSV Three Minds in a Can

how...@brazee.net

unread,
Mar 6, 2003, 8:43:54 PM3/6/03
to

On 5-Mar-2003, Michael Ash <ma...@mikeash.com> wrote:

> I'll refrain from the "what color is the sky on your planet?" remarks,
> but I think you and I see this very differently. The ability to talk to
> nearly anyone on the planet, nearly instantly, at nearly no cost, is
> without parallel in history, and is actually useful in a whole lot of
> different ways. It's useful on a personal level. (I make several
> international calls a week for that, send e-mails, and otherwise live a
> life that would be impossible without the internet and international
> phone networks.) It's immensely useful for research. Phones and internet
> don't get the spotlight because they're enabling technologies.
> Technological advance would be a lot slower without them, to say nothing
> of the personal changes.

Or imagine ocean liners, trains, cars, and airplanes with their ability to
let people of all economic classes go around the world.

I haven't yet seen these types of impacts with technology from my lifetime.

how...@brazee.net

unread,
Mar 6, 2003, 8:46:24 PM3/6/03
to

On 5-Mar-2003, cybis...@hotmail.com (Cyrus Levesque) wrote:

> Nitpick: You leave out the most likely way humans will be replaced: we
> could be outevolved, probably but not certainly by something that
> evolves from us. :)

Not if we use technology as part of our evolution. This is a new thing; we
don't know what is likely or not.

how...@brazee.net

unread,
Mar 6, 2003, 8:48:18 PM3/6/03
to

On 6-Mar-2003, scott...@yahoo.com (Scott Dubin) wrote:

> I suppose what I meant was genetic engineering with extreme Finesse.
> Just because I can throw a baseball with my bare hands ten feet
> doesn't mean I have the capability to throw it one thousand.

But man/machine combinations have hurled projectiles many times this
distance.

David Hawk

unread,
Mar 6, 2003, 9:53:02 PM3/6/03
to

"Erik Max Francis" <m...@alcyone.com> wrote in message
news:3E67DDE0...@alcyone.com...

> Scott Dubin wrote:
>
> > We don't know WHY we are cognizant; science has no explanation for
> > consciousness. There's no reason to assume that non-biological machines
> > have that special potential, since we have no clue what that special
> > something is. I don't think any divine prohibition is necessary.
>
> Biological machines are just machines. What physics tells you is that
> the matter that makes up the Earth, and you, is no different from the
> matter everywhere else in the Universe, and the rules that govern all
> the matter are the same.

Um - but this assumes that cognizance is a process that has been defined,
right? Have we defined cognizance in terms of matter/physics/whatever? We
can identify the precise electrochemical means of thought processes etc. -
but have we in fact identified the basis of cognizance?

My cats' brains have many of the same properties, in this regard, as mine -
they use many very similar processes, neurotransmitters, etc. We could
(perhaps?) even map the electrochemical process of a cat brain preparing the
cat to jump on a mouse, and it would probably be very similar - if not
identical - to the electrochemical process of my brain preparing me to
devour a filet mignon in burgundy sauce - but I seriously doubt that
Nicholas (the cat currently sitting at my elbow looking for a bit of
petting) actually appreciates the difference between the well-turned beef
and the raw mouse - nor does he have even the glimmer of a capacity to
contemplate the very question of whether cognizance is a purely
electrochemical process or something not-yet-defined. (Nor can he
penetrate, as he responds to the unique arrangement of phonemes that
identify him, the mystery of why I should choose to name him after a Russian
czar murdered by idealistic and completely misled Bolsheviks.)

I have seen it suggested that cognizance is merely a result of a threshold
level of interconnection of processes - that if you could get a few thousand
or hundred or billion cat brains working in concert, voila! you have a
collectively cognizant cat. One who can appreciate Jethro Tull played at
volumes appropriate to an arena performance, despite the pain it causes its
delicate ears, instead of running and hiding from the End of the World as
soon as the opening riff of "Aqualung" splits the air.

This suggests to me that perhaps the universe itself is cognizant - surely
there are enough associative linkages in all that fractal/information
theory/cantor dust/strange attractor amalgam of interacting systems and
subsystems and macro-micro-scalar doohicky Big Everything=42 to constitute a
self-aware something-or-other? - if, in fact, cognizance is purely a
function of physics.

Ok- so maybe I should avoid posting to a science/science fiction newsgroup
after having enjoyed a generous helping of Chimay (Cinq Cents) - but this
makes sense to me.

To reiterate the question - have we defined
consciousness/sentience/cognition as a purely electrochemical process? I
suspect not.

Hawk, David
sci-fi writer wannabe and occasional sot


Erik Max Francis

unread,
Mar 6, 2003, 10:02:16 PM3/6/03
to
David Hawk wrote:

> Um - but this assumes that cognizance is a process that has been
> defined,
> right? Have we defined cognizance in terms of
> matter/physics/whatever? We
> can identify the precise electrochemical means of thought processes
> etc. -
> but have we in fact identified the basis of cognizance?

Of course it's unknown. But that's irrelevant. The question was
whether the laws of physics _prohibited_ the construction of cognizant
machines. The fact that we exist demonstrates conclusively that it does
not; if they were physically impossible, we couldn't possibly be here.

It may be practically impossible to ever create them, but that's a
wholly different thing from something being _physically_ impossible.
Suggesting that we exist but it's _physically_ impossible to create an
artificial intelligence is invoking magical uniqueness arguments about
humanity's importance that boils down to religious conviction.

We are proof positive that they can be made, because if they couldn't we
couldn't be here. Biological matter is not somehow "different" from
non-biological matter.

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE

/ \ War is cruelty, and you cannot refine it.
\__/ Karl Shapiro
The laws list / http://www.alcyone.com/max/physics/laws/
Laws, rules, principles, effects, paradoxes, etc. in physics.

David Hawk

unread,
Mar 6, 2003, 10:36:05 PM3/6/03
to

"Erik Max Francis" <m...@alcyone.com> wrote in message
news:3E680BB8...@alcyone.com...

> Of course it's unknown. But that's irrelevant. The question was
> whether the laws of physics _prohibited_ the construction of cognizant
> machines. The fact that we exist demonstrates conclusively that it does
> not; if they were physically impossible, we couldn't possibly be here.
>

But I must persist - this assumes that cognizance has anything to do with
the laws of physics. With matter. The assumption here is that cognizance
is a purely physical process/group of processes. Can we establish that
cognizance is a purely physical process?

While the laws of physics do not prohibit (I think the appropriate word is
preclude, actually, but I could be mistaken) cognizant machines, and I
understand the question I address is whether they are _practical_, I submit
that the laws of physics do not _address_ cognizance and therefore have
absolutely no bearing on the question of cognizant machines - unless one
recognizes the validity of my "collectively cognizant cat" illustration of
the problem (which, quite frankly and somewhat drunkenly, I consider rather
clever).
--
Hawk, David - general purpose - qty 1
http://home.insightbb.com/!amhwidner/us.htm

phil hunt

unread,
Mar 6, 2003, 10:58:46 PM3/6/03
to
On Thu, 06 Mar 2003 16:55:36 GMT, Mark Fergerson <mferg...@cox.net> wrote:
>
> Consider the influence of the Opium Wars on world history
>(and the current "cocaine wars"). Now consider what happens
>if the horribly addictive substance of interest can grow
>_anywhere_, like say marijuana.
>
> The world would be a very different place if a
>Victorian-era Bolton housewife could have grown her own
>opium.

Couldn't she? The opium poppy is native to western Europe, and it
certainly used to be cultivated in the UK, e.g. in the Fenland area.

phil hunt

unread,
Mar 6, 2003, 3:31:44 PM3/6/03
to
On Wed, 05 Mar 2003 22:25:26 +0000, Malcolm McMahon <mal...@pigsty.demon.co.uk> wrote:
>On Wed, 5 Mar 2003 14:39:46 +0000, ph...@cabalamat.org (phil hunt)
>wrote:
>
>>>You are assuming that the singularity happens.
>>
>>Yes.
>
>To my mind the singularity is just the apocalypse for atheists.

"The Rapture for Nerds" -- Ken MacLeod.

phil hunt

unread,
Mar 6, 2003, 11:01:03 PM3/6/03
to
On 5 Mar 2003 06:05:44 -0800, Mark <mtied...@earthlink.net> wrote:
>ph...@cabalamat.org (phil hunt) wrote in message news:<slrnb6afj2...@cabalamat.uklinux.net>...
>> Something I've been thinking for some time is that science fiction
>> (in the strict sense, which I'll define below) isn't -- or might not
>> be -- possible.
>>
>
>Except that science fiction isn't predictive.

Some of it does aim to be predictive.

As I explained in my original post, I am discussing that part of SF
that is predictive.

David Dyer-Bennet

unread,
Mar 6, 2003, 11:03:18 PM3/6/03
to
"David Hawk" <davidh...@insightbb.com> writes:

> "Erik Max Francis" <m...@alcyone.com> wrote in message
> news:3E680BB8...@alcyone.com...
>
> > Of course it's unknown. But that's irrelevant. The question was
> > whether the laws of physics _prohibited_ the construction of cognizant
> > machines. The fact that we exist demonstrates conclusively that it does
> > not; if they were physically impossible, we couldn't possibly be here.
> >
> But I must persist - this assumes that cognizance has anything to do with
> the laws of physics. With matter. The assumption here is that cognizance
> is a purely physical process/group of processes. Can we establish that
> cognizance is a purely physical process?

So far, we're merely at the stage of having failed to demonstrate the
existence of anything *other than* purely physical processes.
Obviously that's not conclusive; and we have *very* little
understanding of "cognizance".

So far as I know, there are no competing theories at all, either.
(Scientific theories, at least theoretically falsifiable, I mean;
there are oodles of philosophical theories, but those aren't germane
to this discussion are they?)
--
David Dyer-Bennet, dd...@dd-b.net / http://www.dd-b.net/dd-b/
John Dyer-Bennet 1915-2002 Memorial Site http://john.dyer-bennet.net
Dragaera mailing lists, see http://dragaera.info

Erik Max Francis

unread,
Mar 6, 2003, 11:21:06 PM3/6/03
to
David Hawk wrote:

> But I must persist - this assumes that cognizance has anything to do
> with
> the laws of physics. With matter. The assumption here is that
> cognizance
> is a purely physical process/group of processes. Can we establish
> that
> cognizance is a purely physical process?

Suggesting that it doesn't invokes magic.

--
Erik Max Francis / m...@alcyone.com / http://www.alcyone.com/max/
__ San Jose, CA, USA / 37 20 N 121 53 W / &tSftDotIotE

/ \ Seriousness is the only refuge of the shallow.
\__/ Oscar Wilde
WebVal / http://www.alcyone.com/pyos/webval/
URL scanner, maintainer, and validator in Python.

Michael Ash

unread,
Mar 6, 2003, 11:22:47 PM3/6/03
to
In article <uPS9a.6033$wJ1.6...@newsread2.prod.itd.earthlink.net>,
how...@brazee.net wrote:

> Or imagine ocean liners, trains, cars, and airplanes with their ability to
> let people of all economic classes go around the world.
>
> I haven't yet seen these types of impacts with technology from my lifetime.

My ability to travel around the world and communicate around the world
very quickly, despite not being anything near rich, has certainly
impacted my life. The communications part, at least, is something that
is new within my lifetime (22 years). Not the theoretical ability, of
course, but the *practical* ability to pick up a phone, punch a few
buttons, and be connected within seconds to someone on another continent
at an extremely affordable price. Or the ability to write a letter, send
it for nearly free, and have it arrive on another continent only a few
minutes later.

From the scientific advancement end of things, the ability to
communicate with your colleague, with full documents, color pictures,
voice, what-have-you, for low cost, is a very recent advance also. It's
hard for me to imagine that this ability has not had a large impact on
research.

Bill Snyder

unread,
Mar 6, 2003, 11:35:03 PM3/6/03
to
On 6 Mar 2003 15:29:40 -0800, schi...@spock.usc.edu (John Schilling)
wrote:

>Bill Snyder <bsn...@iadfw.net> writes:
>
>>On 6 Mar 2003 11:09:32 -0800, gher...@gw.retro.com (George William
>>Herbert) wrote:
>
>[one modern book to sent back to 1903 enabling world conquest]
>
>>>I don't know about that.
>
>>>There's nothing about a basic nuclear reactor, for example,
>>>which can't be done with 1903 technology. The fundamentals
>>>of radioactivity theory and a few cross sections would be
>>>pretty close to it (well, and the ability to make relatively
>>>pure carbon for the moderator, or separate out industrial
>>>quantities of heavy water). The processing to get plutonium
>>>out of those reactors' output fuel streams is mostly knowing
>>>what chemical processes are needed, and how to remotely process
>>>the materials. The basics of nuclear bomb design technology
>>>are going to be more than a few pages, but the equations and
>>>general descriptions and a dimensioned sketch of a simple and
>>>tested bomb concept or two would be very doable.
>
>>[etc.]
>
>>It really seems to me that for several of these you're underestimating
>>both the materials science involved and the knowhow problem -- the
>>difference between what a spec says and what you actually need to know
>>to build to that spec.
>
>George Herbert is approximately the last person on usenet I would
>accuse of that failing.

I don't see that disagreeing constitutes "accusing" him of any
"failing" in particular. It's perfectly possible to be competent,
experienced, to have done one's homework, and still be mistaken.
Honest. (It's even possible to be grossly and tragically wrong, as we
were forcibly reminded a few weeks back.)

OK, let me start with what I've only heard and read about. My
understanding is that NASA has had problems from as far back as the
beginning of the shuttle program, due to machinists and other skilled
craftsmen from the Mercury/Gemini/Apollo days having retired. How can
this matter, if they have the specs for what they want to build?
Well...

That brings us to what I've seen myself. I've struggled for weeks to
extract from a customer the information needed to design/program a
machine to test their products. This wasn't Joe's Fly-by-Nite
Gewgaws, but a vendor to most of the major automakers, who are -- by
their standards -- pretty anal about the test specs for what they buy.
Theoretically, these guys should have been able to hand me on Day 1 a
document spelling out in excruciating detail what the test equipment
should and shouldn't do. In real life, it just doesn't work that way
-- or do you really imagine that software analysts are simply running
an enormous hoax on the rest of the world?

I've seen a company struggle to build and test one of its _own_
products that it hadn't made any of for the last 5 years. Struggle
enough that it had to haul me and two other engineers and a couple of
techs in on a holiday weekend, to try to figure out what the hardware
and software that it had shipped numerous times before was actually
doing. Struggle enough that it had to lure back a couple of retirees
who'd actually worked with the product in question. Again, this isn't
Sam's Discount Computers and Transmission Repair, but a
government/military vendor with 50 years in the trade, demonstrably
capable of convincing both the customers and the ISO buffoons that its
processes are documented.

I simply don't believe that any "how to" writeup equates to
ability-to-build. There's just too much
technology-behind-the-technology, basic technique that "goes without
saying" for those who write the specs -- but doesn't go without saying
at all when you're picking it up step by painful step. Sometimes the
only way to *really* learn how is by f---ing up repeatedly until you
figure out how to do it right.

David Hawk

unread,
Mar 7, 2003, 12:04:55 AM3/7/03
to

"Erik Max Francis" <m...@alcyone.com> wrote
> David Hawk wrote:

> > Can we establish that
> > cognizance is a purely physical process?
>
> Suggesting that it doesn't invokes magic.
>

Ah. I see.

Um - sorry, the "man is but animated meat" thinking usually provokes a
rebellious response in me. As does the idea that "that which I cannot
currently verify by logical or empiric test is magic, and therefore
irrelevant".

Let's not ask "can we establish". Can we bring forth a theory of cognizance
that precludes my drunken notions of a collectively cognizant cat; or of a
self-aware universe of which you and I are merely two petty neurons firing
blips across an internet synapse?

Assuming that sentience or whatever is solely a result of observable
electrochemical or other associational processes - what makes the difference
between the nervous tic that has lately been irritating my left eyelid and
the drivel I am now crossposting to two somewhat related newsgroups?

Why does Nicholas lack the ability to comprehend his name as more than a
string of sounds associated with being personally stroked and fed - while I
not only can understand my given name (Allan) and its etymology, and ponder
how that meaning relates to my life habits (either real or perceived); but
also assume a name for literary purposes (Hawk) which is on many levels the
antithesis of that given name?

I return to the only theory I have seen advanced, that the difference lies
in the levels of complexity. Hmmm. Which brings me back to cat and
universe.

David (the other one) points out that "So far, we're merely at the stage of
having failed to demonstrate the existence of anything *other than* purely
physical processes." As well as pointing to the lack of competing
scientific theories.

What I __ask__ is, other than an assertion that man is animated meat, does
science actually present a theory of cognizance? Has science actually
settled on a theory that explains how several hundred thousand cells and
simultaneous transfers of various minute electrical charges and subtly
differing chemical compounds produces - not merely cause-and effect muscle
reflex and sense perception and such - but thought and emotion capable of
standing completely apart from the meat those same reactions and exchanges
animate?

Hawk
can you allow "I, Robot" - and then outlaw "Phase IV"?

Mark Fergerson

unread,
Mar 7, 2003, 1:05:03 AM3/7/03
to
phil hunt wrote:
> On Thu, 06 Mar 2003 16:55:36 GMT, Mark Fergerson <mferg...@cox.net> wrote:
>
>> Consider the influence of the Opium Wars on world history
>>(and the current "cocaine wars"). Now consider what happens
>>if the horribly addictive substance of interest can grow
>>_anywhere_, like say marijuana.
>>
>> The world would be a very different place if a
>>Victorian-era Bolton housewife could have grown her own
>>opium.
>
>
> Couldn't she? the opium poppy is native to western Europe, and it
> certainly used to be cultivated in the UK, e.g. in the Fenland area.

Hm, I'm fuzzier on that bit of history than I thought I
was (doesn't it prefer warmer climes?). So up the ante; our
hypothetical genetically-enhanced Uberweed produces not
opium, but raw _heroin_. Or better, not coca, but pure
freebased cocaine.

For that matter, consider that the Uberweed is nearly
impossible to kill; now anyone can get roaring high by
chewing on some stuff that grows by every road and rail.
Which country will take over by seeding all the others first?

Mark L. Fergerson

Scott Dubin

unread,
Mar 7, 2003, 1:27:23 AM3/7/03
to
how...@brazee.net wrote in message news:<CTS9a.6041$wJ1.6...@newsread2.prod.itd.earthlink.net>...

But never at a rate faster than the speed of light.

To sum up my philosophy towards these radical scifi ideas: I'll
believe it when I see it. Just because we can think it up doesn't
make it possible.

George William Herbert

unread,
Mar 7, 2003, 4:04:41 AM3/7/03
to
GSV Three Minds in a Can <G...@quik.clara.co.uk> wrote:
>Bitstring <b48ll4$gm3$1...@spock.usc.edu>, from the wonderful person John
>Schilling <schi...@spock.usc.edu> said
>>>Maybe you can build a reactor in 1903, but where do you get the
>>>enriched uranium to fuel it?
>>
>>You don't need enriched uranium to fuel a nuclear reactor. You need
>>enriched uranium *or* very pure graphite *or* heavy water, and the
>>latter two are easily within reach of 1903 industry.
>
>I thought D2O was produced mostly by electrolysis ..

Old method. New (from roughly WW II era on) is preferential
concentration via hydrogen sulfide exchange columns. Hot and
cold vertical columns, much more efficient.

Even with electrolysis, you only need about a factor of 10,000
reduction to get to reactor grade D2O.

>so all you need is
>a pretty big nuclear / hydro / whatever power plant .. which, afaicr,
>1903 ain't got .. so we better teach them about that (did they even have
>reinforced concrete to construct an appropriate dam for a hydro plant?
>If not, that's another chapter. 8>.)

Per:
http://www.ehgtechnology.com/cost_of_h2.htm
...it takes about 33 KWh/kg to electrolyze water.

To produce one kilo of D2O you have to electrolyze 10,000
kilos (10 tons) of H2O, then, which is 330 MWh/kg produced.
Say a reactor requires about 140 tons (5x5x5 meters) of D2O.
That's about 46 TWh to produce the reactorload of heavy water.
Or about 5,300 MW for a year.

I'm having a hard time finding electrical generation histories,
but according to:
http://www.eia.doe.gov/emeu/aer/eh/elec.html
...there were 25 million electric lightbulbs in use in 1900,
along with other uses such as stoves and heaters. If those
were 50 watts each, that's 1,250 MW.

So the scale of electricity needed to produce that much heavy
water is similar to the scale of electrical power used
in the US in that timeframe, though it is apparently close
enough that it might take a few years to produce the full
set of D2O.
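The arithmetic above can be double-checked with a quick back-of-envelope
script. The constants are the post's own assumptions (33 KWh/kg for
electrolysis, a 10,000:1 concentration factor, a 140-ton reactor inventory,
25 million 50 W lightbulbs), not measured data:

```python
# Back-of-envelope check of the heavy-water energy figures above.
KWH_PER_KG_H2O = 33            # energy to electrolyze 1 kg of water
CONCENTRATION_FACTOR = 10_000  # kg of H2O processed per kg of D2O out
REACTOR_D2O_TONS = 140         # assumed inventory (a 5x5x5 m cube)

mwh_per_kg_d2o = KWH_PER_KG_H2O * CONCENTRATION_FACTOR / 1000
print(mwh_per_kg_d2o)          # 330 MWh per kg of D2O

twh_total = mwh_per_kg_d2o * REACTOR_D2O_TONS * 1000 / 1e6
print(round(twh_total, 1))     # 46.2 TWh for the full reactor load

mw_for_a_year = twh_total * 1e6 / 8760   # hours in a year
print(round(mw_for_a_year))    # 5274 MW sustained for one year

# Compare: 25 million lightbulbs at 50 W each, circa 1900
print(25e6 * 50 / 1e6)         # 1250.0 MW
```

So the "about 5,300 MW for a year" figure checks out against the stated
inputs; the conclusion (comparable to total US generation of the era) follows.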

What you'd really do is look at a tradeoff curve of developing
either purer graphite, more power to concentrate D2O, and methods
of producing low-enriched uranium. Without working all the numbers,
it looks like it would take no more than 10 years to be able to
produce enough D2O for a full reactor; even mild enrichment of
the uranium would reduce the D2O required by a factor of two
to three... so with a combination of them all going in parallel
it might well be running in as little as five years.

We can look into working the numbers in more detail if you want,
but not tonight 8-)


-george william herbert
gher...@retro.com

George William Herbert

unread,
Mar 7, 2003, 4:27:55 AM3/7/03
to
Bill Snyder <bsn...@iadfw.net> wrote:
>schi...@spock.usc.edu (John Schilling) wrote:
>>Bill Snyder <bsn...@iadfw.net> writes:
>>>It really seems to me that for several of these you're underestimating
>>>both the materials science involved and the knowhow problem -- the
>>>difference between what a spec says and what you actually need to know
>>>to build to that spec.
>>
>>George Herbert is approximately the last person on usenet I would
>>accuse of that failing.
>
>I don't see that disagreeing constitutes "accusing" him of any
>"failing" in particular. It's perfectly possible to be competent,
>experienced, to have done one's homework, and still be mistaken.
>Honest. (It's even possible to be grossly and tragically wrong, as we
>were forcibly reminded a few weeks back.)

A possibility which remains ever-present even with the best
experts in any field. While repetitive challenges get old
fast, asking for justifications and sources for analyses
is never a wrong thing to do. That I do not make many
mistakes doesn't in any way mean that I don't make any...

That is in fact a field in which both John and I are professionals,
so we do know a little about it 8-)

There have always been brain drains and loss of information.
One dirty secret in aerospace is the lists of old books we
pass around as key references, because nobody in recent
history seems to remember how to solve the problem.

With respect to NASA and modern western rocketry, modern western
rockets in general and NASA in particular are driven by what
are clearly not directly engineering concerns towards bleeding
edge vehicle designs for launchers and spacecraft.
There's a perception that weight equals money, so you
try and save weight at any cost. Which, in fact, the industry
has taken far past the actual equilibrium point and things
are now so light they cost far more than they need to.

This leads to very tight margins, a push towards exotic materials
and manufacturing techniques, and doing incredible amounts of
analysis and testing on designs to make sure that they are still
adequate and reliable despite the thin margins and bleeding
edge materials and designs.

The V-2 missile was capable of being built with hand
tools, and was in fact built by slave labor for some time.
Despite the fact that the slave labor pool was trying
to sabotage the missiles, a vast majority of them
worked and delivered their payloads on target.

There have been other recent simpler more robust rockets
built; among other things, the whole Russian launch vehicle
legacy are all incredibly much more robust than US practice.
They think nothing of rolling a rocket out for launch in
the middle of a hard-blowing, frozen snowstorm,
or launching into snowy, high-wind conditions.
For which, their rockets are noticeably bigger than US
rockets, because stronger and simpler equals less
"efficient". And, despite the fact that they are
heavier and bigger they are about half to a quarter
the cost (in terms of materials, man hours to build
and launch, etc) of equivalent western launchers.

What you are describing here is not a technology problem,
it is a complexity problem. No amount of technology or lack
thereof will fix overly complex and under-managed projects
and programs. And yes... I have been right there, too,
on tech projects with tens of millions of dollars riding
on specs that were required on day zero and hadn't been written
yet a month in to the project.

You solve that by simplifying. You can simplify a nuclear
reactor designed to breed Plutonium; believe it or not, you can
usually design them to need very little in the way of controls
or active systems, if you don't bother to try and extract the
thermal output power and do useful things with it. You can
simplify a rocket; you don't need complex turbopumps and exotic
steels and hundreds of thousands of finely machined parts to
make a medium range ballistic missile. The V-2 was pretty
simple, could have been done simpler (the Scud, for example).
Big Dumb Boosters can be done with remarkably few parts.
You can simplify a computer, a network, anything.

Sure, the costs and capabilities you get suck by modern
efficiency terms. Which is why we spend engineer hours
like mad these days. It drops the cost per unit useful
thing by a huge factor and ends up being a fraction of
the lifetime cost of the lower cost things. But if you
don't have that optimizing engineer skill and technical
environment, you can simplify and make do at higher
cost levels for most products.


-george william herbert
gher...@retro.com

George William Herbert

unread,
Mar 7, 2003, 4:53:01 AM3/7/03
to
GSV Three Minds in a Can <G...@quik.clara.co.uk> wrote:
>Bitstring <b486dc$8ia$1...@gw.retro.com>, from the wonderful person George
>William Herbert <gher...@gw.retro.com> said
>>>'Useful' I could certainly accept, 'World changing'
>>>(conquer/destroy/remake) would require rather a large library of books
>>>(maybe even more than a DVD's worth). As folks point out down-thread,
>>>conveying the science (or enough pointers that someone could work it
>>>out) would be semi-plausible (assuming drastic pruning of what science
>>>you bothered about), but conveying enough of the engineering technology
>>>would be a real challenge .. many of the most useful things can't be
>>>made with 1903 materials technology, and even the machines to make the
>>>materials require technology .. etc. etc..

>>
>>I don't know about that.
>>
>>There's nothing about a basic nuclear reactor, for example,
>>which can't be done with 1903 technology. The fundamentals
>>of radioactivity theory and a few cross sections would be
>>pretty close to it (well, and the ability to make relatively
>>pure carbon for the moderator, or separate out industrial
>>quantities of heavy water). The processing to get plutonium
>>out of those reactors' output fuel streams is mostly knowing
>>what chemical processes are needed, and how to remotely process
>>the materials. The basics of nuclear bomb design technology
>>are going to be more than a few pages, but the equations and
>>general descriptions and a dimensioned sketch of a simple and
>>tested bomb concept or two would be very doable.
>>
>>Orbital rocketry really wants some materials advantages
>>over what was available in 1903, but general concepts and
>>intermediate range missiles would be possible.
>
><snip more examples>
>
>You haven't convinced me, unless you think this 'book' is going to be
>dozens of kilo-pages thick. Even today 'basics of a digital computer' is
>a thick book .. 'basics of programming one' is another thick book.
>Building a missile is several thick books. Maybe telling them 'it can be
>done', or 'this is what you should do' helps some (assuming you are
>talking to a genius at the other end), but I still doubt you can get
>your 'conquer/destroy/remake' into a single volume. 8>.

To establish some baseline here...

My day job is in the technical end of the computer industry
(well, at this point, technical management and salesish things
to a large degree, but I do some programming, research, and can
build systems and assemble hardware still...). My night job /
own business is an aerospace consulting business, associated
with which I do R&D on space launch vehicles, manned spaceflight,
and a bunch of other related aerospace subfields (and a few defense
technology areas). I can, in fact, design and build a modern
missile, and have business plans and development projects to
do so in progress.

One of my hobbies, along with sleeping every now and then,
is historical what-if fiction, time travel what's-possible
analyses, etc. I have looked at a bunch of technical scenarios
over various timescales in depth. Some of the stuff, I have
built, to see how it works.

One of the scenarios I looked at was an alternate tech-development
heavy version of the Napoleonic wars, which is still simmering.
As part of that I looked at what it would take to make short and
intermediate range ballistic missiles with 1800 technology.
It is, in fact, quite doable. Guidance is the hard part.
Pressure fed rocket motors with ablative liners are easy.
Wrought iron propellant tanks, wrapped in wire rope for
hoop stress reinforcement, are easy and nearly contemporary
technologies. Nitric acid and turpentine are both a reasonable
fuel combination (hypergolic even, so no ignition required)
and available in quantity in the time period. The guidance
problem is that you have to mechanically translate a gyroscope
orientation to actuating valves to inject propellant into the
rocket nozzle to steer the rocket. This problem is equivalent
to a fairly nasty mechanical clock problem; doable, but it's
a major pain to design and debug. I ended up with using hydraulic
valves to actuate hydraulic valves which actuate hydraulic valves
which inject the steering thrust vector propellant. Debugging
that shows some promise of being fairly annoying, but the
mechanism details seem to work.
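The control problem described above — gyro senses an attitude error, valves
meter side-thrust in proportion to it — reduces to a simple feedback loop.
Here is a toy sketch of that loop; the gain, timestep, and initial error are
illustrative numbers, not anything from the actual design:

```python
# Toy sketch of the guidance loop described above: the gyroscope senses
# the attitude error and steering thrust is metered in proportion to it.
# The whole cascade of hydraulic valves is abstracted as a single gain.

def simulate(initial_error_deg=5.0, gain=0.8, steps=30, dt=0.1):
    """Proportional correction of an attitude error over steps*dt seconds."""
    error = initial_error_deg
    for _ in range(steps):
        thrust_cmd = -gain * error   # valve opening proportional to error
        error += thrust_cmd * dt     # error decays geometrically toward zero
    return error

residual = simulate()
print(f"residual error after 3 s: {residual:.3f} deg")
```

The point of the sketch is that the *logic* is trivial; as the post says, the
hard part in 1800 (or 1903) is realizing that gain mechanically, which is a
clock-maker's problem rather than a mathematician's.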

If I think I can build 300-500 mile range ballistic missiles shortly
after Trafalgar, then anyone can build them 100 years later,
if they know what they're looking to try and do.

Computers... were you aware that you can make transistors with a
few chemicals (dopants, photoresist / etch process stuff, etc)
in a home kitchen and oven? This is documented in several
introductory semiconductor fab classes, and I have seen
the class notes for one such lab project for making NPN
transistors. They ran to about five pages. I haven't done
it myself, but several people I know have.

You can adapt those techniques to simple ICs and
mass producing discrete transistors as bars of
doped material which you then slice up every few
mm down the bar...

Were you aware that you can build a relatively modern
computer CPU with only a few thousand transistors?
That's perhaps only a week's worth of time in the
kitchen... certainly not much more than a month.
Plus, of course, the circuit designs and the circuit
boards and solder and assembly time.

Languages and computer architectures like stack machines
and Forth make it possible to build very small, easy to
design and program computer systems. They're not well
suited to modern complex GUI environments, no, but they're
something whose basics you could describe in a few pages
of a book.

The hard part is figuring out when to stop providing
the detailed cookbook and transition into "here's the
formula or key theory for the next step, we don't have
the space to explain why it's true but you can figure
that out on your own."


-george william herbert
gher...@retro.com

George William Herbert
Mar 7, 2003, 4:59:36 AM
Bill Snyder <bsn...@iadfw.net> wrote:
>It really seems to me that for several of these you're underestimating
>both the materials science involved and the knowhow problem -- the
>difference between what a spec says and what you actually need to know
>to build to that spec. Maybe you can build a reactor in 1903, but
>where do you get the enriched uranium to fuel it? Maybe you can in
>theory build a transistor production line quickly, but how long does
>it take you to produce the ultra-pure silicon with next-door-to-zero
>dislocations, the photoresist, etc., etc.

Reactor: pure boron-free graphite, or heavy water, or very lightly
enriched uranium, or a combination of all three, will do.
With enough good graphite or heavy water, natural uranium
will do just fine. U enrichment helps, of course, but is
not required. Heavy water production is mostly an industrial
scale problem. Graphite... the purification techniques
aren't that hard.
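Whether a natural-uranium/graphite pile can go critical comes down to the infinite-medium multiplication factor, conventionally estimated with the four-factor formula. The formula is standard reactor physics; the numbers below are illustrative guesses for such a lattice, not design data:

```python
# Four-factor formula for an infinite reactor lattice:
#   k_inf = eta * epsilon * p * f
# eta:     neutrons produced per thermal neutron absorbed in fuel
# epsilon: fast-fission factor
# p:       resonance escape probability
# f:       thermal utilization
# The values below are illustrative guesses for a natural-uranium /
# graphite lattice, not measured design data.
def k_infinity(eta, epsilon, p, f):
    return eta * epsilon * p * f

k = k_infinity(eta=1.33, epsilon=1.03, p=0.87, f=0.89)
print(round(k, 3))   # -> 1.061
```

A k only slightly above 1.0 is the whole story: a trace of boron in the graphite absorbs neutrons, pulls f down, and the pile never goes critical. That is exactly why the purification George mentions matters so much.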

Transistors: don't need large scale integrated circuits.
Early computers were discrete components soldered on
boards. It's just bigger and more expensive that way.
You can build transistors in a kitchen, with an oven
and some lab chemicals, and a few pages of instructions.
Showing someone how to make small thin single crystals of silicon
is pretty darn easy. Dip starter crystal in melt, withdraw
slowly, cooling end... not good enough for large scale
integrated chips, but more than enough for discrete logic,
and very primitive integrated logic.
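"Discrete logic" in practice means building everything from one reliable circuit. NOR (like NAND) is functionally complete, so a primitive transistor line only has to master a single gate, e.g. a resistor-transistor NOR. A quick sketch of the universality argument:

```python
# NOR is a universal gate: NOT, OR, and AND can all be built from it.
# This is why a primitive discrete-transistor process only needs to
# yield one reliable circuit (e.g. a resistor-transistor NOR) to
# build a whole computer.
def NOR(a, b):
    return int(not (a or b))

def NOT(a):
    return NOR(a, a)

def OR(a, b):
    return NOT(NOR(a, b))

def AND(a, b):
    return NOR(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Every composite gate above bottoms out in the same NOR circuit, so yield and consistency on that one device is all the kitchen fab has to deliver.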


-george william herbert
gher...@retro.com

Frank Scrooby
Mar 7, 2003, 5:26:46 AM
Hi all


"George William Herbert" <gher...@gw.retro.com> wrote in message
news:b49q5t$bft$1...@gw.retro.com...
<<<previous post and cool stuff about GWH's life and hobbies snipped>>>

> Computers... were you aware that you can make transistors with a
> few chemicals (dopants, photoresist / etch process stuff, etc)
> in a home kitchen and oven? This is documented in several
> introductory semiconductor fab classes, and I have seen
> the class notes for one such lab project for making NPN
> transistors. They ran to about five pages. I haven't done
> it myself, but several people I know have.

Do you have any references or web-links to this? I don't currently own an
oven (microwave has proven more useful) but I'd buy one to play around with
stuff like this.


>
>
> -george william herbert
> gher...@retro.com
>

This is very cool stuff, George. Any chance of you publishing some fiction
about Napoleon's ballistic missile program one day soon?

Thanks and regards
Frank

Sea Wasp
Mar 7, 2003, 7:40:36 AM
rmtodd wrote:
> Even if your original
> data sequence was just the ASCII code for "A" repeated a thousand times,
> identifying the original data is liable to be impossible for anyone less
> bright than Mentor of Arisia.

So they could do it ONLY if they got Doc Smith
on the team. ;)

Malcolm McMahon
Mar 7, 2003, 8:25:25 AM
On Thu, 6 Mar 2003 17:46:35 -0500, Sean O'Hara
<darkerthenightth...@myrealbox.com> wrote:

>The Singularity doesn't require that AIs rule us, only that they're
>beyond our ability to understand -- or, to look at it another way,
>that we're particularly slow children as compared to them and not
>worth their time.

Well, if that happens then they are irrelevant to us.

But we have something they won't have unless we give it to them: Goals.

Malcolm McMahon
Mar 7, 2003, 8:32:13 AM
On Thu, 06 Mar 2003 15:46:40 -0800, Erik Max Francis <m...@alcyone.com>
wrote:

>Biological machines are just machines.

And consciousness is just consciousness. It's important to understand
the ultimate limits of reductionist explanation. Reductionist
explanation explains composite things in terms of the properties of
their components.

But you can only pursue reductionism a finite number of levels before
you hit elementary phenomena which are, by definition, not composite
and therefore not subject to reductionist explanation.

Elementary phenomena can be described and modelled, but explanation is
not possible. Quantum mechanics doesn't explain the behaviour of, say,
an electron; it merely provides a metaphor for it.

Consciousness itself (not stuff like intellect, memory, personality etc.)
may be such an elementary phenomenon. Where such a phenomenon would
attach itself to an AI we can't know, even after we construct such an
AI.

Consciousness can only ever be experienced, not detected.
