What exactly does he mean by that? I could ask him, but it's more fun to go
through some possibilities first, since in my experience "the Singularity" has
become slippery, with many possible (if related) meanings.
To go from the most extreme to the most sensible root: first we have the
"Technorapture", where something -- intelligence, connections, technology --
increases exponentially over the course of a few hours, culminating in a mass
upload of everybody on Earth and possibly their subsequent disappearance from
the physical universe. The core Vingean example is presumed in _Marooned in
Realtime_ (where the Extinction/disappearance was a *plot device*, lest we
forget.) Preceding that as an image was _Childhood's End_, but psionics and
the Singularity don't mix well. Explosive transcensions also happened in _A
Fire Upon the Deep_, though at least the Powers stayed in our universe.
Extreme Drexlerian nanotech tends to be involved, to do the uploading and
provide computational substrate.
Saying you don't want this is perfectly reasonable to me; Drexlerian nanotech
is something to be skeptical of, as are the claims to extreme speed and
universality of the process, and we can forget about the mystical
disappearance.
(Though in fairness I'll note that the concept of making "basement universes"
to disappear into has some attention from physicists. But mass nanotech is
not obviously a tool for quickly making black holes, which would be needed.)
(I'll also note that while Ken MacLeod is sometimes quoted for his "The
Rapture for Nerds!" line in _The Cassini Division_, the nerds in the book
turned out to be *right*. They had their Rapture, and they were conscious,
not zombie programs.)
Dropping down a level, "post-Singularity" has been applied to various
settings, not always with superhuman intelligence, but with cool
technology -- generally nanotech and AI -- which some might think makes
storytelling challenging, the key function being the copyability of almost
everything, including human minds. The results: "post-scarcity" economics and
immortality ("post-mortality", perhaps?). Examples: MacLeod's _The Sky Road_, where a near
future with nanotech immortality pills and some modestly transhuman minds was
internally called "post-Singularity". Vinge's "Just Peace" had mental
copying; his "Original Sin" had immortality and weird probability tech;
MacLeod's _Newton's Wake_ had another Rapture but also various normal humans
running around with backups. Wil McCarthy's Collapsium books seem
post-Singularity in this sense. Also Cory Doctorow's _Down and Out in the
Magic Kingdom_. Some readers seem to be getting tired of all this, so "no
Singularity" seems interpretable as "no nanotech! No copies, backups, or
uploads! No biological immortality!" Again, there's grounds for being
skeptical of the nanotech, though my impression is that if other ways were
found of doing the same things the readers would still be unhappy.
Dropping further, we get closer to what I think of as Vinge's original ideas.
One form is that when we learn to increase our intelligence, those smarter
results will be able to increase their own even faster, in an exponential
growth for at least a while, with accompanying exponential growth in other
technologies. The Technorapture is just an extreme manifestation of the
process; the post-scarcity, post-mortality technologies are sometimes a
spinoff, sometimes an enabler (that level of nanotech being considered useful
for the mental technologies.) Skepticism: since the smarter beings are
probably even more complex, they may need all their enhanced brains to
accomplish a similar increase. Whether a continued linear increase in
intelligence is really less worldshaking (especially for the SF writer) than
an exponential one is a question I leave to the reader.
Which brings us to the root of it all in John Campbell's rejection of Vinge's
sequel to "Bookworm, Run!", with a human enhanced like the chimp of the
original story: "You can't tell this story, and neither can anyone else."
This has cousins in Niven's own writings on superintelligent beings; the basic
idea is that you can't plausibly write about someone much smarter than
yourself, let alone a society of them. Sure, they might still have human
motivations, but can you portray their thoughts, their actions, the
technologies or social arrangements they would produce? Can you understand
them? -- Which actually leads me to a common conflation: difficulty of
*prediction* is not the same as impossibility of *understanding*; we might
well be able to understand a posthuman world (even if not quickly enough to
keep up) without being able to create one. Loose analogies fly around at this
point; some say "they'll be to us as we are to dogs", I invoke
Turing-completeness and say dogs just aren't that good at understanding each
other, in the sense we mean it. It's not that we're too complex for dogs;
dogs just aren't complex enough to understand much of anything.
But anyway, at this point the cry of "no more Singularity" sounds like "don't
show enhanced intelligence!" which I think gets to be a problem, if we're
pretending to be talking about hard SF. As Vinge wrote in his classic essay,
there are multiple routes to enhancing intelligence. Developing human level
AI and making it better is just one of them, not even that frequent in Vinge's
writing. A second was IA, "intelligence augmentation", itself with multiple
forms; one was computer prosthetics to human brains, as shown in _The Peace
War_ and _Marooned in Realtime_ and "True Names". A third was connecting
human minds together, mentioned in _Marooned in Realtime_. Usually as direct
neural links, but I note that if we take "mind as society of agents"
seriously, that leads to "society of humans as a crude mental process", and
perhaps such thinking would lead to better organizations of corporations and
governments than the recursive primate hierarchies we tend to use.
The final approach was genetic enhancement, and I think skepticism must here
run aground. Even if you don't believe in AI, for philosophical or practical
reasons, even if plugging network jacks into the brain doesn't seem all that
transformative, even if simple chemical interventions don't help much and
nanotech rewiring is impossible, there are still the facts that the brain is a
biological machine constructed by our genes (with, yes, environmental
influence), that IQ varies, and seems to vary largely with genes (80%
heritable -- and if that's high, it would make IQ easier to control, not
harder!) Twiddle with the right genes and you can get results, even if you
don't know exactly how it works. (Vingean result: _Tatja Grimm's World_)
We're not far from it now. Find correlations of genes with IQ and personality
traits, create multiple embryos and select the most preferred gene combination
among them, and you can create quite strong selection pressures in the
direction you want; this could well be next-decade tech. (Fiction: the movie
"Gattaca", though I think they selected among 8 eggs, while I could see a
whole ovary being used.) Learn how the homeobox and development genes work
and you can try tricks such as creating bigger brains, and bigger brain-body
ratios, and perhaps controlling sizes of areas within the brain. Mice with
bigger brains have already been created (though the article didn't mention
whether they seemed any smarter.) This is a bit further off (need to learn
more, and test it on animals) but possibly not that far; there are practical
and ethical problems to worry about (physiological side effects of a large
head; necessity for C-sections; possibility of this causing speciation and
lack of breeding partners), but there's also the temptation of making 300+
IQs.
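The selection arithmetic in the paragraph above can be put in a toy Monte
Carlo. Everything here is an illustrative assumption, not a claim about real
genetics: IQ is modeled as 100 plus a genetic part and an environmental part,
the quoted 80% heritability is treated as the variance share of a perfectly
readable genetic score, and the parents simply pick the embryo with the best
score out of n.

```python
import random
import statistics

def expected_gain(n_embryos, h2=0.8, sd=15.0, trials=20000):
    """Mean IQ shift from picking the best genetic score among
    n_embryos, in a toy additive model: IQ = 100 + G + E, with
    G ~ N(0, h2 * sd^2) and E ~ N(0, (1 - h2) * sd^2).  E averages
    out, so the expected shift is just the expected best G draw."""
    g_sd = sd * h2 ** 0.5
    best = [max(random.gauss(0.0, g_sd) for _ in range(n_embryos))
            for _ in range(trials)]
    return statistics.mean(best)

# Under these (generous) assumptions, best-of-8 selection shifts the
# expected IQ by roughly 19 points.
```

Real polygenic prediction would capture only a fraction of the heritable
variance, so actual gains would be much smaller per generation; the point is
only that best-of-n selection is a real, compounding pressure.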
At this level, what I see the Singularity really saying is "our tools don't
only work out there, or in our bodies for simple healing. Our bodies and
brains are machines we can take apart, understand, improve, and perhaps copy;
this will happen, this will happen soon, and this will have consequences."
This is what unifies the nanotech immortality pill with the brain-computer
interface or the gene-selected supermind or the upload, namely the treatment
of the human condition in all its aspects as a controllable material process.
The future doesn't just offer more food, water, energy, and neat clothes; it
offers people who are, as a result of their parents' will if not their own,
smarter and saner and stronger and healthier and more beautiful than we are.
At the very least.
So, please, specify which Singularity it is that you're tired of!
"Scottish SF" is a disputable term, but here I have in mind the authors Iain
Banks, Ken MacLeod, Charles Stross (these three being a real social group),
Alastair Reynolds, and non-Scots Wil McCarthy and Greg Egan. Frequent themes
in the group's writings are AI, uploads, nano-immortality, extensive genetic
engineering, mental control (nanites, implants, Culture drug glands) along with
the usual "post-scarcity" economics. Their writings tend to be a materialist
revel in the possibilities (even if the world as a whole is sometimes bleak).
Very transhumanist, in the sense of using our understanding (acquired as
humans) to go past our human limits.
As opposed, say, to the writings of Lois Bujold and Terry Pratchett, which are
comparatively very humanist, centered on the human condition. Bujold has a
lot of advanced biotech in her SF, and some perfectly nice people have come
out of the labs (quaddies, hermaphrodites, Taura), but I note that extensive
human manipulation is not something any of the core, Good, characters do. Not
even intensive selection of embryos, beyond minimal needs. A quaddie and a
herm will select the attributes of their children, but their biologies make
that mandatory. [Research: did they actually select more, such as
appearance?] Miles and Ekaterin select the sex and timing of their children,
and being clean of nasty recessives is taken for granted, but they didn't make
many embryos and select the smartest or most social one. The attitude is
still "take your chances and do your best with what you've got", vs. pushing
for the top. That's left for the Cetagandan haut, who are kind of creepy.
And no one, not even the haut, lives all that long -- 120 years for the
Betans, most "like us", 150 years for the quaddies and old haut. Nice, but
MacLeod had people born before the moon landing living to be 350, both as
software and as continuous human bodies.
Pratchett is fantasy, so it may seem odd to mention him, but I think it works.
Magic is usually kept off to the side, not used extensively as a technology,
and the Igors are equivalent to advanced transhuman biotech, but on the page
they're kept as simple but good doctors and comic relief, not allowed to
resurrect Watchmen
or even deliver Vimes's baby.[1]
At any rate, what's the putative Singularity-Scottish connection? Just that
at least some of the Scots are openly socialist (with some market memes thrown
in), and subculturally heir to a strong Marxist materialism, with roots
perhaps going back to the Scottish Enlightenment. So perhaps they're more set
up to embrace and exploit the possibilities of seeing humans as material
objects and processes.
[1] Which was unfair. Dr. Lawn said Igor could only approach if he was boiled
first, since Igors look so unhygienic, but while Lawn may know about boiling,
Igors actually know why -- about germs. And if there was a real hemorrhage,
I'd prefer having an Igor on the spot...
I just wanted to admire the phrase "recursive primate hierarchies".
--
Aaron Denney
-><-
Also known as: it's monkeys all the way down.
--Z
"And Aholibamah bare Jeush, and Jaalam, and Korah: these were the borogoves..."
*
I'm still thinking about what to put in this space.
I'm not sure I fully agree. A lot of singularity-fu is
just prosthetic psionics. All the gear in Vinge's "fast times"
setting, or the "advanced" folk (and Tunc most advanced of all)
in the peacewar/realtime setting, for examples. People have
invisible communications that are functionally indistinguishable
from telepathy, and manipulate autons and such in ways functionally
indistinguishable from telekinesis. Pham Nguyen's interaction
with the Countermeasure, and its effects on the local star sure
smacked of spooky mystic action at a distance, responding to
thought alone.
Or, consider Gordon Dickson's "Wolfling"; a very nice scene at
the end of that, demonstrating just how outclassed the earth military
would be against a single soldier with nigh-singularity-level support.
"Adok! Remove that wall. No flying debris, no side effects,
I just want it gone."
<adok turns to face the wall, and a peculiar noise and flash occur,
as if they were so loud / bright that they'd deafen/blind you, but cut
off before you could even perceive them... the wall is gone>
"Adok! Those clouds! Remove them!"
<adok turns his attention to the sky, but otherwise does not move;
a vast whistle, cut short before blasting everybody to deafness...
the clouds are gone>
"Adok!" <points to the Alpha Centauri governor,
who gibbers in terror...>
(Though of course, Wolfling was written before
there was a Singularity... but still, a nice scene.)
Wayne Throop thr...@sheol.org http://sheol.org/throopw
> Here, Aaron Denney <wno...@ofb.net> wrote:
>> I just wanted to admire the phrase "recursive primate hierarchies".
> Also known as: it's monkeys all the way down.
Judging from the newspapers, it's monkeys all the way up, too.
--
Realize that life is a situation comedy that will never be canceled. A
laugh track has been provided, and the reason why we are put in the
material world is to get more material.
From "Swami Beyondananda's Guidelines for Enlightenment"
>
> (I'll also note that while Ken MacLeod is sometimes quoted for his "The
> Rapture for Nerds!" line in _The Cassini Division, the nerds in the book
> turned out to be *right*. They had their Rapture, and they were conscious,
> not zombie programs.)
>
Well, conscious and lively up to when Ellen May zapped them....
Which raises a point, how likely is it that a posthuman society,
explicitly hostile to plain ol' humans, would leave themselves open to a
relatively crude, genocidal attack, such as Ellen May's?
Cheers -- Pete Tillman
In the case of the Vorkosigan books, embryo selection seems to be
ruled out for moral reasons, at least for the major good guys.
Cordelia's ethics on the boundaries of life are as close to the
Catholic Church's as her sexual morality is distant, and this
likewise applies at minimum to anyone whose upbringing she's
influenced. There's at least evidence that this extends further
than just her-- the elaborate means by which the Escobarans rid
themselves of embryos rather than disposing of them themselves, for
example-- though of course not to places like Jackson's Whole or,
for that matter, Barrayar.
On the other hand, embryo selection probably isn't necessary with
higher-end tech in that universe-- they can do genetic and somatic
search-and-replace on adults, after all, and even failing that they
could presumably choose the gametes they wanted to use. It's true
that they don't do as much optimizing as they probably could.
The attitude is still "take your chances and do
> your best with what you've got", vs. pushing for the top.
> That's left for the Cetagandan haut, who are kind of creepy.
And who would stand as a warning for those who are bothered by
things like their grandchildren being effectively a different
species from them, except that almost nobody knows what the haut are
doing.
And
> no one, not even the haut, lives all that long -- 120 years for
> the Betans, most "like us", 150 years for the quaddies and old
> haut.
Jacksonian nobles last longer, but their method presents certain
ethical issues for most other people. And given that they still
need the brain to be working, it probably doesn't get them that much
more time than the haut. (And in practice, they probably don't even
get that long given their propensity to die by violence or
intrigue.) Mark is also working on life extension, but those
effects will probably be left (along with the genetic time bomb set
up in _Ethan of Athos_) beyond the scope of the books.
Mike
--
Michael S. Schiffer, LHN, FCS
msch...@condor.depaul.edu
>Well, conscious and lively up to when Ellen May zapped them....
>
>Which raises a point, how likely is it that a posthuman society,
>explicitly hostile to plain ol' humans, would leave themselves open to a
>relatively crude, genocidal attack, such as Ellen May's?
Not very, especially if we ask the Orion's Arm folks.
But this was an odd situation; the posthumans weren't the originals who'd made
the wormhole and such. Those had gone totally mad (likelihood of that... ssh)
and fallen into Jupiter. The newly sane descendants may have had some
resource problems -- did that society have fusion? Did the Jovians have
access to more metals? It might have been a "we've fallen and can't get up"
situation.
I suppose we could ask whether they could have hardened themselves better, or
just how big Jupiter is anyway, even to a comet swarm? OTOH, if you were them
and had survived a comet attack, would you announce it?
-xx- Damien X-)
Or bishops.
True, and we have had psychic transcensions since Childhood's End -- maybe the
energy beings of Star Trek, definitely Jason Ironheart and the Vorlons etc. of
Babylon-5, or the Lylmik of Julian May. There's similarity of effect, and
convergence to a godpoint of manipulating much of reality at will.
OTOH, the difference in mechanism and philosophy feels important to me,
especially from the hard SF viewpoint. The implausibility of "the technology
won't stretch that far" isn't the same as that of "that effect doesn't even
exist". The Singularity could happen -- or maybe it can't, but it's largely
an engineering issue, vs. psychic powers not existing.
If people were writing Singularity fiction using the idea as a black box
excuse to have cool magical effects, then they'd be similar. And maybe
there's stuff like that. But since I can envision how the 'magic' is supposed
to happen, it's different. Maybe the authors using psychic powers believed in
those, in which case it's similar at the authorial level, but for a modern
reader it's not. Electrode-controlled robots aren't the same as pure
telekinesis.
>indistinguishable from telekinesis. Pham Nguyen's interaction
>with the Countermeasure, and its effects on the local star sure
>smacked of spooky mystic action at a distance, reponding to
>thought alone.
Yeah, but it was supposed to be appealing to Zone mechanisms somewhere. And
since the Zones themselves are pretty much magical, *shrug*.
-xx- Damien X-)
>In the case of the Vorkosigan books, embryo selection seems to be
>ruled out for moral reasons, at least fot the major good guys.
>Cordelia's ethics on the boundaries of life are as close to the
>Catholic Church's as her sexual morality is distant, and this
Indeed.
>On the other hand, embryo selection probably isn't necessary with
>higher-end tech in that universe-- they can do genetic and somatic
Point.
>could presumably choose the gametes they wanted to use. It's true
>that they don't do as much optimizing as they probably could.
Barrayar has haut genetic material as of _DI_; think they'll use it?
Barrayar, passing up the chance to create supersoldiers? Okay, this is
Gregor's Barrayar, and he's Cordelia's, and they have that mutant phobia...
>Jacksonian nobles last longer, but their method presents certain
>ethical issues for most other people. And given that they still
>need the brain to be working, it probably doesn't get them that much
>more time than the haut. (And in practice, they probably don't even
>get that long given their propensity to die by violence or
And the procedure doesn't always work. Also it depends when they do it; if
someone with sub-Betan life expectancy does it at age 80, they might end up
living to 160 in the second body.
It's not just nobles; other people use the service. Which raises a question,
how old is the oldest person in the Nexus? Hey Lois, do your worldbuilding...
These people can also clone and replace any organ; one wonders why the
body-theft is even necessary. I suppose refreshing the whole vascular or
lymph system would be harder. But it also seems like they'd have
head-in-a-jar capability, with what we've seen even from Barrayar's medicine
for Aral; is there any life support function they couldn't provide
mechanically indefinitely? Of course, the brain may conk out. Though stuff
like Illyan's memory chip seemed not that far from uploading-quality tech.
Which gets back to the humanist model; I suspect Lois doesn't want to deal
with 400 year old people or AI. After 1000 years things are better than now,
but not too much better. Enough to offer hope and inspiration, not enough to
disturb, except in selected places.
-xx- Damien X-)
I'm tired of the Singularity depicted as inevitable and imminent (unless
disaster and/or deliberate suppression interferes). E.g. the presumably
intended as nonfiction _The Singularity is Near_ by Ray Kurzweil. I may be
unfair to him; I didn't read the book since the title alone made me laugh.
I'm tired of the aptly-named Rapture of the Nerds scenarios, where the l33t
are uploaded to become as gods, while the slow-adopters, the doubters, and
the technophobes are disassembled and converted to computronium. _Left
Behind_ for geeks.
>Skepticism: since the smarter beings are probably even more complex, they
>may need all their enhanced brains to accomplish a similar increase.
For instance, the corollary to Moore's Law known as Rock's Law: the price of
a chip fab doubles every 4 years.
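The two doubling laws compound at very different rates; a toy comparison
(the two-year and four-year periods are the conventional round figures, not
fitted data):

```python
def doublings(years: float, period: float) -> float:
    """Growth factor after `years` of doubling every `period` years."""
    return 2.0 ** (years / period)

# Over 20 years: transistor counts up ~1024x (Moore, ~2-year period),
# fab price up ~32x (Rock, ~4-year period).  Cost per transistor still
# falls, but the entry price of staying in the fab business grows
# exponentially too -- which is the brake being pointed at here.
horizon = 20
moore = doublings(horizon, 2)   # 1024.0
rock = doublings(horizon, 4)    # 32.0
```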
>Whether a continued linear increase in intelligence is really less
>worldshaking (especially for the SF writer) than an exponential one is a
>question I leave to the reader.
Assuming IQ actually measures intelligence, then I remind you that the Flynn
effect suggests that such a linear increase has been ongoing for some time
now. No Singularity has yet occurred.
>The final approach was genetic enhancement, and I think skepticism must here
>run aground. Even if you don't believe in AI, for philosophical or practical
>reasons, even if plugging network jacks into the brain doesn't seem all that
>transformative, even if simple chemical interventions don't help much and
>nanotech rewiring is impossible, there are still the facts that the brain is a
>biological machine constructed by our genes (with, yes, environmental
>influence), that IQ varies, and seems to vary largely with genes (80%
>hereditable -- and if that's high, it would make IQ easier to control, not
>harder!) Twiddle with the right genes and you can get results, even if you
>don't know exactly how it works. (Vingean result: _Tatja Grimm's World_)
If you don't know exactly how it works, then the results you get are likely
to not be the ones you want. I really wouldn't want to be an early adopter--
or more precisely, the parent of an early adoptee--on this one.
IANA molecular biologist, but it seems to me that you'd at least want to have
solved the protein folding problem before tackling any of this stuff.
>We're not far from it now. Find correlations of genes with IQ and personality
>traits, create multiple embryos and select the most preferred gene combination
>among them, and you can create quite strong selection pressures in the
>direction you want; this could well be next-decade tech.
Unless you discover that while the various alleles you selected for all
increase intelligence taken individually or in most combinations, putting
them all together leads to an anti-synergistic effect.
>whole ovary being used.) Learn how the homeobox and development genes work
>and you can try tricks such as creating bigger brains, and bigger brain-body
>ratios, and perhaps controlling sizes of areas within the brain. Mice with
>bigger brains have already been created (though the article didn't mention
>whether they seemed any smarter.) This is a bit further off (need to learn
>more, and test it on animals) but possibly not that far; there are practical
>and ethical problems to worry about (physiological side effects of a large
>head; necessity for C-sections; possibility of this causing speciation and
>lack of breeding partners), but there's also the temptation of making 300+
>IQs.
Is a 300+ IQ score even defined? Anyway, if intelligence boosts become
common enough, the IQ tests will simply be renormalized.
>At this level, what I see the Singularity really saying is "our tools don't
>only work out there, or in our bodies for simple healing. Our bodies and
>brains are machines we can take apart, understand, improve, and perhaps copy;
>this will happen, this will happen soon, and this will have consequences."
>This is what unifies the nanotech immortality pill with the brain-computer
>interface or the gene-selected supermind or the upload, namely the treatment
>of the human condition in all its aspects as a controllable material process.
>The future doesn't just offer more food, water, energy, and neat clothes; it
>offers people who are, as a result of their parents' will if not their own,
>smarter and saner and stronger and healthier and more beautiful than we are.
>At the very least.
Prospective parents will disagree wildly on what is "smarter and saner and
stronger and healthier and more beautiful". Personal observation suggests
that what many parents want is kids which are just like them, only more so.
I suspect that the interaction of that with designer kids results in a lot
of kids who are messed up in various ways. Plus a lot who turn out just fine,
of course.
--
Justin Fang (jus...@panix.com)
>>So, please, specify which Singularity it is that you're tired of!
>
>I'm tired of the Singularity depicted as inevitable and imminent (unless
Fair enough, especially since those typically are about the extreme forms.
At the other end, having cognitive science accomplish nothing of interest
seems about as implausible.
>I'm tired of the aptly-named Rapture of the Nerds scenarios, where the l33t
How many books has this actually happened in? _Marooned_, kind of; _Fire Upon
the Deep_, implied; _Cassini Division_ and _Stone Canal_, partly -- the nerds
had their fun, no one got disassembled that I remember; Accelerando, maybe?
>For instance, the corrolary to Moore's Law known as Rock's Law: the price of
>a chip fab doubles every 4 years.
http://firstmonday.org/issues/issue7_11/tuomi/index.html
says Moore's Law doesn't hold up in any rigorous form, though that may not
matter for our purposes.
>>Whether a continued linear increase in intelligence is really less
>>worldshaking (especially for the SF writer) than an exponential one is a
>>question I leave to the reader.
>
>Assuming IQ actually measures intelligence, then I remind you that the Flynn
>effect suggests that such a linear increase has been ongoing for some time
>now. No Singularity has yet occurred.
Good point. OTOH, such a gradual rise is ripe for the "we're living through
it" argument, where we don't notice. How would the average person from N
years back, plucked forward, cope? For that matter the average person now
doesn't understand the tech and economy they live in... and no one understands
all of it, leading to the "Singularity came and went a long time ago"
argument.
>>We're not far from it now. Find correlations of genes with IQ and
>>personality traits, create multiple embryos and select the most preferred
>>gene combination among them, and you can create quite strong selection
>>pressures in the direction you want; this could well be next-decade tech.
>
>Unless you discover that while the various alleles you selected for all
>increase intelligence taken individually or in most combinations, putting
>them all together leads to an anti-synergistic effect.
Maybe, but how likely is that? And even then, people can assure themselves of
passing on the good alleles or combinations. People will gain the knowledge
to bias the results of reproduction in a way the human race has never seen
outside of plagues. Is there any reason not to expect that?
>Is a 300+ IQ score even defined? Anyway, if intelligence boosts become
>common enough, the IQ tests will simply be renormalized.
Probably not, and quite possibly, but that's hardly the point; the point would
be the much smarter person.
>Prospective parents will disagree wildly on what is "smarter and saner and
>stronger and healthier and more beautiful". Personal observation suggests
>that what many parents want is kids which are just like them, only more so.
True, somewhat, and if it changes a lot from generation to generation that
would keep the gene pool from lurching far in one direction. OTOH, the traits
themselves aren't that arbitrary; what would vary more is the tradeoffs made
among them, if necessary. The other thing parents want is probably "kids
who will be successful"; who's likely to attract a mate and get a good income
and be happy? (Some of those might bias against going for really high IQ.)
-xx- Damien X-)
Curse you, Justin Fang, for stealing my line. Except I was going to
suggest "_Left Behind_ for geeks" as a blurb for Walter Jon Williams's
_Metropolitan_ series. The un-Transcended aren't disassembled, though;
they're merely sequestered.
David Tate
We've been using these SRBs with the same O-rings for some time now,
and no catastrophe has occurred.
>Is a 300+ IQ score even defined?
Sure, a 30-year-old with a 300 IQ is as smart as a 90-year-old.
Now get off my lawn! Twenty-three skidoo!
>Prospective parents will disagree wildly on what is "smarter and saner and
>stronger and healthier and more beautiful".
I would wonder if smarter and saner aren't mutually exclusive to
some degree, y'know?
--
Joe Bay Leland Stanford Junior University
www.stanford.edu/~jmbay/ Program in Cancer Biology
The white zone is for loading and unloading only. If you have to load
or unload, go to the white zone. You'll love it. It's a way of life.
:: A lot of singularity-fu is just prosthetic psionics.
: pho...@ofb.net (Damien Sullivan)
: True, and we have had psychic transcensions since Childhood's End --
: maybe the energy beings of Star Trek, definitely Jason Ironheart and
: the Vorlons etc. of Babylon-5, or the Lylmik of Julian May. There's
: similarity of effect, and convergence to a godpoint of manipulating
: much of reality at will. OTOH, the difference in mechanism and
: philosophy feels important to me, especially from the hard SF
: viewpoint. The implausibility of "the technology won't stretch that
: far" isn't the same as that of "that effect doesn't even exist". The
: Singularity could happen -- or maybe it can't, but it's largely an
: engineering issue, vs. psychic powers not existing.
I don't disagree here, but I note the boundary can get blurry.
For example, Flux and Anchor. Or Bowman in the 2001 sequels. And,
wrt the "psychic powers not existing", that can get very blurry, too,
with hyperspace and exotic particles and fifth forces and so on and on
being used in hard-seeming SF. Doubly blurry when you consider, oh, say,
McGill Feighan's access to the "energy dimension" to do his tricks... is
that all that much different than Asimov's use of energy and dimensions
in The Gods Themselves? Or, is the use of diagrams and "psychic machines"
in Schmitz's Hub setting, or Harrison's Deathworld (first book) setting
not treating psychic powers as "a matter of engineering" (once you grant
hyperspace or psychon particles, or whatever)?
So anyways. Yes, they *aren't* *often* *mixed*, in a practical,
catalogue-of-what-is sense. But they *could* mix well, and sometimes
there's a bit of diffusion at the boundary; enough to quote a handful
of instances anyways, and in some of those, it's done pretty well.
Perhaps that's a fiddly distinction to make,
but it seems to me to be worth keeping in mind.
: pho...@ofb.net (Damien Sullivan)
: Fair enough, especially since those typically are about the extreme forms.
: At the other end, having cognitive science accomplish nothing of interest
: seems about as implausible.
Hear, hear. Both ends, I mean.
> "Michael S. Schiffer" <msch...@condor.depaul.edu> wrote:
>...
>>could presumably choose the gametes they wanted to use. It's
>>true that they don't do as much optimizing as they probably
>>could.
> Barrayar has haut genetic material as of _DI_; think they'll use
> it? Barrayar, passing up the chance to create supersoldiers?
> Okay, this is Gregor's Barrayar, and he's Cordelia's, and they
> have that mutant phobia...
One thing that runs thoroughly through Bujold's work is the
importance of other inputs than genes to getting something
worthwhile at the end of development. Even super-soldiers need
parents, and the record in that universe of producing and retaining
loyalty in children one is using as tools isn't great. (One of
Mark's rescuees did go back of her own accord, but the opposite
result is a recurring theme of the books, from the Quaddies to Taura
to Mark.) The haut have a solution, but Miles hasn't IIRC seen much
of Star Creche childrearing, and wouldn't want to duplicate it if he
had.
Producing super-soldiers who are actually loyal to Barrayar *and*
more useful to it than the equivalent expenditure's worth of trained
Barrayaran recruits is no small trick, even if Gregor or ImpSec
doesn't care about the moral issues or the induced mutations. Add
the *source* of these particular mutie genes, and I think Barrayar
is safe from the temptation this generation.
>...
> It's not just nobles; other people use the service. Which
> raises a question, how old is the oldest person in the Nexus?
> Hey Lois, do your worldbuilding...
> These people can also clone and replace any organ; one wonders
> why the body-theft is even necessary. I suppose refreshing the
> whole vascular or lymph system would be harder. But it also
> seems like they'd have head-in-a-jar capability, with what we've
> seen even from Barrayar's medicine for Aral;
I'd guess that there are limits to how many emergency systems can be
stacked on top of one another. (Miles' cryorevival was clearly
pushing things.) So you can live without most of your body for a
little while, or without some of it for a long while (or
indefinitely, given a replacement) but at some point you're trying
to run an unwieldy and failure-prone set of substitutes 24/7 long
enough to get the cloned organs ready, and then there's the matter
of getting them attached, up, and running without killing the
patient.
To some extent, this is an extension of our own medical technology
experience, where you can replace a heart for a while, but not yet
indefinitely. Not because it's an engineering impossibility, but
because working the bugs out takes more years than we've yet been
able to devote to the problem. I don't know if that could delay
artificial bodies with better-than-human lifespans by a thousand
years, but it could push them back a century or more without straining my
credulity much, and I don't know how far the enabling technologies
are. (I'm guessing cloning organs is a lot closer, but unless you
can keep a full set on hand you still need to be able to stay alive
somehow while your next liver is grown.)
> is there any life
> support function they couldn't provide mechanically
> indefinitely? Of course, the brain may conk out. Though stuff
> like Illyan's memory chip seemed not that far from
> uploading-quality tech.
I'm not sure about that-- it had a problem-prone interface, and it's
not at all clear that it has the necessary processing power. (He
clearly had a heck of a search engine to usefully sort through all
that data, but where that is versus emulating human intelligence
isn't clear to me.) Of course, it's also decades-old tech by the
current books, but the fact that it's not standard issue on Beta
Colony suggests that the interface bugs may still not be worked out.
> Which gets back to the humanist model; I suspect Lois doesn't
> want to deal with 400 year old people or AI.
Agreed.
> After 1000 years
> things are better than now, but not too much better. Enough to
> offer hope and inspiration, not enough to disturb, except in
> selected places.
That analysis sounds about right to me. I'm not sure that the
Scottish school is likely to be any more right about the future, but
they are generally making a more comprehensive attempt at
extrapolation.
Though they have their own arbitrary limitations-- the Culture's
deliberate choice to have 500 year lifespans and humanoid body plans
and the like for their mainstream citizens, the Zones, the
Eschaton's rule about messing with causality, etc. (Or in the Fall
Revolution books, just killing a few quillion AIs whenever there's
any danger they might transcend.)
And while I admire the speculative bravura of the "Scots", I
generally *enjoy* the "humanist" books better. Vinge is the only
one of the (near-)Singularity school whose books I consistently like
as well as admire, and he generally keeps the camera focused on
fairly human-level affairs. (I've enjoyed some books or stories by
most of the others, but for me they're fairly rarefied air to
breathe for too long.)
You ever going to fix your newsreader, Wayne?
>: pho...@ofb.net (Damien Sullivan)
>: Singularity could happen -- or maybe it can't, but it's largely an
>: engineering issue, vs. psychic powers not existing.
>
>I don't disagree here, but I note the boundary can get blurry.
Indeed. Consider one of Mr. Sullivan's earlier examples: Julian May's
Galactic Milieu. If you sweep aside the theology for a moment,
everything else is "explained" (in the story-internal sense, not
necessarily to the readers) by the "Dynamic Field Theory" (some sort
of super-GUT attributed to a Chinese physicist). Not only the
traditional "skiffy" stuff -- force-fields, the Inertialess Drive, the
Subspace Translator, and the time-gate itself -- but all of the "psi"
stuff is explained by the same theory. (It's an interesting halfway
point between mechanism and vitalism: some species just evolved the
ability to manipulate these fields with their minds, but how that
could be, biologically speaking, is swept under the rug, since it's
immaterial to the story.) There's no reason machines can't be built
to do the same thing, and in fact they are, although for some reason
(unknown to me) they are either very uncommon or proscribed.
>So anyways. Yes, they *aren't* *often* *mixed*, in a practical,
>catalogue-of-what-is sense. But they *could* mix well, and sometimes
>there's a bit of diffusion at the boundary; enough to quote a handful
>of instances anyways, and in some of those, it's done pretty well.
Indeed.
-GAWollman
--
Garrett A. Wollman | As the Constitution endures, persons in every
wol...@csail.mit.edu | generation can invoke its principles in their own
Opinions not those | search for greater freedom.
of MIT or CSAIL. | - A. Kennedy, Lawrence v. Texas, 539 U.S. 558 (2003)
> Which brings us to the root of it all in John Campbell's rejection of
> Vinge's sequel to "Bookworm, Run!", with a human enhanced like the
> chimp of the original story: "You can't tell this story, and neither
> can anyone else." This has cousins in Niven's own writings on
> superintelligent beings; the basic idea is that you can't plausibly
> write about someone much smarter than yourself, let alone a society
> of them. Sure, they might still have human motivations, but can you
> portray their thoughts, their actions, the technologies or social
> arrangements they would produce? Can you understand them?
Seems to me that Vinge came along well after Olaf Stapledon's _Odd
John_ and _Star Maker_ were published.
And I believe Vinge had read Poul Anderson's _Brain Wave_.
--
Dan Goodman
Journal http://www.livejournal.com/users/dsgood/
Clutterers Anonymous unofficial community
http://www.livejournal.com/community/clutterers_anon/
Decluttering http://decluttering.blogspot.com
Predictions and Politics http://dsgood.blogspot.com
All political parties die at last of swallowing their own lies.
John Arbuthnot (1667-1735), Scottish writer, physician.
< looks at headers, looks at upthread headers >
Um. Which bug?
What he said.
Except I think we've passed through a couple of small-s singularities
and might be in the middle of one (urbanization).
And I hate Batman's Cave Singularity stories, where everyone
runs around looking for the singularity.
--
http://www.cic.gc.ca/english/immigrate/
http://www.livejournal.com/users/james_nicoll
The one where it keeps on putting the invalid distribution "world"
into every article you post.
> Except I think we've passed through a couple of small-s singularities
>and might be in the middle of one (urbanization).
>
> And I hate Batman's Cave Singularity stories, where everyone
>runs around looking for the singularity.
Examples?
-xx- Damien X-)
And it's universal: part of the plot involves aliens who also
want to control singularities.
Oh, that bug. That was fixed as of Fri Nov 4 00:07:40 UTC 2005.
thr...@sheol.org (Wayne Throop) wrote:
>: far" isn't the same as that of "that effect doesn't even exist". The
>: Singularity could happen -- or maybe it can't, but it's largely an
>: engineering issue, vs. psychic powers not existing.
>
>I don't disagree here, but I note the boundary can get blurry.
>For example, Flux and Anchor. Or Bowman in the 2001 sequels. And,
Haven't read Flux. Bowman... off to the side, but I'd say closer to the
technology end, perhaps because I've read the book where Clarke talks about
the monolith's makers and how they'd transferred their minds into ships before
learning how to be stable vortices of energy or something. And in 2010 (book)
Bowman sets off a nuke to recharge his batteries. So it's more like really
advanced engineering, with a nod to energy conservation (and someone I just
realized was too lazy to go play in the Sun.)
>wrt the "psychic powers not existing", that can get very blurry, too,
>with hyperspace and exotic particles and fifth forces and so on and on
>being used in hard-seeming SF. Doubly blurry when you consider, oh, say,
>McGill Feighan's access to the "energy dimension" to do his tricks... is
>that all that much different than Asimov's use of energy and dimensions
Haven't read Feighan, don't remember much of Gods Themselves, do remember the
Foundation series 'psychic' powers. Explicitly described as manipulating
electromagnetic fields very very subtly. Somewhat less plausible than
Drexlerian nanotech but again feeling different to me than classic psychic
powers. I'll grant it's blurry though, especially when Gaia comes in. But
hey, Brain Eater.
Also, psychic transcendence tends to be some big power ramp-up, not the
intelligence->technology cycle of Vinge.
I think you've simply read more of this cross-boundary stuff. For me psychic
powers and technological imitations thereof are quite distinct. Though I'll
mention one fuzzy piece you haven't: Lord of Light. "Mutations" leading to
various electromagnetic manipulation powers, and the aliens who'd pulled the
same trick as Bowman's masters, but all seeming pretty foofy.
Hmm. Gradation:
emergent psychic powers proper: X-Men, Liaden
emergent psychic powers with vague explanation: Zelazny, Julian May, with some
limited mechanical access to the same forces or powers
psychic (direct brain effect) powers with a technological cause: Asimov's
telepathy, where there's no hint that this is a justification of psychic
claims today;
things that look psychic but are backed up by a whole toolchain and the
handwaving is engineering more than scientific: Singularity, implant
telepathy, robotic "telekinesis".
I still see a much bigger jump to the last one (ignoring the Singularities
where everyone just vanishes, which I think tend to be plot devices anyway).
Enough that I never made a connection before, and am not particularly swayed
by it now. Similar effects up to a point, but as Banks said the point of
technology is wish fulfillment. Hard SF tries to show how the wishes get
fulfilled. "Dynamic field theory" ain't it.
-xx- Damien X-)