Re: [Cosmic Engineers] OK let's upload

Bryan Bishop

Aug 13, 2009, 1:35:10 PM
to cosmic-e...@googlegroups.com, kan...@gmail.com, diytrans...@googlegroups.com, Open Manufacturing
On Thu, Aug 13, 2009 at 3:47 AM, Giulio Prisco wrote:
> Is there any realistic approach to uploading that could be prototyped
> in, say, 10 years, and made operational in 20?

There are a lot of details in the brain that we can extract and cram
into a computer, if that's what you're asking. The idea would be to
record as much information as possible about a brain that is put
through the uploader's toolchain. Undoubtedly we will know better ways
to take notes about a brain as we progress. As it is now, however,
keeping whatever details we can is better than keeping none at all. So is it still
uploading if there's this gap of twenty years from the time the brain
tissue is processed to the time that there's a computational model
capable of making use of the information?

- Bryan
http://heybryan.org/
1 512 203 0507

Paul D. Fernhout

Aug 13, 2009, 7:04:39 PM
to openmanu...@googlegroups.com

What are you going to do about natively evolved digital piranha? Some huge
clunky human-analogue digitized brain requiring massive amounts of run-time
and tons of storage space just to maintain basic operations is going to
be a sitting duck for hordes of small digital piranha IMHO, where a trillion
such piranha can exist in the space occupied by one upload. And any move to
adapt uploads to be more resilient in that landscape will seriously break
the continuity with the original meatspace version.

See the work of Tom Ray on Tierra, or later things:
http://en.wikipedia.org/wiki/Tierra_(computer_simulation)
"The basic Tierra model has been used to experimentally explore in silico
the basic processes of evolutionary and ecological dynamics. Processes such
as the dynamics of punctuated equilibrium, host-parasite co-evolution and
density-dependent natural selection are amenable to investigation within the
Tierra framework. A notable difference to more conventional models of
evolutionary computation, such as genetic algorithms is that there is no
explicit, or exogenous fitness function built into the model. Often in such
models there is the notion of a function being "optimized"; in the case of
Tierra, the fitness function is endogenous: there is simply survival and death."
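
To make that endogenous-vs-exogenous point concrete, here is a toy sketch
in Python (mine, purely illustrative -- a caricature, not Tierra's actual
machine-code soup). Nothing in it ever evaluates a fitness function; there
is only replication, copy error, and a reaper, yet selection for cheap,
fast replicators happens anyway:
"""
import random

random.seed(0)
SOUP_SIZE = 200   # fixed memory "soup" capacity
MUTATION = 0.1    # chance of a copy error per replication

# A "genome" here is just its copy cost: fewer instructions to copy
# means faster replication. (Real Tierra genomes are small programs;
# this is a deliberate caricature.)
soup = [{"cost": 80, "age": 0, "progress": 0} for _ in range(20)]

for step in range(5000):
    for org in list(soup):
        org["age"] += 1
        org["progress"] += 1
        if org["progress"] >= org["cost"]:   # finished copying itself
            org["progress"] = 0
            child_cost = org["cost"]
            if random.random() < MUTATION:   # copy error: +/- a few instructions
                child_cost = max(10, child_cost + random.choice([-5, 5]))
            soup.append({"cost": child_cost, "age": 0, "progress": 0})
    while len(soup) > SOUP_SIZE:             # the "reaper": oldest die first
        soup.remove(max(soup, key=lambda o: o["age"]))

# No fitness function was ever evaluated, yet mean copy cost tends to
# drift well below the ancestral 80 -- survival and death did the work.
print(sum(o["cost"] for o in soup) / len(soup))
"""
That, in miniature, is why I worry about the piranhas: nobody has to define
"better" for them; they only have to copy themselves cheaper and faster than
a bulky upload does.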

There is a lot to be said for meatspace, even if memes are infectious. :-)

Anyway, this issue is one that, say, Ray Kurzweil and I disagree on, and I
tried to get him to see the issue, but to no avail. It is hand waved away
with essentially "super AIs we make that do our bidding as willing and happy
digital slaves will protect us (after they are built by highly competitive
companies using secretive and proprietary techniques)". Yeah, right. :-(

From stuff I originally sent to Ray Kurzweil that Bryan put up:
http://heybryan.org/fernhout/kurzweil1.html
"""
I think if Kurzweil studied more evolutionary biology from the
professional literature, he would not have a rosy view of things like,
say, uploading your brain in a digital world. It is, frankly, naive to
think that an uploaded brain derived from duplicating a clunky chemical
architecture would compete with the populations of digital organisms which
might evolve native to a digital context. In short, those uploaded brains
are going to be eaten alive by digital piranha that overwrite their
computer memory and take over their runtime processor cycles. It has taken
evolution billions of years to lead up to the mammalian immune system, yet
Kurzweil seems to think an effective digital immune system or nanobot
immune system can be developed in a few years. More likely the result will
be ages of chaos and suffering until co-evolutionary trends emerge. But
that would be in line with the other phase changes and their effect on
most human lives when militaristic agricultural bureaucracies emerged, or
when industrial empire building emerged. These evolutionary factors exist
even for the current elite if they uploaded themselves. So, the only
alternative may be to avoid building such a competitive landscape into the
digital world as much as possible -- and likely that will involve
reducing the competitiveness of those building the digital world driven
by short-term greed. It is almost as if either we all go together into
the digital world with a reasonable level of peace and prosperity, or no one
goes for long. And time is what we will need in a digital world to adapt to
it -- perhaps even as little as one second gained from a peaceful digital
world might be all it takes to ensure humanity's survival of the singularity.
And that perhaps one second of peaceful runtime then needs to be bought
now with a lot of hard work making the world a better place for more people.
So, this would suggest more caution approaching a singularity. And it
would suggest the ultimate folly of maintaining R&D systems motivated by
short-term greed to develop the technology leading up to it. But it is
exactly such a policy of creating copyrights and patents via greed
(the so-called "free market" where paradoxically nothing is free) that
Kurzweil exhorts us to expand. And it is likely here where his own success
most betrays him -- where the tragedy of the approach to the singularity
he promotes will result from his being blinded by his very great previous
economic success. If anything, the research leading up to the singularity
should be done out of love and joy and humor and compassion -- with as
little greed about it as possible IMHO. But somehow Kurzweil suggests the
same processes that brought us the Enron collapse and war profiteering
through the destruction of the cradle of civilization in Iraq are the same
ones to bring humanity safely through the singularity. One pundit, I
forget who, suggested the problem with the US cinema and TV was that there
were not enough tragedies produced for it -- not enough cautionary tales
to help us avert such tragic disasters from our own limitations and pride.
"""

Note that the only reason the people of "The Walking People" (a book by Paula
Underwood recording an oral history of some Native Americans) survived the
singularity of crossing the Bering Strait was that they had a "sorrowful man"
from a previously destroyed tribe to give them warning, from which they could
take various precautions. These precautions included making a network of ropes
to hold them together and yet give them freedom of action, each person carrying
some food and water to see them through a multi-day passage, and each
contributing to the group's progress but also saved by the group when even the
strongest was washed away and could be pulled back in. The book:
http://www.amazon.com/Walking-People-Native-American-History/dp/1879678101

The thing about this aspect of transhumanism in particular (uploading, and
eternal life on this material plane of existence in digital or even physical
form) is that it makes several assumptions about consciousness, the nature
of reality or levels of reality, and the meaning of death, which may or may
*not* be true -- but transhumanism seems very reluctant to acknowledge them
as assumptions or that they are assumptions that may be (or may not be)
forever beyond our understanding or ability to validate as they relate to
what some Native Americans call "the great mystery".

So, between a denial or ignorance of evolutionary uncertainty, and a denial
or ignorance of theological uncertainty, I feel brain uploading is on pretty
shaky ground, even if technically achievable someday.

--Paul Fernhout
http://www.pdfernhout.net/

Bryan Bishop

Aug 13, 2009, 7:21:53 PM
to openmanu...@googlegroups.com, kan...@gmail.com
On Thu, Aug 13, 2009 at 6:04 PM, Paul D. Fernhout wrote:
> So, between a denial or ignorance of evolutionary uncertainty, and a denial
> or ignorance of theological uncertainty, I feel brain uploading is on pretty
> shaky ground, even if technically achievable someday.

I think of it more as some sort of domain squatting .. the "brain
uploading enthusiasts" haven't actually done it because they don't
know how, or maybe because their ideas about the brain and "the mind"
are possibly wrong. When in reality, maybe it would be more productive
to focus on some more technical projects, like brain implants, or
tissue cultures, etc., instead of focusing on vagueries. Some time ago
I had some plans to start doing some neuron stem cell culture
experiments, but now I'm not sure why I'm not already doing that.

Eugen Leitl

Aug 14, 2009, 4:12:46 AM
to openmanu...@googlegroups.com
On Thu, Aug 13, 2009 at 06:21:53PM -0500, Bryan Bishop wrote:

> I think of it more as some sort of domain squatting .. the "brain
> uploading enthusiasts" haven't actually done it because they don't

Walk, don't run to

http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

> know how, or maybe because their ideas about the brain and "the mind"
> are possibly wrong. When in reality, maybe it would be more productive

Have you ever seen a volume scanner suitable for unattended use and
capable of scanning a mouse brain in a couple of years?

Well, until you have that, you don't even have the data set
to work with. (Incidentally, we're within touching distance of
such scanners due to the work of people whose names nobody knows outside
of the field.)

> to focus on some more technical projects, like brain implants, or
> tissue cultures, etc., instead of focusing on vagueries. Some time ago
> I had some plans to start doing some neuron stem cell culture
> experiments, but now I'm not sure why I'm not already doing that.

You can't write about doing it and do it at the same time. Especially
since, once you're seriously doing it, you'll find out you don't have
time for anything else.

--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE

Giulio Prisco (2nd email)

Aug 14, 2009, 4:31:00 AM
to openmanu...@googlegroups.com
On Fri, Aug 14, 2009 at 1:04 AM, Paul D.
Fernhout<pdfer...@kurtz-fernhout.com> wrote:

> What are you going to do about natively evolved digital piranha? Some huge
> clunky human-analogue digitized brain requiring massive amounts of run-time
> and tons of storage space just to maintain basic operations is going to
> be a sitting duck for hordes of small digital piranha IMHO, where a trillion
> such piranha can exist in the space occupied by one upload. And any move to
> adapt uploads to be more resilient in that landscape will seriously break
> the continuity with the original meatspace version.

Paul's considerations make a lot of sense. I am less worried because I
think most uploads will choose to equip themselves (or merge) with AI
subsystems and coprocessors. I don't think this will seriously break
the continuity with the original meatspace version: most of us have
gone through some quite radical change several times in our lives,
without breaking the continuity.

> The thing about this aspect of transhumanism in particular (uploading, and
> eternal life on this material plane of existence in digital or even physical
> form) is that it makes several assumptions about consciousness, the nature
> of reality or levels of reality, and the meaning of death, which may or may
> *not* be true -- but transhumanism seems very reluctant to acknowledge them
> as assumptions or that they are assumptions that may be (or may not be)
> forever beyond our understanding or ability to validate as they relate to
> what some Native Americans call "the great mystery".

I do acknowledge these assumptions as assumptions. Actually most
transhumanists do, even if it may not always be clear from published
writings. These are assumptions yes, but these are the assumptions I
choose to make on the basis of what I know and think.

> So, between a denial or ignorance of evolutionary uncertainty, and a denial
> or ignorance of theological uncertainty, I feel brain uploading is on pretty
> shaky ground, even if technically achievable someday.
>
> --Paul Fernhout
> http://www.pdfernhout.net/



--
Giulio Prisco
http://cosmeng.org/index.php/Giulio_Prisco
aka Eschatoon Magic
http://cosmeng.org/index.php/Eschatoon

Paul D. Fernhout

Aug 14, 2009, 10:48:57 AM
to openmanu...@googlegroups.com
Giulio Prisco (2nd email) wrote:
> On Fri, Aug 14, 2009 at 1:04 AM, Paul D.
> Fernhout<pdfer...@kurtz-fernhout.com> wrote:
>
>> What are you going to do about natively evolved digital piranha? Some huge
>> clunky human-analogue digitized brain requiring massive amounts of run-time
>> and tons of storage space just to maintain basic operations is going to
>> be a sitting duck for hordes of small digital piranha IMHO, where a trillion
>> such piranha can exist in the space occupied by one upload. And any move to
>> adapt uploads to be more resilient in that landscape will seriously break
>> the continuity with the original meatspace version.
>
> Paul's considerations make a lot of sense. I am less worried because I
> think most uploads will choose to equip themselves (or merge) with AI
> subsystems and coprocessors. I don't think this will seriously break
> the continuity with the original meatspace version: most of us have
> gone through some quite radical change several times in our lives,
> without breaking the continuity.

Another assumption. Maybe right, maybe not.

>> The thing about this aspect of transhumanism in particular (uploading, and
>> eternal life on this material plane of existence in digital or even physical
>> form) is that it makes several assumptions about consciousness, the nature
>> of reality or levels of reality, and the meaning of death, which may or may
>> *not* be true -- but transhumanism seems very reluctant to acknowledge them
>> as assumptions or that they are assumptions that may be (or may not be)
>> forever beyond our understanding or ability to validate as they relate to
>> what some Native Americans call "the great mystery".
>
> I do acknowledge these assumptions as assumptions. Actually most
> transhumanists do, even if it may not always be clear from published
> writings. These are assumptions yes, but these are the assumptions I
> choose to make on the basis of what I know and think.

I'm glad you can admit that they are assumptions. Maybe right, maybe not.
And, they are very materialistic in the sense of being focused on the
current material plane of existence, which may be all there is, or may not be.

OK, so we can then go on to consider the next dynamic of transhumanism and
someone like Ray Kurzweil, in light of acknowledging these are assumptions
and there may be alternative views on these, and that guessing wrong on
these assumptions may spell doom for humanity (replicators out of control,
zombie programs destroying the life-affirming aspects of the network, etc.).

Like the Wizard Cob in the third book of "The Earthsea Trilogy", Ray
Kurzweil says he wants to live forever. Like the Wizard Cob, he is willing
to go to extreme lengths to achieve this, including greatly accelerating
technical progress in a particular ideological way (using patents and
copyrights and trade secrets and amassing wealth with an increasing
rich-poor divide in a very competition-celebrating way). Like with the
Wizard Cob in Earthsea, this may drain the land of all life and vitality,
turn other Wizards into zombies, as well as leave Ray Kurzweil himself stuck
in a twilight land between life and death. Ray Kurzweil may also be opening
a portal to things he does not understand well and which he cannot deal
with on his own. From Wikipedia:
http://en.wikipedia.org/wiki/Earthsea
"A strong theme of the stories is the connection between power and
responsibility. There is often a Taoist message: 'good' wizardry tries to be
in harmony with the world, while 'bad' wizardry, such as necromancy, can
lead to an upsetting of the "balance" and threaten catastrophe. While the
dragons are more powerful, they act instinctively to preserve the balance.
Only humans can and do threaten it. In The Farthest Shore, Cob seeks
immortality regardless of the consequences and opens a breach between life
and death."

Unlike in Earthsea, it may not be possible for just one other Wizard to
sacrifice his or her power to fix this. It may be, truly, unfixable, with
humanity consumed by, say, unintelligent self-replicating robotic roaches
launched by a zombie upload.

Ray Kurzweil is, in that sense, engaging in a modern form of "necromancy",
and basing the entire future of humankind on those things you admit are
*assumptions* and doing it as quickly as possible with little time for
reflection or course correction or a cooperative approach. All of past
collective human wisdom suggests that will not end well.
http://en.wikipedia.org/wiki/Necromancy

One thing I sent to Phil Bowermaster about that:
"""
On your next show on the Singularity, here is something to think about in
making up questions about it. This was a possible talking point I had for
the Abundance show at the end but there was not room for it as it ran late
(in part from my going on so long a couple times, sorry).
IMHO, to an extent, the Singularity can be seen as a mirror, like the
Mirror of Erised in Harry Potter. A mirror usually shows us who we are more
than it shows what is beyond it. Or in the case of the Mirror of
Erised, the mirror shows us what we most want.
http://www.wisegeek.com/what-is-the-mirror-of-erised.htm
The creepy movie you mentioned in another episode, Coraline, was a bit like
that, with the duplicate doll used to find out what a child wanted so they
could be enticed to cross a singularity into another world with a monster.
In the case of, say, Ray Kurzweil, when he looks at the mirror of the
Singularity, it may show him a man who has been previously heavily rewarded
for being successful with patents and copyrights and creating artificial
scarcity in a competitive market. And further it may show him things he
wants -- like life extension. Don't get me wrong, Ray Kurzweil's a great
guy, has helped many people with his work in reading for the blind and music
and life extension, and he understands the problem of exponential change. As
business owners go, he's done well ethically in our current economic
paradigm. He's like Coraline, a good person. :-) But for solutions, he may
be partially stuck in a scarcity paradigm of libertarian/republican
capitalism he was rewarded for, so that's what he seems to see that we need
to do more of to get more good stuff, even if the Singularity and the
potential for universal abundance (or using the same technology for
destruction) is changing the nature of economics and society. So, while Ray
Kurzweil does acknowledge free and open source, what he has advocated and
emphasized in his "The Singularity is Near" is instead to create more
artificial scarcity with patents and copyrights and competition (including
competitive AIs) in order to accelerate the singularity and control it.
He also seems to have a morbid fear of dying. There was a major plot
theme in Ursula K. Le Guin's third book in the Wizard of Earthsea trilogy,
where a Wizard trying to live forever unbalances the entire world in that
story. Is Ray Kurzweil such a Wizard? Also, since he does not seem to have
studied much biological evolution (I was in a PhD program in that for a
time), some obvious things like computationally evolved digital piranhas
eating his uploaded bulky persona for space and runtime are not so obvious
to him. Some letters I wrote to him years ago on these issues (Bryan Bishop,
who I had sent them to, put them up on his site):
http://www.heybryan.org/fernhout/
(With my sharing those with Bryan, I was trying to help turn a transhumanist
a bit more into a humanist, and I hope I succeeded a tiny bit. :-)
One key quote from there in one letter to Ray Kurzweil:
http://www.heybryan.org/fernhout/kurzweil2.html
"The reason I mention Hogan's book is that while I think you are correct
that open source and proprietary approaches to copyrights and patents
currently coexist in our society, and perhaps could indefinitely were the
state of affairs to remain as it is, in a world changing in the way you
suggest towards a singularity, I do not think they can -- or even should,
given the needs you outline for a positive singularity for humanity.
Consider what you write on page 339-340 about "Intellectual Property". You
write: "Clearly, existing or new business models that allow for the
creation of valuable intellectual property (IP) need to be protected,
otherwise the supply of IP will itself be threatened." Yet, does that
really make sense in the future you propose? Where there would be no
material wants as everyone would have a Star Trek replicator making "Tea,
Earl Grey, hot" or whatever they want? How would we then have a shortage
of new digital materials when people have so much spare time to make them
suddenly? When no one needs to be told what to do? The big problem with
free and open source software today is mainly people not having enough
time to do it because of other pressing commercial activities they need to
do to keep their families fed -- not lack of motivation."
Also, like a mirror, the path that light takes coming out may depend on
the path going in. IMHO we should get our social and economic house in order
with global abundance (basic income, universal health care, globally) before
we go too far into any Singularity, to make it more likely for all to do
well beyond any Singularity. But, many older and more affluent people, like
Ray Kurzweil, want to make a Singularity happen as soon as possible because
of the promise of personal immortality. So, rather than get immortality by
investing in the next generation, as humans have done for all of history (to
the best of our knowledge), older people like Ray Kurzweil are investing in
health care and technology to try to live forever, while the young often
lose out. And a Singularity that comes at us faster leaves less time to
bring about general prosperity and education and insight into these sorts of
issues globally.
The story I mentioned before, The Walking People by Paula Underwood, is a
story of Native Americans who, near the beginning, the story says, went
through a Singularity 10,000 years ago in crossing the wave-washed Bering
Strait from Asia to North America. After hearing about a previous
tribe who were washed away crossing individually, they tied themselves
together with a long rope so no one got lost in the waves by being washed
off rocks. They had the power of the group to cross safely and to keep even
the strongest from being washed away; while at the same time they all had
some freedom of movement on the rope network, and they carried their own
food and water because there was no place to gather during the crossing.
There is a parallel here to the internet and advanced groupware in some
ways. So, I feel that is one good paradigm for approaching this next
Singularity -- to go forward as a group, using better tools for group
activities (my free and open source Pointrel Social Semantic Desktop is a
step in that direction, but a weak one, admittedly, and there are several
others, NEPOMUK, Google Wave, Groove, etc.). This idea also links with
Christine Peterson's suggestion in one show of software to allow groups to
cooperate in creating mutual security. Again though, it may take more time
to do this.
So, anyway, you might want to think of this "mirror" issue in forming
questions. :-)
I know you try to avoid politics and conflict on your show, so perhaps
these themes would not be appropriate as any sort of confrontation, but you
may still want to be aware of them. And certainly Ray is getting some
pushback from many areas about some of his ideas (some unfair), since even
as the exponential nature of many changes is undeniable, it is not clear how
or when any exponentials might form S-curves. While we do not know exactly
how any actions now will truly affect any Singularity, there still seem some
commonsense basics about fostering various virtues that might have a
positive effect (or, at least, it often seems the best we can try). Ray
Kurzweil has tried very hard to think about how to have a "positive"
singularity, but my concern is that he views it from that narrow perspective
of being an incredible success in the current paradigm. :-) Actually, in
general, listening to many of your shows, I see a techno-libertarian
perspective strongly present, and yet there are many other ways to view
issues of identity, community, virtue, and so on, but this, in general, is a
large problem with these themes, that theologians, philosophers, poetry
professors, and so on are not yet very engaged with them -- but overall,
your show will help with that, so that's a good thing.
"""

One link on the pushback:
"The Singularity Backlash"
http://memebox.com/futureblogger/show/1788-the-singularity-backlash

Anyway, these basic issues of identity, theology, virtue, consciousness, the
meanings within life, the meaning of death in the context of life, and so
on, are all deep profound issues. And we are rushing to make what seem like
irrevocable decisions about them as a society based on a
bunch of assumptions, often driven by a fear of death, without enough
reflection. Or perhaps too much reflection of just a very few people. :-)

Anyway, I've been thinking about Singularity issues since I was around Hans
Moravec's lab in the 1980s when he was writing "Mind Children". My concern
for those issues came *before* my interest in stuff like open manufacturing,
which grew in part out of a desire to make that sort of thing (a Singularity
or Singularities) happen as well as possible given a starting point of
humanistic values and global happiness in a life-affirming way. So, I see
things like open manufacturing and global abundance and self-replicating
space habitats and so on as a positive way to approach any Singularity.
Granted, values may change in the future as circumstances change, but it
seems foolish to me to go into a Singularity without being the best people
we can be right now, as a global society. It seems foolish to become
"transhuman" without really understanding what it means to be "human" or
seeing what we can do in this form to make a happy, life-affirming, virtuous
universe. Rather than escape our current society into transhumanism, perhaps
we can make it better for humans first?

Granted, that philosophy itself has loads of assumptions in it, as well as
difficult choices of values (immortality for some who might otherwise die
if things are not rushed). Words like "happy", "virtuous",
"life-affirming", "positive" and so on are also just by themselves loaded
with assumptions. Even the idea that life is a good thing and not just a
plague of suffering is an assumption. So, I'm guilty of making assumptions
too. Still, one issue is, are these assumptions in accord with thousands of
years of the best of human thinking on these issues? It's certainly a fair
ground for debate and discussion IMHO.

And from what we can see of it, the universe (or metaverse, or whatever) is
an old and big and mysterious thing. It is such a funny idea that it still
fits inside a human skull. :-) And that we need to make such important
decisions about it, hopefully with grace and good humor. :-)

Some related ideas on this from the 1920s:
http://www.cscs.umich.edu/~crshalizi/Bernal/flesh/
"""
Starting, as Mr. J. B. S. Haldane so convincingly predicts, in an
ectogenetic factory, man will have anything from sixty to a hundred and
twenty years of larval, unspecialized existence - surely enough to satisfy
the advocates of a natural life. In this stage he need not be cursed by the
age of science and mechanism, but can occupy his time (without the
conscience of wasting it) in dancing, poetry and love-making, and perhaps
incidentally take part in the reproductive activity. Then he will leave the
body whose potentialities he should have sufficiently explored.
"""

Although, again, what makes sense as a next stage after that depends on a
host of assumptions about "life, the universe, and everything". :-) That
book makes one set of assumptions in the next few paragraphs.

One major conflict is in pursuing options beyond that "larval form" in a way
that may make unviable this "larval form" of human existence (or even *any*
form of existence if rushed things go badly wrong). Even with "success" at
stuff like uploading, certain options may even leave the developing mind
stuck on a lesser plane of existence like the individual or collective
Jupiter Brain (or even a galaxy spanning collective mind) that seeks to live
to the heat death of the universe, and hopefully not in a "Marvin the
Paranoid Android" depressed way, given "Descartes' Error" of not seeing how
emotions underpin thought. :-( What we see now in our society in several
ways are the old investing in a risky chance at immortality (or just life
extension) on this plane of existence instead of focusing on building a good
world for the next generation of young (even the Medicare debate is related).

Someday, we may have the abundance of resources and information to do both
more safely, to have a happy world now, and have all sorts of options for
the old. We could be working for abundance for all, to set the stage for a
positive singularity, even if it were to be a Clarkian "Childhood's End"
version. As I learned from being a Saturday cashier in a non-profit
health-foods store, a thief trying to get something for the wrong price
(like a bottle stuck in a brown bag labeled "carrots"), may try to get out
of there as fast as possible by hurrying the cashier along somehow, perhaps,
in this case, stealing the youth or even entire existence of our species. We
can become better guarding cashiers, or we can build a world where we let
people take what they want and need if it is so trivially easy to make
stuff like material goods. And so people like Ray Kurzweil call for ever
more speed onto the Singularity, and claim it will solve all our Earthly
problems, without many having had time to reflect on it, or even think of
better ways to look at the situation than competition and deception and
possession. To his credit, Kurzweil has tried to get people thinking about
the Singularity, but he also has tried to get people to think about the
Singularity in a *specific* way, one linked to a push for his own
immortality, and one rooted in his own individual capitalistic success in a
specific pyramidal social order.
"The Mythology of Wealth "
http://www.conceptualguerilla.com/?q=node/402

Maybe the Singularity will solve all our Earthly problems in some positive
way, but what is the rush (especially compared to just focusing on abundance
for all as a first step)? Is it a rush specifically to cheat death for some
few specific individuals? Is it a rush which may end badly or not, but puts
the entire world at risk on a handful of assumptions? Is it a rush of a few
people who have never asked the people of the world what they might want? Is
it a rush that ignores billions of people with traditions and perspectives
on the universe that may go back thousands of years and who may have
important things to say about all this?

Anyway, we all make assumptions. We all make decisions that affect the
whole. We need a bigger conversation about these themes as a society IMHO.

--Paul Fernhout
http://www.pdfernhout.net/

Giulio Prisco

Aug 15, 2009, 5:06:38 AM
to Open Manufacturing
On Aug 14, 4:48 pm, "Paul D. Fernhout" <pdfernh...@kurtz-fernhout.com>
wrote:

> Another assumption. Maybe right, maybe not.

> I'm glad you can admit that they are assumptions. Maybe right, maybe not.
> And, they are very materialistic in the sense of being focused on the
> current material plane of existence, which may be all there is, or may not be...

Thanks Paul,

Well, it is impossible to make sense of the world without making
assumptions. The assumptions we make reflect our deepest convictions.

I am not at all against "spiritual" worldviews and I am a firm
believer in Shakespeare's "There are more things in Heaven and
Earth...". To me the universe is a huge and mysterious playground of
which our understanding has just begun to scratch the surface. But my
deepest assumption (which is an assumption, but one that I want to
hold and that is compatible with current knowledge) is that the universe is,
ultimately, fully understandable by more and more refined physical
laws, and that everything in the universe is, in principle, fully
understandable and can be reverse-engineered, tweaked, modified,
repaired and improved once the necessary knowledge and know-how have
been achieved.

ben lipkowitz

Aug 16, 2009, 12:44:04 AM
to openmanu...@googlegroups.com
On Fri, 14 Aug 2009, Paul D. Fernhout wrote:
>>> What are you going to do about natively evolved digital piranha?

this is really more analogous to bacteria, and the answer is to use an
immune system. unless the piranha can convince you it really "is" the
original person (in which case it's a standard philosophical conundrum),
you can always just delete everything in memory and restore from tape.
next time use better security. lesson learned, hopefully.
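
to be concrete about what "restore from tape" assumes, here's a minimal
python sketch (file names and the hash are hypothetical): the restore is
only as trustworthy as an integrity check against a hash recorded
out-of-band, before any compromise.

import hashlib
import shutil

def sha256_of(path):
    # hash the snapshot in chunks so huge images don't exhaust memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def restore(snapshot, target, known_good_hash):
    # refuse to restore a snapshot whose hash no longer matches the one
    # recorded out-of-band when the snapshot was made
    if sha256_of(snapshot) != known_good_hash:
        raise ValueError("snapshot fails integrity check; do not restore")
    shutil.copyfile(snapshot, target)

# restore("upload-2009-08-16.img", "/mnt/upload0", "ab12...")  # hypothetical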

>>> The thing about this aspect of transhumanism in particular (uploading, and
>>> eternal life on this material plane of existence in digital or even physical
>>> form) is that it makes several assumptions about consciousness, the nature
>>> of reality or levels of reality, and the meaning of death, which may or may
>>> *not* be true -- but transhumanism seems very reluctant to acknowledge them
>>> as assumptions or that they are assumptions that may be (or may not be)
>>> forever beyond our understanding or ability to validate as they relate to
>>> what some Native Americans call "the great mystery".

>>> <kurzweil is an evil necromancer who wants to turn everyone into zombies>

> (With my sharing those with Bryan, I was trying to help turn a transhumanist
> a bit more into a humanist, and I hope I succeeded a tiny bit. :-)

I just want to point out that humanism is defined as "an approach to
attaining truth or virtue by appealing to human reason rather than
supernatural aid", so, appealing to "the great mystery" or "evil zombies"
is a strange way to go about promoting humanism. Don't you think it makes
more sense that we would have a better chance of understanding what it's
all about if we really had enough time to think, discover, share, and
experience more? Instead of suffering a miserable death for no discernable
reason..

I was trying to explain to Bryan that all philosophy (or religion if you
will) is simply scientific theory which we have no way to test. In ancient
times, the nature of the stars was a religious theory precisely because we
had no way of measuring them. In the future we may develop tools to
measure and test issues currently considered purely philosophical, such as
the nature of consciousness and identity, what really goes on inside a
black hole, the sound of one hand clapping, etc. What is "good" or "bad"
are often choices based on our limited experience in the world. A chance
to experience many lives' worth of experience (albeit simulated, shared,
or somehow "unnatural") would illuminate one's choices better than a
single data point of "natural" life, and perhaps could clear up some old
philosophical arguments once and for all. Telling people to get over
themselves and die with dignity after one life is just more of the same
old apologist "rationalizing" of death, which soon will be or is now
obsolete.

I think the choice is obvious.


-fenn

Paul D. Fernhout

Aug 16, 2009, 2:50:09 AM
to openmanu...@googlegroups.com
ben lipkowitz wrote:
> On Fri, 14 Aug 2009, Paul D. Fernhout wrote:
>>>> What are you going to do about natively evolved digital piranha?
>
> this is really more analogous to bacteria, and the answer is to use an
> immune system. unless the piranha can convince you it really "is" the
> original person (in which case it's a standard philosophical conundrum),
> you can always just delete everything in memory and restore from tape.
> next time use better security. lesson learned, hopefully.

The human immune system, and its precursors, has been evolving over tens of
millions of years. In one year or so, humans are going to be able to create
some perfect digital immune system for uploads?

Lots of people have found their current system to be contaminated and then
found the contamination on backups going back a long time.

Also, you seem to be dismissing the idea the entire network could become
infested with a "blight".
http://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep
"The expedition's precautions are insufficient, however, and their facility,
known as High Lab, is compromised by a dormant super-intelligent entity
similar to the Powers that develop in the Transcend, yet far more stable and
able to exert influence in the Beyond. The entity, initially called the
"Straumli Perversion" by the civilizations of the Beyond but later referred
to as "the Blight," persuades the team to create machines and activate
programs they do not understand nor can guard against. Slowly, the Blight
awakens and takes over the expedition. This intelligence is able to
infiltrate and control computer systems and biological beings, quickly
infecting and taking over whole civilizations in the High Beyond."

Use better security? How about having a skull? :-) And how about putting off
developing Blights until humanity has advanced further in some important
ways (compassion, wisdom, interconnectedness, love, foresightedness, etc.)?

>>>> The thing about this aspect of transhumanism in particular (uploading, and
>>>> eternal life on this material plane of existence in digital or even physical
>>>> form) is that it makes several assumptions about consciousness, the nature
>>>> of reality or levels of reality, and the meaning of death, which may or may
>>>> *not* be true -- but transhumanism seems very reluctant to acknowledge them
>>>> as assumptions or that they are assumptions that may be (or may not be)
>>>> forever beyond our understanding or ability to validate as they relate to
>>>> what some Native Americans call "the great mystery".
>
>>>> <kurzweil is an evil necromancer who wants to turn everyone into zombies>

Note, I did not say Kurzweil *wanted* to turn everyone into zombies. What I
said was that it might happen unintentionally as everything got rushed so
this one older person can maybe live forever as an upload (or maybe just be
a zombie).

>> (With my sharing those with Bryan, I was trying to help turn a transhumanist
>> a bit more into a humanist, and I hope I succeeded a tiny bit. :-)

> I just want to point out that humanism is defined as "an approach to
> attaining truth or virtue by appealing to human reason rather than
> supernatural aid", so, appealing to "the great mystery" or "evil zombies"
> is a strange way to go about promoting humanism.

I meant "humanist" in the context of supporting the traditionally valued
human experiences for the last 100,000 years. Stuff like laughing, crying,
loving, raising children, enjoying dogs, experiencing nature, contemplating
the nature of consciousness and the universe, and so on. But sure, reason is
part of it. :-) But if we are splitting hairs, see also: :-)
http://en.wikipedia.org/wiki/Humanism_(disambiguation)

I especially meant "humanist" in the precursor sense to "transhumanist", in,
say, the 1920s JD Bernal sense of moving beyond human:
http://www.cscs.umich.edu/~crshalizi/Bernal/flesh/
"The next stage might be compared to that of a chrysalis, a complicated and
rather unpleasant process of transforming the already existing organs and
grafting on all the new sensory and motor mechanisms. There would follow a
period of re-education in which he would grow to understand the functioning
of his new sensory organs and practise the manipulation of his new motor
mechanism. Finally, he would emerge as a completely effective,
mentally-directed mechanism, and set about the tasks appropriate to his new
capacities. But this is by no means the end of his development, although it
marks his last great metamorphosis. Apart from such mental development as
his increased faculties will demand from him, he will be physically plastic
in a way quite transcending the capacities of untransformed humanity. Should
he need a new sense organ or have a new mechanism to operate, he will have
undifferentiated nerve connections to attach to them, and will be able to
extend indefinitely his possible sensations and actions by using
successively different end-organs."

Consider this example of how, in many ways, all of us in the 21st century
have become "subhuman" by earlier cultures' standards (or, say, current
Australian Aboriginal standards):
"Crocodile Dundee: Is That a Knife"
http://www.youtube.com/watch?v=01NHcTM5IA4
"""
Mick, give him your wallet.
What for?
He's got a knife.
That's not a knife. ... (Pulls out his huge knife.) That's a knife.
(Crocodile Dundee slices up the kid's jacket, not touching the kid.)
Just kids having fun. You all right?
I'm always all right when I'm with you, Dundee.
"""

Perhaps those things you see now only in the Olympics (a human picking up 500
pounds) or in a circus (a human balancing a dozen cups) or read about in books
(people with prodigious memories, especially for stories) were commonplace
in ages past, as people had a more integrated mind-body link and more free
time to explore their human potential.

Also, in times in the past, perhaps people might have had more happiness and
would not have been in such a hurry to leave all that behind, to escape this
dysfunctional mess of a society we have created around ourselves, and to
instead leap into a screen (something I'm all too guilty of myself). From:
"Amusing Ourselves to Death: Public Discourse in the Age of Show Business"
http://en.wikipedia.org/wiki/Amusing_Ourselves_to_Death
"The book originated with Postman's delivering a talk to the Frankfurt Book
Fair in 1984. He was participating in a panel on Orwell's 1984 and the
contemporary world. In the introduction to his book Postman said that
reality was reflected more by Aldous Huxley's Brave New World where the
public was oppressed by pleasure than Orwell's 1984 where they were
oppressed by pain. ... Postman distinguishes the Orwellian vision of the
future, in which totalitarian governments seize individual rights, from the
vision offered by Aldous Huxley in Brave New World, where people medicate
themselves into bliss and voluntarily sacrifice their rights. Postman sees
television's entertainment value as a "soma" for the contemporary world, and
he sees contemporary mankind surrendering its rights in exchange for
entertainment. ... The essential premise of the book, which Postman extends
to the rest of his argument(s), is that "form excludes the content," that
is, a particular medium can only sustain a particular level of ideas.
Rational argument, an integral component of print typography, cannot be
conveyed through the medium of television because "its form excludes the
content." Because of this shortcoming, politics and religion get diluted,
and "news of the day" is turned into a commodity. The presentation most
often de-emphasizes quality; all data becomes burdened to the far-reaching
need for entertainment. ..."

What other aspects of humanity might a world of (by old standards) subhumans
leaping into a radically new and untested and fantastical-sounding
transhumanism leave behind? See, for example:
http://en.wikipedia.org/wiki/The_Machine_Stops
"The story describes a world in which almost all humans have lost the
ability to live on the surface of the Earth, and most of the human
population lives below ground. Each individual lives in isolation in a
standard 'cell', with all bodily and spiritual needs met by the omnipotent,
global Machine. Travel is permitted but unpopular and rarely necessary. The
entire population communicates through a kind of instant messaging/video
conferencing machine called the speaking apparatus, with which they conduct
their only activity, the sharing of ideas and knowledge with each other. The
two main characters, Vashti and her son Kuno, live on opposite sides of the
world. Vashti is content with her life, which she spends producing and
endlessly discussing secondhand 'ideas', as do most inhabitants of the
world. Kuno, however, is a sensualist and a rebel. He is able to persuade a
reluctant Vashti to endure the journey (and the resultant unwelcome personal
interaction) to his cell. There, he tells Vashti of his disenchantment with
the sanitized, mechanical world. He confides to her that he has visited the
surface of the Earth without permission, and without the life support
apparatus supposedly required to endure the toxic outer air, and that he saw
other humans living outside the world of the Machine. He goes on to say that
the Machine recaptured him, and that he has been threatened with
'Homelessness', that is, expulsion from the underground environment and
presumed death. Vashti, however, dismisses her son's concerns as dangerous
madness and returns to her part of the world. ..."

Transhumanism makes promises about a world that may never be. It might be.
But it may never be. But it may also destroy the now in the process of
trying, and failing, to become something that might not even be that good.
I'm not saying it will be a disaster, but the potential exists for it to be
a problem.

So, why rush it? Well, so Kurzweil can upload before he dies, of course. But
should we risk our species and biosphere for that? Should Kurzweil even
*ask* us to risk our species and biosphere by rushing into something, if he
really cared about the welfare of humanity?

I'll admit it is a complex set of issues. But how often is it really framed
that way? Even now, you are trying to frame it as a very simple equation --
speedy upload good, delay bad.

> Don't you think it makes
> more sense that we would have a better chance of understanding what it's
> all about if we really had enough time to think, discover, share, and
> experience more? Instead of suffering a miserable death for no discernable
> reason..

Maybe. Except much of the transhumanism "full speed ahead" agenda puts the
entire human race at risk of "suffering a miserable death for no discernable
reason". :-(

Which is it? Do transhumanists care about living or don't they? Or is it a
complex subject? :-)

I have seen so many computer systems do unexpected things in my life. Even
the bug reporting tools are sometimes buggy. :-)
http://www.javalobby.org/java/forums/m91817273.html
"... As it stands now, and has been for almost 10 (ten) years, the bug
parade is crap. Crap is a hard word but the word I wanted to write would
probably get censored. I don't know if I even have to recite the problems,
most everyone that have posted a few bugs knows what I'm talking about.
Check out the before mentioned thread for examples. ..."

So, who have you gotten to write this bug-free software to run your uploads?
And also to implement this perfect network immune system on the first try?
Somebody that good could make a fortune now, perhaps. :-)
Oh, that's right, the friendly willing trustworthy slave AIs will write it
all for us. Yeah.

Or the brain augmented humans. See:
"Revisions"
http://sg1.epguides.info/?ID=1376
"The team encounters a planet with a society that has lived for centuries
within a computer-controlled environment within a bubble. Outside, the rest
of the world is a toxic wasteland. O'Neill, Teal'c, Sam and Daniel meet the
inhabitants, who live very simply (think medieval) but are all linked to the
computer through little devices on their head. The inhabitants are friendly,
and possibly quite willing to do some trading, so Daniel and Sam go off with
a young married couple, while Jack and Teal'c stay with a father and his
son. Nevin, the son, begins to hero-worship Jack and wants to be just like
him. SG-1 offers them a better life via relocation to another world, but the
inhabitants are oddly reluctant to leave a world (albeit a very small and
restrictive one) where everything is provided to them by the computer link.
Some of the folk are amenable to leaving, but suddenly change their minds.
While Daniel discovers shocking evidence in the townsfolks' library, the rest
of SG-1 realizes that the computer is altering the inhabitants' memories,
possibly to the point that the townsfolk could pose a threat to SG-1."

It has taken hundreds of millions of years of evolution to get the brain to
work as well as it does -- and even then it messes up. :-) What are you going
to do when, five years after uploading, you realize something got left behind,
or some key thing got messed up for everyone? Or maybe you don't notice because
virtualization just "stops" for everyone and goes blue?
http://en.wikipedia.org/wiki/Blue_Screen_of_Death

Or what if the system just decides it would rather use your runtime for
something else more important to it, like calculating Pi to a huge number of
digits? I mean, it's not like it's killed you if the data is still there,
right? It could always run you later. Sure, you'd be out of sync with
everyone else. So best to just pause all the uploads for a while; they
would never notice, as they would not be running. Then, maybe someday, they
might all be run again.

And that's not even talking about issues of fragmentation, coalescence,
overwriting, duplication, deletion, intentional tampering, hardware errors,
data transmission losses, and so on.

All fixable you say? Sure, maybe, someday. You'd put in redundant systems.
You'd build special hardware. You'd build in keepalive timers. Endless
stuff. Maybe it would work. Maybe it would not.
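
Even the simplest of those mechanisms smuggles in assumptions. Here is a toy
keepalive watchdog in Python (illustrative only; the "restore" is just a
print). It already has to decide how much silence counts as "dead", and it
cannot tell a crashed upload from a paused or merely slow one:
"""
import threading
import time

KEEPALIVE_S = 2.0                      # assumed heartbeat deadline
last_beat = time.monotonic()
lock = threading.Lock()

def heartbeat():
    # the running upload calls this periodically to prove it is alive
    global last_beat
    with lock:
        last_beat = time.monotonic()

def watchdog():
    # supervisor: if the upload goes silent past the deadline, "restart" it
    while True:
        time.sleep(KEEPALIVE_S / 2)
        with lock:
            silent = time.monotonic() - last_beat
        if silent > KEEPALIVE_S:
            print("no heartbeat; restoring from last checkpoint (placeholder)")
            heartbeat()                # pretend the restore worked

threading.Thread(target=watchdog, daemon=True).start()
for _ in range(5):                     # stand-in for the upload doing its thing
    heartbeat()
    time.sleep(1)
"""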

Why were you uploading again? Security? Convenience? Immortality? Sure you
are going to get those? And on the first try?

> I was trying to explain to Bryan that all philosophy (or religion if you
> will) is simply scientific theory which we have no way to test.

Well, if you can't test it, it may be a theory, but it's not "scientific".
Which is OK by me. It may be the case that not all knowledge or significance
is testable. We should be careful not to reduce life to scientism.
http://en.wikipedia.org/wiki/Scientism
"The term scientism is used to describe the view that natural science has
authority over all other interpretations of life, such as philosophical,
religious, mythical, spiritual, or humanistic explanations, and over other
fields of inquiry, such as the social sciences. The term is used by social
scientists like Hayek[1] or Karl Popper to describe what they see as the
underlying attitudes and beliefs common to many scientists."

Still, having a sense of the variety of possible untestable theories gives
us a way to avoid being captured too much by any one self-serving dogma.

Still, at some point, every life rests on assumptions, including assumptions
about the integrity of memory or processing units.

> In ancient
> times, the nature of the stars was a religious theory precisely because we
> had no way of measuring them.

Even now, stars may not be what we think they are.

And even then, people noticed lots of things about stars with the unaided
eye. Stars were so much more visible then to most people around. They may
have been much more real then to many people than they are now.

> In the future we may develop tools to
> measure and test issues currently considered purely philosophical, such as
> the nature of consciousness and identity, what really goes on inside a
> black hole, the sound of one hand clapping, etc.

Perhaps. Or we may also develop new paradigms, metaphors, and language for
thinking about such things or talking about them.

> What is "good" or "bad"
> are often choices based on our limited experience in the world.

True, but it may also depend on our assumptions, our values, the context,
and our estimations about the future.

> A chance
> to experience many lives' worth of experience (albeit simulated, shared,
> or somehow "unnatural") would illuminate one's choices better than a
> single data point of "natural" life, and perhaps could clear up some old
> philosophical arguments once and for all.

Perhaps. And certainly that is one value of computer simulation games.

But, how do you know that is not what is going on now? Maybe you would just
mess everything up? :-)
http://www.simulation-argument.com/
http://en.wikipedia.org/wiki/Reincarnation

There might also (in some faiths) be only one eternal soul that is
experiencing life from many perspectives.

On the other hand, maybe messing things up is OK? :-)

All lots of assumptions.

> Telling people to get over
> themselves and die with dignity after one life is just more of the same
> old apologist "rationalizing" of death, which soon will be or is now
> obsolete.

As I said, all of this involves various sets of assumptions (including about
the cosmic value of making one human life immortal, like Kurzweil's).

But what seems pretty clear to me is that it is very risky for the *entire*
human species to make machines more intelligent than humans, as well as to
do many of the other things Kurzweil proposes on the way to immortality
(like strong nanotech).

And it is even more risky to do them faster.

But as with nuclear weapons, Kurzweil presents this as TINA (there is no
alternative, because everyone else is competing too).

Well, how about transcending arms races for a change?

Also, it seems pretty clear we can all have a great life by human standards
just with the technology we already have (like through "Advanced Automation
for Space Missions"). We could have quadrillions of humans in the solar
system, and billions of times more than that in the galaxy. And maybe
billions of times more than that someday in the universe.
"[p2p-research] Earth's carrying capacity and Catton"
http://listcultures.org/pipermail/p2presearch_listcultures.org/2009-August/004123.html
"""
Again, I'm all for solving our problems on Earth. But, based on just what we
see, the carrying capacity for humans in the universe (or human derived
forms) is clearly about a million billion billion quadrillion people (plus
biosphere, so billions of quadrillions of Earths' worth of plants, animals,
insects, fungi, and so on). :-) And that's just with technology that we have
*now* to use solar energy, not with theoretical stuff being able to tap
zero-point energy that might allow us to make artificial stars anywhere we
want in space:
http://en.wikipedia.org/wiki/Zero-point_energy
(Which has military implications if not used in a spirit of abundance.)
"""

Are you willing to gamble that future all away for all your quadrillions of
potential human descendants on speeding up a singularity (especially through
increasing competition) so Kurzweil can upload before he dies?

So, who has more at stake? Humans? Or would-be transhumans?

Or maybe life is a plague of suffering and it would be better not to have
all that at all?
http://en.wikipedia.org/wiki/Hamlet
http://en.wikipedia.org/wiki/To_be,_or_not_to_be

> I think the choice is obvious.

Not to me, for the above reasons and more. :-)

--Paul Fernhout
http://www.pdfernhout.net/

ben lipkowitz

Aug 16, 2009, 5:55:47 PM
to openmanu...@googlegroups.com
On Sun, 16 Aug 2009, Paul D. Fernhout wrote:
> The human immune system, and its precursors, has been evolving over tens of
> millions of years. In one year or so, humans are going to be able to create
> some perfect digital immune system for uploads?

In one year we won't be seeing any evolved new replicating intelligent
programs, for the same reason physical evolution takes billions of years:
evolution is slow. Spam bots, computer viruses, and worms are all
human-made, despite there now being over a billion computers humming away
on the internet, and despite most of them having laughable security and
immunity, even by today's standards.

> Use better security? How about having a skull? :-) And how about putting off
> developing Blights until humanity has advanced further in some important
> ways (compassion, wisdom, interconnectedness, love, foresightedness, etc.)?

Having a skull hasn't protected humans so far from infection by irrational
memes such as religion, nationalism, deathism, or mainstream economics.
What makes you so sure that the science of meme manipulation won't
continue to progress, with (even more) dangerous consequences, in a
misguided attempt at instilling "wisdom" in baseline humans?


> Note, I did not say Kurzweil *wanted* to turn everyone into zombies. What I
> said was that it might happen unintentionally as everything got rushed so
> this one older person can maybe live forever as an upload (or maybe just be
> a zombie).

> I meant "humanist" in the context of supporting the traditionally valued
> human experiences for the last 100,000 years. Stuff like laughing, crying,
> loving, raising children, enjoying dogs, experiencing nature, contemplating
> the nature of consciousness and the universe, and so on. But sure, reason is
> part of it. :-) But if we are splitting hairs, see also: :-)
> http://en.wikipedia.org/wiki/Humanism_(disambiguation)

If you read the linked articles you'll find that none of them are even
remotely similar to your idea of Meat-ism. No offense intended.

> I especially meant "humanist" in the precursor sense to "transhumanist", in,
> say, the 1920s JD Bernal sense of moving beyond human:
> http://www.cscs.umich.edu/~crshalizi/Bernal/flesh/

Not sure why you included this quote. He certainly isn't championing
laughing, crying, and playing with dogs. The only link I can come up with
is that it's entirely biological, which makes sense: Bernal had never
heard of the idea of nanotech and I doubt he understood the potential of
digital computers.

> Perhaps those things you see now only in the Olympics (human picking up 500
> pounds) or in a circus (human balancing a dozen cups) or read about in books
> (people with prodigious memories, especially for stories) were commonplace
> in ages past, as people had a more integrated mind-body link and more free
> time to explore their human potential.

Well, so what? I'm sure people were also better at riding horses, darning
socks, butchering hogs, and playing pinochle.

> Also, in times in the past, perhaps people might have had more happiness and
> would not have been in such a hurry to leave all that behind, to escape this
> dysfunctional mess of a society we have created around ourselves, and to
> instead leap into a screen (something I'm all too guilty of myself).

Earlier you were celebrating how people in the 1800s read massive amounts
of fiction, or how people told more stories. So, which is it? I think
humanity being free to pursue a broader range of activities would lead to
both more stories available for all, and a richer life experience for
those who choose to pursue the new adventures made possible by technology.

> Transhumanism makes promises about a world that may never be. It might be.
> But it may never be. But it may also destroy the now in the process of
> trying, and failing, to become something that might not even be that good.
> I'm not saying it will be a disaster, but the potential exists for it to be
> a problem.

Don't cling to the past in an attempt to save "the now". The past is gone;
those people lived their lives and had their chance. Don't force me to
live a purposeless life repeating their dead patterns over and over, just
because technological progress "might" lead to change.

> So, why rush it? Well, so Kurzweil can upload before he dies, of course. But
> should we risk our species and biosphere for that? Should Kurzweil even
> *ask* us to risk our species and biosphere by rushing into something, if he
> really cared about the welfare of humanity?

I really don't like how you're lumping all transhumanists into a big
cheerleading session for Kurzweil.

From my perspective, we're far behind where we ought to be already. There
ought to be an industrial infrastructure in space by now. There ought to
be clean, safe fusion and fission power by now. There ought to be
holographic memory modules, personal robots, electric self-driving cars,
and self-stocking grocery stores. A simple deterministic legal system
based on first principles, guaranteed food and health care, and an economy
based on wealth creation rather than corruption and abuse of power. Is
that too much to ask? When we abandon progress, and instead stick to
business as usual, we lose all of this.

Untested AI driving dangerous vehicles might be a problem, or it might
not. A space development program might lead us to fabulous untold material
wealth, or it might not. Transferring funding from established fusion
research like ITER to "fringe" approaches might be a flop, or it might
not. Basic income might make everyone lazy and watch TV all day. You never
know until you try.

> I'll admit it is a complex set of issues. But how often is it really framed
> that way? Even now, you are trying to frame it as a very simple equation --
> speedy upload good, delay bad.

If a delay means I'll never have the choice to make for myself, then yes,
that is bad. Freedom is good. Do I really have to clarify this?

> Maybe. Except much of the transhumanism "full speed ahead" agenda puts the
> entire human race at risk of "suffering a miserable death for no discernible
> reason". :-(

Certain death on one hand, uncertain death on the other. And, btw, if we
all die fighting for something better, it won't have been for no reason.

> Which is it? Do transhumanists care about living or don't they? Or is it a
> complex subject? :-)

I expect flesh-worshippers to stagnantly refuse to confer "living-ness"
upon synthetic organisms even long after we have a working definition for
"life", and at the moment we do not. It's really the same old copernican
the-universe-revolves-around-me idea, which historically has consistently
proven wrong. Feeling important in the universe is very seductive, at
least to minds that evolved that way.

Do transhumanists care about life? Some of us exhibit a level of
compassion that even hardcore vegans would find laughable:

from http://www.hedweb.com/object32.htm
"the horrors of a living world where babies get eaten alive by predators,
creatures die of hunger, thirst, and cold, etc, must count as morally
urgent on all but the most Disneyfied conception of Mother Nature."

from http://www.utilitarian-essays.com/suffering-nature.html
"The number of wild animals vastly exceeds that of animals on factory
farms, in laboratories, or kept as pets. Therefore, supporters of animal
welfare should consider focusing their efforts on the massive amounts of
suffering that occur in nature.
...
There is thus urgency to the project of building a benevolent
superintelligence that can undertake cosmic rescue missions to other
planets for the purpose of reducing the suffering of creatures too limited
to know how to apply technological approaches for doing so themselves."

> Why were you uploading again? Security? Convenience? Immortality? Sure you
> are going to get those? And on the first try?

No, actually. My main motivations are improved communication with others,
a better understanding of the world and our place in it, and the freedom
to transcend my as-built human limitations.

>> I was trying to explain to Bryan that all philosophy (or religion if you
>> will) is simply scientific theory which we have no way to test.
>
> Well, if you can't test it, it may be a theory, but it's not "scientific".
> Which is OK by me. It may be the case that not all knowledge or significance
> is testable. We should be careful not to reduce life to scientism.

Congratulations Paul, you are the first to publicly accuse me of being a
scientist. :) Though I really like to think of myself more as an engineer.

I used to think that life as it exists is near-perfect engineering,
because that was what I had been led to believe by popular culture.
How could I, a mere fallible human, and an undergrad, hope to improve on
millions of years of evolution?

However, when I actually started to study it, using results provided by
objective measures such as DNA sequencing or thermodynamics, it turned out
that life is often quite inefficient and even ugly. It was even obvious
how certain things might be improved, for example by increasing the
absorption spectrum of chlorophyll, or removing endogenous retroviruses.

So, I might have had some knowledge I thought was significant, but when
tested against objective information, it turned out to be wrong. Arguing
that we should abandon objectivity because not all knowledge is objective
is mental masturbation. It helps no-one. What subjective knowledge has
ever helped anyone? Where is your compassion?

> Still, at some point, every life rests on assumptions, including assumptions
> about the integrity of memory or processing units.

Sure, even mathematics is not provably self-consistent. But it seems to
work for the most part.

> There might also (in some faiths) be only one eternal soul that is
> experiencing life from many perspectives.

> <simulation argument, etc>

Even so, it doesn't help _us_ and is undeniably indifferent and cruel.
How can you support something like that? Even if we mess up its
experience-gathering process, supposing the universe really is a
simulation and really does have a purpose, it would be in this overseer's
best interest to intervene, so that shouldn't prevent us from trying.

> But what seems pretty clear to me is that it is very risky for the *entire*
> human species to make machines more intelligent than humans, as well as to
> do many of the other things Kurzweil proposes on the way to immortality
> (like strong nanotech).
>
> And it is even more risky to do them faster.

How do you know this? Maybe it's actually more dangerous to wait around
until we run out of land for the growing population, or get into a world
war over oil, or realize that the American economy has collapsed because
of inaction and we are under the boot of a Chinese dictatorship which
results in nuclear Armageddon? There are any number of silly scenarios for
what *might* go wrong, both for and against progress. However, some things
we know *will* go wrong without taking action. Now I feel like I'm doing a
homework assignment on the proactionary principle.

"Assess risks and opportunities according to available science, not
popular perception."

Unfortunately, when it comes to nanotech and AI, the scientists are as
caught up in popular perceptions as everyone else.

> Again, I'm all for solving our problems on Earth. But, based on just what we
> see, the carrying capacity for humans in the universe (or human derived
> forms) is clearly about a million billion billion quadrillion people (plus
> biosphere, so billions of quadrillions of Earths' worth of plants, animals,
> insects, fungi, and so on. :-) And that's just with technology that we have

This is where I think utilitarianism goes wrong. By this logic, we should
invest all our effort into creating massive numbers of barely sentient
beings, living lives barely worth living. A galaxy of chickens in cages.

It's not about quantity, it's about quality. Given a choice, I'd rather
let the asteroid drop on the overpopulated, unhappy world with no future
than on the utopian paradise where people's lives have meaning. Maybe
even something good would come out of blowing it up, like what
happened in "The Skills of Xanadu".

> Are you willing to gamble that future all away for all your quadrillions of
> potential human descendants on speeding up a singularity

Yes, because humans aren't the center of the universe.

> (especially through increasing competition) so Kurzweil can upload
> before he dies?

No, which is why I'm here on Open Manufacturing rather than writing up yet
another software patent over at Microsoft.

> So, who has more at stake? Humans? Or would-be transhumans?

Would-be transhumans. Baseline humans are likely to blow themselves up one
way or another because of built-in evolutionary drives and misperceptions,
so it almost doesn't matter what we do to increase existential risk in the
near term if it also increases the chances of stopping violence in the
long term.

>> I think the choice is obvious.
>
> Not to me, for the above reasons and more. :-)

I doubt I've managed to convince you but hopefully my reasoning is a bit
more clear.

-fenn

Samantha Atkins

unread,
Aug 16, 2009, 6:10:12 PM8/16/09
to openmanu...@googlegroups.com
On Fri, Aug 14, 2009 at 7:48 AM, Paul D. Fernhout<pdfer...@kurtz-fernhout.com> wrote:

We have a singular lack of evidence that there is some non-material
plane of existence. So we can say a bit more than that we merely
assume there isn't one. What we can say is that we have insufficient
evidence to warrant assuming that there is an immaterial plane.

>
> OK, so we can then go on to consider the next dynamic of transhumanism and
> someone like Ray Kurzweil, in light of acknowledging these are assumptions
> and there may be alternative views on these, and that guessing wrong on
> these assumptions may spell doom for humanity (replicators out of control,
> zombie programs destroying the life-affirming aspects of the network, etc.).
>

Or we may be doomed given our current problem set and currently
limited intelligence if we don't create massively greater than human
intelligence relatively quickly.

> Like the Wizard Cob in the third book of "The Earthsea Trilogy", Ray
> Kurzweil says he wants to live forever. Like the Wizard Cob, he is willing
> to go to extreme lengths to achieve this, including greatly accelerating
> technical progress in a particular ideological way (using patents and
> copyrights and trade secrets and amassing wealth with an increasing
> rich-poor divide in a very competition-celebrating way).

The desire to live forever is very strong in humanity. It is a good
goal. Which means of getting us there quickest and with the best value
across the board is a separable question.


> Like with the
> Wizard Cob in Earthsea, this may drain the land of all life and vitality,
> turn other Wizards into zombies, as well as leave Ray Kurzweil himself stuck
> in a twilight land between life and death. Ray Kurzweil may also be opening
> a portal to things he does not understand well and which he can not deal
> with on his own. From Wikipedia:

This is carrying a fairy tale a bit too far. We open portals all the
time in our technological civilization. We have no way to not do so
and still progress.


>    http://en.wikipedia.org/wiki/Earthsea
> "A strong theme of the stories is the connection between power and
> responsibility. There is often a Taoist message: 'good' wizardry tries to be
> in harmony with the world,

It depends a great deal on what you mean by "harmony with the world".
I don't think it is harmonious to leave every single human being to
become increasingly decrepit and lose all they love about life and
then die. Just because something is now does not mean it should be
declared part of "the world" one should be "in harmony" with.

> while 'bad' wizardry, such as necromancy, can
> lead to an upsetting of the "balance" and threaten catastrophe.

We all now accept the personal and societal catastrophe of the aging
and death of every single human being.

>While the
> dragons are more powerful, they act instinctively to preserve the balance.
> Only humans can and do threaten it. In The Farthest Shore, Cob seeks
> immortality regardless of the consequences and opens a breach between life
> and death."
>
> Unlike Earthsea, it may not be possible for just one other Wizard to
> sacrifice his or her power to fix this. It may be, truly, unfixable, with
> humanity consumed by, say, unintelligent self-replicating robotic roaches
> launched by a zombie upload.
>

Not in the least likely.


> Ray Kurzweil is, in that sense, engaging in a modern form of "necromancy",
> and basing the entire future of humankind on those things you admit are
> *assumptions* and doing it as quickly as possible with little time for
> reflection or course correction or a cooperative approach. All of past
> collective human wisdom suggests that will not end well.
>   http://en.wikipedia.org/wiki/Necromancy
>

Calling one of the best innovators and most public dreamers of the age
a necromancer is something that calls all your words into question.

- samantha

Paul D. Fernhout

unread,
Aug 16, 2009, 11:56:48 PM8/16/09
to openmanu...@googlegroups.com
ben lipkowitz wrote:
> It's not about quantity, it's about quality. Given a choice, I'd rather
> let the asteroid drop on the overpopulated, unhappy world with no future
> than on the utopian paradise where people's lives have meaning. Maybe
> even something good would come out of blowing it up, like what
> happened in "The Skills of Xanadu".

Ben, you make many interesting points, thanks for your perspective.

One thing about meaning in lives -- meaning is often something people need
to construct for themselves.

"No one else can give me the meaning of my life; it is something I alone can
make. The meaning is not something predetermined which simply unfolds; I
help both to create it and to discover it, and this is a continuing process,
not a once-and-for-all. (Milton Mayeroff, from On Caring)"

People can build meaning wherever they are, including by growing new roots
of meaning in various ways (relationships, emotions, experiences, etc.),
where the roots support other constructed meanings. The construction of
meaning is part of the nature of healthy mind, as I see it right now.

--Paul Fernhout
http://www.pdfernhout.net/

Paul D. Fernhout

unread,
Aug 17, 2009, 1:11:10 AM8/17/09
to openmanu...@googlegroups.com
Samantha Atkins wrote:
> On Fri, Aug 14, 2009 at 7:48 AM, Paul D.
> Fernhout<pdfer...@kurtz-fernhout.com> wrote:
>> Like the Wizard Cob in the third book of "The Earthsea Trilogy", Ray
>> Kurzweil says he wants to live forever. Like the Wizard Cob, he is willing
>> to go to extreme lengths to achieve this, including greatly accelerating
>> technical progress in a particular ideological way (using patents and
>> copyrights and trade secrets and amassing wealth with an increasing
>> rich-poor divide in a very competition-celebrating way).
>
> The desire to live forever is very strong in humanity. It is a good
> goal. Which means of getting us there quickest and with the best value
> across the board is a separable question.

I have a distant relative (who I met once) who for a time was the oldest
(documented) person in the world. At the end, she said it felt like it was
time to go:
http://en.wikipedia.org/wiki/Hendrikje_van_Andel-Schipper
"""
Several days prior to her death she told the director of her nursing home,
Johan Beijering, that "It's been nice, but the man upstairs says it's time
to go". According to Beijering she felt grateful for her long life, but
being the oldest person in the world for over a year was long enough.
"""

Note that the oldest woman in the world saying that is completely different
from, say, a teenager feeling that way.

There are certain longstanding patterns in human life. We may want to change
them, but it is not so simple as you suggest IMHO.

>> Like with the
>> Wizard Cob in Earthsea, this may drain the land of all life and vitality,
>> turn other Wizards into zombies, as well as leave Ray Kurzweil himself stuck
>> in a twilight land between life and death. Ray Kurzweil may also be opening
>> a portal to things he does not understand well and which he can not deal
>> with on his own. From Wikipedia:
>
> This is carrying a fairy tale a bit too far. We open portals all the
> time in our technological civilization. We have no way to not do so
> and still progress.

Portals to superintelligences? Every day?

Why the need to "progress"?

And progress in which direction? Who decides? And who gets the benefits of
the "progress" and who pays the costs or takes on most of the risk?

>> http://en.wikipedia.org/wiki/Earthsea
>> "A strong theme of the stories is the connection between power and
>> responsibility. There is often a Taoist message: 'good' wizardry tries to be
>> in harmony with the world,
>
> It depends a great deal on what you mean by "harmony with the world".
> I don't think it is harmonious to leave every single human being to
> become increasingly decrepit and lose all they love about life and
> then die. Just because something is now does not mean it should be
> declared part of "the world" one should be "in harmony" with.

Sure, I can see how someone can have that perspective. Words like "harmony"
are ambiguous, I agree.

But I don't think "Henny" felt the way you describe, that everything she
loved about life had been lost. I think she loved her life to the end.

Maybe she loved it more because it had an end? I don't know that for sure,
of course. She obviously had a zest for life.

Granted, one might argue over whether she felt it was time to go because of
some suffering or loss of function or some disconnection with a world
changing around her, or if she felt it was time to go for other reasons
(perhaps spiritual ones), or if there was some combination. I don't know the
answer to that either.

>> while 'bad' wizardry, such as necromancy, can
>> lead to an upsetting of the "balance" and threaten catastrophe.
>
> We all now accept the personal and societal catastrophe of the aging
> and death of every single human being.

There is a Native American story where humanity chooses between always being
the same and living forever and having children and dying. Humanity chose to
have children. So, what you call a "catastrophe", others can see as change
and diversity.

Perhaps, it is true, we will figure out a way to have both long (or eternal)
lives and an interesting universe. But it is more of a challenge than one
might think.

For example, if people are immortal (except accidents), will they ever take
risks?

Sure, "they'll just use a backup". Is that really the answer to every such
question? People are often really annoyed when they have to go back to old
versions of their desktop computer's configuration and data that may be days
or weeks out of date. A lot of that hinges on the idea of "identity", which
is a good thing to get poets' and playwrights' and other artists' advice on.

>> While the
>> dragons are more powerful, they act instinctively to preserve the balance.
>> Only humans can and do threaten it. In The Farthest Shore, Cob seeks
>> immortality regardless of the consequences and opens a breach between life
>> and death."
>>
>> Unlike Earthsea, it may not be possible for just one other Wizard to
>> sacrifice his or her power to fix this. It may be, truly, unfixable, with
>> humanity consumed by, say, unintelligent self-replicating robotic roaches
>> launched by a zombie upload.
>>
>
> Not in the least likely.

It is common for teenagers to create or launch destructive computer viruses
for various reasons.

Robots in the roach size-range are around now, and making them
self-replicating will be possible at some point.

So, how can it not be likely somebody will try this? Are we prepared? Have
we minimized the risk by building a happier society?

Technology is an amplifier. The more we crank up the volume, the more we
need to be sure about the signal going in and where the sound is going when
it comes out. Or, we need to figure out a way to have distributed amplifiers
or some other different approach.

>> Ray Kurzweil is, in that sense, engaging in a modern form of "necromancy",
>> and basing the entire future of humankind on those things you admit are
>> *assumptions* and doing it as quickly as possible with little time for
>> reflection or course correction or a cooperative approach. All of past
>> collective human wisdom suggests that will not end well.
>> http://en.wikipedia.org/wiki/Necromancy
>
> Calling one of the best innovators and most public dreamers of the age
> a necromancer is something that calls all your words into question.

First off, he has several critics:
http://en.wikipedia.org/wiki/Raymond_Kurzweil#Criticism

But, on my point:
http://en.wikipedia.org/wiki/Necromancy
"""
Necromancy is a form of magic in which the practitioner seeks to summon
"operative spirits" or "spirits of divination", for multiple reasons, from
spiritual protection to wisdom. The word necromancy derives from the Greek
νεκρός (nekrós), "dead", and μαντεία (manteía), "prophecy". However, since
the Renaissance, necromancy (or nigromancy) has come to be associated more
broadly with black magic and demon-summoning in general, sometimes losing
its earlier, more specialized meaning.
"""

And consider Arthur C. Clarke's saying, "Any sufficiently advanced
technology is indistinguishable from magic".

So, is Ray Kurzweil trying to summon "operative spirits" by building them?
http://www.kurzweilai.net/meme/frame.html?m=9
"Ramona is the photorealistic avatar host of KurzweilAI.net. She's also the
first live virtual performing and recording artist. Read about her history,
check out her pictures, and listen to her songs!"

Is Ray Kurzweil trying to summon "spirits of divination" (like for the stock
market or technology trends)?
http://www.answers.com/topic/kurzweil-technologies-inc
"If necessity truly is the mother of invention, then Kurzweil Technologies
must have a lot of necessities. The technology research firm focuses on the
development of pattern recognition, signal processing, and artificial
intelligence systems. It has spun off several affiliated companies to bring
its discoveries to market, including FatKat (pattern recognition-based
investment systems), Kurzweil Educational Systems (print-to-speech
software), and Kurzweil Music Systems (music synthesizers). Kurzweil also
offers technology assessment services to high-tech clients. The company was
founded by legendary inventor Ray Kurzweil in 1995. "

Is Ray Kurzweil trying to summon the dead?
http://en.wikipedia.org/wiki/Raymond_Kurzweil
"The Ptolemys documented Ray's stated goal of bringing back his late father
using AI. ... While being interviewed for a February 2009 issue of Rolling
Stone magazine, Kurzweil expressed a desire to construct a genetic copy of
his late father, Fredric Kurzweil, from DNA within his grave site. This feat
would be achieved by deploying various nanorobots to send samples of DNA
back from the grave, constructing a clone of Fredric and retrieving memories
and recollections—from Ray's mind—of his father."

Does Ray Kurzweil not state he wants to live forever by cheating death in
various ways? And is he not, like Ben in his last post, apparently willing
to put all of physical humanity at risk to do that for himself as quickly as
possible? Is his stated goal not to upset the balance of our current world
(such as it is)? Granted, in his defense, I'm sure, like Ben, Ray Kurzweil
might say we are doomed anyway or some such thing, and TINA: There is no
alternative to uploading as soon as possible for whatever reasons, including
that the current balance is unsustainable. Certainly, immersed in
competition, in his books Ray Kurzweil claims essentially that we need all
this just to survive at all, with AI and strong nanotech otherwise putting
humanity at risk.

So, if all those things are true, why is "necromancer" then not an accurate
term to describe Ray Kurzweil's behavior and intentions?

Solely because it later has come to have some "black magic" connotations,
perhaps for good reasons?

Look, if you're going to mess with pursuing eternal life on a material plane
by cheating death in some magical way, if you say you want to bring back the
dead, if you are willing to risk humanity's physical existence to do this
quickly, and if you make big pronouncements essentially about uploads (not
just one, but all) not being zombies (when nobody really knows for sure),
then you are playing with necromancy. Still, as Wikipedia says, necromancy
may not be all bad. There may be reasons we might need to consult the dead
for crucial life-affirming information, for example. But, necromancy is the
space he is playing in as I see it. It only makes it more dangerous to deny
it IMHO. By having a term for what he is doing, we can then link our
collective wisdom of stories to his specific actions and see how we may want
to feel about all this.

I've got other relatives who claim to speak with the dead. This is
all-too-real stuff for me.

Still, that term "necromancer" does not, by itself, mean Ray Kurzweil is
either wrong or evil. It's suggestive of course. But, as I said, these are
complex issues.

--Paul Fernhout
http://www.pdfernhout.net/

Samantha Atkins

unread,
Aug 17, 2009, 6:37:34 PM8/17/09
to openmanu...@googlegroups.com
On Sun, Aug 16, 2009 at 10:11 PM, Paul D.
Fernhout<pdfer...@kurtz-fernhout.com> wrote:
>
> Samantha Atkins wrote:
>> On Fri, Aug 14, 2009 at 7:48 AM, Paul D.
>> Fernhout<pdfer...@kurtz-fernhout.com> wrote:
>>> Like the Wizard Cob in the third book of "The Earthsea Trilogy", Ray
>>> Kurzweil says he wants to live forever. Like the Wizard Cob, he is willing
>>> to go to extreme lengths to achieve this, including greatly accelerating
>>> technical progress in a particular ideological way (using patents and
>>> copyrights and trade secrets and amassing wealth with an increasing
>>> rich-poor divide in a very competition-celebrating way).
>>
>> The desire to live forever is very strong in humanity.  It is a good
>> goal.  Which means of getting us there quickest and with the best value
>> across the board is a separable question.
>
> I have a distant relative (who I met once) who for a time was the oldest
> (documented) person in the world. At the end, she said it felt like it was
> time to go:
> http://en.wikipedia.org/wiki/Hendrikje_van_Andel-Schipper
> """

Interesting story, but I don't see how it has any bearing. Especially
with the "man upstairs" purportedly telling her it was time to go. As
we age, Thanatos has many pathways to lead us to believe we have had
enough. It is most likely part of our evolution-instilled traits.
But this does not mean we do not crave immortality, that it has not
been a dream of humankind for a very long time. Nor does it mean it
is not a worthwhile goal.

>>> Like with the
>>> Wizard Cob in Earthsea, this may drain the land of all life and vitality,
>>> turn other Wizards into zombies, as well as leave Ray Kurzweil himself stuck
>>> in a twilight land between life and death. Ray Kurzweil may also be opening
>>> a portal to things he does not understand well and which he can not deal
>>> with on his own. From Wikipedia:
>>
>> This is carrying a fairy tale a bit too far.  We open portals all the
>> time in our technological civilization.  We have no way to not do so
>> and still progress.
>
> Portals to superintelligences? Every day?

Portals to an unknown set of possibilities and consequences occur
quite often when new technologies are introduced.

> Why the need to "progress"?
>

Because if we do not we will perish. We cannot stand still where we
are. Arguably we cannot solve the current problems we have without
more intelligence than unaugmented humans are capable of.

> And progress in which direction? Who decides? And who gets the benefits of
> the "progress" and who pays the costs or takes on most of the risk?
>

No one decides. Innovations happen and are either adopted or not. No
one person or group of persons no matter how large or select is
remotely capable of wisely making such a decision for all.

>>>    http://en.wikipedia.org/wiki/Earthsea
>>> "A strong theme of the stories is the connection between power and
>>> responsibility. There is often a Taoist message: 'good' wizardry tries to be
>>> in harmony with the world,
>>

>> It depends a great deal on what you mean by "harmony with the world".
>> I don't think it is harmonious to leave every single human being to
>> become increasingly decrepit and lose all they love about life and
>> then die.  Just because something is now does not mean it should be
>> declared part of "the world" one should be "in harmony" with.
>
> Sure, I can see how someone can have that perspective. Words like "harmony"
> are ambiguous, I agree.
>
> But I don't think "Henny" felt the way you describe, that everything she
> loved about life had been lost. I think she loved her life to the end.
>

She did not love it enough to wish to keep living though.


> Maybe she loved it more because it had an end? I don't know that for sure,
> of course. She obviously had a zest for life.
>

Who knows? I don't think this is the case and I would love to find
out by having the option of an indefinitely long fully healthy
lifespan. I want everyone to have that option. Don't you?


> Granted, one might argue over whether she felt it was time to go because of
> some suffering or loss of function or some disconnection with a world
> changing around her, or if she felt it was time to go for other reasons
> (perhaps spiritual ones), or if there was some combination. I don't know the
> answer to that either.
>
>>> while 'bad' wizardry, such as necromancy, can
>>> lead to an upsetting of the "balance" and threaten catastrophe.
>>
>> We all now accept the personal and societal catastrophe of the aging
>> and death of every single human being.
>
> There is a Native American story where humanity chooses between always being
> the same and living forever and having children and dying. Humanity chose to
> have children. So, what you call a "catastrophe", others can see as change
> and diversity.
>

Fine. Let them not take the treatment to become young, healthy and
immortal when it is available. Just don't let anyone say and enforce
by law that such treatment cannot be developed or used by anyone who
does wish to do so.


> Perhaps, it is true, we will figure out a way to have both long (or eternal)
> lives and an interesting universe. But it is more of a challenge than one
> might think.
>

I would love to have that problem.


> For example, if people are immortal (except accidents), will they ever take
> risks?
>

With mind backups and the backups well enough dispersed and redundant
not even an accident can necessarily produce final death. Accidents
statistically would make life too short without such backups. It is a
fair question whether the immortalist perspective would cause less
risk taking. Logically I don't think so as the fact of life being
short and then oblivion should produce even less risk taking. An
immortalist perspective would also have positive benefits like holding
the lives of others as of much more value and not condemning them
forever for some mistake they make now.
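
The intuition behind "dispersed and redundant" is just multiplication of
small probabilities. A minimal sketch in Python, assuming each copy fails
independently with the same annual probability (a strong assumption --
correlated failures like war, solar flares, or a shared software bug are
exactly what dispersal is meant to reduce):

def p_total_loss(p_single, copies):
    """Probability that every one of `copies` independent backups fails."""
    return p_single ** copies

# With a 1% annual loss chance per copy:
for copies in (1, 3, 5, 10):
    print(copies, "copies ->", p_total_loss(0.01, copies))
# 1 -> 1e-2, 3 -> 1e-6, 5 -> 1e-10, 10 -> 1e-20

None of which touches the identity question quoted below, of course; it
only says accidental total loss can be driven down fast.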


> Sure, "they'll just use a backup". Is that really the answer to every such
> question? People are often really annoyed when they have to go back to old
> versions of their desktop computer's configuration and data that may be days
> or weeks out of date. A lot of that hinges on the idea of "identity", which
> is a good thing to get poets' and playwrights' and other artists' advice on.
>

Depends on the backup speed and concurrency. The identity problem is
another that I would wish to have.

>>> While the
>>> dragons are more powerful, they act instinctively to preserve the balance.
>>> Only humans can and do threaten it. In The Farthest Shore, Cob seeks
>>> immortality regardless of the consequences and opens a breach between life
>>> and death."
>>>
>>> Unlike Earthsea, it may not be possible for just one other Wizard to
>>> sacrifice his or her power to fix this. It may be, truly, unfixable, with
>>> humanity consumed by, say, unintelligent self-replicating robotic roaches
>>> launched by a zombie upload.
>>>
>>
>> Not in the least likely.
>
> It is common for teenagers to create or launch destructive computer viruses
> for various reasons.
>

Not that easy, it turns out, for some reasons I can't do justice to
right now. And not that difficult to watch for.

> Robots in the roach size-range are around now, and making them
> self-replicating will be possible at some point.
>

Rarely and they aren't terribly hardy.

> So, how can it not be likely somebody will try this? Are we prepared? Have
> we minimized the risk by building a happier society?
>

Too complicated to go into right now. I do very much agree that human
consciousness needs a lot of uplifting/refinement as we get more and
more "god-like" abilities. Super-powered slightly evolved chimps are
not my idea of a good time. And yes, I grant that this is the second
most likely future. The most likely future I [unhappily] think is
the Great Fizzle of humanity destroying itself or technological
civilization falling apart before we get that advanced.


> Technology is an amplifier. The more we crank up the volume, the more we
> need to be sure about the signal going in and where the sound is going when
> it comes out. Or, we need to figure out a way to have distributed amplifiers
> or some other different approach.
>

We are not smart enough to predict at this level and degree of
complexity. Better build the AGI quick if you want to do this.

>>> Ray Kurzweil is, in that sense, engaging in a modern form of "necromancy",
>>> and basing the entire future of humankind on those things you admit are
>>> *assumptions* and doing it as quickly as possible with little time for
>>> reflection or course correction or a cooperative approach. All of past
>>> collective human wisdom suggests that will not end well.
>>>   http://en.wikipedia.org/wiki/Necromancy
>>
>> Calling one of the best innovators and most public dreamers of the age
>> a necromancer is something that calls all your words into question.
>
> First off, he has several critics:
>   http://en.wikipedia.org/wiki/Raymond_Kurzweil#Criticism
>

Of course he does but they don't usually accuse him of practicing or
advocating black magic or consorting with the dead. :)

- samantha

Paul D. Fernhout

unread,
Aug 18, 2009, 12:18:05 PM8/18/09
to openmanu...@googlegroups.com
Samantha Atkins wrote:

> On Sun, Aug 16, 2009 at 10:11 PM, Paul D. Fernhout wrote:
>> Why the need to "progress"?
>>
>
> Because if we do not we will perish. We cannot stand still where we
> are. Arguably we cannot solve the current problems we have without
> more intelligence than unaugmented humans are capable of.

It seems to me that there are straightforward solutions (technically) to all
the problems our society faces. Solar power for energy. Organic farming with
robots for food. And so on. Granted, socially, we have problems. But I don't
think more technology will fix those. A change of heart has to come from
other directions.

>> And progress in which direction? Who decides? And who gets the benefits of
>> the "progress" and who pays the costs or takes on most of the risk?
>
> No one decides. Innovations happen and are either adopted or not. No
> one person or group of persons no matter how large or select is
> remotely capable of wisely making such a decision for all.

There is obviously some truth to that. Still, that seems to me to be
somewhat fatalistic, given a global market economy pursuing short-term
profits and ignoring externalities and systemic risks.

Langdon Winner talks about how our politics shapes our infrastructure, and
how, in turn, perhaps our infrastructure shapes our politics.

He suggests the best time to make a moral choice about technology is in
deciding what new things to bring into the world through innovation.

Our decisions about that may not be perfect, but we can still do the best we
can. So, for example, we can choose to develop new and better killer robots,
or we can choose to develop new and better RepRaps. Or we can choose to
develop learning-on-demand systems (the internet) instead of improving
learning-just-in-case ones (schools).

Culturally, one can work towards helping people see that the creation and
deployment of various technologies have consequences. For example,
centralized fossil fuel use in a market context has had a big negative
effect on democracy around the world in many ways.

>> But I don't think "Henny" felt the way you describe, that everything she
>> loved about life had been lost. I think she loved her life to the end.
>
> She did not love it enough to wish to keep living though.

That's a very materialistic perspective. :-)

Some say we are spiritual beings on a physical journey. When one is at the
journey's end, what do you need the physical skin for anymore?

Again, that's all speculation of course. It is a mystery beyond scientific
proof either way. And even if we could prove other planes of existence were
there, there might be even more beyond that. So, endless mysteries are
implied by the nature of consciousness somehow being about feeling
separate from the universe (or universe of universes) that it is in.

>> Maybe she loved it more because it had an end? I don't know that for sure,
>> of course. She obviously had a zest for life.
>
> Who knows? I don't think this is the case and I would love to find
> out by having the option of an indefinitely long fully healthy
> lifespan. I want everyone to have that option. Don't you?

Well, that depends. If to get "everyone" the option who is currently alive
means acting in a way with a 1% chance of success and a 99% chance of
killing everyone alive now, I'd have to pause to think about it.

>> Granted, one might argue over whether she felt it was time to go because of
>> some suffering or loss of function or some disconnection with a world
>> changing around her, or if she felt it was time to go for other reasons
>> (perhaps spiritual ones), or if there was some combination. I don't know the
>> answer to that either.
>>
>>>> while 'bad' wizardry, such as necromancy, can
>>>> lead to an upsetting of the "balance" and threaten catastrophe.
>>> We all now accept the personal and societal catastrophe of the aging
>>> and death of every single human being.
>> There is a Native American story where humanity chooses between always being
>> the same and living forever and having children and dying. Humanity chose to
>> have children. So, what you call a "catastrophe", others can see as change
>> and diversity.
>
> Fine. Let them not take the treatment to become young, healthy and
> immortal when it is available. Just don't let anyone say and enforce
> by law that such treatment cannot be developed or used by anyone who
> does wish to do so.

Again, what if developing the treatment puts everyone's lives at risk?

>> For example, if people are immortal (except accidents), will they ever take
>> risks?
>
> With mind backups and the backups well enough dispersed and redundant
> not even an accident can necessarily produce final death. Accidents
> statistically would make life too short without such backups. It is a
> fair question whether the immortalist perspective would cause less
> risk taking. Logically I don't think so as the fact of life being
> short and then oblivion should produce even less risk taking. An
> immortalist perspective would also have positive benefits like holding
> the lives of others as of much more value and not condemning them
> forever for some mistake they make now.

Maybe. But, many "mortalists" value the lives of others now too. :-)

Anyway, you're also looking at this from a certain perspective. Why is it
not considered "death" if a line of development is lost? Is it not "death"
of that line even if there is a branchpoint earlier up the line? Again, the
issue of identity comes into play.

>> Sure, "they'll just use a backup". Is that really the answer to every such
>> question? People are often really annoyed when they have to go back to old
>> versions of their desktop computer's configuration and data that may be days
>> or weeks out of date. A lot of that hinges on the idea of "identity", which
>> is a good thing to get poets' and playwrights' and other artists' advice on.
>>
>
> Depends on the backup speed and concurrency. The identity problem is
> another that I would wish to have.

Maybe we do already? :-) Humans change many times during a lifetime.
"Dark Nights of the Soul: A Guide to Finding Your Way Through Life's Ordeals"
http://www.amazon.com/Dark-Nights-Soul-Finding-Through/dp/1592400671

>> It is common for teenagers to create or launch destructive computer viruses
>> for various reasons.
>
> Not that easy it turns out for some reasons I can't do justice to
> right now. And not that difficult to watch for.

OK, let's say you are right. Kurzweil is trying to build a singularity
around competition and the market. What about this?
"Three Indicted In Huge Identity/Data Breach"
http://it.slashdot.org/story/09/08/17/2017204/Three-Indicted-In-Huge-IdentityData-Breach
"""
ScentCone and other readers let us know about an indictment just unsealed in
federal court for stealing 130 million credit cards and other data useful in
identity theft, or just plain money theft. The breaches were at payment
processor Heartland (accounting for the bulk of the 130M), Hannaford, 7-11,
and two unnamed "national retailers." Interestingly, the focus of the
indictment, Albert "Segvec" Gonzalez, is currently awaiting trial for
masterminding the TJX break-in, which until Heartland counted as the largest
credit-card theft ever. The indictment cites SQL injection attacks as the
entry vector. Two unnamed Russia-based conspirators were also indicted.
Securosis has analysis of the security implications of the breach ("These
appear to be preventable attacks using common security controls. It's
possible some advanced techniques were used, but I doubt it") and the
attackers' methodology.
"""

So, in a competitive singularity, will this never happen? What does
"identity theft" mean when someone gets a hold of one of your personality
backups?

>> Robots in the roach size-range are around now, and making them
>> self-replicating will be possible at some point.
>>
>
> Rarely and they aren't terribly hardy.
>
>> So, how can it not be likely somebody will try this? Are we prepared? Have
>> we minimized the risk by building a happier society?
>>
>
> Too complicated to go into right now. I do very much agree that human
> consciousness needs a lot of uplifting/refinement as we get more and
> more "god-like" abilities. Super-powered slightly evolved chimps are
> not my idea of a good time. And yes, I grant that this is the second
> most likely future. The most likely future I [unhappily] think is
> the Great Fizzle of humanity destroying itself or technological
> civilization falling apart before we get that advanced.

Well, we can work even now to avoid the great fizzle.
http://www.commondreams.org/

>> Technology is an amplifier. The more we crank up the volume, the more we
>> need to be sure about the signal going in and where the sound is going when
>> it comes out. Or, we need to figure out a way to have distributed amplifiers
>> or some other different approach.
>>
>
> We are not smart enough to predict at this level and degree of
> complexity. Better build the AGI quick if you want to do this.

Except an AGI has its own implications we have trouble predicting. :-)

>>>> Ray Kurzweil is, in that sense, engaging in a modern form of "necromancy",
>>>> and basing the entire future of humankind on those things you admit are
>>>> *assumptions* and doing it as quickly as possible with little time for
>>>> reflection or course correction or a cooperative approach. All of past
>>>> collective human wisdom suggests that will not end well.
>>>> http://en.wikipedia.org/wiki/Necromancy
>>> Calling one of the best innovators and most public dreamers of the age
>>> a necromancer is something that calls all your words into question.
>> First off, he has several critics:
>> http://en.wikipedia.org/wiki/Raymond_Kurzweil#Criticism
>>
>
> Of course he does but they don't usually accuse him of practicing or
> advocating black magic or consorting with the dead. :)

Here is a sketch of "Defense Against the (Singularitarian) Dark Arts 101":

It opens with watching the comedy "Bedazzled".
http://en.wikipedia.org/wiki/Bedazzled_(2000_film)
"Elliot Richards (Brendan Fraser) works a dead-end job in a call-center in
San Francisco and has no real friends, other than his co-workers who
manipulate him for their own amusement, knowing he'll do anything for
acceptance. He has a crush on his colleague, Alison Gardner (Frances
O'Connor), but lacks the courage to ask her out. After Elliot is ditched at
a bar while trying to talk to Alison, he says that he would give anything
for Alison to be with him. The Devil (Elizabeth Hurley), in the form of a
beautiful woman, hears this wish and offers Elliot a contract. She will give
Elliot seven wishes, and in return Elliot will give her his soul. As might
be expected of a bargain with Satan, there is a catch to the deal. No matter
what Elliot asks for himself, the Devil grants his wish in such a way that
he is invariably unhappy with the result, ..."

Then, we will go one by one through your wishes and grant them.

* Immortality? As an inmate of Abu Ghraib or as Jabba the Hutt's prized toy.
* Seeing new amazing sights? As a re-engineered intelligent MIRVed
submunition like in Dark Star.
* Endless Backups? As used by intelligence agencies to find out how to get you
to reveal important information about supply chains through torture (even if
you don't have any, they may keep doing it anyway for kicks and training
purposes), or as used by military contractors to use you to see how
effective new land mine designs are.
* An amazing memory? As used by terrorists by implanting memories to
convince you that you had a twin sister tortured by the Antarcticans and so you
will even the score.
* Success in a competitive market? As represented by one of your backups
being embodied into a toy for everyone to buy for a little money to do what
they want with.

And so on... :-(

How can we make such a dystopian future less likely? We can make the world a
better place. We can end excessive competition like Alfie Kohn suggests. We
can end the creation of child soldiers and landmines. We can work to end
rankism and related abuse and exploitation. We can help build healthy
communities to keep psychopaths in check (including by stopping electing
them). We can work to transcend the market, or at least improve it through a
basic income so everyone has more economic freedom and more free time. We
can work to end the justification of torture. We can end the horror of
compulsory schooling and the bullying attitudes it teaches. We can get poets
writing more about identity. And so on.

And for when all that fails, as in a chapter in "Disciplined Minds", we can
provide more people with the knowledge of how to deal with the situation of
being in a prisoner of war camp (which, if what Kurzweil says is possible,
we may all be in right now, via simulation):
http://www.disciplined-minds.com/
http://www.uow.edu.au/~bmartin/dissent/documents/Schmidt/education-review.htm
"In the final section of the book, Schmidt turns to the question of
resistance. He discusses how graduate students, professors, and other
professionals can resist the conformity of professional life. In the chapter
titled, "How to Survive Professional Training With Your Values Intact,"
Schmidt draws on an unlikely source—the US Army Manual used to teach
potential prisoners of war how to resist indoctrination. He writes, "In
graduate school, as in the POW camp, the toughest struggle is not over
whether you will survive the process, but over what sort of person you will
be when you get out" (p. 239) Key to resisting indoctrination, writes the
author, is organizing. The students he interviewed who successfully survived
graduate-level professional training did so because they agitated for
change, developed social and psychological supports outside of the
institution, and spent time with like-minded individuals and groups.
According to Schmidt, students who try to resist the system on their own are
rarely successful, usually succumbing to pressures to change their own
values and practices."

So, there are a lot of things we can do right now to help ensure any
singularity some day will be a lot happier for all the uploads in it. There
is no guarantee of success, nor is it likely success will be 100%, but we
can do what we can, like through the solidarity of open manufacturing. :-)

On the other hand, maybe I'm one of the interrogators mentioned above, with
one of your backups to work with. Should you believe me if I said I was not?
So, it is a complex issue. But, you can still make the most of each day, and
do what seems sensible and compassionate, to build the world you want to
see, even with memories of cruelty and oppression.

There is a Star Trek: Voyager episode, I think, about one person (Chakotay?)
being turned into a soldier in a planetary war by implanting false memories.
There is an old sci-fi story about backups being tortured and so on. There
is a lot about all these themes in the literature. Why is Kurzweil showing
people mainly the good side of the singularity and speeding us towards
it before our social house is in better order?

--Paul Fernhout
http://www.pdfernhout.net/
