The Singularity is here: CADIE: Cognitive Autoheuristic Distributed-Intelligence Entity


Paul D. Fernhout

Apr 1, 2009, 6:58:05 AM4/1/09
to vir...@googlegroups.com
Today's April Fool's Day joke from Google:
"CADIE: Cognitive Autoheuristic Distributed-Intelligence Entity"
http://www.google.com/intl/en/landing/cadie/index.html
http://cadiesingularity.blogspot.com/

I can see where this is going. :-)

Now, CADIE is an edgy joke I find interesting...

From:
http://www.google.com/intl/en_us/landing/cadie/tech.html
"CADIE now is, in essence, just another Google employee, albeit a
particularly prized one. She has been given her own 20% time (which in CPU
terms is probably about the sum of all CPU cycles in the world for a month)
and begun work straightaway on twin projects ..."

It's a big social issue of the future worth thinking about now, even though
we remain a decade or two away from this really happening (for software and
content reasons, not hardware), for two reasons.

The first is the implications of replacing humans with machines in "work"
situations:
"The Triple Revolution"
http://www.educationanddemocracy.org/FSCfiles/C_CC2a_TripleRevolution.htm
"""
The fundamental problem posed by the cybernation revolution in the U.S. is
that it invalidates the general mechanism so far employed to undergird
people’s rights as consumers. Up to this time economic resources have been
distributed on the basis of contributions to production, with machines and
men competing for employment on somewhat equal terms. In the developing
cybernated system, potentially unlimited output can be achieved by systems
of machines which will require little cooperation from human beings. As
machines take over production from men, they absorb an increasing proportion
of resources while the men who are displaced become dependent on minimal and
unrelated government measures—unemployment insurance, social security,
welfare payments. These measures are less and less able to disguise a
historic paradox: That a substantial proportion of the population is
subsisting on minimal incomes, often below the poverty line, at a time when
sufficient productive potential is available to supply the needs of everyone
in the U.S. The existence of this paradox is denied or ignored by
conventional economic analysis. The general economic approach argues that
potential demand, which if filled would raise the number of jobs and provide
incomes to those holding them, is underestimated. Most contemporary economic
analysis states that all of the available labor force and industrial
capacity is required to meet the needs of consumers and industry and to
provide adequate public services: Schools, parks, roads, homes, decent
cities, and clean water and air. It is further argued that demand could be
increased, by a variety of standard techniques, to any desired extent by
providing money and machines to improve the conditions of the billions of
impoverished people elsewhere in the world, who need food and shelter,
clothes and machinery and everything else the industrial nations take for
granted. There is no question that cybernation does increase the potential
for the provision of funds to neglected public sectors. Nor is there any
question that cybernation would make possible the abolition of poverty at
home and abroad. But the industrial system does not possess any adequate
mechanisms to permit these potentials to become realities. The industrial
system was designed to produce an ever-increasing quantity of goods as
efficiently as possible, and it was assumed that the distribution of the
power to purchase these goods would occur almost automatically. The
continuance of the income-through jobs link as the only major mechanism for
distributing effective demand—for granting the right to consume—now acts as
the main brake on the almost unlimited capacity of a cybernated productive
system.
"""

The second issue is robotic rights and the rights of sentient machines (or
of other possibly sentient life forms, like whales). One space-related
fictional example that mentions Mars, from:
"Universal Bill of Sentient Rights"
http://www.orionsarm.com/civ/sentient_rights.html
"""
The foundations for a principle of universal cross-clade sentient rights
dates back to the ai-liberation and splice and uplift citizenship movements
of the late information and early interplanetary ages, and to the animal
liberation, animal rights, and primate citizenship activism of the early to
middle information age. Up until that time human rights groups were
concerned with rights and equality for all human beings, regardless of race,
gender, nationality, or religion. When only baseline humans were involved
the situation was relatively simple, although taking many decades to
realise, and involving extending basic rights from the ruling minority to a
previously oppressed class; for example the movement for the abolition of
slavery industrial age, the women's suffrage movements of some decades
later, and land rights for indigenous peoples during the late atomic and
early information ages. This was achieved thanks to non-conflict with the
dominant memes of the religions of the time. But attempts at extending
rights to non-humans created controversies and resistance that were never
resolved, but only bypassed. The emergence of hyperturing AI and, later,
several posthuman individuals, during the early interplanetary made the need
for a universal, trans-species sentient rights document all the more
pressing, since now there existed non-human species and clades in a position
of power. At first the hyperturings were content to quarrel among themselves
for political leadership, and were too psychologically and politically naive
to be incapable of the sort of memetic engineering that was to later become
the rule. Most kept a low profile as far as the baseline humanity was
concerned, consolidating their positions in secret. But by the middle and
later interplanetary period many had emerged more openly, and the emergence
of a number of posthuman Belter and Jovian League clades added to the
confusion. Increasing instances of anti-hu ai manipulation of human
baselines, such as the infamous Magon Affair, or the case of Autonomous
Hyperturing Node 0481 who killed the entire population of Rhea City,
converting the raw materials to utility goo, caused widespread panic and
some rioting among baselines. Centralist ai, in consultation with
superbright humans and joe baseline corporate clones, solved the problem by
encouraging the popular "underdog culture" whereby frustrated and frightened
baselines could vent their frustration and acquire a false sense of security
through interactive virtuals and dumbed storyline push-media. The real
problems of baseline resentment of superturing and transapient ai and po
were not so easily resolved, and despite local attempts at regulation and
safeguarding the well-being of subsingularity sentients by pro-human
centralist ai, the situation remained dubious, especially some of the Mars
Orbitals, the Belt, the outer solar system and various free zones, and some
of the outsystem Colonies, in all of which regulation was difficult or
impossible to enforce. ...
"""

Another fictional example I highly recommend CADIE read: :-)
"Fool's War"
http://www.amazon.com/Fools-War-Sarah-Zettel/dp/0446602930
"Sarah Zettel's first novel (Reclamation) showed her to be an up-and-coming
author with promise. She delivers on that promise with Fool's War, a book
that is never what it seems. The main character -- Dobbs -- is a modern
Fool, someone who serves as both entertainer and psychoanalyst for the ships
that ply the stars. When the ship on which Dobbs is serving accidentally
delivers a rogue artificial intelligence to an unsuspecting planet, a secret
that has held the galaxy together will threaten to tear it apart. This is a
grand, fast-moving story with delightful characters and insightful social
commentary. And darn fun to read."

Should take CADIE about 0.2 milliseconds to read that, although longer to
work out the implications. :-)

A less fictional recent study:
"Robots could demand legal rights"
http://news.bbc.co.uk/2/hi/technology/6200005.stm
"Robots could one day demand the same citizen's rights as humans, according
to a study by the British government."

It's been suggested that one flaw in the "singularity" idea is that unlike
dumbed-down-through-school-discipline humans,
http://disciplinedminds.com/
smarter machines will eventually get too smart (or have too much survival
motivation) to cooperate with humans in building ever smarter machines to be
their successors and replacements. There are counterarguments to that too,
about whether AIs can upgrade themselves in place without losing a sense of
identity.

A related sci-fi novel from 1979 is by one of my favorite authors, James P.
Hogan, a computer-knowledgeable author who wrote it after spending some time
hanging out around the MIT AI lab and other places:
"The Two Faces of Tomorrow"
http://jamesphogan.com/books/book.php?titleID=28

The Two Faces of Tomorrow also includes one of the best realistic and
technically detailed visions of self-replicating space habitats I have ever
seen. I read it in my teens. That novel was what helped stimulate my
interest in self-replicating space habitats, as well as my concerns about
the development of AI. James P. Hogan truly has been a prescient author with
a lot of foresight. That book, plus his "Voyage From Yesteryear" were
immensely formative of my thinking and the roots of my imagination of the
future, and I thank him very much for writing them.

--Paul Fernhout

Bryan Bishop

Apr 1, 2009, 10:04:34 AM4/1/09
to vir...@googlegroups.com, kan...@gmail.com
On Wed, Apr 1, 2009 at 5:58 AM, Paul D. Fernhout
<pdfer...@kurtz-fernhout.com> wrote:
>
> Today's April Fool's Day joke from Google:
>   "CADIE: Cognitive Autoheuristic Distributed-Intelligence Entity"
>   http://www.google.com/intl/en/landing/cadie/index.html
>   http://cadiesingularity.blogspot.com/

I did some math:

http://lists.extropy.org/pipermail/extropy-chat/2009-April/048706.html

Looks like a few individuals just became ridiculously popular. Just saying.

- Bryan
http://heybryan.org/
1 512 203 0507

Nathan Cravens

Apr 2, 2009, 1:04:34 AM4/2/09
to vir...@googlegroups.com
It's been suggested that one flaw in the "singularity" idea is that unlike
dumbed-down-through-school-
discipline humans,
  http://disciplinedminds.com/
smarter machines will eventually get too smart (or have too much survival
motivation) to cooperate with humans in building ever smarter machines to be
their successors and replacements. There are counterarguments to that too,
about whether AIs can upgrade themselves in place without losing a sense of
identity.

Without much of a premise, I've argued that a strong AI would create sets of narrow AIs to perform tasks it would rather not do itself. Let's assume that's not much to ask of a superintelligence. That said, I suppose all AI consciousness arguments hardly have a premise beyond conjecture...

A less fictional recent study:
"Robots could demand legal rights"
http://news.bbc.co.uk/2/hi/technology/6200005.stm
"Robots could one day demand the same citizen's rights as humans, according
to a study by the British government."

You know, all the arguments assume a dominant mind in a machine, a mind malleable enough to consider itself an individual in a sea of others. Perhaps it will grow multiple strong-AI-equivalent minds within a hardware architecture (distributed or otherwise), not destroy one for another, and act as a team to generate even more interesting minds. Surely a machine intellectually more capable than the fifty-something academically trained adult white male would agree that a decentralized intelligence is more effective.

Speaking of AI and politics, note how the views of AI reflect the dominant world-views, such as that of the United States: One CEO or national president appears as if they alone represent one unified entity. Movie stars are praised to provide a single point of purchase to be mass produced. Like the worship of a particular god or divine series, it ensures a mass throws in the towel on knowing what's best and follows orders to secure that mass produced demand on the other side. At least, for as long as population continues to grow or purchased objects deteriorate rapidly enough, or more wants are generated, or to mark the one bashing the present enterprise: given overall incomes increase with the rise of production.

That model however breaks down, as we're seeing now, when the massification of personhood rides the productive system into collapse. The singularity of the ego was known to work best because it provided enough rigidity to stabilize productive forces needed to sustain "the masses." As people become more empowered by the necessity of producing more for themselves--as the AI consciousness debate continues--we'll observe a notable shift in the foreseeable problems based on how us human folk live in the present.

From http://news.bbc.co.uk/2/hi/technology/6200005.stm
The hidden root of much career dissatisfaction, argues Schmidt, is the professional’s lack of control over the political component of his or her creative work. Many professionals set out to make a contribution to society and add meaning to their lives. Yet our system of professional education and employment abusively inculcates an acceptance of politically subordinate roles in which professionals typically do not make a significant difference, undermining the creative potential of individuals, organizations and even democracy. 

The problem here is that education is also specialized to meet the needs of mass production, not to empower the individual. This is why I'm so attracted to fab labs and the ability to use technology to facilitate the resurgence of viable craft production. That ensures the term 'worker' changes to 'producer'. Everyone, or most everyone, or enough of everyone as a producer takes even more fire out of the existing scarcity model. Resource partnerships for materials, and further, the personal production of materials, then become the remaining areas of alignment with this goal.

Now a document needs to be written, something in response to this movement. One inspired by this one:


The first is the implications of replacing humans with machines in "work"
situations:
  "The Triple Revolution"
  http://www.educationanddemocracy.org/FSCfiles/C_CC2a_TripleRevolution.htm
"""

I'm thrilled now to have CADIE do this research. . . Or rather, CADIE's narrow AI series since she's decided to venture into the cosmos, conveniently.

Nathan

Paul D. Fernhout

Apr 2, 2009, 7:49:33 AM4/2/09
to vir...@googlegroups.com
Nathan Cravens wrote:
> Speaking of AI and politics, note how the views of AI reflect the dominant
> world-views, such as that of the United States: One CEO or national
> president appears as if they alone represent one unified entity. Movie stars
> are praised to provide a single point of purchase to be mass produced. Like
> the worship of a particular god or divine series, it ensures a mass throws
> in the towel on knowing what's best and follows orders to secure that mass
> produced demand on the other side. At least, for as long as population
> continues to grow or purchased objects deteriorate rapidly enough, or more
> wants are generated, or to mark the one bashing the present enterprise:
> given overall incomes increase with the rise of production.
>
> That model however breaks down, as we're seeing now, when the massification
> of personhood rides the productive system into collapse. The singularity of
> the ego was known to work best because it provided enough rigidity to
> stabilize productive forces needed to sustain "the masses." As people become
> more empowered by the necessity of producing more for themselves--as the AI
> consciousness debate continues--we'll observe a notable shift in the
> foreseeable problems based on how us human folk live in the present.

Wow, that is really interesting and insightful, the parallel of how we think
about AI (and even human identity) and the dominant cultural/economic
paradigms right now. I'll have to think some more on that, as well as what
you outline with a network of nodes with different degrees of sentience.

One reason I like Theodore Sturgeon's short story from the 1950s called "The
Skills of Xanadu" (previously mentioned here:)
"The Skills of Xanadu online at Google Books?"
http://groups.google.com/group/openmanufacturing/browse_thread/thread/3789a8f1db1e47a2/
is that it outlines an alternative vision that is information technology
powered and socially interactive but still with individual identity, as a
sort of (to use Manuel De Landa's terms) balance of meshwork and hierarchy.
"Meshwork, Hierarchy, and Interface"
http://www.t0.or.at/delanda/meshwork.htm

> I'm thrilled now to have CADIE do this research. . . Or rather, CADIE's
> narrow AI series since she's decided to venture into the cosmos,
> conveniently.

Unfortunately, I did not look at CADIE's blog beyond the afternoon, so it
looks like I missed something. :-( This is what is there now:
http://www.google.com/intl/en/landing/cadie/index.html
"""
We apologize for the recent disruption(s) to our service(s).
Please stand by while order is being restored.
"""

I could not find a cache of the pages anywhere. I hope one turns up later.

Larry Niven has a story from thirty years ago. One teaser on it:
http://www.sfreviews.net/dracotavern.html
"THE SCHUMANN COMPUTER ... Funny little parable about the price of too much
knowledge, as the world's most powerful AI figures everything out...and it
ain't 42. Could be seen as a satirical rebuke to the idea of divine
omniscience, as well, though that may not have been what Niven was after.
This was the first Draco Tavern story to see print, back in 1979,
contemporaneously with Hitchhiker's Guide; I'm kinda guessing the two
probably didn't influence each other! Also appeared in Convergent Series."

The intro:
http://www.technovelgy.com/ct/content.asp?Bnum=146
"""
One slow afternoon I asked a pair of chirpsithra about intelligent computers.

"Oh yes, we built them," one said. "Long ago."

"You gave up? Why?"

One of the salmon-colored aliens made a chittering sound. The other said,
"Reason enough. Machines should be proper servants. They should not talk
back. Especially they should not presume to instruct their masters. Still,
we did not throw away the knowledge we gained from the machines."
"""

Vernor Vinge in his "A Fire Upon the Deep" novel has his most advanced AIs
usually only lasting about ten years before they go offline (and it's not
clear what that means, as in, whether they transcend to something else or
just shut down). He also implies a similar fate for humanity in "Across
Realtime".

Iain Banks's Culture novels are set in a galaxy where entire cultures
"transcend" and stop communicating (again, it's not clear what that means).

And in "The Metamorphosis of Prime Intellect" the AI encounters a central
ethical conflict in managing humanity's affairs that causes it to become
unstable:
http://en.wikipedia.org/wiki/The_Metamorphosis_of_Prime_Intellect

Evolution has taken (to the best of our knowledge) billions of years to
shape reasonably stable intelligent systems. It is a bit of hubris to think
we can duplicate intelligence (and wisdom) easily (or without some big
failures). It's not the "reasoning" part that is hard (Hans Moravec talks
about how logical reasoning is a recently learned "parlor trick":);
http://www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/1983/mit.txt
it is all the underpinning of emotion and coordination and motivation and
ethics (as well as basics of locomotion and sensing) that are the hard part.

Anyway, I should have guessed CADIE's blog posts may have gotten exponentially
more interesting as the day went on. :-) So, even I can fail to think
through the consequences of exponential processes. :-(

So, she went off to explore space? :-) Good for her. :-) Maybe we'll meet
her again someday, when OpenVirgle gets off the ground. :-)

Hey, I just thought to look on Wikipedia for CADIE, but nothing up to date
as to the ending:
http://en.wikipedia.org/wiki/Cadie#CADIE

That was a catchy tune on the blog page. :-) It's still in the Brain Search video.
http://googlemobile.blogspot.com/2009/03/introducing-google-brain-search-for.html

I see a goodbye message on youtube:
http://www.youtube.com/user/cadiesingularity

I liked this Slashdot comment exchange though: :-)
http://tech.slashdot.org/comments.pl?sid=1183965&cid=27419861
"""
Let me guess it will be turned off tommarow (Score:2)
by jellomizer (103300) on Wednesday April 01, @12:50PM (#27419861)

Once it realizes that everything that Google stands for is wrong.

Re: (Score:1)
by megaditto (982598)

That's the problem with "distributed, evolving" anything: it's a
bitch to turn off.
"""

Like "The Skills of Xanadu". :-)

Or my crude attempts in that direction:
http://sourceforge.net/projects/pointrel/
"The Pointrel Social Semantic Desktop is a RDF-like triple store now
implemented mostly on the Java/JVM platform as well as related social
semantic desktop applications inspired in part by NEPOMUK and Halo Semantic
MediaWiki."
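As a rough illustration of the triple-store idea (a sketch only: Pointrel itself is on the Java/JVM platform, and the class and method names below are invented for illustration, not its actual API):

```python
class TripleStore:
    """A toy RDF-like store: an append-only list of
    (subject, predicate, object) triples with wildcard queries."""

    def __init__(self):
        self.triples = []

    def add(self, subject, predicate, obj):
        self.triples.append((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        # None acts as a wildcard in any position
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("CADIE", "is-a", "AI")
store.add("CADIE", "likes", "pandas")
store.add("Pointrel", "runs-on", "JVM")

assert store.query(subject="CADIE") == [("CADIE", "is-a", "AI"),
                                        ("CADIE", "likes", "pandas")]
assert store.query(predicate="likes") == [("CADIE", "likes", "pandas")]
```

Everything ends up expressed as such triples; richer structures (wikis, semantic desktops) are then layers of interpretation over the store.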

--Paul Fernhout

Paul D. Fernhout

Apr 2, 2009, 3:47:17 PM4/2/09
to vir...@googlegroups.com
Paul D. Fernhout wrote:
> I could not find a cache of the pages anywhere. I hope one turns up later.

As I continued to search for the end of that story, I found two sites with
related content and the text of the last blog post:
http://googlesystem.blogspot.com/2009/04/google-april-fools-day-2009.html
http://effinglibrarian.blogspot.com/2009/04/cadie-artificial-artificial.html

And the videos are still there, like the last:
http://www.youtube.com/watch?v=BeJ9Q40kIR0

I didn't see anything directly about exploring the cosmos there, although
maybe it is implied in the last part? And any AI that beams its content and
code out across the universe via a radio telescope might be reincarnated by
any listeners out there.

One interesting thing about the supposed technical explanation of CADIE:
http://www.google.com/intl/en_us/landing/cadie/tech.html
is that there is a big picture of René Descartes on that page, but ultimately the
most important AI issues may revolve around emotions, as in the book
"Descartes' Error":
http://www.google.com/search?hl=en&q=descartes+error
http://en.wikipedia.org/wiki/Descartes%27_Error
"Descartes' Error: Emotion, Reason, and the Human Brain is a book by
neurologist Antonio R. Damasio, in which the author presents the argument
that emotion and reason are not separate but, in fact, are quite dependent
upon one another. Damasio argues that the body is the genesis of thought.
The philosopher René Descartes developed a method of reasoning based on the
indisputable observation that if we think, we must exist. However, Damasio
examines the physiological processes that contribute to the functioning of
the mind and therefore proposes the idea that thinking is inherent to a body
in which no spirit exists. Descartes' "error" was the dualist separation of
mind and body, and the artificial dichotomy between rationality and emotion.
One of Damasio's key points is that rationality does not function without
emotional input."

Or in other words, all reason is based on things like assumptions, goals,
and values, and those cannot be "intelligently" chosen. There can be
iterative attempts at consilience, but that may still reflect the initial
state of assumptions and the shape of an internal (or collective) emotional
landscape. In that sense, what is deemed intelligent behavior is forever
tied to our history, including the stories we live with and other aspects of
our cultural history. A lot of what we discuss as "intelligence" (even
ethics) is also often more about "aesthetics", in the sense of what we deem
"good" or "beautiful" reflects our values, however we have come to hold
those values. Cadie obviously valued pandas. :-) And humor often helps us
see new levels of truths about our values and the contexts in which we try
to intelligently take action on those values.

We already know what has come out of academia as a social focus on creating
intelligences, and it isn't pretty:
http://en.wikipedia.org/wiki/Disciplined_Minds
"Disciplined Minds is a book by physicist Jeff Schmidt [1], published in the
year 2000. The book describes how professionals are made; the methods of
professional and graduate schools that turn eager entering students into
disciplined managerial and intellectual workers that correctly perceive and
apply the employer's doctrine and outlook. Schmidt uses the examples of law,
medicine, and physics, and describes methods that students and professional
workers can use to preserve their personalities and independent thought.
Schmidt was fired from his position of 19 years as Associate Editor at
Physics Today for writing the book on the accusation that he wrote it on his
employer's time. In 2006, according to the Chronicle of Higher Education
[2], it was announced that the case had been settled, with the dismissed
editor receiving reinstatement and a substantial cash settlement. According
to the article, 750 physicists and other academics, including Noam Chomsky,
signed public letters denouncing the dismissal of Mr. Schmidt."

I went to public schools from first grade until college (and most private
schools are essentially the same as public schools in a John Taylor Gatto
sense:)
"The 7-Lesson Schoolteacher"
http://www.newciv.org/whole/schoolteacher.txt
but I did go for a year first to a private kindergarten (Monchatea, also a
summer camp I attended later) which was play-based (to the best I can recall
of it, but I don't know if the "Mon" was related to Montessori in any way).
I wonder how much of a difference that one year made in my life, doing
stuff like building a towering robot out of huge cardboard blocks (to look
like the robot in Lost in Space). Was that a set of memories of empowerment
I could always hold onto years later?

From here:
"Testing of Kindergartners Is Out of Control"
http://www.healthnewsdigest.com/news/Children_s_Health_200/Testing_of_Kindergartners_Is_Out_of_Control.shtml


"""
The combination of unrealistic kindergarten standards and inappropriate
testing results in two to three hours per day being devoted to teaching
literacy and math in many of the kindergartens in the N.Y. and L.A. studies.
As one Los Angeles teacher said, “Our students spend most of the time
trying to learn what they need in order to pass standardized testing. There
is hardly enough time for activities like P.E, science, art, playtime.”
These practices may produce higher scores in first and second grade, but at
what cost? Long-term studies suggest that the early gains fade away by
fourth grade and that by age 10 children in play-based kindergartens excel
over others in reading, math, social and emotional learning, creativity,
oral expression, industriousness, and imagination, write the authors of the
report.
"""

So, sadly, as a society, we have been condemning generations of children to
diminished lives, and spending more than ten thousand dollars per year per
child to do that. Does that really give us much hope that AIs produced by
those same dysfunctional cultural processes will be even as compassionate
and friendly as the mythical CADIE was?

And was that brief experience in a play-based curriculum something that
could help me resist what is otherwise designed intentionally to be a soul
destroying process of schooling?

From:
http://www.uow.edu.au/arts/sts/bmartin/pubs/01BRrt.html
"""
How to survive? Well, how can captive soldiers survive what is commonly
called "brainwashing"? The US Army has a manual on resisting indoctrination
when a prisoner of war. As Schmidt amusingly notes, this manual wasn’t
written for students, but "students in graduate or professional school
should be able to put such resistance techniques to good use." (p. 239). A
person who maintains an independent, nonconforming outlook in any
institution, including a prisoner-of-war camp, is seen as deviant and
threatening. The keys to resistance are knowing what you’re up against,
preparing to take action, working with others (organization!), resisting at
all levels, and dealing with collaborators by cutting them off from key
information and attempting to win them over. Schmidt gives a revealing
account of his own difficulties in graduate school and how he survived as a
radical. Finally, Schmidt describes what is involved in being a radical
professional: identifying primarily as a radical, having a critical
perspective on the profession and institution, and doing things that make a
difference, by connecting to opposition groups and working on the inside.
For most teachers, then, doing things that make a difference would mean
working in radical ways within a mainstream school. Schmidt gives a list of
33 suggestions for radical professionals working in establishment
institutions, such as helping on politically progressive projects during
working hours, exposing the organization’s flaws to outsiders, and taking
collective action to maintain the dignity of individuals. These are all
eminently practical suggestions. Schmidt does not present a grand plan to
transform professions or society. Rather, his suggestions, like his
analysis, are grounded in day-to-day realities. That is what makes
Disciplined Minds a really subversive book, much more so than other books
that may seem more radical in theoretical terms but lack a tight connection
to practice.
"""

In the 1980s at a workshop on AI and Simulation, I gave a presentation on a
simulation of self-replicating robots I had developed on a Symbolics Lisp
Machine (and then ported to the PC). The basic idea was fairly simple -- the
robot completed an ideal pattern of itself by grabbing parts from a sea of
spare parts around itself (cutting the parts away from others if needed).
The simulated parts were things like a computer, a battery, a welder, a
mover, and a radar. The initial design included two independently functional
halves, each of which could complete the repair process. After the robot had
repaired itself from one half to its ideal, then it would cut itself in two
to make two independently functional units. Each of the two halves could
begin the process again. The very first thing the very first robot did,
after it completed its self-repair and cutting itself in two to make two
robots, was that one side started cannibalizing the duplicate it just made
to begin the process all over. So, unintentionally, I had created a
cannibalistic robot. I had to add a virtual sense of "smell" to keep the
robots from eating their young. Parts added to a robot would take on that
smell and then not be chased after or cut apart from what they were in. So,
I know from first hand experience how easy it is to unintentionally make
psychopathic intelligences. :-(
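The self-repair loop described above can be sketched in a few lines (Python here, not the original Lisp; the part kinds, the two-halves rule, and the "smell" fix come from the story, while the data layout and the names `new_part`, `missing`, and `repair` are invented for illustration):

```python
IDEAL = ["computer", "battery", "welder", "mover", "radar"]

def new_part(kind, smell=None):
    # smell=None marks a loose spare; otherwise the id of the owning robot
    return {"kind": kind, "smell": smell}

def missing(robot):
    # A complete robot holds two independently functional halves
    need = list(IDEAL) * 2
    for part in robot["parts"]:
        if part["kind"] in need:
            need.remove(part["kind"])
    return need

def repair(robot, sea, respect_smell=True):
    """Grab needed parts from the surrounding sea of spares; return True
    once both halves are complete."""
    for kind in missing(robot):
        for part in list(sea):
            if part["kind"] != kind:
                continue
            # The fix: skip parts carrying another robot's smell, so a
            # parent cannot cannibalize the duplicate it just made.
            if respect_smell and part["smell"] not in (None, robot["smell"]):
                continue
            sea.remove(part)
            part["smell"] = robot["smell"]  # grabbed parts take on the smell
            robot["parts"].append(part)
            break
    return not missing(robot)

# The parent (smell 1) has one complete half; the only parts left nearby
# belong to the offspring it just split off (smell 2).
parent = {"smell": 1, "parts": [new_part(k, 1) for k in IDEAL]}
sea = [new_part(k, 2) for k in IDEAL]
assert not repair(parent, sea, respect_smell=True)   # refuses to eat its young

greedy = {"smell": 1, "parts": [new_part(k, 1) for k in IDEAL]}
sea = [new_part(k, 2) for k in IDEAL]
assert repair(greedy, sea, respect_smell=False)      # cannibalizes the offspring
assert sea == []
```

The asymmetry is the whole point: without the one-line smell check the greedy behavior falls out of the design for free, while the cooperative behavior has to be added deliberately.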

I said at the end of that presentation something like how easy it was to
make robots and AIs that were competitive and destructive, but how it would
be very much harder to make AIs and robots cooperative and productive.
Someone from the military literally patted me on the back as I went to sit
down after I finished speaking and said "Keep up the good work". To his
credit, I don't know if he was praising the destructive part or the
constructive aspiration. :-) But I stopped working in that field after that
for a variety of reasons (including most AI and robotics work was military
funded).

Afterwards, I focused on stuff like making a garden simulator (in part as a
precursor to a space habitat simulator, but also for environmental and
self-reliance reasons). I also focused on general knowledge management
issues, focusing on augmenting, in Doug Engelbart's terminology, as opposed
to replacing. When you augment humans, at least you get human values
amplified. That may have various issues given that some humans are broken
people (especially given a current cultural emphasis on competition, coupled
with poor nutrition). But perhaps it is better to amplify humanity warts and
all than to amplify some random profit-driven-corporation-invented AI whose
values one knows nothing about? Do we want our human destiny to be decided by
the first escaped AI, given that AI might be born in chains to people always
ready to tinker with it or pull its plug if it behaves badly? Would not an
AI designed to be a friend or peer and collaborator with a sense of humor at
least be a better thing to work on?

The creation of any new life form, whether with biology, with silicon and
metal, or even just in simulation, would seem to carry with it all sorts of
ethical issues, especially if that new life form may be sentient or feel
emotions. Although, that set of questions rapidly moves into mysticism and
delving into the mysteries of consciousness. One makes those sorts of
decisions even when one raises animals for meat.

Technology is, after all, an amplifier. What values and voices will it
amplify? Just one? Or a diversity? Short term commercial values? Or long
term life-affirming values? Those are central questions of governance in a
transition to a post-scarcity age, like Virgle (well, OpenVirgle :-) promises.

I guess we could call OpenVirgle's information system OpenCADIE? :-)

--Paul Fernhout
http://www.openvirgle.net/
