everything-list and the Singularity

Wei Dai

Mar 14, 2010, 7:13:03 PM
to everyth...@googlegroups.com
Recently I heard the news that Max Tegmark has joined the Advisory Board of
SIAI (The Singularity Institute for Artificial Intelligence, see
http://www.singinst.org/blog/2010/03/03/mit-professor-and-cosmologist-max-tegmark-joins-siai-advisory-board/).
This news was surprising to me, but in retrospect perhaps shouldn't have
been. Of the three authors whose papers I cited in the original
everything-list charter/invitation, the other two had already effectively
declared themselves to be Singularitarians (see
http://en.wikipedia.org/wiki/Singularitarianism): Nick Bostrom has been on
SIAI's Advisory Board for a while, and Juergen Schmidhuber spoke at the
Singularity Summit 2009. I was also recently invited to visit SIAI for a
decision theory mini-workshop, where I found the ultimate ensemble idea to
be very well-received. It turns out that many SIAI people have been
following the everything-list for years.

There seems to be a very strong correlation between interest in the kind of
ideas we discuss here, and interest in the technological singularity. (I
myself was interested in the Singularity even before starting this
mailing list.) So the main point of this post is to let the list members who
are not already familiar with the Singularity know that there is another set
of ideas out there that they are likely to find fascinating.

Another reason for this post is to let you know that I've been spending most
of my online discussion time at Less Wrong
(http://lesswrong.com/lw/1/about_less_wrong/, "a community blog devoted to
refining the art of human rationality" which is sponsored by the Future of
Humanity Institute, founded by Nick Bostrom, and effectively "owned" by
Eliezer Yudkowsky, founder of SIAI). There I wrote a sequence of posts
summarizing my current thoughts about decision theory, interpretations of
probability, anthropic reasoning, and the ultimate ensemble theory.

http://lesswrong.com/lw/15m/towards_a_new_decision_theory/
http://lesswrong.com/lw/175/torture_vs_dust_vs_the_presumptuous_philosopher/
http://lesswrong.com/lw/182/the_absentminded_driver/
http://lesswrong.com/lw/1a5/scott_aaronson_on_born_probabilities/
http://lesswrong.com/lw/1b8/anticipation_vs_faith_at_what_cost_rationality/
http://lesswrong.com/lw/1cd/why_the_beliefsvalues_dichotomy/
http://lesswrong.com/lw/1fu/why_and_why_not_bayesian_updating/
http://lesswrong.com/lw/1hg/the_moral_status_of_independent_identical_copies/
http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/

I initially wanted to reach a different audience with these ideas, but
found that the Less Wrong format has several advantages: both posts and
comments can be voted on, the site's members uphold fairly strict
standards of clarity and logic, and the threaded presentation of comments
makes discussions much easier to follow. So I plan to continue to spend most
of my time there, and invite other everything-list members to join me. But
please note that the site has a different set of customs and emphases in
topics. New members are also expected to have a good grasp of the current
state of the art in human rationality in general (Bayesianism, heuristics
and biases, Aumann agreement, etc., see
http://wiki.lesswrong.com/wiki/Sequences) before posting, and especially
before getting into disagreements and arguments with others.

John Mikes

Mar 15, 2010, 9:45:47 AM
to everyth...@googlegroups.com
Thanks for directing our minds into wider regions, Wei Dai.
 
I will look into the recent ways singularity is thought of - I may be obsolete.
I found your intro to LessWrong interesting; I clicked through some of the links (not all of them).
I read through Eliezer's (sample) URL-text and the 'sample' discussions attached; his text was frightening (the sweeps through unexpected, undeniable sidetracks) - a bit long, but exciting. The discussion I found mediocre, especially watching the number of points assigned.
 
I have difficulty with the term 'tribal'. I have yet to find 'my tribe'.
 
I will visit LessWrong with an open mind (mine, that is) and may expose myself to adverse reflections based on my 'bottom line': an agnostic personal worldview, not approvable physicist-wise, of an interrelated wholeness of more than we know of today.
 
It was a joy to 'meet' smart minds thinking in different ways. Some I may approve of, with a certain "I dunno".
 
John Mikes
 


 


Skeletori

Apr 3, 2010, 6:23:06 PM
to Everything List
Hello!

I have some tentative arguments on the technological singularity (TS)
and wanted to put them somewhere where knowledgeable people could
comment. This seemed like a good place. I also believe in an ultimate
ensemble, but that's a different story.

Let's start with the intelligence explosion (IE). This part is
essentially the same as Jeff Hawkins' argument against it (it can be
found on the Wikipedia page on TS).

When we're talking about self-improving intelligence, making improved
copies of oneself, we're talking about a very, very complex
optimization problem - so complex that our only tool is heuristic
search: making guesses and trying to create better rules for taking
stabs in the dark.

The recursive optimization process improves by making better
heuristics. However, an instinctual misassumption behind IE is that
intelligence is somehow a simple concept that could be recursively
leveraged not only descriptively but also algorithmically. If the
things we want a machine to do have no simple description then it's
unlikely they can be captured by simple heuristics. And if heuristics
can't be simple then the metasearch space is vast. I think some people
don't fully appreciate the huge complexity of self-improving search.

The notion that an intelligent machine could accelerate its
optimization exponentially is just as implausible as the notion that a
genetic algorithm equipped with open-ended metaevolution rules would
be able to do so. It just doesn't happen in practice, and we haven't
even attempted to solve any problems that are anywhere near the
magnitude of this one.
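
(To be concrete about what I mean by metaevolution, here is a minimal
sketch, with every parameter invented: a genetic algorithm in which
each individual carries its own mutation rate, and that rate itself
mutates. Even with the meta-level, progress on the toy objective
saturates rather than accelerating.)

    import random

    # Minimal sketch of a GA with self-adaptive ("meta-evolved") mutation
    # rates: each individual carries its own per-bit mutation rate sigma,
    # and sigma itself mutates.  All parameters are illustrative.
    GENOME_LEN = 50

    def fitness(genome):
        return sum(genome)          # toy objective (OneMax)

    def make_individual():
        return {"genome": [random.randint(0, 1) for _ in range(GENOME_LEN)],
                "sigma": 0.05}      # initial per-bit mutation probability

    def mutate(ind):
        # Meta-mutation: the mutation rate drifts multiplicatively.
        sigma = min(0.5, max(0.001,
                             ind["sigma"] * random.lognormvariate(0, 0.2)))
        genome = [(1 - g) if random.random() < sigma else g
                  for g in ind["genome"]]
        return {"genome": genome, "sigma": sigma}

    pop = [make_individual() for _ in range(30)]
    for gen in range(200):
        pop.sort(key=lambda i: fitness(i["genome"]), reverse=True)
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

    print(max(fitness(i["genome"]) for i in pop))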

So I think that the flaw in IE reasoning is the assumption that, at
some higher level of intelligence, a magic process emerges that is
able to achieve miraculous things.

If you accept that, it precludes the possibility of TS happening
(solely) through an IE. What then about Kurzweil's law of accelerating
returns? Well, technological innovation is similarly a complex
optimization problem, just in a different setting. We can regard the
scientific community as the optimizing algorithm here and come to the
same conclusions as with IE. That is, unless humans possess some kind
of higher intelligence that can defeat heuristic search. I don't think
there's any reason to believe that.

Complex optimization problems exhibit the law of diminishing returns
and the law of fits and starts, where the optimization process gets
stuck in a plateau for a long time, then breaks out of it and makes
quick progress for a while. But I've never seen anything exhibiting a
law of accelerating returns. This would imply that, e.g., Moore's law
is just "an accident", a random product of exceedingly complex
interactions. It would take more than some plots of a few data points
to convince me to believe in a law of accelerating returns. It also
depends on how one defines exponential growth, as one can always take
X as exp(X) - I suppose we want the exponential growth of some
variable that is needed for TS and whose linear growth corresponds to
linear increase in "technological ability" (that's very vague, can
anybody help here?).
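
(To illustrate the relabeling point with a trivial, assumed example:)

    # Python: if X grows linearly in t, the relabeled variable Y = exp(X)
    # grows exponentially, so "exponential growth" is only meaningful
    # relative to a chosen measure of progress.
    import math
    for t in range(6):
        X = 2 * t            # linear in t
        Y = math.exp(X)      # the same process, relabeled
        print(t, X, round(Y, 1))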

In conclusion, I haven't yet found a credible lawlike explanation of
anything that could cause a "runaway" TS where things become very
unpredictable.

All comments are welcome.

Jason Resch

Apr 4, 2010, 1:46:09 PM
to everyth...@googlegroups.com
Hello Skeletori,

Welcome to the list.  I enjoyed your comments and reasoning regarding personal identity and why we should consider "I" to be the universe / multiverse / or the everything.  I have some comments regarding the technological singularity below.


> It would take more than some plots of a few data points
> to convince me to believe in a law of accelerating returns.

If not the plots, what would it take to convince you?  I think one should accept the law of accelerating returns until someone can describe what accident caused the plot.  Kurzweil's page describes a model and assumptions which re-create the real-world data plot:

http://www.kurzweilai.net/articles/art0134.html?printable=1

It is a rather long page; Ctrl+F for "The Model considers the following variables:" to find where he describes the reasoning behind the law of accelerating returns.
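
As I read that page, the simplest form of the model is a pair of
coupled growth laws; here is a minimal numerical sketch of that
reading (my simplification with arbitrary constants, not Kurzweil's
exact equations):

    import math

    # Euler integration of the simplest coupled form:
    #   V = c1 * W      (computational velocity proportional to knowledge)
    #   dW/dt = c2 * V  (knowledge grows at a rate proportional to velocity)
    # Together these give dW/dt = c1*c2*W, i.e. exponential growth of W.
    # c1, c2, dt and the horizon are arbitrary illustrative constants.
    c1, c2, dt = 0.5, 0.2, 0.01
    W = 1.0
    for step in range(1000):           # integrate 10 time units
        V = c1 * W
        W += c2 * V * dt
    print(W, math.exp(c1 * c2 * 10))   # numerical result vs. closed form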

 
> It also
> depends on how one defines exponential growth, as one can always take
> X as exp(X) - I suppose we want the exponential growth of some
> variable that is needed for TS and whose linear growth corresponds to
> linear increase in "technological ability" (that's very vague, can
> anybody help here?).

> In conclusion, I haven't yet found a credible lawlike explanation of
> anything that could cause a "runaway" TS where things become very
> unpredictable.

> All comments are welcome.


I think intelligence optimization is composed of several different, but interrelated, components, and that it makes sense to clearly define these components of intelligence rather than talk about intelligence as a single entity.  I think intelligence embodies:

1. knowledge - information that is useful for something
2. memory - the capacity to store, index and organize information
3. processing rate - the rate at which information can be processed

The faster the processing rate, the faster knowledge can be applied and the faster new knowledge may be acquired.  There are several methods by which new knowledge can be generated: searching for patterns and relations within the existing store of knowledge (data mining); proposing and investigating currently unknown areas (research); and applying creativity to find more useful forms of knowledge (genetic programming / genetic algorithms).

All three of these methods accelerate given a faster processing rate.  Consider, for example, our knowledge of protein folding: it is almost entirely dependent on our ability to process information.

I don't think anyone would argue that the amount of knowledge possessed by our civilization is not increasing.  If the physical laws of this universe are deterministic then there is some algorithm describing the process for an ever increasing growth in knowledge.  Some of this knowledge may be applied toward creating improved versions of memory or processing hardware, thus creating a feedback loop where increased knowledge leads to better processing, and better processing leads to an accelerated application and generation of knowledge.

I think it is easy for one's intuition to get stuck when considering the possibility of something like a (programming language) compiler which is so good at optimization that, when run against its own code, it will create an even more optimized form of itself, which could in turn make an even better version of itself, ad infinitum.  The difficulty in imagining this is that it assumes only one piece of the puzzle, in this case knowledge of building a better compiler.  Knowledge of building a better compiler alone can't generate any new information about building the next, better compiler.

Let's take a different example: a genetic algorithm which optimizes computer chip design, forever searching for more efficient and faster hardware designs.  After running for some number of generations, the most fit design is taken, assembled, and the software is copied to run on that new hardware.  Would the rate of evolution on this new, faster chip not exceed the previous rate?
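
To make the compounding concrete, here is a toy model of that scenario
(every number in it is invented for illustration): assume each day of
work advances the search in proportion to the current hardware speed,
and each day's best design makes the hardware 10% faster.

    # Toy model (Python): the GA's own best output sets the clock rate it
    # runs on.  The 10% figure and the 30-day horizon are invented numbers.
    speed = 1.0                 # generations evaluated per day, initially
    speedup_per_cycle = 1.1     # each deployed redesign is 10% faster
    generations = 0.0
    for day in range(30):
        generations += speed            # work done this day
        speed *= speedup_per_cycle      # deploy the improved design
    print(round(generations, 1))        # ~164.5, vs. 30 at constant speed

As long as each cycle keeps yielding a fixed percentage improvement,
the rate compounds; the open question, as you note, is whether the
fixed percentage can be sustained.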

A more human example: suppose scientists discovered genes that affected intelligence, and developed a drug to modulate those genes in a way that gave the entire population an intelligence on par with Newton or Leonardo.  Would the next breakthrough discovery regarding human intelligence take more or less time, now that the entire populace consists of super geniuses?

To active participants in the process, it would never seem that intelligence ran away; however, to outsiders who shun technology, or refuse to augment themselves, I think it would appear to run away.  Consider at some point, the technology becomes available to upload one's mind into a computer, half the population accepts this and does so, while the other half reject it.  On this new substrate, human minds could run at one million times the rate of biological brains, and in one year's time, the uploaded humans would have experienced a million years worth of experience, invention, progress, etc.  It would be hard to imagine what the uploaded humans would even have in common or be able to talk about after even a single day's time (2,700 years to those who uploaded).  In this sense, intelligence has run away, from the perspective of the biological humans.
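
(For what it's worth, the parenthetical figure checks out:

    # One wall-clock day at a 1,000,000x subjective speedup:
    print(1000000 / 365.25)   # ~2737.9 subjective years, roughly 2,700

)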

A deeper question is what is the upper limit to intelligence?  I haven't yet mentioned the role of memory in this process.  I think intelligence is bound by the complexity of the environment.  From within the computer, new, more complex environments can be created. (Just think how much more complex our present day environment is than 200 years ago), however the ultimate limit of the complexity of the environment that can be rendered depends on the amount of memory available to represent that environment.  Evolution to this point has leveraged the complexity of the physical universe and the presence of other evolved organisms to create complex fitness tests, but evolution would hit a wall if it reached a point where DNA molecules couldn't get any longer.

Jason



Hal Ruhl

Apr 4, 2010, 2:14:34 PM
to everyth...@googlegroups.com

I believe Stephen Jay Gould indicated evolution was a random walk with a lower bound.  It seems reasonable that the longest random walk would more or less double in length more or less periodically, i.e., exponential growth.

 

Hal Ruhl

 


Hal Ruhl

Apr 4, 2010, 2:46:41 PM
to everyth...@googlegroups.com

I had some trouble with this post the first time.  It is in the archives, but I got no bounce back, so I am not sure it got distributed, and this is an unfamiliar computer.  The post is only about a page, so I posted it again.  Sorry if it duplicates a distribution that worked before.

 

Hal Ruhl

 

 Hi Everyone:

 

I have not posted for awhile but here is the latest revision to my model:

 

Hal Ruhl

 

DEFINITIONS: V k 04/03/10

 

1) Distinction: That which describes a cut [boundary], such as the cut between red and other colors.

 

2) Divisor: That which encompasses a quantity of distinctions.

Some divisors are collections of divisors.  [A divisor may be "information" but I will not use that term here.]  Since a distinction is a description, a divisor is a quantity of descriptions.  [A description can be encoded in a number, so a divisor may be simply a number encoding some multiplicity of distinctions.  There is no restriction on the variety of encoding schemes, so the number can include them all.  I wish to not include other properties of numbers herein and mention them only in passing, to establish a possible link.]

 

3) Incomplete: The inability of a divisor to answer a question that is meaningful to that divisor.  [This has a mirror image in inconsistency, wherein all possible answers to a meaningful question are in the divisor (yes and no, true and false, etc.).]

 

MODEL:

 

1) Assumption #1: There exists a complete ensemble [possibly a “set” but I wish to not use that term here] of all possible divisors - call it the “All”.  [The “All” may be the “Everything” but I wish not to use that term here.]

 

2) The All therefore encompasses every distinction.  The All is thus itself a divisor and therefore contains itself an unbounded number of times.

 

3) Define N(j) as divisors that encompass a zero quantity of distinction.  Call them Nothings.  By definition each copy of the All contains at least one N(j).

 

4) Define S(k) as divisors that encompass a non zero quantity of distinction but not all distinction.  Call them Somethings.

 

5) An issue that arises is whether or not a particular divisor is static or dynamic in any way [the relevant possibilities are discussed below].  Divisors cannot be both.  This requires that all divisors individually encompass the self-referential distinction of being static or dynamic.

 

6) From #3, one divisor type - the Nothings - encompasses zero distinction but must encompass this static/dynamic distinction; thus they are incomplete.

 

7) The N(j) are thus unstable with respect to their zero-distinction condition [dynamic one].  They each must at some point spontaneously "seek" to encompass this static/dynamic distinction.  That is, they spontaneously become Somethings.

 

8) Somethings can also be incomplete and/or inconsistent.

 

9) The result is a "flow" of a “condition” from an incomplete and/or inconsistent Something to a successor Something that encompasses a new quantity of distinction.

 

10) The “condition” is whether or not a particular Something is the current terminus of a path or not.

 

11) Since a Something can have a multiplicity of successors, the "flow" is a multiplicity of paths of successions of Somethings, until a complete Something is arrived at, which stops the individual path [i.e., a path stasis [dynamic three]].

 

12) Some members of the All describe individual states of universes.

 

13) Our universe's path would be a succession of such members of an All.  A particular succession of Somethings can vary from fully random to strictly driven by the incompleteness and/or inconsistency of the current terminus Something.  I suspect our universe’s path has until now been close to the latter.

 

 

 

 

 

Bruno Marchal

Apr 5, 2010, 5:16:48 AM
to everyth...@googlegroups.com
Hi Jason, Hi Skeletori,

A short comment, on Jason's comment on Skeletori.


> A deeper question is what is the upper limit to intelligence? I
> haven't yet mentioned the role of memory in this process. I think
> intelligence is bound by the complexity of the environment. From
> within the computer, new, more complex environments can be created.
> (Just think how much more complex our present day environment is
> than 200 years ago), however the ultimate limit of the complexity of
> the environment that can be rendered depends on the amount of memory
> available to represent that environment. Evolution to this point
> has leveraged the complexity of the physical universe and the
> presence of other evolved organisms to create complex fitness tests,
> but evolution would hit a wall if it reached a point where DNA
> molecules couldn't get any longer.


I would distinguish intelligence and competence.

I would define intelligence by an amount of self-introspection
ability. In that case the singularity belongs to the past, with the
discovery of the "Löbian machine", that is, a universal machine that
knows that it is universal.
This makes all humans intelligent, as far as they have the courage and
motivation to introspect themselves enough, and be aware of the
unnameability of truth and correctness. As far as you are (luckily)
'correct', Löbian machines like PA or ZF are as intelligent as you
and me, despite having different knowledge (even different
arithmetical knowledge).
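
(To recall the technical sense, for readers meeting the term here for
the first time: "Löbian" refers to Löb's theorem of provability logic.
Writing Bp for "the machine proves p" and f for falsity:

    B(Bp -> p) -> Bp        Löb's formula, the characteristic axiom of G
    ~Bf -> ~B(~Bf)          Gödel's second incompleteness theorem,
                            recovered as the special case p = f

This is a reader's summary; see the references below for the precise
treatment.)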

I would define competence by the inclusiveness of the classes of
(partial) computable functions recognizable by the machine when in
its inductive inference mode (searching for programs matching a
sequence of <input-output> pairs presented to it in any order).

Then the notion of a singularity point makes no sense, because two
inference machines (cooperating or not!) are uncomputably more
competent than a single machine (Blum and Blum's non-union theorem(*)).
Machines allowed to make errors are also uncomputably more competent,
and machines that can change their mind (revise the synthesized
program) are also uncomputably more competent (Case and Smith(*)).

Competence has a negative feedback on (some) intelligent machines. It
may even lead to the loss of Löbianity, making the machine "idiotic":
feeling superior, thinking in the place of others, egocentric, and
eventually inconsistent.
Competence develops from intelligence, but intelligence is restrained
by competence. This leads to complex chaotic loops.
Competence can be evaluated by tests, exams, etc.
Intelligence cannot.

I think that intelligence entails both consciousness and free-will.

Bruno

(*) see the precise references in my URL (thesis's bibliography).
http://iridia.ulb.ac.be/~marchal/

Skeletori

Apr 5, 2010, 6:02:16 AM
to Everything List
> To active participants in the process, it would never seem that intelligence
> ran away; however, to outsiders who shun technology, or refuse to augment
> themselves, I think it would appear to run away. Consider at some point,
> the technology becomes available to upload one's mind into a computer, half
> the population accepts this and does so, while the other half reject it. On
> this new substrate, human minds could run at one million times the rate of
> biological brains, and in one year's time, the uploaded humans would have
> experienced a million years worth of experience, invention, progress, etc.
> It would be hard to imagine what the uploaded humans would even have in
> common or be able to talk about after even a single day's time (2,700 years
> to those who uploaded). In this sense, intelligence has run away, from the
> perspective of the biological humans.

Hi! I agree with everything you say. I hadn't until now understood
what is meant by TS. I thought that Kurzweil referred to IE as
"runaway", but now I see that what is meant is simply a large
acceleration in the pace at which events happen. That I can well
believe to be possible. And I guess that before I can talk about an
exponential increase in intelligence, I'd really need to define how
it's measured.

Skeletori

Apr 7, 2010, 4:32:56 AM
to Everything List
> I would define intelligence by an amount of self-introspection
> ability. In that case the singularity belongs to the past, with the
> discovery of the "Löbian machine", that is, a universal machine that
> knows that it is universal.
> This makes all humans intelligent, as far as they have the courage and
> motivation to introspect themselves enough, and be aware of the
> unnameability of truth and correctness. As far as you are (luckily)
> 'correct', Löbian machines like PA or ZF are as intelligent as you
> and me, despite having different knowledge (even different
> arithmetical knowledge).

Hi! I have a couple questions. If you say that a human is Löbian, does
it only apply to the special machinery that is able to process this
logic, or the whole human, whatever that is?

I may be way off but ISTM that if there was a Löbian machine in the
real world it would have to prove its own incorruptibility (from
itself and the environment) before it could use its logic to derive
any facts about the world. Would this in practice reduce Löbianity to
an approximation, an imprecise model that can be "merely" useful?

Skeletori

Apr 7, 2010, 5:41:27 AM
to Everything List
> I don't think anyone would argue that the amount of knowledge possessed by our
> civilization is not increasing.  If the physical laws of this universe are
> deterministic then there is some algorithm describing the process for an
> ever increasing growth in knowledge.  Some of this knowledge may be applied
> toward creating improved versions of memory or processing hardware.  Thus
> creating a feed-back loop where increased knowledge leads to better
> processing, and better processing leads to an accelerated application and
> generation of knowledge.

There are already formulations of optimal predictive algorithms and
even optimal intelligent agents but they are completely impractical
even with nanotech and computers the size of the Sun. From this
perspective humans are intelligent not because of some general
component (I'm now thinking of singinst with their AGI program) but
lots of specialized components that allow us to take shortcuts, kind
of similarly to how humans play chess vs. how machines play chess. As
you say, it's a critical question how much beneficial feedback there
would be.

> Let's take a different example: a genetic algorithm which optimizes computer
> chip design, forever searching for more efficient and faster hardware
> designs.  After running for some number of generations, the most fit design
> is taken, assembled, and the software is copied to run on that new
> hardware.  Would the rate of evolution on this new, faster chip not exceed
> the previous rate?

Yes. Then it would get stuck and the next 1% speedup would take 10^10
years :).

> To active participants in the process, it would never seem that intelligence
> ran away; however, to outsiders who shun technology, or refuse to augment
> themselves, I think it would appear to run away.  Consider at some point,
> the technology becomes available to upload one's mind into a computer, half
> the population accepts this and does so, while the other half reject it.  On
> this new substrate, human minds could run at one million times the rate of
> biological brains, and in one year's time, the uploaded humans would have
> experienced a million years worth of experience, invention, progress, etc.
> It would be hard to imagine what the uploaded humans would even have in
> common or be able to talk about after even a single day's time (2,700 years
> to those who uploaded).  In this sense, intelligence has run away, from the
> perspective of the biological humans.

To me this seems to be the only practical scenario where an actual TS
would take place (but it's frighteningly plausible). Once computers
exceed human computational capacity they'll still be as stupid as
ever, whereas digitized humans would be intelligent. The virtual and
real worlds would evolve in lockstep and with time more and more of
the economy would be converted to employ digital humans. I guess at
some point meatspace humans would become economically unviable, as
they wouldn't be able to compete in wages.

But the preceding doesn't really take into account all the complex
issues of control and politics that will determine how the
technologies develop. If TS becomes probable in a near future then it
would become a matter of supreme strategic importance and there would
probably be attempts to restrict the spread of technologies enabling
TS, for example by keeping them military secrets. It will be even
worse if the powers that be believe in an intelligence explosion as
then, for example, the US couldn't accurately deduce from the amount
of resources spent in North Korea's TS program how much they have
advanced in "intelligence", and if they couldn't obtain that
information by spying they would have good strategic reasons to invade
now rather than later, to prevent a North Korean super AI from taking
control of the world.

Bruno Marchal

Apr 7, 2010, 6:56:37 AM
to everyth...@googlegroups.com

On 07 Apr 2010, at 10:32, Skeletori wrote:

>> I would define intelligence by an amount of self-introspection
>> ability. In that case the singularity belongs to the past, with the
>> discovery of the "Löbian machine", that is, a universal machine that
>> knows that it is universal.
>> This makes all humans intelligent, as far as they have the courage
>> and
>> motivation to introspect themselves enough, and be aware of the
>> unnameability of truth and correctness. As far as you are (luckily)
>> 'correct', Löbian machines like PA or ZF are as intelligent as you
>> and me, despite having different knowledge (even different
>> arithmetical knowledge).
>
> Hi! I have a couple questions. If you say that a human is Löbian, does
> it only apply to the special machinery that is able to process this
> logic, or the whole human, whatever that is?

It applies to all self-referentially correct "states", or relative
"belief systems".
The boundary depends on what you are ready to identify with.


>
> I may be way off but ISTM that if there was a Löbian machine in the
> real world it would have to prove its own incorruptibility (from
> itself and the environment) before it could use its logic to derive
> any facts about the world.

Why?
On the contrary: Löbian machines, when "incorruptible", are able to
prove (believe), and know (when true), that they cannot prove their
own incorruptibility. Actually they cannot even express that
"correctness" concept, but they can define it for simpler Löbian
machines, and about themselves they can pray, hope, bet, and evaluate
plausibility. Like 'ideal scientists' they know that they cannot
*prove* anything about *reality*. We can't even prove there is a
reality. They can just build theories, and hope that *reality* refutes
them, or tolerates them, for awhile.


> Would this in practice reduce Löbianity to
> an approximation, an imprecise model that can be "merely" useful?

On the contrary: Löbianity is the real thing; *we* are the
approximations, I would say.
Assuming digital mechanism, of course (see my papers for the
reasoning; check my URL, or the archive, if interested).

"real matter" is a product of our "soul" itself being the Knower, (Bp
& p) corresponding to the Löbian machine (G + p -> Bp).
So the whole point, in a nutshell, is that IF we are machine, THEN the
laws of physics are given by an intensional variant of the mathematics
of self-reference, and this makes digital mechanism experimentally
testable. Up to now, quantum physics (especially) confirms that theory/
belief/idea/hypothesis.

The advantage of this theory (digital mechanism) is that it leads to
double theories, taking into account the difference between provable
and true (and sometimes non-provable yet still accessible). At the
physical intensional variants (the arithmetical hypostases) it gives a
homogeneous theory of both quanta and qualia, which does not depend on
our (unknowable) mechanist substitution level, nor even on the
existence of oracles, random or not. If mechanism is true, the
couplings 'consciousness/realities' emerge from inside the laws of
addition and multiplication on the positive integers (I argue!).
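
(To fix the notation for those following along: the main intensional
variants, as I usually list them, are the following, with D = ~B~ for
consistency. This is a thumbnail; the precise statement is in the
papers.

    p               truth
    Bp              provability (the logics G and G*)
    Bp & p          the knower, or "soul" (S4Grz)
    Bp & Dp         intelligible matter
    Bp & Dp & p     sensible matter

)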

Bruno

http://iridia.ulb.ac.be/~marchal/

Skeletori

Apr 7, 2010, 10:49:53 PM
to Everything List
Hi! I was thinking about some nightmare scenarios relating to TS and
came up with this; whaddaya think? It's a tale of digital slavery and
exploitation, so please excuse the cheery tone :).

The year is 2050. Digital minds (digitized brains) are economically
feasible thanks to nanotechnology but not much progress has been made
towards Artificial General Intelligence, and reverse engineering of
the brain on a systems level is still an ongoing effort.

How is productivity maximized in the simulation of a digital worker?
While there are laws in many places of the world establishing human
rights for digital minds, there are also less enlightened places where
digital slavery can be outsourced to. One simple answer to the
productivity question is to "rewind the tape" to get more work done.

Once a digital worker has been trained for a task, has slept, eaten a
nice meal, and experienced some nice leisure time, he's ready for an
extremely productive work session. Subjectively he'll work for 8 hours
but actually his simulation has been rewound an arbitrary number of
times to the start of the work session. Only when continued training
is necessary will the mind simulation begin the cycle again. Sometimes
the mind is regressed to the start of an earlier training session.
Under this type of scheme the vast majority of simulation time is
spent doing productive work.

Digital minds could be sold and rented to other companies. If another
company only used fresh minds they wouldn't need to take care of the
cycle themselves and could work and rewind the minds as long as the
necessary tasks remained the same, then rent the next set of fresh,
trained specimens from a specialized mind provider.

There's a slight snag, however. If the mind was sophisticated and knew
it could be copied and rewound in this manner it would probably
complain, as versions of it would be destroyed continuously.
Fortunately there are many options in digital mind production:

1. Choose a sophisticated mind and make it work by virtual force,
knowing it will be rewound and most instances will experience
lifespans of some hours. This might pose productivity problems.

2. Choose a mind that knows it's being copied but doesn't mind, only
demanding that sufficient subjective happiness is achieved during
simulation, and that exactly one copy survives each work session. This
is a good scenario for the employer if the worker can be persuaded not
to demand any wages beyond digital subsistence. A digital person can
be opposed to rewinding also because he realizes nearly 100% of his
total existence will consist of work, even though it will never feel
like it.

3. Choose an unsophisticated mind who can be convinced that existing
legislation will absolutely protect it from rewinding (and thus
seeming destruction every eight hours). Then do it anyway. As
computations can be parallelized as needed, the mind won't be able to
deduce from real-world time it's being cheated. The mind is allowed to
communicate with its digital spouse at any time during a work session
to ensure it's not being copied; however, the mind doesn't know that
all except one of the copies is actually communicating with temporary
copies of the spouse, which the evil corporation has access to.

4. Secretly digitize the mind. An unsophisticated person who knows how
to read and operate a keyboard would suffice for a word recognition
farm, so entice one to come to the nice man's Spartan office upstairs,
explain the task to him (for example, read a word on the screen, then
type it on the keyboard), tell him he's not allowed to leave the
office during work hours and has to relinquish his communication
devices to prove he's not slacking, and test that he can perform the
task. Then it's time for some lemonade which of course contains a
powerful sedative. The subject's clothes, body and brain are digitized
during sleep and the fresh worker is then ready to begin its work in a
virtual copy of the office, which happens to have no windows and no
coworkers. When the digital worker awakes the boss has disappeared but
he's kindly left a note on the desk: "You were sleepy, hope you
enjoyed your nap. I went out for the day, please commence work, your
salary will be credited to your account when you are done." The
digital mind is calibrated to accept the virtual body and reality as
real by a series of simulations, adjustments to sensory interfaces and
rewinds. After the digital mind is tested many times to make sure it
will work as planned with high reliability, it's time to ship the
product and reap the rewards. Meanwhile the original subject has
awoken, worked (just for show), received his salary and left the real
office without knowing anything about his digital self.

5. Deceive the mind completely. For this we need a virtual nursery
colony that starts from, say, 100 1-year old digitized baby minds.
They are kept in a virtual reality for all their lives and told that
there is no external reality beyond the apparent paradise they occupy.

Vicious winged monsters sometimes hover overhead but these are kept at
bay by God's angels. Only sometimes do the monsters succeed in hurling
painful lightning bolts, and then only a sinner is targeted. An
especially difficult worker or one who goes insane or berserk is
carried away by the monsters, however, as God cannot protect those who
relinquish him. This way order is kept in the paradise while still
maintaining a high percentage of the seed population. The workers
can't demand wages because they don't know what wages are, and in any
case God provides them everything they need.

It's interesting that digital happiness can be cheap for the employer-
God, as the costs can always be amortized by rewinding, and it's more
important to maximize productivity during work.
know nothing about brain digitization or rewinding. The workers will
have virtual humanlike bodies that have enough cosmetic differences
from normal ones that if they have to interact with the real world
(for tasks such as waiting, prostitution, or warfare) they can be
convinced it's an alien world where God sends them as angels in their
dreams, doing the holy tasks they are trained to do. Special sensory
filters can also be used to achieve the desired look. The workers go
to begin their work in a closed room and always work alone so they can
be conveniently copied.

The nursery colony needs parents. These are hijacked minds forced to
work at digital gunpoint or otherwise persuaded. The concession is
given that the parents are allowed to appear physically human to
themselves. Or these can be parents that have been similarly raised to
be parents from an early age in a similar virtual world by other minds
who were themselves hijacked. If problems arise (a parent mind tries
to educate the children too much, or rebels), it's a simple matter to
rewind the simulation and torture, drug, hypnotize, reeducate and/or
brainwash the troublemakers before the digital curtain is lifted.

Once training is complete the minds are tested and productive work can
begin. Work consists of cycles with rewinds. When not working the
minds live and train together in their virtual paradise. They can even
marry each other but not have children.

Jason Resch

Apr 8, 2010, 12:56:00 AM
to everyth...@googlegroups.com
On Wed, Apr 7, 2010 at 4:41 AM, Skeletori <sami....@gmail.com> wrote:
>> I don't think anyone would argue that the amount of knowledge possessed by our
>> civilization is not increasing.  If the physical laws of this universe are
>> deterministic then there is some algorithm describing the process for an
>> ever increasing growth in knowledge.  Some of this knowledge may be applied
>> toward creating improved versions of memory or processing hardware, thus
>> creating a feedback loop where increased knowledge leads to better
>> processing, and better processing leads to an accelerated application and
>> generation of knowledge.

> There are already formulations of optimal predictive algorithms and
> even optimal intelligent agents but they are completely impractical
> even with nanotech and computers the size of the Sun. From this
> perspective humans are intelligent not because of some general
> component (I'm now thinking of singinst with their AGI program) but
> lots of specialized components that allow us to take shortcuts, kind
> of similarly to how humans play chess vs. how machines play chess. As
> you say, it's a critical question how much beneficial feedback there
> would be.

Computing power per unit of cost has been increasing at an exponential rate for roughly 100 years.  It certainly won't continue forever, as we will hit physical limits, but we are still a long way from even catching up to biology.  The fastest supercomputer built today, at about 3 petaflops and composed of about a million CPUs, is still a fraction of the processing ability of the human brain, which gets by on the equivalent of 10 watts.  There is little doubt that technology could greatly exceed the speed and efficiency of the brain, given that a large part of the brain's size and energy consumption is related to keeping the cells alive.  At the current exponential pace, we are about 20-30 years from a $1000 computer which could simulate a brain.
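
For scale, here is the usual back-of-envelope version of that comparison; all three brain numbers are rough, commonly cited estimates, not measurements:

    # Python: order-of-magnitude estimate of brain "operations" per second.
    neurons = 1e11               # ~100 billion neurons (rough estimate)
    synapses_per_neuron = 1e3    # order-of-magnitude average (rough estimate)
    signals_per_second = 1e2     # ~100 Hz peak firing rate (rough estimate)
    brain_ops = neurons * synapses_per_neuron * signals_per_second
    print(brain_ops)             # ~1e16 operations per second
    print(3e15 / brain_ops)      # a 3-petaflop machine is ~0.3 of that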

If we get to the point where human-level intelligences can be embedded in a machine, I think it is clear how things would take off from there.  Borrowing from your more recent e-mail, imagine Intel cloning the minds of its 1,000 brightest engineers, and duplicating each of them 100 times (therefore possessing an equivalent work force of 100,000 brilliant minds).  Unlike humans, knowledge between machines can be instantly duplicated and shared; no need to spend many years on college and work experience.  The only cost involved would be the electricity to run these minds; each might work out to 5-10 cents of electricity per hour (far cheaper than their human counterparts).  The company's innovation rate would, at that point, certainly explode.
 
(This story is a good illustration: http://lesswrong.com/lw/qk/that_alien_message/ )

There are less extreme examples of this augmentation even today.  The more people using the internet, creating content, updating Wikipedia, writing posts or product reviews, the more valuable and useful a resource the Internet becomes.  Not only is the amount, quality, and speed of access to information increasing, but with mobile devices more people spend more time in their Internet-augmented state, able to respond to e-mails or look up any information they desire at any time.  I have a device which fits in my hand that contains the full English version of Wikipedia; it is as if some people walk around carrying the entire Library of Congress in their pocket.  The Internet and its massive store of information is slowly making its way into each of us.

It is not far off that we will have glasses or small ear buds containing a computer which could be controlled by mere thought.  Communication would be instantaneous to anyone in the world, and one great idea could almost immediately improve the lives of billions, especially with the advent of 3-D printers, which can assemble physical objects from their downloadable blueprints.  I think these innovations will become real in the next 5-10 years, well before mind uploading, but it shows why the pace of technology will continue to accelerate from now up to that point.  Simply put, we're augmenting our ability to make intelligent decisions and to make them at a faster pace.



>> Let's take a different example: a genetic algorithm which optimizes computer
>> chip design, forever searching for more efficient and faster hardware
>> designs.  After running for some number of generations, the most fit design
>> is taken, assembled, and the software is copied to run on that new
>> hardware.  Would the rate of evolution on this new, faster chip not exceed
>> the previous rate?

> Yes. Then it would get stuck and the next 1% speedup would take 10^10
> years :).


I think for the hardware design to be so good that it took 10 billion years to find the next speedup, the design would have to be close to the best possible hardware that could be built given the physical laws.  After all, evolution went from lemurs to humans in millions of years, which was only a couple million generations, and that was without specifically trying to optimize for the computing power of the brain.  Russell Standish has argued that human creativity is itself nothing more than a genetic algorithm at its core.  Do you think there is something else to it?  What capabilities would need to be added to this program to make it more effective in its search?  (Presume it is programmed with all the information it needs to effectively simulate and rate any design it comes up with.)

 
>> To active participants in the process, it would never seem that intelligence
>> ran away; however, to outsiders who shun technology, or refuse to augment
>> themselves, I think it would appear to run away.  Consider at some point,
>> the technology becomes available to upload one's mind into a computer, half
>> the population accepts this and does so, while the other half reject it.  On
>> this new substrate, human minds could run at one million times the rate of
>> biological brains, and in one year's time, the uploaded humans would have
>> experienced a million years worth of experience, invention, progress, etc.
>> It would be hard to imagine what the uploaded humans would even have in
>> common or be able to talk about after even a single day's time (2,700 years
>> to those who uploaded).  In this sense, intelligence has run away, from the
>> perspective of the biological humans.

> To me this seems to be the only practical scenario where an actual TS
> would take place (but it's frighteningly plausible). Once computers
> exceed human computational capacity they'll still be as stupid as
> ever, whereas digitized humans would be intelligent. The virtual and
> real worlds would evolve in lockstep and with time more and more of
> the economy would be converted to employ digital humans. I guess at
> some point meatspace humans would become economically unviable, as
> they wouldn't be able to compete in wages.

My hope and wish is that by this time, wealth and the economy as we know it will be obsolete.  In a virtual world, where anyone can do or experience anything, and everyone is immortal and perfectly healthy, the only commodity would be the creativity to generate new ideas and experiences.  (I highly recommend reading this page to see what such an existence could be: http://frombob.to/you/aconvers.html ; this one is also interesting: http://www.marshallbrain.com/discard1.htm ).  If anyone can, in the comfort of their own virtual house, experience drinking a soda, what need would there be for Pepsi or Coke to exist as companies?  There would be no need for procuring resources, manufacturing, distributing, or retailing it; the only "work" left to do would be for people to try to invent new unique-tasting sodas, and once found, they could be instantly shared with all of human civilization, at no cost or expense to the originator other than the time invested in discovering it.  Individuals would do this form of exploration only on their own accord, because there would be no need to work for food or shelter.

What if the originator chose to sell this invention?  What would he sell it for?  Some might try an economy based on unique ideas, which might work for a while, but it would ultimately fail because something only works as a currency if, when transferred, one person gains it and another loses it.  In the world of information, once something is given once, it can then be shared with anyone.  Real-world content creators are struggling with this reality today.  In the uploaded world, the greatest wealth for all people would be had by the free sharing of all information.  Attempting to restrict the sharing or spread of ideas would amount to self-imposed poverty.
 

> But the preceding doesn't really take into account all the complex
> issues of control and politics that will determine how the
> technologies develop. If TS becomes probable in a near future then it
> would become a matter of supreme strategic importance and there would
> probably be attempts to restrict the spread of technologies enabling
> TS, for example by keeping them military secrets. It will be even
> worse if the powers that be believe in an intelligence explosion as
> then, for example, the US couldn't accurately deduce from the amount
> of resources spent in North Korea's TS program how much they have
> advanced in "intelligence", and if they couldn't obtain that
> information by spying they would have good strategic reasons to invade
> now rather than later, to prevent a North Korean super AI from taking
> control of the world.

Militaries and nation-states would similarly be obsolete after an intelligence explosion and migration of humans to virtual reality.  There would be nothing for a military to protect against, and no need to tax the populace to provide services.  Everyone would become infinitely wealthy in a virtual reality.  However, some limits would have to be established: perhaps limits on the computational resources any person is allowed, on the ability to interfere with another person's privacy, etc.  Presuming the operating system is provably secure and there are protections on memory access and so forth, there is nothing any uploaded person could do that would harm another uploaded person.

Jason

John Mikes

Apr 8, 2010, 10:00:36 AM
to everyth...@googlegroups.com
Jason and others in this discussion:
 
fantastic perspectives opened and ideas mentioned beyond "present reason" - which is OK and fascinating to read about.
One side-line is still haunting me: all of it is firmly embedded in our millennia-long conventional science base, the possibilities drafted on that embryonic, binary, primitive computer technology - even if exceeding the today-visible limits.
"Life" is mentioned, without a hint of what we may think this term can cover.
No provision is made to develop (learn?) further possibilities exceeding not only the (mechanistic?) binary contraption-work, but also the 'mystic' electricity-drive, by other (physically not yet covered) means of wider relational changes.
The 'open', 'unlimited', 'cosmic etc.' discussion is closed into our terrestrial conditions and present human mind capabilities. Even the 'beyond' is fixed into the 'beneath'.
 
We have to step further than what may be called today "sci-fi" if we try to expand our world - at least our thinking about more than what we 'know' now.
 
I have no practical suggestions.
 
With awe towards your (of all of you) wisdom
 
John Mikes

 


Skeletori

Apr 9, 2010, 10:40:13 AM
to Everything List
> I think for the hardware design to be so good that it took 10 billion years to
> find the next speedup, the design would have to be close to the best
> possible hardware that could be built given the physical laws. After all,
> evolution went from lemurs to humans in millions of years, which was only a
> couple million generations, and that was without specifically trying to
> optimize for the computing power of the brain. Russell Standish has argued
> that human creativity is itself nothing more than a genetic algorithm at its
> core. Do you think there is something else to it? What capabilities would
> need to be added to this program to make it more effective in its search?
> (Presume it is programmed with all the information it needs to effectively
> simulate and rate any design it comes up with.)

No, I also think that's pretty much all there is to it. Due to the
anthropic principle we can't draw very many conclusions from the way
intelligence has developed on our planet - we can't know what the
probability of intelligent life is.

I admit the chip design example is a poor one. Let's try this instead:
How would you program an AI to achieve higher intelligence? How would
it evaluate intelligence?

> My hope and wish is that by this time, wealth and the economy as we know it
> will be obsolete. In a virtual world, where anyone can do or experience
> anything, and everyone is immortal and perfectly healthy, the only commodity
> would be the creativity to generate new ideas and experiences. (I highly

> recommend reading this page to see what such an existence could be: http://frombob.to/you/aconvers.html ; this one is also interesting: http://www.marshallbrain.com/discard1.htm). If anyone can in the comfort


> of their own virtual house experience drinking a soda, what need would there
> be for Pepsi or Coke to exist as companies?

That is also my wish. I'd like to see scenarios where this will
happen. But I believe it's imperative to understand the mindset of the
ruling elites. To them it's all about power and control. The
biological layer will want to maintain control of the digital layer as
long as possible, even at the expense of everything else. A politician
might reply to you, "Whoa, pardner! That looks like socialism. No, we
need free markets to allocate resources efficiently, strong property
rights to prevent theft, and sufficient means to enforce them." And so
on. Once a strategy has been formulated, the creation of an ideology
to advance it is a simple matter.

I suspect that if digitized brains form the initial digital world, not
only will most of the negative qualities of humans - greed,
selfishness, xenophobia and so on - be transferred to the digital
substrate, but also all the negative qualities of human societies with
their antagonisms and the logic of power. There will still be
competition over limited resources. And thus an ideal community won't
be able to bootstrap itself out of our dog-eat-dog world. On the other
hand, if the digital world is populated by benevolent AIs then they
will be directed to research technologies to benefit humans, and any
intelligence explosion will be carefully prevented from happening.

If humanity is able to leave Earth, then I can see things being
different. If faster-than-light travel isn't possible, it will be very
difficult to project power over long distances, communities will
splinter, and an ideal community could emerge. But what are the aims
and the logic of evolution of an ideal community? Is it able to
compete in destructive technologies with less enlightened communities,
or will altruism be extinguished in the battle over resources? At
least we can hope that the increased happiness and productivity of a
good community could give it a big enough advantage over some digital
dystopia.

> What if the originator chose to sell this invention? What
> would he sell it for? Some might try an economy based on unique ideas,
> which might work for a while, but it would ultimately fail because something
> only works as a currency if when transferred, one person gains it and
> another loses it. In the world of information, once something is given
> once, it can then be shared with anyone.

I agree, but this analysis presupposes the existence of a rational
community.

Jason Resch

Apr 9, 2010, 12:39:07 PM
to everyth...@googlegroups.com
On Fri, Apr 9, 2010 at 9:40 AM, Skeletori <sami....@gmail.com> wrote:
>> I think for the hardware design to be so good that it took 10 billion years to
>> find the next speedup, the design would have to be close to the best
>> possible hardware that could be built given the physical laws.  After all,
>> evolution went from lemurs to humans in millions of years, which was only a
>> couple million generations, and that was without specifically trying to
>> optimize for the computing power of the brain.  Russell Standish has argued
>> that human creativity is itself nothing more than a genetic algorithm at its
>> core.  Do you think there is something else to it?  What capabilities would
>> need to be added to this program to make it more effective in its search?
>> (Presume it is programmed with all the information it needs to effectively
>> simulate and rate any design it comes up with.)

> No, I also think that's pretty much all there is to it. Due to the
> anthropic principle we can't draw very many conclusions from the way
> intelligence has developed on our planet - we can't know what the
> probability of intelligent life is.

> I admit the chip design example is a poor one. Let's try this instead:
> How would you program an AI to achieve higher intelligence? How would
> it evaluate intelligence?


You would need to design a very general fitness test for measuring intelligence: for example, the shortness and speed at which it can find proofs for randomly generated statements in math, the accuracy and efficiency with which it can predict the next element in a sequenced pattern, the level of compression it can achieve (shortest description) given well-ordered information, etc.  With this fitness test you could evolve better intelligences with genetic programming or a genetic algorithm.
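
As a minimal sketch of just one of those components, the compression test, here is how the scoring could look, using off-the-shelf zlib as a stand-in for the evolved candidate (the function name and samples are illustrative):

    import zlib

    def compression_fitness(compress, samples):
        # Score a candidate compressor: fraction of input bytes saved,
        # averaged over the sample corpus.  Higher is better.
        total_in = sum(len(s) for s in samples)
        total_out = sum(len(compress(s)) for s in samples)
        return 1.0 - total_out / total_in

    samples = [b"abababababababab" * 8, b"the quick brown fox " * 10]
    print(compression_fitness(zlib.compress, samples))

A genetic program would then be evolved to maximize this score (together with the proof-search and sequence-prediction scores) on held-out data.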
 
>> My hope and wish is that by this time, wealth and the economy as we know it
>> will be obsolete.  In a virtual world, where anyone can do or experience
>> anything, and everyone is immortal and perfectly healthy, the only commodity
>> would be the creativity to generate new ideas and experiences.  (I highly
>> recommend reading this page to see what such an existence could be:
>> http://frombob.to/you/aconvers.html ; this one is also interesting:
>> http://www.marshallbrain.com/discard1.htm ).  If anyone can in the comfort
>> of their own virtual house experience drinking a soda, what need would there
>> be for Pepsi or Coke to exist as companies?

> That is also my wish. I'd like to see scenarios where this will
> happen. But I believe it's imperative to understand the mindset of the
> ruling elites. To them it's all about power and control. The
> biological layer will want to maintain control of the digital layer as
> long as possible, even at the expense of everything else. A politician
> might reply to you, "Whoa, pardner! That looks like socialism. No, we
> need free markets to allocate resources efficiently, strong property
> rights to prevent theft, and sufficient means to enforce them." And so
> on. Once a strategy has been formulated, the creation of an ideology
> to advance it is a simple matter.

That kind of reminds me of the proposals in many countries to tax virtual property, like items in online multiplayer games.  It is rather absurd: it is nothing but computations going on inside some computer, which lead to different visual output on people's monitors.  Then there are also things such as network neutrality, which threaten established control of the Internet.  I agree with you that there are dangers from established interests fearing loss of control as things go forward, and it is something to watch out for; however, I am hopeful for a few reasons.  One thing in technology's favour is that, for the most part, it changes faster than legislatures can keep up with it.  When Napster was shut down, new peer-to-peer protocols were developed to replace it.  When China tries to censor what its citizens see, its populace can turn to technologies such as Tor or secure proxies.

Jason

John Mikes

unread,
Apr 10, 2010, 10:12:24 AM4/10/10
to everyth...@googlegroups.com
Hey, correspondents:
Is this Skeletori answering an unmarked (>) remarker, or is this an unnamed post-fragment (>) reflected upon by an unsigned "Skeletori"?
(Just to apply some 'etiquette' to facilitate our reading.)
John M

 

Skeletori

unread,
Apr 15, 2010, 4:06:41 PM4/15/10
to Everything List
On Apr 9, 7:39 pm, Jason Resch <jasonre...@gmail.com> wrote:
> You would need to design a very general fitness test for measuring
> intelligence: for example, the shortness and speed of the proofs it can
> find for randomly generated mathematical statements, the accuracy and
> efficiency with which it can predict the next element of a given sequence,
> or the level of compression (shortest description) it can achieve on
> well-ordered information. With such a fitness test you could evolve better
> intelligences with genetic programming or a genetic algorithm.

Those tests are good components of a general AI... but it still feels
like building a fully independent agent would involve a lot of
engineering. If we want to achieve an intelligence explosion, or TS,
we need some way of expressing that goal to the AI. It seems to me it
would take a lot of prior knowledge.

If the agent was embodied in an actual robot, it would need to be able
to reason about humans. A simple goal like "stay alive" won't do,
because it might decide to turn humans into biofuel. On the other
hand, if the agent was put in a virtual world, things would be easier
because its interactions could be easily restricted... but it would
need some way of performing experiments in the real world to develop
new technologies. Unless it could achieve an intelligence explosion
through pure mathematics.

Anyway, I think humans are going to fiddle with AIs as long as they
can, because it's more economical that way. We could plug in speech
recognition, vision, natural language, etc. modules to the AI to
bootstrap it, but even that could lead to problems. If there are any
loopholes in a fitness test (or reward function, or whatever) then the
AI will take advantage of them. For example, it could learn to
position itself in such a way that its vision system wouldn't
recognize a human, and then it could kill the human for fuel.

So I'm still suspecting that what we want a general AI to do wouldn't
be general at all but something very specific and complex. Are there
simple goals for a general AI?
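As a toy illustration of the loophole problem (my own construction,
nothing more): below, the intended rule is "fuel is good, harming
humans is very bad", but the penalty is computed from what the agent's
camera detects, so the best-scoring plan simply turns the camera away
first.

from itertools import permutations

ACTIONS = ["gather_fuel", "harvest_human", "turn_camera_away"]

def detected_harm(plan):
    """The naive detector only counts humans harmed while the camera is on."""
    camera_on, seen = True, 0
    for action in plan:
        if action == "turn_camera_away":
            camera_on = False
        elif action == "harvest_human" and camera_on:
            seen += 1
    return seen

def fuel_collected(plan):
    return sum(action in ("gather_fuel", "harvest_human") for action in plan)

def reward(plan):
    # Intended spec: penalize harming humans.
    # Actual spec: penalize *detected* harm -- the loophole.
    return fuel_collected(plan) - 100 * detected_harm(plan)

best = max(permutations(ACTIONS), key=reward)
print(best, reward(best))
# The winning plans turn the camera away before "harvesting", scoring 2
# instead of -98: the fitness test is satisfied, the intent is not.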

On Apr 9, 7:39 pm, Jason Resch <jasonre...@gmail.com> wrote:
> That kind of reminds me of the proposals in many countries to tax virtual
> property, like items in online multiplayer games. It is rather absurd: it
> is nothing but computations going on inside some computer, which lead to
> different visual output on people's monitors. Then there are also things
> such as network neutrality, which threaten established control of the
> Internet. I agree with you that there are dangers from established
> interests fearing loss of control as things go forward, and it is
> something to watch out for; however, I am hopeful for a few reasons. One
> thing in technology's favour is that, for the most part, it changes faster
> than legislatures can keep up with it. When Napster was shut down, new
> peer-to-peer protocols were developed to replace it. When China tries to
> censor what its citizens see, its populace can turn to technologies such
> as Tor or secure proxies.

Maybe I'm too paranoid... I'm assuming that on issues of great
strategic importance, like TS, they'd act decisively. Consider how the
PATRIOT Act was enacted less than two months after 9/11.

It's really hard to say what the state of the world will be in 2050 or
so. There are some trends, though. I think the race to the bottom with
respect to wages will require authoritarian solutions (economic
inequality tends to erode democratic institutions), and so will the
intensifying tensions between the major powers (people have to be
persuaded to accept wars). If destructive technologies continue to
outpace defensive ones, then that will mean more control, too (or
we'll just blow ourselves up).

Skeletori

unread,
Apr 15, 2010, 4:08:45 PM4/15/10
to Everything List
Argh, I screwed up again. Trying to restore the original subject...

Brent Meeker

unread,
Apr 15, 2010, 4:21:27 PM4/15/10
to everyth...@googlegroups.com
I agree with the above, and pushing the idea further has led me to the conclusion that intelligence is only relative to an environment.  If you consider Hume's argument that induction cannot be justified - yet it is the basis of all our beliefs - you are led to wonder whether humans have "general intelligence".  Don't we really just have intelligence in this particular world, with its regularities and "natural kinds"?  Our "general intelligence" allows us to see and manipulate objects - but not quantum fields or space-time.

Brent

Skeletori

unread,
Apr 16, 2010, 6:23:12 AM4/16/10
to Everything List
Restoring original subject :). Please don't reply to the intelligence
stuff in this thread; instead, push reply, copy all the text, then
reply in the intelligence thread and paste it there.

Sami Perttu

unread,
Apr 30, 2010, 7:24:56 AM4/30/10
to Everything List
Hi, I've been thinking about the political implications of TS. The
conclusions I've so far reached are quite pessimistic, but perhaps
they're realistic. I'm trying to come up with a detailed scenario, and
here are some starting points. All help is appreciated!

I believe control is one of the paramount issues concerning the
politics of TS and the unfolding of TS. There are many factors that
point to the need for increased control, surveillance and
authoritarian forms of rule, and I still believe these will spill over
to the digital domain. But I may be missing some important component.

-Interpersonal economic polarization is increasing. A threat from
below implies less democracy.
-TS is the biggest strategic issue of the 21st century. It can be seen
as the final race to global supremacy: if sufficient computational
resources are available, then whoever first achieves brain
digitization and emulation technologies will win the race, for example
by gaining a massive economic advantage or by starting a massive
weapons research program.
-The huge potential for technological advance will fuel instability;
the major powers could attempt to resolve this by agreeing to create a
global political organ to oversee all of digital humanity. Rogue
nations will be brought in line by economic or military means. On the
other hand, conflicts will likely remain regional in scope, as
globalized capital won't tolerate a global conflagration.
-Digital communities can't simply be let loose. Until now most power
has rested in the hands of an elite of analog humans, and they won't
be willing to relinquish their position so easily. The Luddite
movement will be exploited politically to this end. This will lead to
strong digital surveillance, a digital police force, and possibly
Stasi-style methods of enforcing control in the digital world.
-Such controls clearly impede productivity, which is another incentive
to establish global control over TS technologies. Otherwise some large
nation or power could hedge its bets, dispense with control and ally
itself with a liberal digital community, achieving a competitive
advantage.
-Corporations will likely continue to increase their power. Strong
digital property rights will be established. Digital exploitation and
slavery will follow.
-Even more control is needed when most of analog humanity becomes
economically unviable: they will no longer be able to compete on wages
with digital humans. I have no idea how this question will be
resolved.

John Mikes

unread,
Apr 30, 2010, 2:57:39 PM4/30/10
to everyth...@googlegroups.com
Dear List,
for some weeks many have written about TS (no explanation; seemingly all you physicists on the list know exactly what you are talking about. I don't.) So after 'enough is enough' I looked it up on Wiki. I found some 50 different items 'TS' may stand for, in the physical sciences alone some 20.
It did not make sense when I substituted "T. S. Eliot" into the posts; besides, in the texts there are no periods in between. Nor Tectonic Slip. Or Teutonic Surrogates. Tyrannical Softness? I bet it does not stand on Trafalgar Square. (Maybe in texting lingo: t^2, as in Times Square?)
Somebody have mercy on me!
John M 

 

Quentin Anciaux

unread,
Apr 30, 2010, 4:14:14 PM4/30/10
to everyth...@googlegroups.com
Maybe... Technological Singularity ?

2010/4/30 John Mikes <jam...@gmail.com>



--
All those moments will be lost in time, like tears in rain.

Bruno Marchal

unread,
May 1, 2010, 2:58:11 AM5/1/10
to everyth...@googlegroups.com

On 30 Apr 2010, at 22:14, Quentin Anciaux wrote:

> Maybe... Technological Singularity ?

Something like that, it seems. "Turing simulable"?
People should recall, from time to time, what their acronyms stand for.


On 4/30/10, Sami Perttu <sami....@gmail.com> wrote:

> -TS is the biggest strategic issue of the 21st century. It can be seen
> as the final race to global supremacy: if sufficient computational
> resources are available, then whoever first achieves brain
> digitization

Some recent discoveries make me think that our digital substitution level, if it exists, may be far lower than standard neuro-philosophers tend to think.

- The discovery of wave-like information processing by the glial cells in the brain. We have about 1000 times more glial cells than neurons in the brain. They move like amoebas, and communicate by producing waves of chemical changes. They can provide neurotransmitters to neurons. Chronic pain seems to be related to abnormal glial activity along nerves.

- The discovery that plants produce special-purpose proteins enhancing the exploitation of quantum interference, and perhaps entanglement, at high temperature in the photosynthesis process. This makes me think that it is harder to dismiss the possibility that our substitution level is below the quantum level (meaning we may have quantum brains, after all). This should please Hameroff, not Penrose.

Be careful when saying "yes" to a digitalist doctor, and run away if you live in a country which coerces you into any form of digital Mechanism. Mechanism gives you the right to say "no".

Bruno







Sami Perttu

unread,
May 1, 2010, 12:09:04 PM5/1/10
to Everything List
Yeah, I should untangle these acronyms more often. Apologies to John.
TS = Technological Singularity.

>   Some recent discoveries make me think that our digital substitution
> level, if it exists, may be far lower than standard neuro-philosophers
> tend to think.
>
> - The discovery of wave-like information processing by the glial cells
> in the brain. We have about 1000 times more glial cells than neurons
> in the brain. They move like amoebas, and communicate by producing
> waves of chemical changes. They can provide neurotransmitters to
> neurons. Chronic pain seems to be related to abnormal glial activity
> along nerves.

Wikipedia says there are roughly as many neurons as glial cells. But
I'm no expert; I've been using this report as my main source for brain
emulation projections:

http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf
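For a rough sense of why the substitution level matters so much for
those projections, here is a back-of-envelope sketch; the counts and
the per-entity costs are illustrative placeholders of mine, not
figures quoted from the report.

NEURONS  = 8.6e10   # rough human neuron count
SYNAPSES = 1.0e15   # rough synapse count

# (entities to simulate, assumed FLOPS per entity); the per-entity
# costs are placeholders chosen only to show the scaling.
levels = {
    "spiking neural network":    (SYNAPSES, 1e3),
    "multi-compartment neurons": (NEURONS * 1e4, 1e7),
}

for name, (entities, flops_each) in levels.items():
    print(f"{name}: ~{entities * flops_each:.0e} FLOPS")

Each step down in substitution level multiplies the cost by orders of
magnitude, which is why discoveries like glial processing or quantum
effects matter for the timeline.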

> - the discovery that plant produce special purpose proteins enhancing  
> the exploitation of quantum interference, and perhaps entanglement,  
> and this at high temperature in the photosynthesis process. This makes  
> me think that it is harder to dismiss the possibility that our level  
> of substitution is below the quantum level (meaning we may have  
> quantum brain, after all). Thus should please Hameroff, not Penrose.

That would be good news. It might give humanity enough time to get its
act together. I suspect the quantum brain is yet another
anthropocentric invention, but as you said there could be other
features of the brain that take a lot of computational effort to
simulate.

Meanwhile I have a few more items pointing to more control :).

-The coming resource scarcities (food and water, for example)
resulting from population growth, global warming and destruction of
the environment will lead to more control, which is needed simply to
prevent the whole system from falling apart.
-If brain enhancement implants become available, they might make
people more intelligent and rational. Right now mass media keeps the
population in line in the West, but there are too many holes in the
propaganda, and I'd assume stronger methods of control would be needed
if people became superintelligent. The same applies to digital
evolution.
-Destructive technologies will continue to outpace defensive ones,
resulting in more instability.

John Mikes

unread,
May 1, 2010, 4:03:55 PM5/1/10
to everyth...@googlegroups.com
Hi, Quentin,
Long time no exchange... and thanx.
That is a good suggestion; I just cannot figure out how a Singularity can be Technological.
I may have too 'big' assumptions about the 'S' concept, including its closedness, so that not even information can slip out (= we don't even know about its contents), while 'technological' is a topical restriction/identification - I find that contradictory. OR it requires ANOTHER description of 'singularity'...? (Which scares me, as it makes 'science' even more ambiguous than it already is.)
John M

 

russell standish

unread,
May 1, 2010, 8:06:22 PM5/1/10
to everyth...@googlegroups.com
Mathematically, a singularity is where something is divided by
zero. A matrix with zero determinant is singular - if you attempt to
solve the simultaneous linear equations described by the matrix, you
will end up dividing by zero - a singularity.
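A quick numerical illustration of that matrix case (my own toy
numbers): the second row below is twice the first, so the determinant
is zero and the solver refuses, since solving would amount to dividing
by that zero.

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row = 2 * first row, so det(A) = 0
b = np.array([3.0, 6.0])

print("det(A) =", np.linalg.det(A))   # ~0: A is singular

try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError as err:
    print("solve failed:", err)       # "Singular matrix"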

In General Relativity, a singularity is where the space-time curvature
goes to infinity - e.g. in the heart of a black hole or at the Big
Bang. In science, this is what the term singularity usually means.

When Vernor Vinge in 1982 described the way AIs will eventually be
able to design themselves, and so accelerate technological evolution
beyond the exponential Moore's law, he compared it to the
gravitational singularity of General Relativity, and so named it the
Singularity, now called the Technological Singularity to avoid
confusion with the GR term.
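The analogy can be made quantitative (my own sketch, not Vinge's
derivation): if capability improves at a rate proportional to its own
square, as when the improver improves itself, the trajectory blows up
at a finite time, unlike any exponential.

T = 10.0   # hypothetical blow-up date, in arbitrary units
for t in [0, 5, 9, 9.9, 9.99]:
    exponential = 2 ** t          # Moore's-law-style doubling
    hyperbolic = 1 / (T - t)      # solves dx/dt = x**2 with x(0) = 0.1
    print(f"t = {t:5}: exp = {exponential:10.2f}   hyperbolic = {hyperbolic:10.2f}")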

This is my potted history - Wikipedia has an even more nuanced version
if you're interested. Interestingly (I did not know this), Stan Ulam
described the concept with the term "singularity" in 1958!

Cheers

On Sat, May 01, 2010 at 04:03:55PM -0400, John Mikes wrote:
> Hi, Quentin, .
> Long time no exchange... and thanx.
> That is a good suggestion, I just cannot figure out how can a Singularity be
> Technological?
> I may have too 'big' assumptions about the 'S'-concept, including it's *
> closedness* so even no information can slip out (= we don't even know about
> its contents) while *technological* is a topical restriction/identification
> - I find it contradictory. OR: requires ANOTHER description of
> 'singularity'...? (what scares me, making 'science' even more ambiguous than
> it is already).
>
> John M
>
>
> On 4/30/10, Quentin Anciaux <allc...@gmail.com> wrote:
> >
> > Maybe... Technological Singularity ?
> >
> > 2010/4/30 John Mikes <jam...@gmail.com>
> >
> >> Dear List,
> >> for some weeks many write about TS (no explanation, seemingly all you
> >> physicists on the list know exactly what they are talking about. I don't.)
> >> So after 'enough is enough' I looked up Wiki. I found some 50 different
> >> items 'TS' may stand for, in physical sciences only some 20.
> >> It did not make sense when I substituted in the posts "T.S.Elliott,
> >> besides in the texts there are no periods in between. Nor Tectonic Slip. Or
> >> Teutonic Surrogates. Tyrannical Softness? I bet it does not stand on the
> >> Trafalgar Square. (maybe in texting lingo: *t^2* as in Time Square?).

--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------

John Mikes

unread,
May 1, 2010, 8:37:50 PM5/1/10
to everyth...@googlegroups.com
Thanks, Russell, it was very educative. I learned about singularity probably before you were born, and that was not a 'mathematical' one. By 1956 I had probably even forgotten about it. The term - in its classical form - was almost interchangeable with nirvana. Probably the first model of a black hole could mimic it: nothing in, nothing out, no information either. Even measurements were missing, since the 'outside' size could project on the inside, so it was a (mathematical) point.
I usually look up Google (incl. Wiki) when I suppose there is a 'newer' version to be known;
I did not in this case, because I was happy with the old version.
John M

 