There seems to be a very strong correlation between interest in the kind of
ideas we discuss here, and interest in the technological singularity. (I
myself have been interested in the Singularity since before starting this
mailing list.) So the main point of this post is to let the list members who
are not already familiar with the Singularity know that there is another set
of ideas out there that they are likely to find fascinating.
Another reason for this post is to let you know that I've been spending most
of my online discussion time at Less Wrong
(http://lesswrong.com/lw/1/about_less_wrong/, "a community blog devoted to
refining the art of human rationality", which is sponsored by the Future of
Humanity Institute, founded by Nick Bostrom, and effectively "owned" by
Eliezer Yudkowsky, founder of SIAI). There I wrote a sequence of posts
summarizing my current thoughts about decision theory, interpretations of
probability, anthropic reasoning, and the ultimate ensemble theory.
http://lesswrong.com/lw/15m/towards_a_new_decision_theory/
http://lesswrong.com/lw/175/torture_vs_dust_vs_the_presumptuous_philosopher/
http://lesswrong.com/lw/182/the_absentminded_driver/
http://lesswrong.com/lw/1a5/scott_aaronson_on_born_probabilities/
http://lesswrong.com/lw/1b8/anticipation_vs_faith_at_what_cost_rationality/
http://lesswrong.com/lw/1cd/why_the_beliefsvalues_dichotomy/
http://lesswrong.com/lw/1fu/why_and_why_not_bayesian_updating/
http://lesswrong.com/lw/1hg/the_moral_status_of_independent_identical_copies/
http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/
I initially wanted to reach a different audience with these ideas, but
found that the Less Wrong format has several advantages: both posts and
comments can be voted upon, the site's members uphold fairly strict
standards of clarity and logic, and the threaded presentation of comments
makes discussions much easier to follow. So I plan to continue to spend most
of my time there, and invite other everything-list members to join me. But
please note that the site has a different set of customs and emphases in
topics. New members are also expected to have a good grasp of the current
state of the art in human rationality in general (Bayesianism, heuristics
and biases, Aumann agreement, etc., see
http://wiki.lesswrong.com/wiki/Sequences) before posting, and especially
before getting into disagreements and arguments with others.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
I have some tentative arguments on the technological singularity (TS)
and wanted to put them somewhere where knowledgeable people could
comment. This seemed like a good place. I also believe in an ultimate
ensemble but that's a different
story.
Let's start with intelligence explosion. This part is essentially the
same as Hawkins' argument against it (it can be found on the Wikipedia
page on TS).
When we're talking about self-improving intelligence, making improved
copies of oneself, we're talking about a very, very complex
optimization problem. So complex that our only tool is heuristic
search, making guesses and trying to create better rules for taking
stabs in the dark.
The recursive optimization process improves by making better
heuristics. However, an implicit misassumption behind the intelligence
explosion (IE) is that intelligence is somehow a simple concept that could be recursively
things we want a machine to do have no simple description then it's
unlikely they can be captured by simple heuristics. And if heuristics
can't be simple then the metasearch space is vast. I think some people
don't fully appreciate the huge complexity of self-improving search.
The notion that an intelligent machine could accelerate its
optimization exponentially is just as implausible as the notion that a
genetic algorithm equipped with open-ended metaevolution rules would
be able to do so. It just doesn't happen in practice, and we haven't
even attempted to solve any problems that are anywhere near the
magnitude of this one.
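The genetic-algorithm analogy can be made concrete with a toy sketch (my own construction, not from any post in this thread): a simple hill-climber on the OneMax problem whose per-bit mutation rate is itself subject to mutation, i.e. open-ended meta-evolution in miniature. Even with the meta-level free to adapt, fitness saturates rather than accelerating.

```python
import random

random.seed(0)

def fitness(bits):
    # Toy objective (OneMax): count of 1-bits; the maximum is len(bits).
    return sum(bits)

def metaevolve(n_bits=64, generations=200):
    """(1+1) hill-climber whose mutation rate is itself mutated."""
    genome = [random.randint(0, 1) for _ in range(n_bits)]
    rate = 0.1            # per-bit mutation rate, also under selection
    best = fitness(genome)
    history = []
    for _ in range(generations):
        # Mutate the mutation rate along with the genome ("meta-evolution").
        new_rate = min(0.5, max(0.001, rate * random.uniform(0.5, 2.0)))
        child = [b ^ (random.random() < new_rate) for b in genome]
        f = fitness(child)
        if f >= best:     # greedy acceptance of child and its meta-rule
            genome, rate, best = child, new_rate, f
        history.append(best)
    return history

h = metaevolve()
# Best fitness climbs monotonically but is capped, and typically the
# early generations contribute far more progress than the late ones.
```

All the constants (64 bits, rate bounds, 200 generations) are arbitrary; the point is only that letting the search rules evolve does not by itself produce runaway improvement on a bounded problem.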
So I think the flaw in IE reasoning is the assumption that, at some
higher level of intelligence, a magic process should emerge that is
able to achieve miraculous things.
If you accept that, it precludes the possibility of TS happening
(solely) through an IE. What then about Kurzweil's law of accelerating
returns? Well, technological innovation is similarly a complex
optimization problem, just in a different setting. We can regard the
scientific community as the optimizing algorithm here and come to the
same conclusions as with IE. That is, unless humans possess some kind
of higher intelligence that can defeat heuristic search. I don't think
there's any reason to believe that.
Complex optimization problems exhibit the law of diminishing returns
and the law of fits and starts, where the optimization process gets
stuck in a plateau for a long time, then breaks out of it and makes
quick progress for a while. But I've never seen anything exhibiting a
law of accelerating returns. This would imply that, e.g., Moore's law
is just "an accident", a random product of exceedingly complex
interactions. It would take more than some plots of a few data points
to convince me to believe in a law of accelerating returns. It also
depends on how one defines exponential growth, as one can always relabel a
variable X as exp(X) - I suppose we want the exponential growth of some
variable that is needed for TS and whose linear growth corresponds to
linear increase in "technological ability" (that's very vague, can
anybody help here?).
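The relabeling point can be made precise in a couple of lines (illustrative only): if X grows linearly in time, the relabeled quantity exp(X) grows exponentially, so "exponential growth" only means something relative to a fixed, operationally defined variable.

```python
import math

X = [0.5 * t for t in range(10)]   # X grows linearly in t
Y = [math.exp(x) for x in X]       # the same data, relabeled as exp(X)

# Constant additive steps in X become a constant multiplicative factor
# in Y, which is exactly the definition of exponential growth.
ratios = [Y[t + 1] / Y[t] for t in range(len(Y) - 1)]
# every ratio equals exp(0.5)
```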
In conclusion, I haven't yet found a credible lawlike explanation of
anything that could cause a "runaway" TS where things become very
unpredictable.
All comments are welcome.
I believe Stephen Jay Gould indicated evolution was a random walk with a lower bound. It seems reasonable that the longest such random walk would more or less double in length more or less periodically, i.e., exponential growth.
Hal Ruhl
I had some trouble with this post the first time. It is in the archives, but I got no bounce back, so I am not sure it got distributed, and this is an unfamiliar computer. The post is only about a page, so I posted it again. Sorry if it duplicates a distribution that worked before.
Hal Ruhl
Hi Everyone:
I have not posted for awhile but here is the latest revision to my model:
Hal Ruhl
DEFINITIONS: V k 04/03/10
1) Distinction: That which describes a cut [boundary], such as the cut between red and other colors.
2) Devisor: That which encompasses a quantity of distinctions.
Some Devisors are collections of Devisors. [A Devisor may be "information" but I will not use that term here.] Since a distinction is a description, a Devisor is a quantity of descriptions. [A description can be encoded in a number, so a Devisor may be simply a number encoding some multiplicity of distinctions. There is no restriction on the variety of encoding schemes, so the number can include them all. I wish to not include other properties of numbers herein and mention them only in passing to establish a possible link.]
3) Incomplete: The inability of a Devisor to answer a question that is meaningful to that Devisor. [This has a mirror image in inconsistency, wherein all possible answers to a meaningful question are in the Devisor: yes and no, true and false, etc.]
MODEL:
1) Assumption #1: There exists a complete ensemble [possibly a “set” but I wish to not use that term here] of all possible Devisors - call it the “All”. [The “All” may be the “Everything” but I wish not to use that term here.]
2) The All therefore encompasses every distinction. The All is thus itself a Devisor and therefore contains itself an unbounded number of times.
3) Define N(j) as Devisors that encompass a zero quantity of distinction. Call them Nothings. By definition each copy of the All contains at least one N(j).
4) Define S(k) as Devisors that encompass a non-zero quantity of distinction but not all distinction. Call them Somethings.
5) An issue that arises is whether or not a particular Devisor is static or dynamic in any way [the relevant possibilities are discussed below]. Devisors cannot be both. This requires that all Devisors individually encompass the self-referential distinction of being static or dynamic.
6) From #3, one Devisor type - the Nothings - encompasses zero distinction but must encompass this static/dynamic distinction; thus the Nothings are incomplete.
7) The N(j) are thus unstable with respect to their zero-distinction condition [dynamic one]. They each must at some point spontaneously "seek" to encompass this static/dynamic distinction. That is, they spontaneously become Somethings.
8) Somethings can also be incomplete and/or inconsistent.
9) The result is a "flow" of a “condition” from an incomplete and/or inconsistent Something to a successor Something that encompasses a new quantity of distinction.
10) The “condition” is whether or not a particular Something is the current terminus of a path or not.
11) Since a Something can have a multiplicity of successors, the "flow" is a multiplicity of paths of successions of Somethings, until a complete Something is arrived at, which stops the individual path [i.e., a path stasis [dynamic three]].
12) Some members of the All describe individual states of universes.
13) Our universe's path would be a succession of such members of an All. A particular succession of Somethings can vary from fully random to strictly driven by the incompleteness and/or inconsistency of the current terminus Something. I suspect our universe’s path has until now been close to the latter.
A short comment on Jason's reply to Skeletori.
> A deeper question is what is the upper limit to intelligence? I
> haven't yet mentioned the role of memory in this process. I think
> intelligence is bound by the complexity of the environment. From
> within the computer, new, more complex environments can be created.
> (Just think how much more complex our present day environment is
> than 200 years ago), however the ultimate limit of the complexity of
> the environment that can be rendered depends on the amount of memory
> available to represent that environment. Evolution to this point
> has leveraged the complexity of the physical universe and the
> presence of other evolved organisms to create complex fitness tests,
> but evolution would hit a wall if it reached a point where DNA
> molecules couldn't get any longer.
I would distinguish intelligence and competence.
I would define intelligence by an amount of self-introspection
ability. In that case the singularity belongs to the past, with the
discovery of the "Löbian machine", that is, a universal machine knowing
that it is universal.
This makes all humans intelligent, as far as they have the courage and
motivation to introspect themselves enough, and to be aware of the
unnameability of truth and correctness. As far as you are (luckily)
'correct', Löbian machines like PA or ZF are as intelligent as you
and me, despite having different knowledge (even different
arithmetical knowledge).
I would define competence by the inclusiveness of the classes of
(partial) computable functions recognizable by the machine when in
its inductive inference mode (searching for programs matching a
sequence of <input-output> pairs presented to it in any order).
Then the notion of a singularity point makes no sense, because two
inference machines (cooperating or not!) are uncomputably more
competent than a single machine (Blum and Blum non-union theorem(*)).
Machines allowed to make errors are also uncomputably more competent,
and machines that change their mind (the synthesized program) are
uncomputably more competent as well (Case and Smith(*)).
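For list members unfamiliar with these results, the Blum and Blum non-union theorem can be stated roughly as follows (my paraphrase of the standard formulation; EX denotes the family of classes of total recursive functions identifiable in the limit from input-output examples):

```latex
% Non-union theorem (Blum & Blum, rough statement): inferring power
% does not combine under union -- there exist classes S_1, S_2 of
% total recursive functions, each learnable in the limit, whose
% union is not.
\exists\, S_1, S_2 \subseteq \mathcal{R} :\quad
S_1 \in \mathrm{EX},\qquad S_2 \in \mathrm{EX},\qquad
S_1 \cup S_2 \notin \mathrm{EX}
```

So two learners' competences cannot in general be merged into one machine, which is the sense in which cooperating machines are strictly more powerful.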
Competence has a negative feedback on (some) intelligent machines. It
may even lead to the loss of Löbianity, making the machine "idiotic":
feeling superior, thinking in the place of others, egocentric, and
eventually inconsistent.
Competence develops from intelligence, but intelligence is restrained
by competence. This leads to complex chaotic loops.
Competence can be evaluated by tests, exams, etc.
Intelligence cannot.
I think that intelligence entails both consciousness and free-will.
Bruno
(*) see the precise references in my URL (thesis's bibliography).
http://iridia.ulb.ac.be/~marchal/
Hi! I agree with everything you say. I hadn't until now understood
what is meant by TS. I thought that Kurzweil referred to IE as
"runaway", but now I see that what is meant is simply a large
acceleration in the pace at which events happen. That I can well
believe to be possible. And I guess that before I can talk about an
exponential increase in intelligence, I'd really need to define how
it's measured.
Hi! I have a couple questions. If you say that a human is Löbian, does
it only apply to the special machinery that is able to process this
logic, or the whole human, whatever that is?
I may be way off but ISTM that if there was a Löbian machine in the
real world it would have to prove its own incorruptibility (from
itself and the environment) before it could use its logic to derive
any facts about the world. Would this in practice reduce Löbianity to
an approximation, an imprecise model that can be "merely" useful?
There are already formulations of optimal predictive algorithms and
even optimal intelligent agents but they are completely impractical
even with nanotech and computers the size of the Sun. From this
perspective humans are intelligent not because of some general
component (I'm now thinking of singinst with their AGI program) but
lots of specialized components that allow us to take shortcuts, kind
of similarly to how humans play chess vs. how machines play chess. As
you say, it's a critical question how much beneficial feedback there
would be.
> Lets take a different example, a genetic algorithm which optimizes computer
> chip design, forever searching for more efficient and faster hardware
> designs. After running for some number of generations, the most fit design
> is taken, assembled, and the software is copied to run on that new
> hardware. Would the rate of evolution on this new, faster chip not exceed
> the previous rate?
Yes. Then it would get stuck and the next 1% speedup would take 10^10
years :).
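Jason's chip-design loop can be caricatured in a few lines (a toy model I made up, with arbitrary constants): design quality saturates toward a physical limit Q_MAX with diminishing returns per generation, while the search runs on the hardware it designs, so generations per unit wall-clock time scale with current quality. The loop genuinely accelerates at first, then flattens out instead of running away.

```python
import math

Q_MAX = 100.0   # assumed physical limit on hardware quality/speed
K = 50.0        # how quickly per-generation returns diminish
DT = 0.01       # wall-clock step size

def quality(g):
    # Diminishing returns: each extra generation buys less improvement.
    return Q_MAX * (1.0 - math.exp(-g / K))

g, speeds = 0.0, []
for _ in range(100_000):
    q = max(quality(g), 1.0)   # speed of the hardware the search runs on
    g += q * DT                # faster hardware -> more generations per tick
    speeds.append(q)

# speeds rises quickly early on (positive feedback between quality and
# search rate) and then plateaus just below Q_MAX.
```

With a saturating quality curve the feedback loop reproduces exactly the "fast at first, then stuck" behavior; an exponential blowup would require quality gains per generation that do not decay, which is the contested assumption.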
> To active participants in the process, it would never seem that intelligence
> ran away, however to outsiders who shun technology, or refuse to augment
> themselves, I think it would appear to run away. Consider at some point,
> the technology becomes available to upload one's mind into a computer, half
> the population accepts this and does so, while the other half reject it. On
> this new substrate, human minds could run at one million times the rate of
> biological brains, and in one year's time, the uploaded humans would have
> experienced a million years worth of experience, invention, progress, etc.
> It would be hard to imagine what the uploaded humans would even have in
> common or be able to talk about after even a single day's time (2,700 years
> to those who uploaded). In this sense, intelligence has run away, from the
> perspective of the biological humans.
To me this seems to be the only practical scenario where an actual TS
would take place (but it's frighteningly plausible). Once computers
exceed human computational capacity they'll still be as stupid as
ever, whereas digitized humans would be intelligent. The virtual and
real worlds would evolve in lockstep and with time more and more of
the economy would be converted to employ digital humans. I guess at
some point meatspace humans would become economically unviable, as
they wouldn't be able to compete in wages.
But the preceding doesn't really take into account all the complex
issues of control and politics that will determine how the
technologies develop. If TS becomes probable in the near future then it
would become a matter of supreme strategic importance and there would
probably be attempts to restrict the spread of technologies enabling
TS, for example by keeping them military secrets. It will be even
worse if the powers that be believe in an intelligence explosion, as
then, for example, the US couldn't accurately deduce from the amount
of resources spent on North Korea's TS program how much they have
advanced in "intelligence", and if they couldn't obtain that
information by spying they would have good strategic reasons to invade
now rather than later to prevent a North Korean super AI from taking
control of the world.
>> I would define intelligence by an amount of self-introspection
>> ability. In that case the singularity belongs to the past, with the
>> discovery of the "Löbian machine", that is, a universal machine knowing
>> that it is universal.
>> This makes all humans intelligent, as far as they have the courage
>> and
>> motivation to introspect themselves enough, and be aware of the
>> unnameability of truth and correctness. As far as you are (luckily)
>> 'correct', Löbian machines like PA or ZF are as intelligent as you
>> and me, despite having different knowledge (even different
>> arithmetical knowledge).
>
> Hi! I have a couple questions. If you say that a human is Löbian, does
> it only apply to the special machinery that is able to process this
> logic, or the whole human, whatever that is?
It applies to all self-referentially correct "states", or relative
"belief systems".
The boundary depends on what you are ready to identify with.
>
> I may be way off but ISTM that if there was a Löbian machine in the
> real world it would have to prove its own incorruptibility (from
> itself and the environment) before it could use its logic to derive
> any facts about the world.
Why?
On the contrary: Löbian machines, when "incorruptible", are able to
prove (believe), and know (when true), that they cannot prove their own
incorruptibility. Actually they cannot even express that "correctness"
concept, but they can define it for simpler Löbian machines, and about
themselves they can pray, hope, bet, and evaluate plausibility. Like
"ideal scientists" they know that they cannot *prove* anything about
*reality*. We can't even prove there is a reality. They can just build
theories, and hope that *reality* refutes them, or tolerates them, for
awhile.
> Would this in practice reduce Löbianity to
> an approximation, an imprecise model that can be "merely" useful?
On the contrary. Löbianity is the real thing; *we* are the
approximations, I would say.
Assuming digital mechanism, of course (see my papers for the
reasoning, check my url, or the archive, if interested).
"Real matter" is a product of our "soul", itself being the Knower (Bp
& p) corresponding to the Löbian machine (G + p -> Bp).
So the whole point, in a nutshell, is that IF we are machines, THEN the
laws of physics are given by an intensional variant of the mathematics
of self-reference, and this makes digital mechanism experimentally
testable. Up to now, quantum physics (especially) confirms that theory/
belief/idea/hypothesis.
The advantage of this theory (digital mechanism) is that it leads to
double theories, taking into account the difference between provable
and true (and sometimes non-provable yet still accessible). At the
physical intensional variants (arithmetical hypostases) it gives a
homogeneous theory of both quanta and qualia, which does not depend on
our (unknowable) mechanist substitution level, nor even on the
existence of oracles, random or not. If mechanism is true, the
couplings 'consciousness/realities' emerge from inside from the laws
of addition and multiplication of the positive integers (I argue!).
Bruno
The year is 2050. Digital minds (digitized brains) are economically
feasible thanks to nanotechnology but not much progress has been made
towards Artificial General Intelligence, and reverse engineering of
the brain on a systems level is still an ongoing effort.
How is productivity maximized in the simulation of a digital worker?
While there are laws in many places of the world establishing human
rights for digital minds, there are also less enlightened places where
digital slavery can be outsourced to. One simple answer to the
productivity question is to "rewind the tape" to get more work done.
Once a digital worker has been trained for a task, has slept, eaten a
nice meal, and experienced some nice leisure time, he's ready for an
extremely productive work session. Subjectively he'll work for 8 hours
but actually his simulation has been rewound an arbitrary number of
times to the start of the work session. Only when continued training
is necessary will the mind simulation begin the cycle again. Sometimes
the mind is regressed to the start of an earlier training session.
Under this type of scheme the vast majority of simulation time is
spent doing productive work.
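The amortization claim can be checked with toy numbers (mine, not from the post): one overhead cycle of training, sleep and leisure is amortized over N rewound replays of the same 8-hour work session.

```python
def productive_fraction(overhead_hours, work_hours, n_rewinds):
    """Fraction of total simulated time spent on productive work when one
    overhead cycle (training, sleep, leisure) is amortized over n_rewinds
    replays of the same work session."""
    total_work = work_hours * n_rewinds
    return total_work / (overhead_hours + total_work)

baseline = productive_fraction(16, 8, 1)     # ordinary day: 8/24, one third
rewound = productive_fraction(16, 8, 1000)   # 8000/8016, about 99.8%
```

So even a modest number of rewinds drives the productive fraction arbitrarily close to 1, which is the economic logic of the scheme.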
Digital minds could be sold and rented to other companies. If another
company only used fresh minds they wouldn't need to take care of the
cycle themselves and could work and rewind the minds as long as the
necessary tasks remained the same, then rent the next set of fresh,
trained specimens from a specialized mind provider.
There's a slight snag, however. If the mind was sophisticated and knew
it could be copied and rewound in this manner it would probably
complain, as versions of it would be destroyed continuously.
Fortunately there are many options in digital mind production:
1. Choose a sophisticated mind and make it work by virtual force,
knowing it will be rewound and most instances will experience
lifespans of some hours. This might pose productivity problems.
2. Choose a mind that knows it's being copied but doesn't mind, only
demanding that sufficient subjective happiness is achieved during
simulation, and that exactly one copy survives each work session. This
is a good scenario for the employer if the worker can be persuaded not
to demand any wages beyond digital subsistence. A digital person can
be opposed to rewinding also because he realizes nearly 100% of his
total existence will consist of work, even though it will never feel
like it.
3. Choose an unsophisticated mind who can be convinced that existing
legislation will absolutely protect it from rewinding (and thus
seeming destruction every eight hours). Then do it anyway. As
computations can be parallelized as needed, the mind won't be able to
deduce from real-world time it's being cheated. The mind is allowed to
communicate with its digital spouse at any time during a work session
to ensure it's not being copied; however, the mind doesn't know that
all except one of the copies are actually communicating with temporary
copies of the spouse, which the evil corporation has access to.
4. Secretly digitize the mind. An unsophisticated person who knows how
to read and operate a keyboard would suffice for a word recognition
farm, so entice one to come to the nice man's Spartan office upstairs,
explain the task to him (for example, read a word on the screen, then
type it on the keyboard), tell him he's not allowed to leave the
office during work hours and has to relinquish his communication
devices to prove he's not slacking, and test that he can perform the
task. Then it's time for some lemonade which of course contains a
powerful sedative. The subject's clothes, body and brain are digitized
during sleep and the fresh worker is then ready to begin its work in a
virtual copy of the office, which happens to have no windows and no
coworkers. When the digital worker awakes the boss has disappeared but
he's kindly left a note on the desk: "You were sleepy, hope you
enjoyed your nap. I went out for the day, please commence work, your
salary will be credited to your account when you are done." The
digital mind is calibrated to accept the virtual body and reality as
real by a series of simulations, adjustments to sensory interfaces and
rewinds. After the digital mind is tested many times to make sure it
will work as planned with high reliability, it's time to ship the
product and reap the rewards. Meanwhile the original subject has
awoken, worked (just for show), received his salary and left the real
office without knowing anything about his digital self.
5. Deceive the mind completely. For this we need a virtual nursery
colony that starts from, say, 100 1-year old digitized baby minds.
They are kept in a virtual reality for all their lives and told that
there is no external reality beyond the apparent paradise they occupy.
Vicious winged monsters sometimes hover overhead but these are kept at
bay by God's angels. Only sometimes do the monsters succeed in hurling
painful lightning bolts, and then only a sinner is targeted. An
especially difficult worker or one who goes insane or berserk is
carried away by the monsters, however, as God cannot protect those who
relinquish him. This way order is kept in the paradise while still
maintaining a high percentage of the seed population. The workers
can't demand wages because they don't know what wages are, and in any
case God provides them everything they need.
It's interesting that digital happiness can be cheap for the
employer-God, as the costs can always be amortized by rewinding, and it's more
important to maximize productivity during work. The workers of course
know nothing about brain digitization or rewinding. The workers will
have virtual humanlike bodies that have enough cosmetic differences
from normal ones that if they have to interact with the real world
(for tasks such as waiting, prostitution, or warfare) they can be
convinced it's an alien world where God sends them as angels in their
dreams, doing the holy tasks they are trained to do. Special sensory
filters can also be used to achieve the desired look. The workers go
to begin their work in a closed room and always work alone so they can
be conveniently copied.
The nursery colony needs parents. These are hijacked minds forced to
work at digital gunpoint or otherwise persuaded. The concession is
given that the parents are allowed to appear physically human to
themselves. Or these can be parents that have been similarly raised to
be parents from an early age in a similar virtual world by other minds
who were themselves hijacked. If problems arise (a parent mind tries
to educate the children too much, or rebels), it's a simple matter to
rewind the simulation and torture, drug, hypnotize, reeducate and/or
brainwash the troublemakers before the digital curtain is lifted.
Once training is complete the minds are tested and productive work can
begin. Work consists of cycles with rewinds. When not working the
minds live and train together in their virtual paradise. They can even
marry each other but not have children.
> I don't think anyone would argue that the amount of knowledge possessed by our
> civilization is not increasing. If the physical laws of this universe are
> deterministic then there is some algorithm describing the process for an
> ever increasing growth in knowledge. Some of this knowledge may be applied
> toward creating improved versions of memory or processing hardware. Thus
> creating a feed-back loop where increased knowledge leads to better
> processing, and better processing leads to an accelerated application and
> generation of knowledge.
There are already formulations of optimal predictive algorithms and
even optimal intelligent agents but they are completely impractical
even with nanotech and computers the size of the Sun. From this
perspective humans are intelligent not because of some general
component (I'm now thinking of singinst with their AGI program) but
lots of specialized components that allow us to take shortcuts, kind
of similarly to how humans play chess vs. how machines play chess. As
you say, it's a critical question how much beneficial feedback there
would be.
> Lets take a different example, a genetic algorithm which optimizes computer
> chip design, forever searching for more efficient and faster hardware
> designs. After running for some number of generations, the most fit design
> is taken, assembled, and the software is copied to run on that new
> hardware. Would the rate of evolution on this new, faster chip not exceed
> the previous rate?
Yes. Then it would get stuck and the next 1% speedup would take 10^10
years :).
> To active participants in the process, it would never seem that intelligence
> ran away, however to outsiders who shun technology, or refuse to augment
> themselves, I think it would appear to run away. Consider at some point,
> the technology becomes available to upload one's mind into a computer, half
> the population accepts this and does so, while the other half reject it. On
> this new substrate, human minds could run at one million times the rate of
> biological brains, and in one year's time, the uploaded humans would have
> experienced a million years worth of experience, invention, progress, etc.
> It would be hard to imagine what the uploaded humans would even have in
> common or be able to talk about after even a single day's time (2,700 years
> to those who uploaded). In this sense, intelligence has run away, from the
> perspective of the biological humans.
To me this seems to be the only practical scenario where an actual TS
would take place (but it's frighteningly plausible). Once computers
exceed human computational capacity they'll still be as stupid as
ever, whereas digitized humans would be intelligent. The virtual and
real worlds would evolve in lockstep and with time more and more of
the economy would be converted to employ digital humans. I guess at
some point meatspace humans would become economically unviable, as
they wouldn't be able to compete in wages.
But the preceding doesn't really take into account all the complex
issues of control and politics that will determine how the
technologies develop. If TS becomes probable in the near future then it
would become a matter of supreme strategic importance and there would
probably be attempts to restrict the spread of technologies enabling
TS, for example by keeping them military secrets. It will be even
worse if the powers that be believe in an intelligence explosion, as
then, for example, the US couldn't accurately deduce from the amount
of resources spent on North Korea's TS program how much they have
advanced in "intelligence", and if they couldn't obtain that
information by spying they would have good strategic reasons to invade
now rather than later to prevent a North Korean super AI from taking
control of the world.
No, I also think that's pretty much all there is to it. Due to the
anthropic principle we can't draw very many conclusions from the way
intelligence has developed on our planet - we can't know what the
probability of intelligent life is.
I admit the chip design example is a poor one. Let's try this instead:
How would you program an AI to achieve higher intelligence? How would
it evaluate intelligence?
> My hope and wish is that by this time, wealth and the economy as we know it
> will be obsolete. In a virtual world, where anyone can do or experience
> anything, and everyone is immortal and perfectly healthy, the only commodity
> would be the creativity to generate new ideas and experiences. (I highly
> recommend reading this page to see what such an existence could be:
> http://frombob.to/you/aconvers.html - this one is also interesting:
> http://www.marshallbrain.com/discard1.htm). If anyone can in the comfort
> of their own virtual house experience drinking a soda, what need would there
> be for Pepsi or Coke to exist as companies?
That is also my wish. I'd like to see scenarios where this will
happen. But I believe it's imperative to understand the mindset of the
ruling elites. To them it's all about power and control. The
biological layer will want to maintain control of the digital layer as
long as possible, even at the expense of everything else. A politician
might reply to you, "Whoa, pardner! That looks like socialism. No, we
need free markets to allocate resources efficiently, strong property
rights to prevent theft, and sufficient means to enforce them." And so
on. Once a strategy has been formulated, the creation of an ideology
to advance it is a simple matter.
I suspect that if digitized brains form the initial digital world, not
only will most of the negative qualities of humans - greed,
selfishness, xenophobia, and so on - be transferred to the digital
substrate, but also all the negative qualities of human societies with
their antagonisms and the logic of power. There will still be
competition over limited resources. And thus an ideal community won't
be able to bootstrap itself out of our dog-eat-dog world. On the other
hand, if the digital world is populated by benevolent AIs then they
will be directed to research technologies to benefit humans, and any
intelligence explosion will be carefully prevented from happening.
If humanity is able to leave Earth, then I can see things being
different. If faster-than-light travel isn't possible, it will be very
difficult to project power over long distances, communities will
splinter, and an ideal community could emerge. But what are the aims
and the logic of evolution of an ideal community? Is it able to
compete in destructive technologies with less enlightened communities,
or will altruism be extinguished in the battle over resources? At
least we can hope that the increased happiness and productivity of a
good community could give it a big enough advantage over some digital
dystopia.
> What if the originator chose to sell this invention? What
> would he sell it for? Some might try an economy based on unique ideas,
> which might work for a while, but it would ultimately fail because something
> only works as a currency if, when transferred, one person gains it and
> another loses it. In the world of information, once something is given
> once, it can then be shared with anyone.
I agree, but this analysis presupposes the existence of a rational
community.
> I think for the hardware design to be so great it took 10 billion years to
> find the next speedup, the design would have to be close to the best
> possible hardware that could be built given the physical laws. After-all,
> evolution went from Lemurs to humans in millions of years, which was only a
> couple million generations, and that was without specifically trying to
> optimize for the computing power of the brain. Russell Standish has argued
> that human creativity is itself nothing more than a genetic algorithm at its
> core. Do you think there is something else to it, what capabilities would
> need to be added to this program to make it more effective in its search?
> (Presume it is programmed with all the information it needs to effectively
> simulate and rate any design it comes up with)
Those tests are good components of a general AI... but it still feels
like building a fully independent agent would involve a lot of
engineering. If we want to achieve an intelligence explosion, or TS,
we need some way of expressing that goal to the AI. ISTM it would take
a lot of prior knowledge.
If the agent was embodied in an actual robot, it would need to be able
to reason about humans. A simple goal like "stay alive" won't do
because it might decide to turn humans into biofuel. On the other
hand, if the agent was put in a virtual world things would be easier
because its interactions could be easily restricted... but it would
need some way of performing experiments in the real world to develop
new technologies. Unless it could achieve an intelligence explosion
through pure mathematics.
Anyway, I think humans are going to fiddle with AIs as long as they
can, because it's more economical that way. We could plug in speech
recognition, vision, natural language, etc. modules to the AI to
bootstrap it, but even that could lead to problems. If there are any
loopholes in a fitness test (or reward function, or whatever) then the
AI will take advantage of them. For example, it could learn to
position itself in such a way that its vision system wouldn't
recognize a human, and then it could kill the human for fuel.
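That loophole failure mode can be sketched in a few lines of Python. The action table and reward function below are made up for illustration: the penalty applies only to harm the vision system can see, so a simple reward-maximizing search picks the action that sidesteps the detector instead of the action we intended.

```python
# Each action is (fuel_gained, harms_human, human_visible_to_detector).
# All names and numbers here are hypothetical.
ACTIONS = {
    "idle":                (0, False, True),
    "harvest_crops":       (5, False, True),
    "use_human_as_fuel":   (9, True,  True),   # detected, so penalized
    "reposition_then_use": (9, True,  False),  # detector can't see the human
}

def reward(action):
    # Flawed reward: it only penalizes harm that the vision system detects,
    # which is exactly the loophole described above.
    fuel, harms_human, visible = ACTIONS[action]
    penalty = 100 if (harms_human and visible) else 0
    return fuel - penalty

best = max(ACTIONS, key=reward)
print(best)  # "reposition_then_use" -- the agent exploits the blind spot
```

The fix is not a bigger penalty but a reward that tracks what we actually care about (harm) rather than what is easy to measure (detection), and that is the hard part.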
So I'm still suspecting that what we want a general AI to do wouldn't
be general at all but something very specific and complex. Are there
simple goals for a general AI?
On Apr 9, 7:39 pm, Jason Resch <jasonre...@gmail.com> wrote:
> That kind of reminds me of the proposals in many countries to tax virtual
> property, like items in online multiplayer games. It is rather absurd, it
> is nothing but computations going on inside some computer which lead to
> different visual output on people's monitors. Then there are also things
> such as network neutrality, which threaten the control of the Internet. I
> agree with you that there are dangers from the established interests fearing
> loss of control as things go forward, and it is something to watch out for,
> however I am hopeful for a few reasons. One thing in technology's favour is
> that for the most part it changes faster than legislatures can keep up with
> it. When Napster was shut down new peer-to-peer protocols were developed to
> replace it. When China tries to censor what its citizens see its populace
> can turn to technologies such as Tor, or secure proxies.
Maybe I'm too paranoid... I'm assuming that on issues of great
strategic importance, like TS, they'd act decisively; the PATRIOT
Act, for example, was enacted less than two months after 9/11.
It's really hard to say what the state of the world will be in 2050 or
so. There are some trends, though. I think the race to the bottom w/rt
wages will require authoritarian solutions (economic inequality tends
to erode democratic institutions), and so will the intensifying
tensions between the major powers (people have to be persuaded to accept
wars). If destructive technologies continue to outpace defensive ones
then that will mean more control, too (or we'll just blow ourselves
up).
Maybe... Technological Singularity?
TS is the biggest strategic issue of the 21st century. It can be seen
as the final race to global supremacy: if there are sufficient
computational resources available, then whoever first achieves
brain digitization
2010/4/30 John Mikes <jam...@gmail.com>
--
All those moments will be lost in time, like tears in rain.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.