Superintelligence cannot be contained: Lessons from Computability Theory


RHC

Jan 12, 2021, 4:58:10 PM
to Metaphysical Speculations
PDF file here (Google Drive).


"Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible."

Santeri Satama

Jan 12, 2021, 5:59:15 PM
to Metaphysical Speculations
Superintelligence is not a hypothetical during many psychedelic experiences.

As for machine intelligence, the quote first assumes that hypercomputation (of any sort) is impossible in principle, but that's a non-starter, because impossibility is pretty hard to prove. The rest of the quote is basically a weirdly convoluted restatement of the halting problem... which the article then goes on to discuss, I see. The main lesson is that, because of the halting problem, a Turing-machine-based AI can't contain us, except the zombies who believe and act as if they were just Turing machine programs.
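For reference, the halting problem invoked here is the classic diagonalization result: no program can decide, for every program, whether that program halts. A minimal sketch in Python (the function names `halts`, `make_diagonal` etc. are illustrative, not from the paper): given any claimed halting decider, we can build a program that does the opposite of whatever the decider predicts about it, so the decider must be wrong somewhere.

```python
# Sketch of the diagonalization argument against a universal halting decider.
# `halts` is a hypothetical function claimed to answer "does prog halt?".

def make_diagonal(halts):
    """Given a claimed halting decider, build a program that defeats it."""
    def diagonal():
        # If the decider says diagonal halts, loop forever;
        # if it says diagonal loops, halt immediately.
        # Either way, the decider is wrong about diagonal.
        if halts(diagonal):
            while True:
                pass
        return None
    return diagonal

# Two trivially wrong deciders, for illustration:
def always_halts(prog):
    return True   # claims every program halts

def never_halts(prog):
    return False  # claims no program halts

d1 = make_diagonal(always_halts)   # decider says d1 halts, so d1 would loop forever
d2 = make_diagonal(never_halts)    # decider says d2 loops, so d2 halts immediately
```

No matter how sophisticated a real `halts` is, `make_diagonal(halts)` yields a program it misclassifies. The paper's containment argument piggybacks on this: a "harm-checking" decider for arbitrary superintelligent programs would be at least as strong as a halting decider, hence cannot exist.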

jim.c...@gmail.com

Jan 13, 2021, 8:02:04 AM
to Metaphysical Speculations
From the abstract:

"Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible."

That is a BIG assumption that such an AI could exist. Doesn't it imply that the superintelligence could itself exceed the capabilities of computation?

BTW, isn't this a downloadable file?