Artist and Picture by J.W. Dunne


Evgenii Rudnyi

Jul 11, 2019, 2:29:38 PM
to everyth...@googlegroups.com
"A certain artist, having escaped from the lunatic asylum in which,
rightly or wrongly, he had been confined, purchased the materials of his
craft and set to work to make a complete picture of the universe."

...

"The interpretation of this parable is sufficiently obvious. The artist
is trying to describe in his picture a creature equipped with all the
knowledge which he himself possesses, symbolizing that knowledge by the
picture which the pictured creature would draw. And it becomes
abundantly evident that the knowledge thus pictured must always be less
than the knowledge employed in making the picture. In other words,
the mind which any human science can describe can never be an adequate
representation of the mind which can make that science. And the process
of correcting that inadequacy must follow the serial steps of an
infinite regress."

https://scienceforartists.wordpress.com/2011/09/20/artist-and-picture-by-j-w-dunne/

Terren Suydam

Jul 11, 2019, 2:41:41 PM
to Everything List
Similarly, one can never completely understand one's own mind, for it would take a bigger mind than one has to do so. This, I believe, is the best argument against the runaway-intelligence scenarios in which sufficiently advanced AIs recursively improve their own code to achieve ever increasing advances in intelligence. 

Terren


Philip Thrift

Jul 11, 2019, 2:50:29 PM
to Everything List
In the wake of Kant.

cf. Rorty, Goff. 

@philipthrift

Brent Meeker

Jul 11, 2019, 2:56:09 PM
to everyth...@googlegroups.com
Advances in intelligence can just be gaining more factual knowledge,
knowing more mathematics, using faster algorithms, etc.  None of that is
barred by not being able to model oneself.

Brent

Terren Suydam

Jul 12, 2019, 12:28:44 AM
to Everything List
Sure, but that's not the "FOOM" scenario, in which an AI modifies its own source code, gets smarter, and with the increase in intelligence, is able to make yet more modifications to its own source code, and so on, until its intelligence far outstrips its previous capabilities before the recursive self-improvement began. It's hypothesized that such a process could take an astonishingly short amount of time, thus "FOOM". See https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.
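
As a purely illustrative sketch (not anyone's actual proposal; every name below is a hypothetical stand-in), the loop this scenario describes might look like the following in Python:

    # Toy model of recursive self-improvement. The FOOM claim is that each pass
    # through the loop raises capability enough for the next pass to find still
    # larger improvements, so capability diverges in a short wall-clock time.

    def capability(source_code: str) -> float:
        """Hypothetical stand-in for 'how capable is the agent this code describes'."""
        return float(len(set(source_code)))  # placeholder, not a real measure

    def propose_rewrite(source_code: str, current_capability: float) -> str:
        """Hypothetical stand-in: a more capable agent searches a larger space of rewrites."""
        return source_code + f"\n# rewrite found at capability {current_capability:.1f}"

    def foom(source_code: str, generations: int = 10) -> str:
        for _ in range(generations):
            now = capability(source_code)
            candidate = propose_rewrite(source_code, now)
            # Accept a rewrite only if the agent's own (necessarily incomplete)
            # self-model predicts an improvement; this is where the self-modelling
            # limitation discussed in this thread enters.
            if capability(candidate) > now:
                source_code = candidate
        return source_code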

My point was that the inherent limitation of a mind to understand itself completely makes the FOOM scenario less likely. An AI would be forced to model its own cognitive apparatus in a necessarily incomplete way. It might still be possible to improve itself using these incomplete models, but there would always be some uncertainty.

Another more minor objection is that the FOOM scenario also selects for AIs that become massively competent at self-improvement, but it's not clear whether this selected-for intelligence is merely a narrow competence, or translates generally to other domains of interest.



Philip Thrift

Jul 12, 2019, 3:50:43 AM
to Everything List


On self-modifying AI, see also


We model self-modification in AI by introducing “tiling” agents whose decision systems will approve the construction of highly similar agents, creating a repeating pattern (including similarity of the offspring’s goals). Constructing a formalism in the most straightforward way produces a Gödelian difficulty, the “Löbian obstacle”. By technical methods we demonstrate the possibility of avoiding this obstacle, but the underlying puzzles of rational coherence are thus only partially addressed. We extend the formalism to partially unknown deterministic environments, and show a very crude extension to probabilistic environments and expected utility; but the problem of finding a fundamental decision criterion for self-modifying probabilistic agents remains open.
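
The passage above appears to be the abstract of "Tiling Agents for Self-Modifying AI, and the Löbian Obstacle" (Yudkowsky and Herreshoff, MIRI). For readers unfamiliar with the obstacle it names: writing \Box P for "P is provable in the agent's own theory", Löb's theorem states

    \Box(\Box P \to P) \to \Box P

so a consistent agent cannot prove "whatever I prove is true" for arbitrary P; taking P to be a contradiction would turn such a proof into a proof of the agent's own consistency, which Gödel's second incompleteness theorem forbids. That is the barrier hit by a naive rule of the form "approve the successor only after proving that everything it proves is true".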

@philipthrift

Quentin Anciaux

Jul 12, 2019, 4:28:59 AM
to everyth...@googlegroups.com
Hi,

Isn't that how evolution works? Through iteration and random modification, new and better organisms come into existence.

Why couldn't an AI use iterated evolution to make better and better AIs?

Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have built a better, smarter version of ourselves? That AI would surely be able to build another one and, by iterating, a better one.

What's wrong with this?

Quentin



--
All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

Philip Thrift

Jul 12, 2019, 5:53:02 AM
to Everything List


AI researchers have been using genetic algorithms and artificial life to "evolve" AI programs since the 1970s.
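
As a rough, purely editorial sketch of what "evolving" programs means in this tradition (the genome encoding and the fitness function below are made-up placeholders, not any particular research system):

    import random

    # Minimal genetic-algorithm loop. Here a "program" is just a list of numbers
    # and the objective is an arbitrary placeholder; real genetic programming
    # evolves program trees or network weights, but the loop is the same:
    # evaluate, select, recombine, mutate, repeat.

    def placeholder_fitness(genome):
        return -sum((g - 0.5) ** 2 for g in genome)

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(fitness, pop_size=50, genome_len=8, generations=100):
        population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]  # selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    best = evolve(placeholder_fitness)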

@philipthrift

Quentin Anciaux

Jul 12, 2019, 6:28:48 AM
to everyth...@googlegroups.com
On Fri, Jul 12, 2019 at 11:53 AM, Philip Thrift <cloud...@gmail.com> wrote:


AI researchers have been using genetic algorithms and artificial life to "evolve" AI programs since the 1970s.

@philipthrift


I know, that's why I'm asking Terren about his position...
 

Bruno Marchal

Jul 12, 2019, 9:57:33 AM
to everyth...@googlegroups.com
On 12 Jul 2019, at 10:28, Quentin Anciaux <allc...@gmail.com> wrote:

Hi,

Isn't that how evolution works? Through iteration and random modification, new and better organisms come into existence.

Why couldn't an AI use iterated evolution to make better and better AIs?

Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have built a better, smarter version of ourselves? That AI would surely be able to build another one and, by iterating, a better one.

What's wrong with this?

That has been studied by logicians for a long time. We can make a machine that reflects on itself, syntactically and/or dynamically, and it will climb the transfinite ordinals, either limited to the constructive ones or not (making it impossible to prove the consistency of the machine above some ordinals).

An excellent book on this is Torkel Franzén's “Inexhaustibility”.
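
The "climbing" being described can be made precise as the standard transfinite progression of consistency extensions that Franzén's book examines; a sketch, in the usual notation:

    T_0 = \mathrm{PA}, \qquad T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad T_\lambda = \bigcup_{\alpha<\lambda} T_\alpha \ \text{ for limit ordinals } \lambda

Each theory in the sequence proves the consistency of every earlier one but, by Gödel's second incompleteness theorem, never its own; how far up the constructive ordinals such a progression can meaningfully be continued is exactly the "inexhaustibility" question.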

You can see my paper for the construction of digital machines or programs that reproduce themselves (like an amoeba), or that are capable of regeneration from any of their subprograms or parts (like Planaria), and you can make programs that dream themselves integrally on different inputs, even dovetailing on them.

What cannot be done is:

- a stopping program giving its entire trace/history at its running (substitution) level;
- a program capable of proving it has a model (equivalently: capable of proving its own consistency, at its "chatty" level = substitution level + classical logic);
- a program capable of defining its own semantics (although it can define a sequence of better approximations, including jumps).
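
For context, the second and third items correspond to two classical limitative theorems:

    \text{G\"odel II:}\quad \text{if } T \text{ is consistent, recursively axiomatizable, and extends PA, then } T \nvdash \mathrm{Con}(T)
    \text{Tarski:}\quad \text{truth for the language of arithmetic is not definable by any formula of that language}

The "sequence of better approximations" mentioned in the third item corresponds to the definable partial truth predicates (e.g. truth restricted to \Sigma_n sentences), each capturing more, but never all, of the semantics.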

There are many papers in the mathematical logic and theoretical computer science literature on this. 

Intelligence is there at the start. A machine can develop various forms of competence, but those have a negative feedback on intelligence.

I admit that my definition of Intelligence is very large (anything not stupid, and something is stupid if it believes in its own intelligence, or in its own stupidity). Consistency becomes a model of Intelligence, showing its … consistency.

All Protagorean virtues (which can be taught only by examples and metaphorical narratives) obey that theory, and it can be useful to avoid the "theological trap".

The more neurons we have, the more prone we are to lies.

Bruno



Bruno Marchal

Jul 12, 2019, 10:12:37 AM
to everyth...@googlegroups.com
On 12 Jul 2019, at 10:28, Quentin Anciaux <allc...@gmail.com> wrote:

Hi,

Isn't that how evolution works? Through iteration and random modification, new and better organisms come into existence.

Why couldn't an AI use iterated evolution to make better and better AIs?

Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have built a better, smarter version of ourselves? That AI would surely be able to build another one and, by iterating, a better one.

What's wrong with this?

Evolution works that way at some level, but globally, a higher "programming" meta-level most plausibly exists, and traces of it are already in Darwin: evolution itself tests different meta-strategies, notably sex, and of course the nervous system, especially once centralised, has given the language explosion and the means to deepen introspection (the communicable part being largely mathematical science or the theoretical sciences).

But that already happens in arithmetic, where the universal inductive numbers climb the transfinite ordinals, if not cardinals.

Many very simple things become Turing universal by pure iteration, but then the interesting things are the ones which become more and more autonomous relative to the starting basic iteration.

Bruno



Bruno Marchal

Jul 12, 2019, 10:29:18 AM
to everyth...@googlegroups.com
On 12 Jul 2019, at 06:28, Terren Suydam <terren...@gmail.com> wrote:

Sure, but that's not the "FOOM" scenario, in which an AI modifies its own source code, gets smarter, and with the increase in intelligence, is able to make yet more modifications to its own source code, and so on, until its intelligence far outstrips its previous capabilities before the recursive self-improvement began. It's hypothesized that such a process could take an astonishingly short amount of time, thus "FOOM". See https://wiki.lesswrong.com/wiki/AI_takeoff#Hard_takeoff for more.

My point was that the inherent limitation of a mind to understand itself completely makes the FOOM scenario less likely.


I would say that the appearance of sex, entailing the existence of brains, language, computers, learners, etc., seems to be a FOOM scenario. We are living it right now. In geological time, life is like an explosion, albeit a creative one.




An AI would be forced to model its own cognitive apparatus in a necessarily incomplete way.

That is the eternal motor. Each time the terrestrial G grasps some G* truth, it transforms itself, and is back to the same limitation, but with (infinitely) more power.

The Löbian universal machines are never completely satisfied, forever… The computationalist Löbian universal machine knows why.



It might still be possible to improve itself using these incomplete models, but there would always be some uncertainty. 

That is right, but it is always the same situation: confidence can grow on the simple, and modesty and caution can grow on the complex. Eventually the wise stay mute.




 

Another more minor objection is that the FOOM scenario also selects for AIs that become massively competent at self-improvement, but it's not clear whether this selected-for intelligence is merely a narrow competence, or translates generally to other domains of interest.

Universality allows all competences to grow, but it is a partial order with many incomparable degrees. Some paths can preserve the starting infinite intelligence of the machines, but many, if not most, paths can lead to growing stupidities and an attachment to lies, especially lies to oneself. But that regulates itself in practice. We can reduce the harm and learn modesty and caution; it is the price of keeping liberty/universality without completely abandoning security.

The oscillation between Security (totally computable, controllable) and Liberty (universality, thus only partially computable and only partially controllable) accompanies life, but not necessarily consciousness (an open problem).

Bruno





On Thu, Jul 11, 2019 at 2:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
Advances in intelligence can just be gaining more factual knowledge,
knowing more mathematics, using faster algorithms, etc.  None of that is
barred by not being able to model oneself.

Brent

On 7/11/2019 11:41 AM, Terren Suydam wrote:
> Similarly, one can never completely understand one's own mind, for it
> would take a bigger mind than one has to do so. This, I believe, is
> the best argument against the runaway-intelligence scenarios in which
> sufficiently advanced AIs recursively improve their own code to
> achieve ever increasing advances in intelligence.
>
> Terren



Terren Suydam

Jul 12, 2019, 12:59:20 PM
to Everything List
It's just a question of time. The FOOM scenario is motivated by safety concerns: that an AI's intelligence could surpass our ability to deal with it, leading to the Singularity. So it's not about whether those other paths are possible; it's about how long they would take and, in each of those cases, whether the AIs involved would be safe.

It's hard to know how long these different paths would take. In general, though, it's much easier to see FOOM happening by considering an AI analyzing its own cognitive apparatus and updating it directly, according to some theory or model of intelligence it has developed, than by either evolution or by starting from scratch. In the case of evolution, the AI would have to run a bunch of iterations, each of which would take some amount of time. How many iterations, how much time? I'm out of my depth on that. My hunch is that this would be a slow process, even with a lot of computational resources. Also, it bears pointing out that the evolution path is much less safe from the standpoint of being able to reason about whether the AIs created would value human life/flourishing.

In the case of an AI building its own new AI, that's actually the same basic scenario as an AI just modifying its own source code. In both cases it's instantiating a design based on its own theory of intelligence. Starting from scratch is slower, because with recursive self-improvement it's got a huge head start: it's starting from a model that is more or less proven to be intelligent already. But it's not hard to imagine that a recursively-improving AI would finally arrive at a point where it realized the only way to continue increasing intelligence would be to create a new design, one that would be impossible for human-level intelligence to ever grasp. From a safety point of view, both of these paths at least have the possibility of being able to reason about whether the AIs preserve a goal system in which human life/flourishing is valued.

Brent Meeker

Jul 12, 2019, 2:19:08 PM
to everyth...@googlegroups.com


On 7/12/2019 1:28 AM, Quentin Anciaux wrote:
Hi,

Isn't that how evolution works? Through iteration and random modification, new and better organisms come into existence.

Why couldn't an AI use iterated evolution to make better and better AIs?

Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have built a better, smarter version of ourselves? That AI would surely be able to build another one and, by iterating, a better one.

What's wrong with this?

It's not wrong, but in natural evolution "better" just means more surviving progeny. So what's "better" is essentially defined by the environment, i.e. natural selection. If an AI uses iterative evolution, what is the environment that will define "better"? It may not be what we think is better.
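
In code terms (reusing the toy GA sketched earlier in this thread, itself only an editorial illustration), "better" is whatever scoring function the loop is handed; nothing in the machinery prefers one notion over another:

    # Two hypothetical "environments" for the same evolutionary loop.
    # Identical machinery, entirely different meanings of "better".

    def fitness_stay_small(genome):
        return -sum(abs(g) for g in genome)

    def fitness_near_one(genome):
        return -max(abs(g - 1.0) for g in genome)

    # evolve(fitness_stay_small) and evolve(fitness_near_one) select for very
    # different genomes; which of them tracks "what we think is better" is
    # exactly the question being raised here.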

Brent

Bruno Marchal

Jul 13, 2019, 4:50:28 AM
to everyth...@googlegroups.com
On 12 Jul 2019, at 20:19, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 7/12/2019 1:28 AM, Quentin Anciaux wrote:
Hi,

Isn't that how evolution works? Through iteration and random modification, new and better organisms come into existence.

Why couldn't an AI use iterated evolution to make better and better AIs?

Also, if *we build* a real AGI, isn't it the same thing? Wouldn't we have built a better, smarter version of ourselves? That AI would surely be able to build another one and, by iterating, a better one.

What's wrong with this?

It's not wrong, but in natural evolution "better" just means more surviving progeny. 

I would say “better at surviving”, or just “surviving”. Human progeny is ridiculously low in number compared to bacteria.



So what's "better" is essentially defined by the environment, i.e. natural selection. If an AI uses iterative evolution, what is the environment that will define "better"? It may not be what we think is better.

It is only “better” in the sense of “surviving” instead of “disappearing”. Small creatures can survive thanks to big progeny numbers, even though most die quickly (but then feed others), or thanks to more effective care of the progeny, or something else.

With the histories in arithmetic, though, there is a sense in which the relative “progeny measure” plays an a posteriori role, which is needed to stabilise consciousness and avoid too many white rabbits.

Bruno





John Clark

Jul 28, 2019, 7:50:39 AM
to everyth...@googlegroups.com
On Fri, Jul 12, 2019 at 4:28 AM Quentin Anciaux <allc...@gmail.com> wrote:
 
>  All those moments will be lost in time, like tears in rain. (Roy Batty/Rutger Hauer)

I'm sorry to say that Rutger Hauer has just died; he gave one of the best performances in the movies.


John K Clark

spudb...@aol.com

Jul 28, 2019, 4:30:22 PM
to everyth...@googlegroups.com
I'd like to remember the actor for his other roles as well. BR was a snapshot not only of Horselover Fat's spooky vision, but of our own. I wondered how, in the film's 2019 setting, we achieved interstellar colonies in just 40 years, as well as completely human-looking robots with emotions? Picky, picky, picky.

