The real reasons we don’t have AGI yet


Richard Ruquist

Oct 8, 2012, 1:13:35 PM
to everyth...@googlegroups.com, Swi...@yahoogroups.com
The real reasons we don’t have AGI yet
A response to David Deutsch’s recent article on AGI
October 8, 2012 by Ben Goertzel


(Credit: iStockphoto)

As we noted in a recent post, physicist David Deutsch said the field
of “artificial general intelligence” or AGI has made “no progress
whatever during the entire six decades of its existence.” We asked Dr.
Ben Goertzel, who introduced the term AGI and founded the AGI
conference series, to respond. — Ed.

Like so many others, I’ve been extremely impressed and fascinated by
physicist David Deutsch’s work on quantum computation — a field that
he helped found and shape.

I also encountered Deutsch’s thinking once in a totally different
context — while researching approaches to home schooling my children,
I noticed his major role in the Taking Children Seriously movement,
which advocates radical unschooling, and generally rates all coercion
used against children as immoral.

In short, I have frequently admired Deutsch as a creative, gutsy,
rational and intriguing thinker. So when I saw he had written an
article entitled “Creative blocks: The very laws of physics imply that
artificial intelligence must be possible. What’s holding us up?,” I
was eager to read it and get his thoughts on my own main area of
specialty, artificial general intelligence.

Oops.

I was curious what Deutsch would have to say about AGI and quantum
computing. But he quickly dismisses Penrose and others who think human
intelligence relies on neural quantum computing, quantum gravity
computing, and what-not. Instead, his article begins with a long,
detailed review of the well-known early history of computing, and then
argues that the “long record of failure” of the AI field AGI-wise can
only be remedied via a breakthrough in epistemology following on from
the work of Karl Popper.

This bold, eccentric view of AGI is clearly presented in the article,
but is not really argued for. This is understandable since we’re
talking about a journalistic opinion piece here rather than a journal
article or a monograph. But it makes it difficult to respond to
Deutsch’s opinions other than by saying “Well, er, no” and then
pointing out the stronger arguments that exist in favor of alternative
perspectives more commonly held within the AGI research community.

I salute David Deutsch’s boldness, in writing and thinking about a
field where he obviously doesn’t have much practical grounding.
Sometimes the views of outsiders with very different backgrounds can
yield surprising insights. But I don’t think this is one of those
times. In fact, I think Deutsch’s perspective on AGI is badly
mistaken, and if widely adopted, would slow down progress toward AGI
dramatically.

The real reasons we don’t have AGI yet, I believe, have nothing to do
with Popperian philosophy, and everything to do with:

1. The weakness of current computer hardware (rapidly being remedied via
exponential technological growth!)

2. The relatively minimal funding allocated to AGI research (which, I
agree with Deutsch, should be distinguished from “narrow AI” research
on highly purpose-specific AI systems like IBM’s Jeopardy!-playing AI
or Google’s self-driving cars).

3. The integration bottleneck: the difficulty of integrating multiple
complex components together to make a complex dynamical software
system, in cases where the behavior of the integrated system depends
sensitively on every one of the components.

Assorted nitpicks, quibbles and major criticisms

I’ll begin here by pointing out some of the odd and/or erroneous
positions that Deutsch maintains in his article. After that, I’ll
briefly summarize my own alternative perspective on why we don’t have
human-level AGI yet, as alluded to in the above three bullet points.

Deutsch begins by bemoaning the AI field’s “long record of failure” at
creating AGI — without seriously considering the common
counterargument that this record of failure isn’t very surprising,
given the weakness of current computers relative to the human brain,
and the far greater weakness of the computers available to earlier AI
researchers. I actually agree with his statement that the AI field
has generally misunderstood the nature of general intelligence. But I
don’t think the rate of progress in the AI field, so far, is a very
good argument in favor of this statement. There are too many other
factors underlying this rate of progress, such as the nature of the
available hardware.

He also makes a rather strange statement regarding the recent
emergence of the AGI movement:

The field used to be called “AI” — artificial intelligence. But “AI”
was gradually appropriated to describe all sorts of unrelated computer
programs such as game players, search engines and chatbots, until the
G for ‘general’ was added to make it possible to refer to the real
thing again, but now with the implication that an AGI is just a
smarter species of chatbot.

As the one who introduced the term AGI and founded the AGI conference
series, I am perplexed by the reference to chatbots here. In a recent
paper in AAAI magazine, resulting from the 2009 AGI Roadmap Workshop,
a number of coauthors (including me) presented a host of different
scenarios, tasks, and tests for assessing humanlike AGI systems.

The paper is titled “Mapping the Landscape of Human-Level General
Intelligence,” and chatbots play a quite minor role in it. Deutsch is
referring to the classical Turing test for measuring human-level AI (a
test that involves fooling human judges into believing in a computer’s
humanity, in a chat-room context). But the contemporary AGI community,
like the mainstream AI community, tends to consider the Turing Test as
a poor guide for research.

But perhaps he considers the other practical tests presented in our
paper — like controlling a robot that attends and graduates from a
human college — as basically the same thing as a “chatbot.” I suspect
this might be the case, because he avers that

AGI cannot possibly be defined purely behaviourally. In the classic
‘brain in a vat’ thought experiment, the brain, when temporarily
disconnected from its input and output channels, is thinking, feeling,
creating explanations — it has all the cognitive attributes of an AGI.
So the relevant attributes of an AGI program do not consist only of
the relationships between its inputs and outputs.

The upshot is that, unlike any functionality that has ever been
programmed to date, this one can be achieved neither by a
specification nor a test of the outputs. What is needed is nothing
less than a breakthrough in philosophy. …

This is a variant of John Searle’s Chinese Room argument [video]. In
his classic 1980 paper “Minds, Brains and Programs,” Searle considered
the case of a person who knows only English, sitting alone in a room
following English instructions for manipulating strings of Chinese
characters. Does the person really understand Chinese?

To someone outside the room, it may appear so. But clearly, there is
no real “understanding” going on. Searle takes this as an argument
that intelligence cannot be defined using formal syntactic or
programmatic terms, and that conversely, a computer program (which he
views as “just following instructions”) cannot be said to be
intelligent in the same sense as people.

Deutsch’s argument is sort of the reverse of Searle’s. In Deutsch’s
brain-in-a-vat version, the intelligence is qualitatively there, even
though there are no intelligent behaviors to observe. In Searle’s
version, the intelligent behaviors can be observed, but there is no
intelligence qualitatively there.

Everyone in the AI field has heard the Chinese Room argument and its
variations many times before, and there is an endless literature on
the topic. In 1991, computer scientist Pat Hayes half-seriously
defined cognitive science as the ongoing research project of refuting
Searle’s argument.

Deutsch attempts to use his variant of the Chinese Room argument to
bolster his view that we can’t build an AGI without fully solving the
philosophical problem of the nature of mind. But this seems just as
problematic as Searle’s original argument. Searle tried to argue that
computer programs can’t be intelligent in the same sense as people;
Deutsch on the other hand, thinks computer programs can be intelligent
in the same sense as people, but that his Chinese room variant shows
we need new philosophy to tell us how to do so.

I classify this argument of Deutsch’s right up there with the idea
that nobody can paint a beautiful painting without fully solving the
philosophical problem of the nature of beauty. Somebody with no clear
theory of beauty could make a very beautiful painting — they just
couldn’t necessarily convince a skeptic that it was actually
beautiful. Similarly, a complete theory of general intelligence is not
necessary to create an AGI — though it might be necessary to convince
a skeptic with a non-pragmatic philosophy of mind that one’s AGI is
actually generally intelligent, rather than just “behaving generally
intelligent.”

Of course, to the extent we theoretically understand general
intelligence, the job of creating AGI is likely to be easier. But
exactly what mix of formal theory, experiment, and informal
qualitative understanding is going to guide the first successful
creation of AGI, nobody now knows.

What Deutsch leads up to with this call for philosophical inquiry is
even more perplexing:

Unfortunately, what we know about epistemology is contained largely in
the work of the philosopher Karl Popper and is almost universally
underrated and misunderstood (even — or perhaps especially — by
philosophers). For example, it is still taken for granted by almost
every authority that knowledge consists of justified, true beliefs and
that, therefore, an AGI’s thinking must include some process during
which it justifies some of its theories as true, or probable, while
rejecting others as false or improbable.

This assertion seems a bit strange to me. Indeed, AGI researchers tend
not to be terribly interested in Popperian epistemology. However, nor
do they tend to be tied to the Aristotelian notion of knowledge as
“justified true belief.” Actually, AGI researchers’ views of knowledge
and belief are all over the map. Many AGI researchers prefer to avoid
any explicit role for notions like theory, truth, or probability in
their AGI systems.

He follows this with a Popperian argument against the view of
intelligence as fundamentally about prediction, which seems to me not
to get at the heart of the matter. Deutsch asserts that “in reality,
only a tiny component of thinking is about prediction at all … the
truth is that knowledge consists of conjectured explanations.”

But of course, those who view intelligence in terms of prediction
would just counter-argue that the reason these conjectured
explanations are useful is because they enable a system to better make
predictions about what actions will let it achieve its goals in what
contexts. What’s missing is an explanation of why Deutsch sees a
contradiction between the “conjectured explanations” view of
intelligence and the “predictions” view. Or is it merely a difference
of emphasis?

In the end, Deutsch presents a view of AGI that comes very close to my
own, and to the standard view in the AGI community:

An AGI is qualitatively, not quantitatively, different from all other
computer programs. Without understanding that the functionality of an
AGI is qualitatively different from that of any other kind of computer
program, one is working in an entirely different field. If one works
towards programs whose “thinking” is constitutionally incapable of
violating predetermined constraints, one is trying to engineer away
the defining attribute of an intelligent being, of a person: namely,
creativity.

Yes. This is not a novel suggestion, it’s what basically everyone in
the AGI community thinks; but it’s a point worth emphasizing.

But where he differs from nearly all AGI researchers is that he thinks
what we need to create AGI is probably a single philosophical insight:

I can agree with the AGI-is-imminent camp: it is plausible that just a
single idea stands between us and the breakthrough. But it will have
to be one of the best ideas ever.

The real reasons why we don’t have AGI yet

Deutsch thinks the reason we don’t have human-level AGI yet is the
lack of an adequate philosophy of mind, one that would sufficiently,
definitively refute puzzles like the Chinese Room or his brain-in-a-vat
scenario, and that would lead us to a theoretical understanding of why
brains are intelligent and how to make programs that emulate the key
relevant properties of brains.

While I think that better, more fully-fleshed-out theories of mind
would be helpful, I don’t think he has correctly identified the core
reasons why we don’t have human-level AGI yet.

The main reason, I think, is simply that our hardware is far weaker
than the human brain. It may actually be possible to create
human-level AGI on current computer hardware, or even the hardware of
five or ten years ago. But the process of experimenting with various
proto-AGI approaches on current hardware is very slow, not just
because proto-AGI programs run slowly, but because current software
tools, engineered to handle the limitations of current hardware, are
complex to use.

With faster hardware, we could have much easier-to-use software tools,
and could explore AGI ideas much faster. Fortunately, this particular
drag on progress toward advanced AGI is rapidly diminishing as
computer hardware exponentially progresses.

Another reason is an AGI funding situation that’s slowly rising from
poor to sub-mediocre. Look at the amount of resources society puts
into, say, computer chip design, cancer research, or battery
development. AGI gets a teeny tiny fraction of this. Software
companies devote hundreds of man-years to creating products like word
processors, video games, or operating systems; an AGI is much more
complicated than any of these things, yet no AGI project has ever been
given nearly the staff and funding level of projects like OS X,
Microsoft Word, or World of Warcraft.

I have conjectured before that once some proto-AGI reaches a
sufficient level of sophistication in its behavior, we will see an
“AGI Sputnik” dynamic — where various countries and corporations
compete to put more and more money and attention into AGI, trying to
get there first. The question is, just how good does a proto-AGI have
to be to reach the AGI Sputnik level?

The integration bottleneck

Weak hardware and poor funding would certainly be a good enough reason
for not having achieved human-level AGI yet. But I don’t think they’re
the only reasons. I do think there is also a conceptual reason, which
boils down to the following three points:

1. Intelligence depends on the emergence of certain high-level structures
and dynamics across a system’s whole knowledge base;

2. We have not discovered any one algorithm or approach capable of
yielding the emergence of these structures;

3. Achieving the emergence of these structures within a system formed by
integrating a number of different AI algorithms and structures is
tricky. It requires careful attention to the manner in which these
algorithms and structures are integrated; and so far, the integration
has not been done in the correct way.

One might call this the “integration bottleneck.” This is not a
consensus in the AGI community by any means — though it’s a common
view among the sub-community concerned with “integrative AGI.” I’m not
going to try to give a full, convincing argument for this perspective
in this article. But I do want to point out that it’s a quite concrete
alternative to Deutsch’s explanation, and has a lot more resonance
with the work going on in the AGI field.

This “integration bottleneck” perspective also has some resonance with
neuroscience. The human brain appears to be an integration of an
assemblage of diverse structures and dynamics, built using common
components and arranged according to a sensible cognitive
architecture. However, its algorithms and structures have been honed
by evolution to work closely together — they are very tightly
inter-adapted, in somewhat the same way that the different organs of
the body are adapted to work together. Due to their close interoperation,
they give rise to the overall systemic behaviors that characterize
human-like general intelligence.

So in this view, the main missing ingredient in AGI so far is
“cognitive synergy”: the fitting-together of different intelligent
components into an appropriate cognitive architecture, in such a way
that the components richly and dynamically support and assist each
other, interrelating very closely in a similar manner to the
components of the brain or body and thus giving rise to appropriate
emergent structures and dynamics.
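
To make that slightly more concrete, here is a deliberately tiny toy sketch in
Python (a hypothetical illustration of the bare idea, not OpenCog's actual
design or API): two components are coupled only through a shared knowledge
store, and the second can only make progress by using what the first has
contributed to that store.

    # Toy sketch of "cognitive synergy": components coupled through a shared
    # knowledge store, where one component's output is what lets the other
    # make progress. Hypothetical illustration only, not OpenCog's design.
    from collections import Counter

    class SharedKnowledge:
        def __init__(self, facts):
            self.facts = set(facts)      # triples like ("cat", "isa", "mammal")
            self.transitive = set()      # relations the miner believes chain

    class PatternMiner:
        """Promotes any relation seen in several facts to 'probably transitive'."""
        def step(self, kb):
            counts = Counter(r for (_, r, _) in kb.facts)
            new = {r for r, n in counts.items() if n >= 2} - kb.transitive
            kb.transitive |= new
            return bool(new)             # report whether progress was made

    class Reasoner:
        """Chains facts, but only over relations the miner has promoted."""
        def step(self, kb):
            derived = {(a, r, d)
                       for (a, r, b) in kb.facts
                       for (c, r2, d) in kb.facts
                       if r == r2 and b == c and r in kb.transitive}
            new = derived - kb.facts
            kb.facts |= new
            return bool(new)

    def run(kb, components, max_cycles=10):
        """Cycle every component against the shared store until none makes progress."""
        for _ in range(max_cycles):
            if not any([c.step(kb) for c in components]):   # list: run them all
                break

    kb = SharedKnowledge([("cat", "isa", "mammal"), ("mammal", "isa", "animal")])
    run(kb, [PatternMiner(), Reasoner()])
    print(("cat", "isa", "animal") in kb.facts)   # True, but only via the miner's hint

Remove the miner and the reasoner derives nothing: the point of the toy is that
the interesting behavior lives in the coupling, not in either algorithm alone.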

The reason this sort of intimate integration has not yet been explored
much is that it’s difficult on multiple levels, requiring the design
of an architecture and its component algorithms with a view toward the
structures and dynamics that will arise in the system once it is
coupled with an appropriate environment. Typically, the AI algorithms
and structures corresponding to different cognitive functions have
been developed based on divergent theoretical principles, by disparate
communities of researchers, and have been tuned for effective
performance on different tasks in different environments.

Making such diverse components work together in a truly synergetic and
cooperative way is a tall order, yet my own suspicion is that this —
rather than some particular algorithm, structure or architectural
principle — is the “secret sauce” needed to create human-level AGI
based on technologies available today.

Achieving this sort of cognitive-synergetic integration of AGI
components is the focus of the OpenCog AGI project that I co-founded
several years ago. We’re a long way from human adult level AGI yet,
but we have a detailed design and codebase and roadmap for getting
there. Wish us luck!

Where to focus: engineering and computer science, or philosophy?

The difference between Deutsch’s perspective and my own is not a
purely abstract matter; it does have practical consequence. If
Deutsch’s perspective is correct, the best way for society to work
toward AGI would be to give lots of funding to philosophers of mind.
If my view is correct, on the other hand, most AGI funding should go
to folks designing and building large-scale integrated AGI systems.

Until sufficiently advanced AGI has been achieved, it will be
difficult to refute perspectives like Deutsch’s in a fully definitive
way. But in the end, Deutsch has not made a strong case that the AGI
field is helpless without a philosophical revolution.

I do think philosophy is important, and I look forward to the
philosophy of mind and general intelligence evolving along with the
development of better and better AGI systems.

But I think the best way to advance both philosophy of mind and AGI is
to focus the bulk of our AGI-oriented efforts on actually building and
experimenting with a variety of proto-AGI systems — using the tools
and ideas we have now to explore concrete concepts, such as the
integration bottleneck I’ve mentioned above. Fortunately, this is
indeed the focus of a significant subset of the AGI research
community.

And if you’re curious to learn more about what is going on in the AGI
field today, I’d encourage you to come to the AGI-12 conference at
Oxford, December 8–11, 2012.

John Clark

Oct 8, 2012, 1:33:28 PM
to everyth...@googlegroups.com
How David Deutsch can watch a computer beat the 2 best human Jeopardy! players on planet Earth and then say that AI has made “no progress whatever during the entire six decades of its existence” is a complete mystery to me.

  John K Clark


Stephen P. King

Oct 8, 2012, 2:22:20 PM
to everyth...@googlegroups.com
On 10/8/2012 1:13 PM, Richard Ruquist wrote:
excerpt from
The real reasons we don’t have AGI yet
A response to David Deutsch’s recent article on AGI
October 8, 2012 by Ben Goertzel
So in this view, the main missing ingredient in AGI so far is
“cognitive synergy”: the fitting-together of different intelligent
components into an appropriate cognitive architecture, in such a way
that the components richly and dynamically support and assist each
other, interrelating very closely in a similar manner to the
components of the brain or body and thus giving rise to appropriate
emergent structures and dynamics.

The reason this sort of intimate integration has not yet been explored
much is that it’s difficult on multiple levels, requiring the design
of an architecture and its component algorithms with a view toward the
structures and dynamics that will arise in the system once it is
coupled with an appropriate environment. Typically, the AI algorithms
and structures corresponding to different cognitive functions have
been developed based on divergent theoretical principles, by disparate
communities of researchers, and have been tuned for effective
performance on different tasks in different environments.

Making such diverse components work together in a truly synergetic and
cooperative way is a tall order, yet my own suspicion is that this —
rather than some particular algorithm, structure or architectural
principle — is the “secret sauce” needed to create human-level AGI
based on technologies available today.

Achieving this sort of cognitive-synergetic integration of AGI
components is the focus of the OpenCog AGI project that I co-founded
several years ago. We’re a long way from human adult level AGI yet,
but we have a detailed design and codebase and roadmap for getting
there. Wish us luck!
Hi Richard,

My suspicion is that what is needed here, if we can put on our programmer hats, is the programmer's version of a BEC, Bose-Einstein Condensate, where every "part" is an integrated reflection of the whole. My own idea is that some form of algebraic and/or topological closure is required to achieve this, as inspired by the Brouwer fixed-point theorem.

-- 
Onward!

Stephen

Alberto G. Corona

Oct 8, 2012, 2:45:00 PM
to everyth...@googlegroups.com
Deutsch is right about the need to advance Popperian epistemology,
which is ultimately evolutionary epistemology: how evolution makes a
portion of matter ascertain what is true, in virtue of what, and for
what purpose. The idea of intelligence requires a notion of what is
true, but also a motive for acting and therefore for using this
intelligence. If there is no purpose there is no acting; if no acting,
no selection of intelligent behaviours; if no evolution, no
intelligence. Intelligence is not only made for acting according to
arbitrary purposes: it has evolved from the selection of resulting
behaviours for precise purposes.

An ordinary purpose is not separable from other purposes that are
coordinated toward a particular higher purpose, and this chain of
reasoning and acting means that a designed intelligent robot also
needs an ultimate purpose. Otherwise it would be a sequencer and
achiever of disconnected goals; at some level the goals would never be
coordinated, that is, it would not be intelligent.

This is somewhat different from humans, because many of our goals are
hardcoded and not accessible to introspection, although we can use
evolutionary reasoning to obtain falsifiable hypotheses about
apparently irrational behaviour, like love, anger, aesthetics,
pleasure and so on. However, people from time to time ask themselves
about the deep meaning of what they do, especially when a whole chain
of goals has failed and they are at a bottleneck, because this is the
right thing to do for intelligent beings. A truly intelligent being
therefore has existential, moral and belief problems. If an artificial
intelligent being has these problems, its designer has solved the
problem of AGI at the deepest level.

A designed AGI has no such "core engine" of impulses and perceptions
that drives intelligence to action in the first place: curiosity, fame
and respect, power, social-navigation instincts. It has to start from
scratch. Concerning perceptions, a man has hardwired perceptions that
create meaning: there is brain circuitry at various levels that makes
him feel that the person in front of him is another person. But really
it is his evolved circuitry that creates the impression that this is a
person, and that this is true, rather than a bunch of moving atoms.
Popperian evolutionary epistemology builds from this. All of this
links computer science with philosophy at the deepest level.

Another comment concerning design: evolutionary designs are different
from rational designs. The modularity in rational design arises from
the fact that reason cannot reason with many variables at the same
time. Reason uses divide and conquer. Object-oriented design, modular
architecture and so on are consequences of that limitation. These
designs are understandable by other humans, but they are not the most
efficient. In contrast, modularity in evolution is functional. That
means that if one brain structure sits near another in the brain,
forming a greater structure, it is for reasons of efficiency, not for
reasons of modularity. The interfaces between modules are not
discrete, but pervasive. This makes reverse engineering of the brain
essentially impossible.






2012/10/8 John Clark <johnk...@gmail.com>
>
> How David Deutsch can watch a computer beat the 2 best human Jeopardy! players on planet Earth and then say that AI has made “no progress whatever during the entire six decades of its existence” is a complete mystery to me.
>
> John K Clark
>
>




--
Alberto.

Craig Weinberg

Oct 8, 2012, 2:50:25 PM
to everyth...@googlegroups.com, Swi...@yahoogroups.com
Deutsch is right. Searle is right. Genuine AGI can only come when thoughts are driven by feeling and will rather than programmatic logic. It's a fundamental misunderstanding to assume that feeling can be generated by equipment which is incapable of caring about itself. Without personal investment, there is no drive to develop right hemisphere awareness - to look around for enemies and friends, to be vigilant. These kinds of capacities cannot be burned into ROM, they have to be discovered through unscripted participation. They have to be able to lie and have a reason to do so.

I'm not sure about Deutsch's purported Popper fetish, but if that's true, I can see why that would be the case. My hunch is that although Ben Goertzel is being fair to Deutsch, he may be distorting Deutsch's position somewhat, in that I question whether Deutsch is really suggesting that we invest in developing philosophy instead of technology. Maybe he is, but it seems like an exaggeration. It seems to me that Deutsch is advocating the very reasonable position that we evaluate our progress with AGI before doubling down on the same strategy for the next 60 years. Nobody wants to cut off AGI funding - certainly not me. I just think that the approach has become unscientific and sentimental, like alchemists with their dream of turning lead into gold. Start playing with biology and maybe you'll have something. It will be a little messier though, since with biology, unlike with silicon computers, when you start getting close to something with human-like intelligence, people tend to object when you leave twitching half-persons moaning around the laboratory. You will know you have real AGI because there will be a lot of people screaming.

Craig

meekerdb

Oct 8, 2012, 4:35:13 PM
to everyth...@googlegroups.com
On 10/8/2012 11:45 AM, Alberto G. Corona wrote:
> Deutsch is right about the need to advance in Popperian epistemology,
> which ultimately is evolutionary epistemology. How evolution makes a
> portion of matter ascertain what is truth in virtue of what and for
> what purpose. The idea of intelligence need a knowledge of what is
> truth but also a motive for acting and therefore using this
> intelligence. if there is no purpose there is no acting, if no act, no
> selection of intelligent behaviours if no evolution, no intelligence.
> Not only intelligence is made for acting accoding with arbitrary
> purpose: It has evolved from the selection of resulting behaviours for
> precise purposes.
>
> an ordinary purpose is non separable from other purposes that are
> coordinated for a particular superior purpose, but the chain of
> reasoning and actng means tthat a designed intelligent robot also need
> an ultimate purpose. otherwise it would be a sequencer and achiever of
> disconnected goals at a certain level where the goals would never have
> coordination, that is it would be not intelligent.

I agree that intelligence cannot be separated from purpose. I think that's why projects
aimed at creating AGI flounder - a "general" purpose tends to be no purpose at all. But
I'm not so sure about an ultimate goal, at least not in the sense of a single goal. I can
imagine an intelligent robot that has several high-level goals to be satisfied, but
not necessarily summed or otherwise combined into a single goal.

>
> This is somewhat different ffom humans, because much of our goals are
> hardcoded and non accessible to introspection, although we can use
> evolutionary reasoning for obtaining falsable hypothesis about
> apparently irrational behaviour, like love, anger aestetics, pleasure
> and so on.

There's no reason to give a Mars Rover introspective knowledge of its hardcoded goals. A
robot would only need introspective knowledge of goals if there were the possibility of
changing them - i.e. not hardcoded.

> However men are from time to time asking themselves for the
> deep meaning of what he does. specially when a whole chain of goals
> have failed, so he is a in a bottleneck. Because this is the right
> thing to do for intelligent beings. A true intelligent being therefore
> has existential, moral and belief problems. If an artificial
> intelligent being has these problems, the designed as solved the
> problem of AGI to the most deeper level.

I think it's a matter of depth. A human is generally more complex and has a hierarchy of
goals. A dead end in trying to satisfy some goal occasions reflection on how that goal
relates to some higher goal, and on how to backtrack. So a Mars Rover may find itself in a
box canyon, have to backtrack, and find that this makes its journey to the objective too far
to reach before winter, so it has to select a secondary objective point to reach. But it
can't reflect on whether gathering data and transmitting it is good or not.
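
A minimal sketch of that kind of fixed hierarchy (a hypothetical toy of my own,
not any actual rover software): secondary objectives can be swapped in when the
primary becomes too costly, but the top-level goal itself is never up for revision.

    # Hypothetical goal-hierarchy sketch: the rover can fall back to a cheaper
    # objective, but whether gathering data is worthwhile at all is hardwired
    # and never questioned.
    from dataclasses import dataclass

    @dataclass
    class Objective:
        name: str
        travel_days: int       # estimated cost to reach it
        science_value: float   # expected data payoff

    def choose_objective(objectives, days_until_winter):
        """Pick the most valuable objective still reachable in time."""
        reachable = [o for o in objectives if o.travel_days <= days_until_winter]
        if not reachable:
            return None        # hunker down and just keep transmitting
        return max(reachable, key=lambda o: o.science_value)

    # After backtracking out of the box canyon, the primary target costs more.
    objectives = [
        Objective("crater with ice-looking rocks", travel_days=90, science_value=10.0),
        Objective("nearby outcrop", travel_days=20, science_value=3.0),
    ]
    print(choose_objective(objectives, days_until_winter=60).name)  # "nearby outcrop"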

>
> An AGI designed has no such "core engine" of impulses and perceptions
> that drive, in the first place, intelligence to action: curiosity,
> fame and respect, power, social navigation instimcts. It has to start
> from scratch. Concerning perceptions, a man has hardwired
> perceptions that create meaning: There is part of brain circuitry at
> various levels that make it feel that a person in front of him is
> another person. But really it is its evolved circuitry what makes the
> impression that that is a person and that this is true, instead of a
> bunch of moving atoms. Popperian Evoluitionary epistemology build from
> this. All of this link computer science with philosophy at the deeper
> level.

And because man evolved as a social animal he is hard wired to want to exchange knowledge
with other humans.

>
> Another comment concerning design: The evolutionary designs are
> different from rational designs. The modularity in rartional design
> arises from the fact that reason can not reason with many variables at
> the same time. Reason uses divide an conquer. Object oriented design,
> modual architecture and so on are a consequence of that limitation.
> These design are understandable by other humans, but they are not the
> most effcient. In contrast, modularity in evolution is functional.
> That means that if a brain structure is near other in the brain
> forming a greater structuture it is for reasons of efficiency,

Are you saying spatial modularity implies functional modularity?

> not for
> reasons of modularity.

No, it may be for reasons of adaptability. Evolution has no way to reason about efficiency,
or even a measure of efficiency. It can only try random variations and copy the ones that work.

> the interfaces between modules are not
> discrete, but pervasive. This makes essentially a reverse engineering
> of the brain inpossible.

And not even desirable.

Brent

Russell Standish

Oct 8, 2012, 5:39:28 PM
to everyth...@googlegroups.com, fo...@googlegroups.com
On Mon, Oct 08, 2012 at 01:13:35PM -0400, Richard Ruquist wrote:
> The real reasons we don’t have AGI yet
> A response to David Deutsch’s recent article on AGI
> October 8, 2012 by Ben Goertzel
>
>

Thanks for posting this, Richard. I was thinking of writing my own
detailed response to David Deutsch's op ed, but Ben Goertzel has done
such a good job, I now don't have to!

My response, similar to Ben's, is that David does not convincingly
explain why Popperian epistemology is the "secret sauce". In fact, it
is not even at all obvious how to practically apply Popperian
epistemology to the task at hand. Until some more detailed practical
proposal is put forward, the best I can say is, meh, I'll believe it
when it happens.

The problem that exercises me (when I get a chance to exercise it) is
that of creativity. David Deutsch correctly identifies that this is one of
the main impediments to AGI. Yet biological evolution is a creative
process, one for which epistemology apparently has no role at all.

Continuous, open-ended creativity in evolution is considered the main
problem in Artificial Life (and perhaps other fields). Solving it may
be the work of a single moment of inspiration (I wish), but more
likely it will involve incremental advances in topics such as
information, complexity, emergence and other such partly philosophical
topics before we even understand what it means for something to be
open-ended creative. Popperian epistemology, to the extent it has a
role, will come much further down the track.

Cheers
--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

Stephen P. King

Oct 8, 2012, 6:49:12 PM
to fo...@googlegroups.com, everyth...@googlegroups.com
On 10/8/2012 5:39 PM, Russell Standish wrote:
> On Mon, Oct 08, 2012 at 01:13:35PM -0400, Richard Ruquist wrote:
>> The real reasons we don�t have AGI yet
>> A response to David Deutsch�s recent article on AGI
>> October 8, 2012 by Ben Goertzel
>>
>>
> Thanks for posting this, Richard. I was thinking of writing my own
> detailed response to David Deutsch's op ed, but Ben Goertzel has done
> such a good job, I now don't have to!
>
> My response, similar to Ben's is that David does not convincingly
> explain why Popperian epistemology is the "secret sauce". In fact, it
> is not even at all obvious how to practically apply Popperian
> epistemology to the task at hand. Until some more detailed practical
> proposal is put forward, the best I can say is, meh, I'll believe it
> when it happens.
>
> The problem that exercises me (when I get a chance to exercise it) is
> that of creativity. David Deutsch correctly identifies that this is one of
> the main impediments to AGI. Yet biological evolution is a creative
> process, one for which epistemology apparently has no role at all.
>
> Continuous, open-ended creativity in evolution is considered the main
> problem in Artificial Life (and perhaps other fields). Solving it may
> be the work of a single moment of inspiration (I wish), but more
> likely it will involve incremental advances in topics such as
> information, complexity, emergence and other such partly philosophical
> topics before we even understand what it means for something to be
> open-ended creative. Popperian epistemology, to the extent it has a
> role, will come much further down the track.
>
> Cheers
Hi Russell,

Question: Why has little if any thought been given in AGI to
self-modeling and some capacity to track the model of self under the
evolutionary transformations?


--
Onward!

Stephen


Russell Standish

Oct 8, 2012, 7:37:02 PM
to fo...@googlegroups.com, everyth...@googlegroups.com
On Mon, Oct 08, 2012 at 06:49:12PM -0400, Stephen P. King wrote:
> Hi Russell,
>
> Question: Why has little if any thought been given in AGI to
> self-modeling and some capacity to track the model of self under the
> evolutionary transformations?
>
>

It's not my field - general evolutionary processes are not self-aware,
or self-anything, in general. But Hod Lipson has developed some (rather crude,
IMHO) self-aware robots (in the shape of a starfish, for some strange reason).

Stephen P. King

Oct 8, 2012, 7:54:49 PM
to everyth...@googlegroups.com, fo...@googlegroups.com
On 10/8/2012 7:37 PM, Russell Standish wrote:
> On Mon, Oct 08, 2012 at 06:49:12PM -0400, Stephen P. King wrote:
>> Hi Russell,
>>
>> Question: Why has little if any thought been given in AGI to
>> self-modeling and some capacity to track the model of self under the
>> evolutionary transformations?
>>
>>
> Its not my field - general evolutionary processes are not self-aware,
> or self- anything, in general. But Hod Lipson has developed some (rather crude
> IMHO) self-aware robots (in the shape of a starfish, for some strange reason).
>
> Cheers
>

But would that not make an AGI just a glorified calculator? I am
very interested in Lipson's work! I cannot find his latest research...

--
Onward!

Stephen


Kim Jones

Oct 8, 2012, 11:52:29 PM
to everyth...@googlegroups.com
Please, please read Edward de Bono's book "The Mechanism of Mind" for some genuine insights into creativity and how this comes about in the mind. Russell, if you can't track down a copy I'll lend you mine, but it's a treasured object, not least because the author autographed it!

meekerdb

Oct 9, 2012, 2:16:22 AM
to everyth...@googlegroups.com
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,

Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?

It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community. Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).

Brent

Evgenii Rudnyi

Oct 9, 2012, 3:09:48 AM
to everyth...@googlegroups.com
On 08.10.2012 20:45 Alberto G. Corona said the following:
> Deutsch is right about the need to advance in Popperian
> epistemology, which ultimately is evolutionary epistemology.

You may want to read Three Worlds by Karl Popper. Then you will see
where Popperian epistemology can evolve.

“To sum up, we arrive at the following picture of the universe. There
is the physical universe, world 1, with its most important sub-universe,
that of the living organisms. World 2, the world of conscious
experience, emerges as an evolutionary product from the world of
organisms. World 3, the world of the products of the human mind,
emerges as an evolutionary product from world 2.”

“The feedback effect between world 3 and world 2 is of particular
importance. Our minds are the creators of world 3; but world 3 in its
turn not only informs our minds, but largely creates them. The very idea
of a self depends on world 3 theories, especially upon a theory of time
which underlies the identity of the self, the self of yesterday, of
today, and of tomorrow. The learning of a language, which is a world 3
object, is itself partly a creative act and partly a feedback effect;
and the full consciousness of self is anchored in our human language.”

Evgenii
--

http://blog.rudnyi.ru/2012/06/three-worlds.html

Alberto G. Corona

Oct 9, 2012, 4:48:48 AM
to everyth...@googlegroups.com
2012/10/9 Evgenii Rudnyi <use...@rudnyi.ru>:
> On 08.10.2012 20:45 Alberto G. Corona said the following:
>
>> Deutsch is right about the need to advance in Popperian
>> epistemology, which ultimately is evolutionary epistemology.
>
>
> You may want to read Three Worlds by Karl Popper. Then you see where to
> Popperian epistemology can evolve.
>
> “To sum up, we arrive at the following picture of the universe. There is
> the physical universe, world 1, with its most important sub-universe, that
> of the living organisms. World 2, the world of conscious experience,
> emerges as an evolutionary product from the world of organisms. World 3,
> the world of the products of the human mind, emerges as an evolutionary
> product from world 2.”
>
...and the perception of world 1 is not an "objective image of
physical reality", but a result of particular adaptive needs,
interests and purposes. This perception of world 1 is not part of
world 1, but an evolutionary product that is part of world 2. This is
very important.

That means that any perception has a purpose from the beginning, and
"making an objective idea of physical reality" is not, nor can it be,
a valid purpose, because it is ultimately purposeless and,
additionally, it has infinitely many "objective" versions (do we want
to perceive radiation? neutrinos? atoms? only macroscopic things?).

Therefore general intelligence, which is part of world 3, has to work
with the impulses, perceptions and purposes evolved in world 2.
Therefore we humans are limited by that (and so is any artificial
case, as I will show). But this does not mean that the human mind has
to take this limit as an absolute limit for its reasoning. It asks
itself about the nature of these limitations and reaches different
reactions to them: one answer is to adopt a particular belief that
matches these limitations, negating them (nihilism); or to try to
transcend them (gnosticism: my limitations are false impositions and I
have to search for my true self); or to try to know them (realism).

That is the difference between a tool, like an ordinary Mars Rover
that sends data to Earth, and a person or, for that matter, an AGI
device sent to Mars with its memories of life on Earth erased and with
an impulse to inform Earth by radio waves: while the ordinary rover
will be incapable of improving its own program in a qualitative way,
the man or the AGI will first think about how to improve his task
(because it is a pleasure for him and his main purpose). To do so he
will have to ask himself about the motives of the people that sent
him. To do so he will ask himself about the nature of his work, to try
to know the intentions of the senders; so he will study the nature of
the physical medium to better know the purposes of the senders (while
actively working on the task, because it is a pleasure for him). But
he will also have impulses for self-preservation, and curiosity in
order to improve self-preservation. To predict the future he will go
up the chain of causation, he will enter into philosophical questions
at some level and adopt a certain worldview among the three mentioned
above, beyond the strict limitations of his task. Otherwise he would
not be an AGI or a human.

General intelligence by definition cannot be limited if there is
enough time and resources. So the true test of AGI would be
philosophical questioning about existence, purpose, perception. That
includes moral questions that can be asked because of the freedom of
alternatives between the different purposes that the AGI has: for
example, whether the rover should weight self-preservation more or
less than task realization (Do I go to this dangerous crater that has
these interesting ice-looking rocks, or do I spend the time pointing
my panels at the sun?).

Note that a response to the questions:

1. "What is the ultimate meaning of your activity?" - "It's my
pleasure to search for interesting things and to send the info to
Earth."
2. "What is 'interesting' for you?" - "Interesting is what I find
interesting by my own program, which I cannot know and do not want to
know."
3. "Don't you realize that if you adopt this attitude then you cannot
improve your task that way?" - "Don't waste my time. Bye."

would reveal a worldview (the first) that is a hint of general
intelligence, despite the fact that it apparently refuses to answer
philosophical questions.


> “The feedback effect between world 3 and world 2 is of particular
> importance. Our minds are the creators of world 3; but world 3 in its turn
> not only informs our minds, but largely creates them. The very idea of a
> self depends on world 3 theories, especially upon a theory of time which
> underlies the identity of the self, the self of yesterday, of today, and of
> tomorrow. The learning of a language, which is a world 3 object, is itself
> partly a creative act and partly a feedback effect; and the full
> consciousness of self is anchored in our human language.”
>
> Evgenii
> --
>
> http://blog.rudnyi.ru/2012/06/three-worlds.html

Roger Clough

Oct 9, 2012, 5:35:53 AM
to everything-list
Hi Alberto G. Corona

IMHO the bottom line revolves around the problem of solipsism,
which is that we cannot prove that other people or objects have minds;
we can only say, at most, that they appear to have minds.


Roger Clough, rcl...@verizon.net
10/9/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Alberto G. Corona
Receiver: everything-list
Time: 2012-10-08, 14:45:00
Subject: Re: The real reasons we don't have AGI yet
> How David Deutsch can watch a computer beat the 2 best human Jeopardy! players on planet Earth and then say that AI has made “no progress whatever during the entire six decades of its existence” is a complete mystery to me.

Stephen P. King

Oct 9, 2012, 7:22:54 AM
to everyth...@googlegroups.com
--

Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...

-- 
Onward!

Stephen

Roger Clough

Oct 9, 2012, 7:28:13 AM
to everything-list
Hi Stephen P. King

I suppose AGI would be the Holy Grail of artificial intelligence,
but I fear that only the computer can know that it has
actually achieved it, for intelligence is subjective.
Not that computers can't in principle be subjective,
but that subjectivity (Firstness) can never be made public;
only descriptions of it (Thirdness) can be made public.

Firstness is the raw experience: the object or event as privately experienced.
Unprovable. Since AGI is Firstness, it is not provable.

Thirdness is a description of that experience: the public expression of that private experience.
Provable, yes or no.


Roger Clough, rcl...@verizon.net
10/9/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Stephen P. King
Receiver: everything-list
Time: 2012-10-08, 14:22:20
Subject: Re: The real reasons we don't have AGI yet


On 10/8/2012 1:13 PM, Richard Ruquist wrote:

excerpt from

The real reasons we don’t have AGI yet
A response to David Deutsch’s recent article on AGI
October 8, 2012 by Ben Goertzel

So in this view, the main missing ingredient in AGI so far is
“cognitive synergy”: the fitting-together of different intelligent
components into an appropriate cognitive architecture, in such a way
that the components richly and dynamically support and assist each
other, interrelating very closely in a similar manner to the
components of the brain or body and thus giving rise to appropriate
emergent structures and dynamics.

The reason this sort of intimate integration has not yet been explored
much is that it’s difficult on multiple levels, requiring the design
of an architecture and its component algorithms with a view toward the
structures and dynamics that will arise in the system once it is
coupled with an appropriate environment. Typically, the AI algorithms
and structures corresponding to different cognitive functions have
been developed based on divergent theoretical principles, by disparate
communities of researchers, and have been tuned for effective
performance on different tasks in different environments.

Making such diverse components work together in a truly synergetic and
cooperative way is a tall order, yet my own suspicion is that this —
rather than some particular algorithm, structure or architectural
principle — is the “secret sauce” needed to create human-level AGI
based on technologies available today.

Achieving this sort of cognitive-synergetic integration of AGI
components is the focus of the OpenCog AGI project that I co-founded
several years ago. We’re a long way from human adult level AGI yet,
but we have a detailed design and codebase and roadmap for getting
there. Wish us luck!
Hi Richard,

My suspicion is that what is needed here, if we can put on our programmer hats, is the programmer's version of a BEC, Bose-Einstein Condensate, where every "part" is an integrated reflection of the whole. My own idea is that some form of algebraic and/or topological closure is required to achieve this, as inspired by the Brouwer fixed-point theorem.


--
Onward!

Stephen

Roger Clough

Oct 9, 2012, 8:36:14 AM
to everything-list
Hi Evgenii Rudnyi

Popper's three worlds are related to, but are not exactly, Peirce's
three categories:

World 1 is the objective world, which I would have to call Category 0.

World 2 is what Popper calls subjective reality, or what Peirce called Firstness

World 3 is Popper's objective knowledge, which is Peirce's Thirdness.

Popper may have included world 2 in what Peirce called Secondness, but
it's not clear. Secondness is his missing step; it's the one in which
your mind makes sense of your subjective perception. My own
understanding is that your mind compares what you see with what you
already know and either identifies it as such or modifies it to
another, newly invented or associated description. If you
see two apples, it then calls the image "two apples".

Thirdness is what you then call it and can express to others.


Roger Clough, rcl...@verizon.net
10/9/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Evgenii Rudnyi
Receiver: everything-list
Time: 2012-10-09, 03:09:48
Subject: Re: The real reasons we don't have AGI yet


On 08.10.2012 20:45 Alberto G. Corona said the following:
> Deutsch is right about the need to advance in Popperian
> epistemology, which ultimately is evolutionary epistemology.

You may want to read Three Worlds by Karl Popper. Then you see where to
Popperian epistemology can evolve.

“To sum up, we arrive at the following picture of the universe. There
is the physical universe, world 1, with its most important sub-universe,
that of the living organisms. World 2, the world of conscious
experience, emerges as an evolutionary product from the world of
organisms. World 3, the world of the products of the human mind,
emerges as an evolutionary product from world 2.”

“The feedback effect between world 3 and world 2 is of particular
importance. Our minds are the creators of world 3; but world 3 in its
turn not only informs our minds, but largely creates them. The very idea
of a self depends on world 3 theories, especially upon a theory of time
which underlies the identity of the self, the self of yesterday, of
today, and of tomorrow. The learning of a language, which is a world 3
object, is itself partly a creative act and partly a feedback effect;
and the full consciousness of self is anchored in our human language.”

Evgenii
--

http://blog.rudnyi.ru/2012/06/three-worlds.html

Roger Clough

Oct 9, 2012, 8:46:05 AM
to everything-list
Hi meekerdb

IMHO the self is an active agent, something like Maxwell's Demon,
that can intelligently sort raw experiences into meaningful bins, such
as Kant's categories, thus giving them some meaning.


Roger Clough, rcl...@verizon.net
10/9/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: meekerdb
Receiver: everything-list
Time: 2012-10-09, 02:16:22
Subject: Re: [foar] Re: The real reasons we don't have AGI yet


On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,

Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?

It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community.

Bruno Marchal

Oct 9, 2012, 10:43:42 AM
to everyth...@googlegroups.com

On 08 Oct 2012, at 20:50, Craig Weinberg wrote:

Deutsch is right.

Deutsch is not completely wrong, just unaware of the progress in theoretical computer science, which explains why some paths are necessarily long, and can help to avoid the confusion between consciousness, intelligence, competence, imagination, creativity.

I have already explained why. Recently I have come to think that all UMs are already conscious, including the computer you are looking at right now. But that consciousness is still disconnected, or only trivially connected, to you and the environment.

I have always thought that PA, ZF, and all Löbian machines are as conscious as you and me. But they are still not connected, except through mathematical notions. I have explained and justified that proving in a formal theory, or in a way that we think we could formalize if we had the time, is a way to actually talk to such a machine, and the 8 hypostases are what any such little machine can already tell us. They are a sort of reincarnation of Plotinus, to put it that way.

It is easy to confuse them with zombies, as the actual dialogue has to be done by hand, with perspiration. But such machines are already as conscious as all Löbian entities, from the octopus to us.

Consciousness and intelligence are both undefinable, and both have complex positive and negative feedback effects on competence.

General intelligence in machines needs *us* to open our minds.

The singularity is in the past. Now we can only make UMs as deluded as us, for better or for worse. They already have a well-defined self, a representation of the self, and some connection with truth (where experience will come from), but here the organic entities have billions of years of advantage. But they evolve too, and far more quickly than the organic.

No progress in AI? I see explosive progress. Especially in the 1930s, for the main thing: the discovery of the Universal Machine (UM).



Searle is right.

Searle's argument is invalid. Old discussion.
He confuses levels of description. There is nothing to add to Hofstadter and Dennett's critique of the argument in The Mind's I.

It is the same error as confusing "proving A" and "emulating a machine proving A".
ZF can prove the consistency of PA, and PA cannot. But PA can prove that ZF can prove the consistency of PA. The first proof provides an emulation of the second, but PA and ZF keep their distinct identities in that process.
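
To state that a bit more formally (my own rendering of the claims above, with
Con the usual consistency statement and Prov_ZF the arithmetized provability
predicate for ZF):

    \[
    \mathrm{ZF} \vdash \mathrm{Con}(\mathrm{PA}), \qquad
    \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}) \ \text{(Gödel II)}, \qquad
    \mathrm{PA} \vdash \mathrm{Prov}_{\mathrm{ZF}}\bigl(\ulcorner \mathrm{Con}(\mathrm{PA}) \urcorner\bigr).
    \]

The third fact holds because "ZF proves Con(PA)" is a true Sigma_1 sentence,
and PA proves every true Sigma_1 sentence.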

Bruno



Genuine AGI can only come when thoughts are driven by feeling and will rather than programmatic logic. It's a fundamental misunderstanding to assume that feeling can be generated by equipment which is incapable of caring about itself. Without personal investment, there is no drive to develop right hemisphere awareness - to look around for enemies and friends, to be vigilant. These kinds of capacities cannot be burned into ROM, they have to be discovered through unscripted participation. They have to be able to lie and have a reason to do so.

I'm not sure about Deutsch's purported Popper fetish, but if that's true, I can see why that would be the case. My hunch is that although Ben Goertzel is being fair to Deutsch, he may be distorting Deutsch's position somewhat as far as I question that he is suggesting that we invest in developing Philosophy instead of technology. Maybe he is, but it seems like an exaggeration. It seems to me that Deutsch is advocating the very reasonable position that we evaluate our progress with AGI before doubling down on the same strategy for the next 60 years. Nobody whats to cut off AGI funding - certainly not me, I just think that the approach has become unscientific and sentimental like alchemists with their dream of turning lead into gold. Start playing with biology and maybe you'll have something. It will be a little messier though, since with biology and unlike with silicon computers, when you start getting close to something with human like intelligence, people tend to object when you leave twitching half-persons moaning around the laboratory. You will know you have real AGI because there will be a lot of people screaming.

Craig



Bruno Marchal

unread,
Oct 9, 2012, 11:27:39 AM10/9/12
to everyth...@googlegroups.com

On 08 Oct 2012, at 23:39, Russell Standish wrote:

> On Mon, Oct 08, 2012 at 01:13:35PM -0400, Richard Ruquist wrote:
>> The real reasons we don’t have AGI yet
>> A response to David Deutsch’s recent article on AGI
>> October 8, 2012 by Ben Goertzel
>>
>>
>
> Thanks for posting this, Richard. I was thinking of writing my own
> detailed response to David Deutsch's op ed, but Ben Goertzel has done
> such a good job, I now don't have to!
>
> My response, similar to Ben's, is that David does not convincingly
> explain why Popperian epistemology is the "secret sauce". In fact, it
> is not even at all obvious how to practically apply Popperian
> epistemology to the task at hand. Until some more detailed practical
> proposal is put forward, the best I can say is, meh, I'll believe it
> when it happens.

Strictly speaking, John Case has refuted Popperian epistemology(*), in
the sense that he showed that some non-Popperian machines can identify
larger and more numerous classes of phenomena than Popperian machines can.
Believing in some non-refutable theories can give an advantage with
respect to some classes of phenomena.




>
> The problem that exercises me (when I get a chance to exercise it) is
> that of creativity. David Deutsch correctly identifies that this is
> one of
> the main impediments to AGI. Yet biological evolution is a creative
> process, one for which epistemology apparently has no role at all.

Not sure it is more creative than the UMs, the UD, the Mandelbrot set,
or arithmetic.



>
> Continuous, open-ended creativity in evolution is considered the main
> problem in Artificial Life (and perhaps other fields). Solving it may
> be the work of a single moment of inspiration (I wish), but more
> likely it will involve incremental advances in topics such as
> information, complexity, emergence and other such partly philosophical
> topics before we even understand what it means for something to be
> open-ended creative.

I agree. That's probably why people take time to understand that UMs
and arithmetic are already creative.


> Popperian epistemology, to the extent it has a
> role, will come much further down the track.

Yes. With its good uses, and its misuses. Popper just made precise what
science is, except for its criteria of what counts as an interesting or
good theory. In fact Popper's theory was a genuinely interesting theory, in
the sense of Popper, as it was refutable. But then people should not be so
astonished that it has been refuted (of course in a theoretical
context(*)). I can accept that Popper's analysis has a wide spectrum
where it works well, but in the foundations it cannot be used as a dogma.

Bruno

(*) Case, J. & Ngo-Manguelle, S. (1979). Refinements of inductive
inference by Popperian machines. Tech. Rep., Dept. of Computer Science,
State Univ. of New York, Buffalo.


http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Oct 9, 2012, 11:33:54 AM10/9/12
to everyth...@googlegroups.com
On 09 Oct 2012, at 08:16, meekerdb wrote:

On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,

    Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?

It's probably because AI's have not needed to operate in environments where they need a self-model.  They are not members of a social community.  Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).

Unlike PA, ZF, and Löbian entities, which already have the maximal possible notion of self (in both the 3p and 1p senses).

But PA and ZF have no reasonable local incarnation at all (reasonable with respect to doing things on Earth, or on Mars).
The Mars Rovers are far beyond PA and ZF in that matter, I mean in being connected to some "real mundane life".

Bruno




Brent


Alberto G. Corona

unread,
Oct 9, 2012, 11:43:44 AM10/9/12
to everyth...@googlegroups.com
I think that natural selection is tautological (what has fitness is
selected; fitness is what is selected), but at the same time it is not
empty, and it is scientific because it can be falsified. At the same
time, if it is agreed that it is the direct mechanism that designs
minds, then this is the perfect condition for a foundation of
epistemology, and for an absolute meaning of truth.

2012/10/9 Bruno Marchal <mar...@ulb.ac.be>:



--
Alberto.

meekerdb

unread,
Oct 9, 2012, 12:01:06 PM10/9/12
to everyth...@googlegroups.com
I don't see why not. A genetic algorithm might be a subprogram that seeks an efficient code for some function within some larger program. Of course it would need some definition or measure of what counts as 'efficient'.
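
(A minimal sketch of the kind of subprogram being described: a genetic algorithm searching over candidate codes under an explicit measure. The bit-string representation and the toy fitness function below are illustrative assumptions of the sketch, standing in for whatever 'efficiency' measure the larger program would supply.)

# Minimal sketch of a genetic algorithm searching for an "efficient" code.
# Toy assumptions: a candidate code is a bit string, and its "efficiency"
# is scored by how many bits agree with a fixed target.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(code):
    """Stand-in for an efficiency measure: agreement with the target."""
    return sum(1 for a, b in zip(code, TARGET) if a == b)

def mutate(code, rate=0.1):
    return [1 - b if random.random() < rate else b for b in code]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]          # selection of the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())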

Brent

Bruno Marchal

unread,
Oct 9, 2012, 12:28:27 PM10/9/12
to everyth...@googlegroups.com
On 09 Oct 2012, at 13:22, Stephen P. King wrote:

On 10/9/2012 2:16 AM, meekerdb wrote:
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,

    Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?

It's probably because AI's have not needed to operate in environments where they need a self-model.  They are not members of a social community.  Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).

Brent
--

    Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...

Betting on self-consistency, and variants of that idea, shortens the proofs and speeds up the computations, sometimes in the "wrong direction".

On almost all inputs, universal machines (creative sets, by Myhill's theorem, and in the sense of Post) have the alluring property of being arbitrarily speedable.

Of course the trick is in "on almost all inputs", which means all except a finite number of exceptions, and this concerns evolution more than reason.

Evolution is basically computation + the halting oracle, implemented in physical time (which is itself based on computation + self-reference + arithmetical truth).

Bruno



-- 
Onward!

Stephen


Stephen P. King

unread,
Oct 9, 2012, 2:36:07 PM10/9/12
to everyth...@googlegroups.com
--

    How about "capable of finding the required solution given a finite quantity of resources".

-- 
Onward!

Stephen

Stephen P. King

unread,
Oct 9, 2012, 2:39:33 PM10/9/12
to everyth...@googlegroups.com
On 10/9/2012 12:28 PM, Bruno Marchal wrote:
On 09 Oct 2012, at 13:22, Stephen P. King wrote:

On 10/9/2012 2:16 AM, meekerdb wrote:
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,

    Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?

It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community. Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).

Brent
--

    Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...

Betting on self-consistency, and variant of that idea, shorten the proofs and speed the computations, sometimes in the "wrong direction".

Hi Bruno,

    Could you elaborate a bit on the betting mechanism, so that it is more clear how the shortening of proofs and speed-up of computations obtains?




On almost all inputs, universal machine (creative set, by Myhill theorem, and in a sense of Post) have the alluring property to be arbitrarily speedable.

    This is a measure issue, no?


Of course the trick is in "on almost all inputs" which means all, except a finite number of exception, and this concerns more evolution than reason.

    OK.



Evolution is basically computation + the halting oracle, implemented in physical time (which is itself based on computation + self-reference + arithmetical truth).

Bruno


    So you are equating selection by fitness in a local environment with a halting oracle?
-- 
Onward!

Stephen

Russell Standish

unread,
Oct 9, 2012, 5:16:02 PM10/9/12
to everyth...@googlegroups.com
Maybe I will take you up on this - I think my uni library card expired
years ago, and it's a PITA to renew.

However, since one doesn't need a mind to be creative (and my interest
is actually in mindless creative processes), I'm not sure exactly how
relevant something titled "Mechanism of Mind" will be.

BTW - very close to sending you a finished draft of Amoeba's Secret. I
just have to check the translations I wasn't sure of now that I have
access to a dictionary/Google translate, and also redo the citations
in a more regular manner.

Cheers

Kim Jones

unread,
Oct 9, 2012, 11:58:23 PM10/9/12
to everyth...@googlegroups.com
It just may provide you that "flash of insight" you hanker for; that's my grand hope, anyway.

here's a snippet:

"There may be no reason to say something until after it has been said. Once it has been said a context develops to support it, and yet it would never have been produced by a context. It may not be possible to plan a new style in art, but once it comes about, it creates its own validity. It is usual to proceed forward step by step until one has got somewhere. But - it is also possible to get there first by any means and then look back and find the best route. A problem may be worked forward from the beginning but it may also be worked backward from the end.

Instead of proceeding steadily along a pathway, one jumps to a different point, or several different points in turn, and then waits for them to link together to form a coherent pattern. It is in the nature of the self-maximising system of the memory-surface that is mind to create a coherent pattern out of such separate points. If the pattern is effective then it cannot possibly matter whether it came about in a sequential fashion or not. A frame of reference is a context provided by the current arrangement of information. It is the direction of development implied by this arrangement. One cannot break out of this frame of reference by working from within it. It may be necessary to jump out, and if the jump is successful then the frame of reference is itself altered". (p. 240 - description of the process known as "Lateral Thinking".)

Give me a bell in about a week and we will jump in somewhere for a beer and I will pass you this volume (if still interested after reading the above) - I will have a little less Uni work to do for a short while; I may be able to get down to a bit of finessing of our translation of Bruno's "Amoebas".

Kim Jones

Bruno Marchal

unread,
Oct 10, 2012, 1:02:48 PM10/10/12
to everyth...@googlegroups.com



Some very sad news: Eric Vandenbussche has died. He is the person I
often called here "the little genius", who notably solved the first
open problem in my PhD thesis (as you can check on my url).

He was a *very* nice guy, a friend, also an ally and a support. I miss
him greatly.

R.I.P. Eric.




Stephen P. King

unread,
Oct 10, 2012, 1:41:51 PM10/10/12
to everyth...@googlegroups.com
Dear Bruno,

I am sad to hear that. :_( I was very intrigued by his results!
Could you get permission to publish all of his work?


--
Onward!

Stephen


Bruno Marchal

unread,
Oct 10, 2012, 1:59:00 PM10/10/12
to everyth...@googlegroups.com
On 09 Oct 2012, at 20:39, Stephen P. King wrote:

On 10/9/2012 12:28 PM, Bruno Marchal wrote:
On 09 Oct 2012, at 13:22, Stephen P. King wrote:

On 10/9/2012 2:16 AM, meekerdb wrote:
On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,

    Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?

It's probably because AI's have not needed to operate in environments where they need a self-model.  They are not members of a social community.  Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).

Brent
--

    Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...

Betting on self-consistency, and variant of that idea, shorten the proofs and speed the computations, sometimes in the "wrong direction".

Hi Bruno,

    Could you elaborate a bit on the betting mechanism, so that it is more clear how the shortening of proofs and speed-up of computations obtains?

The (correct) machine tries to prove its consistency (Dt, i.e. ~Bf) and never succeeds, so she bets that she can't do it. Then she proves Dt -> ~BDt, and infers interrogatively Dt and ~BDt.
Then either she adds the axiom Dt, with the D corresponding to the whole new theory; in that case she becomes inconsistent.
Or she adds Dt as a new axiom, without that new axiom "Dt" being included in what D refers to; in that case it is not so complex to prove that she will have infinitely many proofs capable of being arbitrarily shortened. I might explain more after I sum up Church's thesis and the phi_i and the W_i. That theorem admits a short proof. You can find one in Torkel Franzén's book on the use and misuse of Gödel's theorem, or you can read the original proof by Gödel in the book edited by Martin Davis, "The Undecidable" (now a Dover book).
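
(For reference, a rough statement of the speed-up result being alluded to, paraphrased from the standard formulation rather than quoted: if $T$ is a consistent, recursively axiomatized theory and $A$ is a sentence that $T$ does not prove, such as $A = \mathrm{Con}(T)$, the machine's "Dt", then for every total recursive function $g$ there are infinitely many theorems $\varphi$ of $T$ whose shortest proof in $T$ is longer than $g$ applied to the length of their shortest proof in $T + A$.)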








On almost all inputs, universal machine (creative set, by Myhill theorem, and in a sense of Post) have the alluring property to be arbitrarily speedable.

    This is a measure issue, no?

No.




Of course the trick is in "on almost all inputs" which means all, except a finite number of exception, and this concerns more evolution than reason.

    OK.


Evolution is basically computation + the halting oracle, implemented in physical time (which is itself based on computation + self-reference + arithmetical truth).

Bruno


    So you are equating selection by fitness in a local environment with a halting oracle?


Somehow. Newton would probably not have noticed the falling apple and F = ma if the dinosaurs hadn't been "stopped" some time before. The measure depends on "computation in the limit" (= computation + halting oracle) because the first-person experience is invariant under the UD's delays.

Bruno



Russell Standish

unread,
Oct 10, 2012, 8:55:57 PM10/10/12
to everyth...@googlegroups.com
Indeed, it would be a tragedy if Eric's insights were lost to the
world. Perhaps a posthumous article might be in order explaining his
insights? I would be happy to endorse on arXiv, assuming I can endorse
in the appropriate category (I found I couldn't endorse Colin's paper
a couple of years ago, though, so this may be an empty promise :).

Cheers

Roger Clough

unread,
Oct 11, 2012, 8:36:09 AM10/11/12
to everything-list
Hi Bruno Marchal

You would have to set up a carefully selected
intelligence test to test the intelligence of an AGI.

Would it then really have intelligence?
I don't think so. You'd have to cheat with
pre-supplied answers.


Roger Clough, rcl...@verizon.net
10/11/2012
"Forever is a long time, especially near the end." -Woody Allen


----- Receiving the following content -----
From: Bruno Marchal
Receiver: everything-list
Time: 2012-10-10, 13:59:00
Subject: Re: [foar] Re: The real reasons we don't have AGI yet




On 09 Oct 2012, at 20:39, Stephen P. King wrote:


On 10/9/2012 12:28 PM, Bruno Marchal wrote:



On 09 Oct 2012, at 13:22, Stephen P. King wrote:


On 10/9/2012 2:16 AM, meekerdb wrote:

On 10/8/2012 3:49 PM, Stephen P. King wrote:
Hi Russell,

Question: Why has little if any thought been given in AGI to self-modeling and some capacity to track the model of self under the evolutionary transformations?

It's probably because AI's have not needed to operate in environments where they need a self-model. They are not members of a social community. Some simpler systems, like Mars Rovers, have limited self-models (where am I, what's my battery charge,...) that they need to perform their functions, but they don't have general intelligence (yet).

Brent
--


Could the efficiency of the computation be subject to modeling? My thinking is that if an AI could rewire itself for some task to more efficiently solve that task...



Betting on self-consistency, and variant of that idea, shorten the proofs and speed the computations, sometimes in the "wrong direction".

Hi Bruno,

Could you elaborate a bit on the betting mechanism, so that it is more clear how the shortening of proofs and speed-up of computations obtains?



The (correct) machine tries to prove its consistency (Dt, i.e. ~Bf) and never succeeds, so she bets that she can't do it. Then she proves Dt -> ~BDt, and infers interrogatively Dt and ~BDt.
Then either she adds the axiom Dt, with the D corresponding to the whole new theory; in that case she becomes inconsistent.
Or she adds Dt as a new axiom, without that new axiom "Dt" being included in what D refers to; in that case it is not so complex to prove that she will have infinitely many proofs capable of being arbitrarily shortened. I might explain more after I sum up Church's thesis and the phi_i and the W_i. That theorem admits a short proof. You can find one in Torkel Franzén's book on the use and misuse of Gödel's theorem, or you can read the original proof by Gödel in the book edited by Martin Davis, "The Undecidable" (now a Dover book).

Bruno Marchal

unread,
Oct 11, 2012, 11:42:33 AM10/11/12
to everyth...@googlegroups.com

On 11 Oct 2012, at 02:55, Russell Standish wrote:

> On Wed, Oct 10, 2012 at 01:41:51PM -0400, Stephen P. King wrote:
>> On 10/10/2012 1:02 PM, Bruno Marchal wrote:
>>>
>>>
>>>
>>> A very sad news is that Eric Vandenbussche died. It is the guy I
>>> called here often "the little genius", who solved notably the
>>> first open problem in my PhD thesis (as you can consult on my
>>> url).
>>>
>>> That was a *very* nice guy, a friend, also, an ally, a support. I
>>> miss him greatly.
>>>
>>> R.I.P. Eric.
>>>
>>
>> Dear Bruno,
>>
>> I am sad to hear that. :_( I was very intrigued by his results!
>> Could you get permission to publish all of his work?
>>
>
> Indeed, it would be a tragedy if Eric's insights were lost to the
> world. Perhaps a posthumous article might be in order explaining his
> insights? I would be happy to endorse on arXiv, assuming I can endorse
> in the appropriate category (I found I couldn't endorse Colin's paper
> a couple of years ago, though, so this may be an empty promise :).
>

Thanks. We will try to do something.

Best,

Bruno


http://iridia.ulb.ac.be/~marchal/



Bruno Marchal

unread,
Oct 12, 2012, 11:43:43 AM10/12/12
to Fabric-o...@yahoogroups.com, everything-list List

On 12 Oct 2012, at 10:27, Brett Hall wrote:

 

On 12/10/2012, at 16:27, "Bruno Marchal" <mar...@ulb.ac.be> wrote:

> On 10 Oct 2012, at 10:44, a b wrote:
>
>> On Wed, Oct 10, 2012 at 2:04 AM, Brett Hall <brha...@hotmail.com>
>> wrote:
>>> On 09/10/2012, at 16:38, "hibbsa" <asb...@gmail.com> wrote:
>>>> http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
>>>
>>> Ben Goertzel's article that hibbsa sent and linked to above says in
>>> paragraph 7 that,"I salute David Deutsch’s boldness, in writing and
>>> thinking about a field where he obviously doesn’t have much
>>> practical grounding. Sometimes the views of outsiders with very
>>> different backgrounds can yield surprising insights. But I don’t
>>> think this is one of those times. In fact, I think Deutsch’s
>>> perspective on AGI is badly mistaken, and if widely adopted, would
>>> slow down progress toward AGI dramatically. The real reasons we
>>> don’t have AGI yet, I believe, have nothing to do with Popperian
>>> philosophy, and everything to do with:..." (Then he listed some
>>> things).
>>>
>>> That paragraph quoted seems an appeal to authority in an
>>> underhanded way. In a sense it says (in a condescending manner)
>>> that DD has little practical grounding in this subject and can
>>> probably be dismissed on that basis...but let's look at what he
>>> says anyways. As if "practical grounding" by the writer would
>>> somehow have made the arguments themselves valid or more valid (as
>>> though that makes sense). The irony is, Goertzel in almost the next
>>> breath writes that AGI has "nothing to do with Popperian
>>> philosophy..." Presumably, by his own criterion, he can only make
>>> that comment with any kind of validity if he has "practical
>>> grounding" in Popperian epistemology? It seems he has indeed
>>> written quite a bit on Popper...but probably as much as DD has
>>> written on stuff related to AI. So how much is enough before you
>>> should be taken seriously? I'm also not sure that Goertzel is
>>> expert in Popperian *epistemology*.
>>>
>>> Later he goes on to write, "I have conjectured before that once
>>> some proto-AGI reaches a sufficient level of sophistication in its
>>> behavior, we will see an “AGI Sputnik” dynamic — where various
>>> countries and corporations compete to put more and more money and
>>> attention into AGI, trying to get there first. The question is,
>>> just how good does a proto-AGI have to be to reach the AGI Sputnik
>>> level?"I'm not sure what "proto-AGO" means? It perhaps misses the
>>> central point that intelligence is a qualitative, not quantitative
>>> thing. Sputnik was a less advanced version of the International
>>> Space Station (ISS)...or a GPS satellite.
>>>
>>> But there is no "less advanced" version of being a universal
>>> explainer (i.e a person, i.e: intelligent, i.e: AGI) is there? So
>>> the analogy is quite false. As a side point is the "A" in AGI
>>> racist? Or does the A simply mean "intelligently designed" as
>>> opposed to "evolved by natural selection"? I'm not sure...what will
>>> Artificial mean to AGI when they are here? I suppose we might
>>> augment our senses in all sorts of ways so the distinction might be
>>> blurred anyways as it is currently with race.So I think the Sputnik
>>> analogy is wrong.
>>>
>>> A better analogy would be...say you wanted to develop a *worldwide
>>> communications system* in the time of (say) the American Indians in
>>> the USA (say around 1200 AD for argument's sake). Somehow you knew
>>> *it must be possible* to create a communications system that
>>> allowed transmission of messages across the world at very very high
>>> speeds but so far your technology was limited to ever bigger fires
>>> and more and more smoke. Then the difference between (say) a smoke
>>> signal and a real communications satellite that can transmit a
>>> message around the world (like Sputnik) would be more appropriate.
>>> Then the smoke signal is the current state of AGI...and Sputnik is
>>> real AGI - what you get once you understand something brand new
>>> about orbits, gravity and radio waves...and probably most
>>> importantly - that the world was a giant *sphere* plagued by high
>>> altitude winds and diverse weather systems and so forth that would
>>> never even have entered your mind. Things you can't even conceive
>>> of if all you are doing in trying to devise a better world-wide
>>> communications system is making ever bigger fires and more and more
>>> smoke...because *surely* that approach will eventually lead to
>>> world-wide communications. After all - it's just a matter of bigger
>>> fires creating more smoke which travels a greater distance. Right? But
>>> even that analogy is no good really because the smoke signal and
>>> the satellite still have too much in common, perhaps. They are
>>> *both ways of communicating*. And yet, current "AI" and real "I" do
>>> *not* have in common "intelligence" or "thinking".
>>>
>>> What on Earth could "proto-agi" be in Ben's Goertzel's world? What
>>> would be the criterion for recognising it as distinct from actual
>>> AGI?
>>>
>>> I get the impression Ben might have missed the point that
>>> intelligence is just qualitatively different from non-
>>> intelligence because the entire article is fixated on it being all
>>> about improvements in hardware. If you're intelligent then you are
>>> a universal explainer. And you are either a universal
>>> explainer...or not. There's no "Sputnik" level of intelligence
>>> which will lead towards GPS and ISS levels of intelligence. Right?
>>>
>>> Brett.
>>
>> At the end of the day the guy has just been told he hasn't made any
>> progress, so it seems natural [to me] that he'll hit back with some
>> arsey comments, one of which is the line about Deutsch, which is supposed
>> to mean something like ".....for a guy who knows shit about the
>> subject"
>>
>> Personally I think that if you know why someone is getting something
>> like that in, then it's better to just ignore it and look for the main
>> ideas. His idea that intrigues me is about how to get some emergence
>> taking place out of the underlying components.
>>
>> This has to be part of the problem because an inner sense of self
>> cannot be written directly into code. One criticism of Deutsch's
>> article, for me, was that he seemed to trivialise this aspect by
>> calling it nothing more than 'self-reference'. It isn't self reference
>> alone, it's inner experience. It's what is going on in my head right now,
>> me thinking I am here and me seeing things in my room.
>
> It is explained by the difference between "provable(p)" which involves
> self-reference, and "provable(p) & p", which involves self reference
> and truth. The first gives a theory of self-reference in a third
> person way, like when you say "I have two arms", and the second
> provides a self-reference in a first person subjective way. It defines
> a non-nameable knower, verifying the axioms of the classical theory of
> knowledge (S4), and which happens already to be non-definable by the
> subject itself. So if interested you can consult my sane04. It does
> confirm many ideas of Deutsch, but it uses a more standard vocabulary.

Bruno, I'm confused. But I feel like I'm 'almost there'. If you are some entity that can do "provable(p)", then you recognise your image in a mirror... or... what exactly?

I am very literal on this. I am thinking of a machine with very few beliefs, like the following one (together with axioms for equality; no need for logic here):

x + 0 = x
x + s(y) = s(x + y)

x * 0 = 0
x * s(y) = (x * y) + x

Or like this:

((K, x), y) = x
(((S, x), y), z) = ((x, z), (y, z))

Those little theories can be shown to be Turing universal.
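
(To make that claim concrete, here is a minimal executable sketch in Python of the two little rule sets above; the unary-numeral and nested-pair representations are illustrative choices, and the sketch only shows the rules computing, not the universality proofs themselves.)

# Sketch 1: the recursion equations for + and * read as a program.
# Numerals are ordinary Python ints standing for 0, s(0), s(s(0)), ...
def add(x, y):                 # x + 0 = x ;  x + s(y) = s(x + y)
    return x if y == 0 else add(x, y - 1) + 1

def mul(x, y):                 # x * 0 = 0 ;  x * s(y) = (x * y) + x
    return 0 if y == 0 else add(mul(x, y - 1), x)

# Sketch 2: the K and S rules as a term rewriter.
# A term is 'S', 'K', a variable string, or a pair (f, a) meaning "f applied to a".
def step(t):
    """One leftmost reduction step; returns (term, reduced?)."""
    if isinstance(t, tuple):
        f, a = t
        if isinstance(f, tuple) and f[0] == 'K':     # ((K, x), y) = x
            return f[1], True
        if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
            x, y, z = f[0][1], f[1], a               # (((S, x), y), z) = ((x, z), (y, z))
            return ((x, z), (y, z)), True
        nf, ok = step(f)
        if ok:
            return (nf, a), True
        na, ok = step(a)
        if ok:
            return (f, na), True
    return t, False

def normal_form(t, limit=10000):
    for _ in range(limit):
        t, ok = step(t)
        if not ok:
            return t
    raise RuntimeError("no normal form found within the step limit")

print(add(2, 3), mul(2, 3))                    # 5 6
print(normal_form(((('S', 'K'), 'K'), 'v')))   # SKK applied to v reduces to v (SKK acts as identity)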

Then I extend them with a bit of classical logic and, importantly, with some induction axioms. This makes them "Löbian", which means that they are maximally self-referential (they will remain Löbian whatever axioms you add, as long as they remain arithmetically sound). Such machines can be shown (it is equivalent with Löbianity) to know (in a technical sense) that they are universal.

Those machines can (like the first one above) represent themselves. This is always long to show, but that is what Gödel did in his 1931 paper: he translated meta-arithmetic *into* arithmetic. There is no magic: the first theory above handles only the objects 0, s(0), s(s(0)), ..., so you will have to represent variables by such objects (say by the positive even numbers: s(s(0)), s(s(s(s(0)))), ...), and then you will have to represent formulas and proofs by such objects, and prove that you can represent all the working machinery of such theories *in* the language of the theory. You will thus represent provable-by-this-theory(p) in terms of a purely arithmetical relation. The predicate "provable" itself represents the machine's ability to prove, in the language of the machine.
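
(Just to make "formulas as numbers" concrete, here is a toy Gödel numbering in Python; the symbol table and the prime-power coding are an arbitrary illustrative choice, not Gödel's actual scheme nor the even-number convention mentioned above.)

# A toy Gödel numbering: code each symbol by a small number, then pack the
# sequence of codes into one number as p1^c1 * p2^c2 * ... (unique by prime factorization).
SYMBOLS = {'0': 1, 's': 2, '(': 3, ')': 4, '=': 5, '+': 6, '*': 7, 'x': 8}

def primes(n):
    """First n primes, by trial division (fine for toy inputs)."""
    ps = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number(expr):
    codes = [SYMBOLS[ch] for ch in expr]
    g = 1
    for p, c in zip(primes(len(codes)), codes):
        g *= p ** c
    return g

print(godel_number('s(0)=0+s(0)'))   # one (big) number coding the whole formula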

Now such a machine can prove its own Gödel second incompleteness theorem: both theories above, when supplemented with the induction axioms, can prove

not-provable ("0 = s(0)") implies not-provable("not provable("0=s(0)")")

Let us write provable(p) as Bp, not as ~, "0=s(0)" as f (for falsity), and "->" for implies.

The line above becomes

~Bf -> ~B(~Bf)     You can read it as: if I never prove a falsity, then I cannot prove that I will never prove a falsity.

But ~Bf is equivalent to Bf -> f (you can verify by the truth-table method that ~p is the same as p -> f).
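
(That truth-table check, spelled out as a tiny computation; f is just the constant False and "p -> f" is material implication.)

# Verify that ~p and (p -> f) agree for every truth value of p, with f = False.
f = False
for p in (False, True):
    print(p, (not p) == ((not p) or f))    # p -> f  is  (not p) or f
# Output: "False True" and "True True" -- the equivalence holds in both rows.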

So if the machine is consistent (and this can be proved for those simple machines, in some sense), we have that ~Bf is true, and so, together with ~Bf -> ~B(~Bf), we have that ~B(~Bf) is true too; thus ~Bf is true (for the machine) and yet not provable by the machine. This means that Bf -> f is true but not provable by the machine.

This means that Bp -> p will be true, but not, in general, provable by the machine. This made Gödel realize that provability does not behave like a knowledge operator, as we ask a knowledge operator to obey both Bp -> p and B(Bp -> p).

But this makes it possible to define a new operator K, by Kp = Bp & p, since now we know that for the machine Bp does not always imply p. This explains why we will have that

Kp <-> Bp will be true, yet such a truth cannot be justified by the machine. Moreover, they will obey different logics: Kp obeys a knowledge logic (S4), but Bp obeys the weird logic G.

We have a case of two identical sets of "beliefs" obeying quite different laws, and they fit well the difference between the first person (K) and the "scientific view of oneself" (G).

Note this: we cannot define who we (first person) are by any third-person description. The machine is in the same situation, as the operator K can be shown NOT to be definable in the language of the machine (for reasons similar to Tarski's undefinability theorem for truth). This makes such a K a good candidate for the first person, as it is a knower (unable to doubt at some level: consciousness) which cannot give any third-person description or account of what or who he is.
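
(For reference, the usual axiomatizations of the two logics just named, written in the notation of this thread; this is the standard textbook presentation rather than a quotation:

G (provability logic): $B(p \to q) \to (Bp \to Bq)$ and the Löb axiom $B(Bp \to p) \to Bp$.
S4 (knowledge logic): $K(p \to q) \to (Kp \to Kq)$, $Kp \to p$, $Kp \to KKp$.

Both are closed under modus ponens and necessitation (from a proof of p, infer Bp, respectively Kp). In G, $Bp \to p$ is not a theorem in general, which matches the point above.)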

Modeling "belief" by "provability", the difference is really the difference between 

"I believe that the snow is white", and

"I believe that the snow is white,  and the snow is (actually) white".



With provable(p) and (p), you recognise that the image in the mirror is you and that you are a self.

I am OK, you are right.




Or something like that. I'm sure you have something more formal. I'll try again:

Is provable(p) something like "It can be shown that I have two arms" because (p) is just
"I have two arms"?

Yes.





Putting that into natural language, though, seems to suggest that provable(p) must, as a prior necessity, have "p" as true.


Not necessarily. The Löbian machine which adds the axiom Bf remains consistent, and so we have B(Bf) and ~Bf. It is a case of Bp and ~p.
That is why provability is more akin to belief than to knowledge (in the standard terminology).



But that's not what you're saying. You seem to be saying it's *easier* for some entity which can do computations to get to provable(p) than (p).


The machine can be lucky, or well built, relative to her environment, but Bp cannot necessitate p. The machine might be dreaming, for example.




I do think I understand the difference between a third-person self reference and first-person self reference. It's almost like the difference between pointing at a mirror and asserting:

"That is me" (where the "me" is not a "self" but rather just a bunch of atoms you are in control of.)

And pointing at a mirror and asserting "That's my *reflection*. And this is me." (Where the "me" corresponds to some feeling that establishes one's own existence to one's own satisfaction).

Yes, indeed. The amazing thing is that such a difference already makes sense for very little machines.

The mirror test, or rather a weakening of it, illustrates Löbianity/induction. It is enough to "induce" that there is some reality behind the mirror, and to show astonishment when you discover that there is no such reality. Amazingly, some spiders are already like that:



Bruno


