Penrose and algorithms


chris peck

Jun 9, 2007, 7:03:34 AM
to everyth...@googlegroups.com
Hello

The time has come again when I need to seek advice from the everything-list
and its contributors.

Penrose, I believe, has argued that the inability of any algorithm to solve
the halting problem, together with the ability of humans, or at least Kurt
Gödel, to see that formal systems are incomplete, demonstrates that human
reason is not algorithmic in nature - and therefore that the AI project is
fundamentally flawed.
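
For concreteness, the diagonalization behind the unsolvability of the
halting problem can be sketched in a few lines of Python; the 'halts'
oracle below is hypothetical, and the point is precisely that no real
implementation of it can exist:

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) would halt."""
    raise NotImplementedError("no algorithm can implement this")

def diagonal(program):
    """Do the opposite of whatever 'halts' predicts about running
    'program' on its own source."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever instead
            pass
    return            # predicted to loop, so halt immediately

# diagonal(diagonal) halts iff halts(diagonal, diagonal) says it does
# not: a contradiction either way, so no correct, total 'halts' exists.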

What is the general consensus here on that score? I know that there are many
perspectives here, including those who agree with Penrose. Are there any
decent threads I could look at that deal with this issue?

All the best

Chris.


Bruno Marchal

Jun 9, 2007, 12:40:50 PM
to everyth...@googlegroups.com
Hi Chris,

On 09-Jun-07, at 13:03, chris peck wrote:

> [...]


This is a fundamental issue, even though things have been clear for the
logicians since 1921 ...
But apparently it is still very cloudy for the physicists (except
Hofstadter!).

I have no time to explain, but let me quote the first paragraph of my
Siena paper (your question is at the heart of the interview of the
Löbian machine and the arithmetical interpretation of Plotinus).

But you can find many more explanations on my web pages (in French and
in English). In a nutshell, Penrose, though quite courageous and more
lucid on the mind-body problem than the average physicist, is deadly
mistaken on Gödel. Gödel's theorems are a very lucky event for mechanism:
eventually they lead to the machines' theologies ...

The book by Franzen on the misuse of Gödel is quite good. A deep book
is also the one by Judson Webb (ref. in my thesis). We will have the
opportunity to come back to this deep issue, which illustrates a gap
between logicians and physicists.

Best,

Bruno


------ (excerpt of "A Purely Arithmetical, yet Empirically Falsifiable,
Interpretation of Plotinus' Theory of Matter", CiE 2007)

1) Incompleteness and Mechanism

There is a vast literature where Gödel's first and second
incompleteness theorems are used to argue that human beings are
different from, if not superior to, any machine. The most famous
attempts have been given by J. Lucas in the early sixties and by R.
Penrose in two famous books [53, 54]. Such arguments are not well
supported. See for example the recent book by T. Franzen [21]. There is
also a less well known tradition where Gödel's theorems are used in
favor of the mechanist thesis. Emil Post, in a remarkable anticipation
written about ten years before Gödel published his incompleteness
theorems, already discovered both the main "Gödelian motivation"
against mechanism, and the main pitfall of such argumentations [17,
55]. Post is the first discoverer of Church's Thesis, or the
Church-Turing Thesis, and Post is the first one to prove the first
incompleteness theorem from a statement equivalent to Church's Thesis,
i.e. the existence of a universal (Post said "complete") normal
(production) system. In his anticipation, Post concluded at first that
the mathematician's mind, or the logical process, is essentially
creative. He adds: "It makes of the mathematician much more than a
clever being who can do quickly what a machine could do ultimately. We
see that a machine would never give a complete logic; for once the
machine is made we could prove a theorem it does not prove" (Post's
emphasis). But Post quickly realized that a machine could do the same
deduction for its own mental acts, and admits that: "The conclusion
that man is not a machine is invalid. All we can say is that man cannot
construct a machine which can do all the thinking he can. To illustrate
this point we may note that a kind of machine-man could be constructed
who would prove a similar theorem for his mental acts."

This has probably constituted his motivation for lifting the term
creative to his set-theoretical formulation of mechanical universality
[56]. To be sure, an application of Kleene's second recursion theorem,
see [30], can make any machine self-replicating, and Post should have
said only that man cannot both construct a machine doing his thinking
and prove that such a machine does so. This is what remains from a
reconstruction of the Lucas-Penrose argument: if we are machines, we
cannot constructively specify which machine we are, nor, a fortiori,
which computation supports us. Such analysis begins perhaps with
Benacerraf [4] (see [41] for more details). In his book on the subject,
Judson Webb argues that Church's Thesis is a main ingredient of the
Mechanist Thesis. Then he argues that, given that incompleteness is an
easy consequence of Church's Thesis (one double diagonalization step,
see above), Gödel's 1931 theorem, which proves incompleteness without
appeal to Church's Thesis, can be taken as a confirmation of it. Judson
Webb concludes that Gödel's incompleteness theorem is a very lucky
event for the mechanist philosopher [70, 71].

Torkel Franzen, who concentrates mainly on the negative (antimechanist
in general) abuses of Gödel's theorems, notes, after describing some
impressive self-analysis of a formal system like Peano Arithmetic (PA),
that: "Inspired by this impressive ability of PA to understand itself,
we conclude, in the spirit of the metaphorical 'applications' of the
incompleteness theorem, that if the human mind has anything like the
powers of profound self-analysis of PA or ZF, we can expect to be able
to understand ourselves perfectly". Now, there is nothing metaphorical
in this conclusion if we make clear some assumption of classical
(platonist) mechanism, for example under the (necessarily
non-constructive) assumption that there is a substitution level where
we are Turing-emulable. We would not personally notice any digital
functional substitution made at that level or below [38, 39, 41]. The
second incompleteness theorem can then be conceived as an "exact law of
psychology": no consistent machine can prove its own consistency from a
description of herself made at some (relatively) correct substitution
level, which exists by assumption (see also [50]). What is remarkable,
of course, is that all machines having enough provability abilities can
prove such psychological laws, and, as T. Franzen singles out, there is
a case for being rather impressed by the profound self-analysis of
machines like PA and ZF or any of their consistent recursively
enumerable extensions.

This leads us to the positive (open-minded toward the mechanist
hypothesis) use of incompleteness. Actually, the whole of recursion
theory, mainly intensional recursion theory [59], can be seen in that
way, and this is still more evident when we look at the numerous
applications of recursion theory in theoretical artificial intelligence
or in computational learning theory. I refer the reader to the
introductory paper by Case and Smith, or to the book by Osherson and
Martin [14] [46]. In this short paper we will have to consider machines
having both provability abilities and inductive inference abilities,
but actually we will need only trivial such inductive inference
abilities. I call such machines "Löbian" for the prominent role of
Löb's theorem, or formula, in our setting, see below.

Now, probably due to the abundant abuses of Gödel's theorems in
philosophy, physics and theology, negative feelings about any possible
applications of incompleteness in those fields could have developed.
Here, on the contrary, it is our purpose to illustrate that the
incompleteness theorems and some of their generalisations provide a
rather natural purely arithmetical interpretation of Plotinus'
Platonist, non-Aristotelian "theology", including his "Matter Theory".
As a theory bearing on matter, such a theory is obviously empirically
falsifiable: it is enough to compare empirical physics with the
arithmetical interpretation of Plotinus' theory of Matter. A divergence
here would not refute Plotinus, of course, but only the present
arithmetical interpretation. This will illustrate the internal
consistency and the external falsifiability of some theology.

....

------

http://iridia.ulb.ac.be/~marchal/

chris peck

Jun 11, 2007, 3:18:58 PM
to everyth...@googlegroups.com
cheers Bruno. :)


> From: Bruno Marchal <mar...@ulb.ac.be>
> Subject: Re: Penrose and algorithms
> Date: Sat, 9 Jun 2007 18:40:50 +0200
>
> [...]

Pete Carlton

Jun 21, 2007, 12:29:11 PM
to everyth...@googlegroups.com
You could look up "Murmurs in the Cathedral", Daniel Dennett's review of Penrose's "The Emperor's New Mind", in the Times Literary Supplement (and maybe online somewhere?)

Here's an excerpt from a review of the review:


--

However, Penrose's main thesis, for which all this scientific exposition is mere supporting argument, is that algorithmic computers cannot ever be intelligent, because our mathematical insights are fundamentally non-algorithmic. Dennett is having none of it, and succinctly points out the underlying fallacy: even if there could not be an algorithm for a particular behaviour, there could still be an algorithm that was very, very good (if not perfect) at that behaviour:

<Dennett>
"The following argument, then, in simply fallacious:
  1. X is superbly capable of achieving checkmate.
  2. There is no (practical) algorithm guaranteed to achieve checkmate,
    therefore
  3. X does not owe its power to achieve checkmate to an algorithm.
So even if mathematicians are superb recognizers of mathematical truth, and even if there is no algorithm, practical or otherwise, for recognizing mathematical truth, it does not follow that the power of mathematicians to recognize mathematical truth is not entirely explicable in terms of their brains executing an algorithm. Not an algorithm for intuiting mathematical truth - we can suppose that Penrose has proved that there could be no such thing. What would the algorithm be for, then? Most plausibly it would be an algorithm - one of very many - for trying to stay alive, an algorithm that, by an extraordinarily convoluted and indirect generation of byproducts, "happened" to be a superb (but not foolproof) recognizer of friends, enemies, food, shelter, harbingers of spring, good arguments - and mathematical truths. "
</Dennett>

It is disconcerting that he does not even address the issue, and often writes as if an algorithm could have only the powers it could be proven mathematically to have in the worst case.




On Jun 9, 2007, at 4:03 AM, chris peck wrote:


[...]




Russell Standish

Jun 21, 2007, 9:11:51 AM
to everyth...@googlegroups.com
On Thu, Jun 21, 2007 at 09:29:11AM -0700, Pete Carlton wrote:
>
> It is disconcerting that he does not even address the issue, and
> often writes as if an algorithm could have only the powers it could
> be proven mathematically to have in the worst case.

I agree with Dennett here. Just because the travelling salesman
problem is NP-hard (and presumably incapable of being solved by a
polynomial-time algorithm) doesn't mean there aren't algorithms
capable of getting within a few percent of the optimum route in
polynomial time. These algorithms do exist; I've worked with them.
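
To make that concrete, here is a minimal sketch of the simplest member
of that family (nearest neighbour; the few-percent guarantees come from
more refined algorithms, and the city list below is made up):

import math

def nearest_neighbour_tour(points):
    """Greedy TSP heuristic: from each city, go to the closest
    unvisited one. O(n^2) time, no optimality guarantee, but often a
    reasonable tour in practice."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 1)]  # made-up instance
print(nearest_neighbour_tour(cities))              # [0, 4, 2, 3, 1]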

So just because something is uncomputable doesn't mean there isn't an
algorithm for computing an approximation to it. \Omega is the archetypal
noncomputable number, yet someone has computed the first 1000 bits
of it (see Li and Vitanyi for the details). And \Omega is probably a far
worse problem, algorithmically speaking, than intuiting mathematical truth.
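
In the same spirit, lower bounds on \Omega are obtained by dovetailing
all programs and crediting 2^-|p| for each program p observed to halt.
The sketch below uses a made-up stand-in for a prefix-free universal
machine, just to show the shape of the computation; it is not Chaitin's
actual machine:

from itertools import product

def toy_machine(bits, budget):
    """Stand-in for a prefix-free universal machine: True iff program
    'bits' halts within 'budget' steps. For illustration the halting
    programs are exactly the strings 1...10, a prefix-free set; a real
    machine would interpret the bits as code."""
    return (bits[-1] == 0
            and all(b == 1 for b in bits[:-1])
            and budget >= len(bits))

def omega_lower_bound(max_len, budget):
    """Sum 2^-|p| over programs seen to halt: a computable,
    monotonically improving lower bound on this machine's Omega."""
    total = 0.0
    for n in range(1, max_len + 1):
        for bits in product((0, 1), repeat=n):
            if toy_machine(bits, budget):
                total += 2.0 ** -n
    return total

print(omega_lower_bound(max_len=10, budget=1000))  # 0.9990234375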

Cheers

--

----------------------------------------------------------------------------
A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics                              hpc...@hpcoders.com.au
UNSW SYDNEY 2052, Australia              http://www.hpcoders.com.au
----------------------------------------------------------------------------

LauLuna

Jun 28, 2007, 10:32:12 AM
to Everything List

This is not fair to Penrose. He has convincingly argued in 'Shadows of
the Mind' that human mathematical intelligence cannot be a knowably
sound algorithm.

Assume X is an algorithm representing human mathematical
intelligence. The point is not that man cannot recognize X as
representing his own intelligence; it is rather that human
intelligence cannot know X to be sound (independently of whether X is
recognized as what it is). And this is strange, because humans could
exhaustively inspect X and they should find it correct, since it
contains the same principles of reasoning human intelligence employs!

One way out is claiming that human intelligence is inconsistent.
Another, that such a thing as human intelligence could not exist,
since it is not well defined. The latter seems more of a serious
objection to me. So I consider Penrose's argument inconclusive.

Anyway, the use Lucas and Penrose make of Gödel's theorem makes it seem
less likely that human reason can be reproduced by machines. This must
be granted.

Regards


On Jun 9, 18:40, Bruno Marchal <marc...@ulb.ac.be> wrote:
> [...]

Jesse Mazer

Jun 28, 2007, 4:05:13 PM
to everyth...@googlegroups.com
LauLuna wrote:

>
>This is not fair to Penrose. He has convincingly argued in 'Shadows of
>the Mind' that human mathematical intelligence cannot be a knowably
>sound algorithm.
>
>Assume X is an algorithm representing the human mathematical
>intelligence. The point is not that man cannot recognize X as
>representing his own intelligence; it is rather that human
>intelligence cannot know X to be sound (independently of whether X is
>recognized as what it is). And this is strange because humans could
>exhaustively inspect X and they should find it correct since it
>contains the same principles of reasoning human intelligence employs!

But why do you think human mathematical intelligence should be based on
nothing more than logical deductions from certain "principles of reasoning",
like an axiomatic system? It seems to me this is the basic flaw in the
argument--for an axiomatic system we can look at each axiom individually,
and if we think they're all true statements about mathematics, we can feel
confident that any theorems derived logically from these axioms should be
true as well. But if someone gives you a detailed simulation of the brain of
a human mathematician, there's nothing analogous you can do to feel 100%
certain that the simulated brain will never give you a false statement. It
helps if you actually imagine such a simulation being performed, and then
think about what Godel's theorem would tell you about this simulation, as I
did in this post:

http://groups.google.com/group/everything-list/browse_thread/thread/f97ba8b2903333f7/5627eb66017304f2?lnk=gst&rnum=1#5627eb66017304f2

Jesse


LauLuna

Jun 28, 2007, 7:47:35 PM
to Everything List
For any Turing machine there is an equivalent axiomatic system;
whether we could construct it or not is of no significance here.

Reading your link I was impressed by Russell Standish's sentence:

'I cannot prove this statement'

and how he said he could not prove it true and then proved it true.

Isn't it more likely that the sentence is paradoxical and therefore
non-propositional? This is what could make a difference between humans
and computers: the corresponding sentence for a computer (when 'I' is
replaced with the description of a computer) could not be non-
propositional: it would be a Gödelian sentence.
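
For reference, the reason the machine's version escapes the paradox (a
standard rendering, nothing specific to this thread): provability,
unlike informal truth, is an arithmetically definable predicate, so the
diagonal lemma yields a sentence G with

  PA \vdash G \leftrightarrow \neg\mathrm{Prov}_{PA}(\ulcorner G \urcorner)

If PA is consistent it does not prove G; if PA is omega-consistent it
does not prove \neg G either. G says "I am not provable in PA", where
"provable" is a precise arithmetical notion, so no Liar-style collapse
into non-propositionality occurs.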

Regards

On Jun 28, 10:05 pm, "Jesse Mazer" <laserma...@hotmail.com> wrote:
> [...]

Jesse Mazer

Jun 28, 2007, 8:13:59 PM
to everyth...@googlegroups.com
LauLuna wrote:

>
>
>For any Turing machine there is an equivalent axiomatic system;
>whether we could construct it or not, is of no significance here.

But for a simulation of a mathematician's brain, the axioms wouldn't be
statements about arithmetic which we could inspect and judge whether they
were true or false individually, they'd just be statements about the initial
state and behavior of the simulated brain. So again, there'd be no way to
inspect the system and feel perfectly confident the system would never
output a false statement about arithmetic, unlike in the case of the
axiomatic systems used by mathematicians to prove theorems.

>
>Reading your link I was impressed by Russell Standish's sentence:
>
>'I cannot prove this statement'
>
>and how he said he could not prove it true and then proved it true.

But "prove" does not have any precisely-defined meaning here. If you wanted
to make it closer to Godel's theorem, then again, you'd have to take a
detailed simulation of a human mind which can output various statements, and
then look at the statement "The simulation will never output this
statement"--certainly the simulated mind can see that if he doesn't make a
mistake he *will* never output that statement, but he can't be 100% sure
he'll never make a mistake, and the statement itself is only about the
well-defined notion of what output the simulation gives, not about more
ill-defined notions of what the simulation "knows" or can "prove" in its own
mind.
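
To make the quining explicit, here is a toy Python sketch (an assumed
setup, nothing like Godel's actual arithmetization): 'program' is the
full source of a small program, and the sentence it builds refers to
that very source and asserts it is never output:

template = ('template = {t!r}\n'
            'sentence = "The program " + repr(template.format(t=template))'
            ' + " never outputs this sentence."\n')
program = template.format(t=template)   # the program's own text
sentence = "The program " + repr(program) + " never outputs this sentence."

# The sentence is purely about output behavior: the program *could*
# print it, which would simply make it false. Printing a truncation,
# as here, leaves it true; no notion of "knowing" is involved.
print(sentence[:60] + " ...")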

Jesse


Bruno Marchal

Jun 29, 2007, 5:57:28 AM
to everyth...@googlegroups.com

On 28-Jun-07, at 16:32, LauLuna wrote:

>
>
> This is not fair to Penrose. He has convincingly argued in 'Shadows of
> the Mind' that human mathematical intelligence cannot be a knowably
> sound algorithm.
>
> Assume X is an algorithm representing the human mathematical
> intelligence. The point is not that man cannot recognize X as
> representing his own intelligence; it is rather that human
> intelligence cannot know X to be sound (independently of whether X is
> recognized as what it is). And this is strange because humans could
> exhaustively inspect X and they should find it correct since it
> contains the same principles of reasoning human intelligence employs!


As far as "human intelligence" is sound and finitely describable as X,
human intelligence cannot recognize X as being "human intelligence".

>
> One way out is claiming that human intelligence is inconsistent.
> Another, that such a thing as human intelligence could not exist,
> since it is not well defined. The latter seems more of a serious
> objection to me. So, I consider Penrose's argument inconclusive.


Of course this will not work assuming comp, i.e. the (non-constructive)
assumption that there is a level of description such that I can be
described correctly at that level. The conclusion is only that I cannot
prove to myself that such a level is the correct one, so the "yes
doctor" has to be a non-constructive bet. Practically it needs some
"platonic" act of faith. Assuming comp, we don't have to define what
intelligence or consciousness is ... to make the reasoning.

>
> Anyway, the use Lucas and Penrose make of Gödel's theorem makes it seem
> less likely that human reason can be reproduced by machines. This must
> be granted.


The Lucas-Penrose (of "The Emperor's New Mind") argument is just
incorrect, and its maximal correct reconstruction just shows that human
reason/body cannot build a machine provably or knowably endowed with
human reason/body.
It can do that in some non-provable way, and there is a case that
animals have been doing something similar since the invention of asexual
and sexual reproduction.

Penrose is correct in Shadows of the Mind (by adding the "knowably" you
refer to above), but he does not take the correction seriously into
account.
But the whole of the arithmetical interpretation of the Plotinus
hypostases, including its matter theory, is built on that nuance.
Assuming comp, we really cannot know (soundly) which machine we are,
and thus which computations support us. This gives the arithmetical
interpretation of the first-person comp indeterminacies. It also
predicts that any sound Löbian machine looking at itself below its
substitution level will discover a sharable form of indeterminacy, as
QM confirms (and illustrates).

I do appreciate Penrose (I talked with him in Siena). Unlike many
physicists, he is quite aware of the existence and hardness of the
mind-body problem, and agrees that you cannot have both materialism and
computationalism (but for different reasons than mine, and, as I said,
slightly incorrect ones, which force him to speculate on the wrongness
of both QM and comp). I get the same conclusion by keeping comp and
(most probably) QM, but then by abandoning physicalism/materialism.

Regards,

Bruno


http://iridia.ulb.ac.be/~marchal/

LauLuna

Jun 29, 2007, 10:17:58 AM
to Everything List

On Jun 29, 02:13, "Jesse Mazer" <laserma...@hotmail.com> wrote:
> LauLuna wrote:
>
> >For any Turing machine there is an equivalent axiomatic system;
> >whether we could construct it or not, is of no significance here.
>
> But for a simulation of a mathematician's brain, the axioms wouldn't be
> statements about arithmetic which we could inspect and judge whether they
> were true or false individually, they'd just be statements about the initial
> state and behavior of the simulated brain. So again, there'd be no way to
> inspect the system and feel perfectly confident the system would never
> output a false statement about arithmetic, unlike in the case of the
> axiomatic systems used by mathematicians to prove theorems.
>

Yes, but this is not the point. For any Turing machine performing
mathematical skills there is also an equivalent mathematical axiomatic
system; if we are sound Turing machines, then we could never know that
mathematical system to be sound, despite its axioms being the same ones
we use.

And the impossibility has to be a logical impossibility, not merely a
technical or physical one, since it depends on Gödel's theorem. That's
a bit odd, isn't it?

Regards

Jesse Mazer

Jun 29, 2007, 1:10:18 PM
to everyth...@googlegroups.com
LauLuna wrote:

>
>
>On Jun 29, 02:13, "Jesse Mazer" <laserma...@hotmail.com> wrote:
> > [...]
>
>Yes, but this is not the point. For any Turing machine performing
>mathematical skills there is also an equivalent mathematical axiomatic
>system; if we are sound Turing machines, then we could never know that
>mathematical system to be sound, despite its axioms being the same ones
>we use.

I agree, a simulation of a mathematician's brain (or of a giant simulated
community of mathematicians) cannot be a *knowably* sound system, because we
can't do the trick of examining each axiom and seeing they are individually
correct statements about arithmetic as with the normal axiomatic systems
used by mathematicians. But that doesn't mean it's unsound either--it may in
fact never produce a false statement about arithmetic, it's just that we
can't be sure in advance, the only way to find out is to run it forever and
check.

But Penrose was not just arguing that human mathematical ability can't be
based on a knowably sound algorithm, he was arguing that it must be
*non-algorithmic*. I think my thought-experiment shows why this doesn't make
sense--we can see that Godel's theorem doesn't prove that an uploaded brain
living in a closed computer simulation S would think any different from us,
just that it wouldn't be able to correctly output a theorem about arithmetic
equivalent to "the simulation S will never output this statement". But this
doesn't show that the uploaded mind somehow is not self-aware or that we
know something it doesn't, since *we* can't correctly judge that statement
to be true either! It might very well be that the simulated brain will slip
up and make a mistake, giving that statement as output even though the act
of doing so proves it's a false statement about arithmetic--we have no way
to prove this will never happen, the only way to know is to run the program
forever and see.

>
>And the impossibility has to be a logical impossibility, not merely a
>technical or physical one since it depends on Gödel's theorem. That's
>a bit odd, isn't it?

No, I don't see anything very odd about the idea that human mathematical
abilities can't be a knowably sound algorithm--it is no more odd than the
idea that there are some cellular automata where there is no shortcut to
knowing whether they'll reach a certain state or not other than actually
simulating them, as Wolfram suggests in "A New Kind of Science". In fact I'd
say it fits nicely with our feeling of "free will", that there should be no
way to be sure in advance that we won't break some rules we have been told
to obey, apart from actually "running" us and seeing what we actually end up
doing.
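
As a toy illustration of that kind of irreducibility (rule 110 is just
a convenient example; nothing Wolfram-specific is claimed): an
elementary cellular automaton is deterministic and trivial to state,
yet in general the only known way to learn whether a given pattern ever
appears is to run it:

RULE = 110

def step(cells):
    """One update of an elementary cellular automaton: each new cell
    depends on its three-cell neighbourhood (wrapping at the edges)."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40   # a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)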

Jesse


LauLuna

Jul 5, 2007, 3:14:50 PM
to Everything List

Yes, but how can there be a logical impossibility for us to
acknowledge as sound the same principles and rules we are using?


>
> But Penrose was not just arguing that human mathematical ability can't be
> based on a knowably sound algorithm, he was arguing that it must be
> *non-algorithmic*.

No, he argues in Shadows of the Mind exactly what I say. He then goes
on to argue why it is unlikely that a sound algorithm representing
human intelligence would fail to be knowably sound.

>
>
> >And the impossibility has to be a logical impossibility, not merely a
> >technical or physical one since it depends on Gödel's theorem. That's
> >a bit odd, isn't it?
>
> No, I don't see anything very odd about the idea that human mathematical
> abilities can't be a knowably sound algorithm--it is no more odd than the
> idea that there are some cellular automata where there is no shortcut to
> knowing whether they'll reach a certain state or not other than actually
> simulating them, as Wolfram suggests in "A New Kind of Science".

The point is that the axioms are exactly our axioms!

>In fact I'd
> say it fits nicely with our feeling of "free will", that there should be no
> way to be sure in advance that we won't break some rules we have been told
> to obey, apart from actually "running" us and seeing what we actually end up
> doing.

I don't see how to reconcile free will with computationalism either.

Regards

Jesse Mazer

Jul 5, 2007, 4:14:03 PM
to everyth...@googlegroups.com
LauLuna wrote:

The axioms in a simulation of a brain would have nothing to do with the
high-level conceptual "principles and rules" we use when thinking about
mathematics, they would be axioms concerning the most basic physical laws
and microscopic initial conditions of the simulated brain and its simulated
environment, like the details of which brain cells are connected by which
synapses or how one cell will respond to a particular electrochemical signal
from another cell. Just because I think my high-level reasoning is quite
reliable in general, that's no reason for me to believe a detailed
simulation of my brain would be "sound" in the sense that I'm 100% certain
that this precise arrangement of nerve cells in this particular simulated
environment, when allowed to evolve indefinitely according to some
well-defined deterministic rules, would *never* make a mistake in reasoning
and output an incorrect statement about arithmetic (or even that it would
never choose to intentionally output a statement it believed to be false
just to be contrary).


> >
> > But Penrose was not just arguing that human mathematical ability can't
>be
> > based on a knowably sound algorithm, he was arguing that it must be
> > *non-algorithmic*.
>
>No, he argues in Shadows of the Mind exactly what I say. He then goes
>on to argue why it is unlikely that a sound algorithm representing
>human intelligence would fail to be knowably sound.

He does argue that as a first step, but then he goes on to conclude what I
said he did, that human intelligence cannot be algorithmic. For example, on
p. 40 he makes quite clear that his arguments throughout the rest of the
book are intended to show that there must be something non-computational in
human mental processes:

"I shall primarily be concerned, in Part I of this book, with the issue of
what it is possible to achieve by use of the mental quality of
'understanding.' Though I do not attempt to define what this word means, I
hope that its meaning will indeed be clear enough that the reader will be
persuaded that this quality--whatever it is--must indeed be an essential
part of that mental activity needed for an acceptance of the arguments of
2.5. I propose to show that the appreciation of these arguments must involve
something non-computational."

Later, on p. 54:

"Why do I claim that this 'awareness', whatever it is, must be something
non-computational, so that no robot, controlled by a computer, based merely
on the standard logical ideas of a Turing machine (or equivalent)--whether
top-down or bottom-up--can achieve or even simulate it? It is here that the
Godelian argument plays its crucial role."

His whole Godelian argument is based on the idea that for any computational
theorem-proving machine, by examining its construction we can use this
"understanding" to find a mathematical statement which *we* know must be
true, but which the machine can never output--that we understand something
it doesn't. But I think my argument shows that if you were really to build a
simulated mathematician or community of mathematicians in a computer, the
Godel statement for this system would only be true *if* they never made a
mistake in reasoning or chose to output a false statement to be perverse,
and that therefore there is no way for us on the outside to have any more
confidence about whether they will ever output this statement than they do
(and thus neither of us can know whether the statement is actually a true or
false theorem of arithmetic).

It's true that on p. 76, Penrose does restrict his conclusions about "The
Godelian Case" to the following statement (which he denotes 'G'):

"Human mathematicians are not using a knowably sound algorithm in order to
ascertain mathematical truth."

I have no objection to this proposition on its own, but then in Chapter 3,
"The case for non-computability in mathematical thought" he does go on to
argue (as the chapter title suggests) that this proposition G justifies the
claim that human reasoning must be non-computable. In discussing objections
to this argument, he dismisses the possibility that G might be correct but
that humans are using an unknowable algorithm, or an unsound algorithm, but
as far as I can see he never discusses the possibility I have been
suggesting, that an algorithm that faithfully simulated the reasoning of a
human mathematician (or community of mathematicians) might be both knowable
(in the sense that the beings in the simulation are free to examine their
own algorithm) and sound (meaning that if the simulation is run forever,
they never output a false statement about arithmetic), but just not knowably
sound (meaning that neither they nor us can find a *proof* that will tell us
in advance that the simulation will never output a false statement, the only
way to check is to run it forever and see).

>
> > [...]
>
>The point is that the axioms are exactly our axioms!

Again, the "axioms" would be detailed statements about the initial
conditions and behavior of the most basic elements of the simulation--the
initial position and velocity of each simulated molecule along with rules
for the molecules' behavior, perhaps--not the sort of high-level conceptual
axioms we use in our minds when thinking about mathematics. If we can't even
predict whether some very simple cellular automata will ever reach a given
state, I don't see why it should be surprising that we can't predict whether
some very complex physical simulation of an immortal brain and its
environment will ever reach a given state (the state in which it decides to
output the system's Godel statement, whether because of incorrect reasoning
or just out of contrariness).

>
> > [...]
>
>I don't see how to reconcile free will with computationalism either.

I am only talking about the feeling of free will which is perfectly
compatible with ultimate determinism (see
http://en.wikipedia.org/wiki/Compatibilism ), not the philosophical idea of
"libertarian free will" (see
http://en.wikipedia.org/wiki/Libertarianism_(metaphysics) ) which requires
determinism to be false. If we had some unerring procedure for predicting
whether other people or even ourselves would make a certain decision in the
future, it's hard to see how we could still have the same subjective sense
of making choices whose outcomes aren't certain until we actually make them.

Jesse


Bruno Marchal

Jul 6, 2007, 6:08:47 AM
to everyth...@googlegroups.com

On 05-Jul-07, at 22:14, Jesse Mazer wrote:

> His [Penrose's] whole Godelian argument is based on the idea that for
> any computational theorem-proving machine, by examining its
> construction we can use this "understanding" to find a mathematical
> statement which *we* know must be true, but which the machine can
> never output--that we understand something it doesn't. But I think my
> argument shows that if you were really to build a simulated
> mathematician or community of mathematicians in a computer, the Godel
> statement for this system would only be true *if* they never made a
> mistake in reasoning or chose to output a false statement to be
> perverse, and that therefore there is no way for us on the outside to
> have any more confidence about whether they will ever output this
> statement than they do (and thus neither of us can know whether the
> statement is actually a true or false theorem of arithmetic).

I think I agree with your line of argumentation, but your way of
talking could be misleading, especially in how people might interpret
"arithmetic".
If we are in front of a machine that we know to be sound, then we can
indeed know that the Gödelian proposition associated to the machine is
true. For example, nobody (serious) doubts that PA (Peano Arithmetic,
the first-order formal arithmetic theory/machine) is sound. So we know
that all its Gödelian sentences are true, and PA cannot know that. But
this just proves that I am not PA, and that I actually have stronger
abilities than PA.
I could have taken ZF instead (ZF is the Zermelo-Fraenkel formal
theory/machine of sets), although I must say that while I have entire
confidence in PA, I have only 99.9998% confidence in ZF (and thus I can
be only 99.9998% sure of the ZF Gödelian sentences).
About NF (Quine's New Foundations formal theory/machine) I have only
50% confidence!!!

Now all (sufficiently rich) theories/machines can prove their own
Gödel's theorem. PA can prove that if PA is consistent then PA cannot
prove its consistency. A somewhat weak (compared to ZF) theory like PA
can even prove the corresponding theorem for the richer ZF: PA can
prove that if ZF is consistent then ZF cannot prove its own consistency.
So, in general, a machine can find its own Gödelian sentences, and can
even infer their truth in some abductive way from very minimal
inductive inference abilities, or from assumptions.
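
In symbols (a standard rendering, with Con(T) the arithmetized
consistency statement and Prov_T the provability predicate; nothing
here is specific to my setting):

  PA \vdash \mathrm{Con}(PA) \rightarrow
            \neg\mathrm{Prov}_{PA}(\ulcorner \mathrm{Con}(PA) \urcorner)
  PA \vdash \mathrm{Con}(ZF) \rightarrow
            \neg\mathrm{Prov}_{ZF}(\ulcorner \mathrm{Con}(ZF) \urcorner)

Behind all of this sits Löb's theorem: if PA \vdash
\mathrm{Prov}_{PA}(\ulcorner \phi \urcorner) \rightarrow \phi, then
PA \vdash \phi. The second incompleteness theorem is the special case
\phi = \bot.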

No sound (or just consistent) machine can ever prove its own Gödelian
sentences; in particular, no machine can prove its own consistency, but
a machine can bet on them or "know" them serendipitously. This is
comparable with consciousness. Indeed, it is easy to manufacture thought
experiments illustrating that no conscious being can prove it is
conscious, except that "consciousness" is more truth-related, so that
machines cannot even define their own consciousness (by Tarski's
undefinability-of-truth theorem).

Bruno


http://iridia.ulb.ac.be/~marchal/

Jason

unread,
Jul 6, 2007, 8:00:44 AM7/6/07
to Everything List

On Jul 5, 2:14 pm, LauLuna <laureanol...@yahoo.es> wrote:
>
> I don't see how to reconcile free will with computationalism either.
>

It seems like you are an incompatibilist concerning free will.
Free will can be reconciled with computationalism (or any deterministic
system) if one accepts compatibilism
( http://en.wikipedia.org/wiki/Free_will#Compatibilism ). More
worrisome than determinism's effect on free will, however, is
many-worlds (or other everything/ultimate-ensemble theories). Whereas
determinism says the future is written in stone, many-worlds would say
all futures are written in stone.

Jason

LauLuna

Jul 6, 2007, 8:53:54 AM
to Everything List

But again, for any set of such 'physiological' axioms there is a
corresponding equivalent set of 'conceptual' axioms. There is all the
same a logical impossibility for us to know that the second set is
sound. No consistent (and strong enough) system S can prove the
soundness of any system S' equivalent to S: otherwise S' would prove
its own soundness and would be inconsistent. And this is just what is
odd.
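
Spelled out, the step being invoked here (a standard argument, reading
Sound(T) as a soundness/reflection scheme strong enough to imply
consistency):

  1. S \vdash \mathrm{Sound}(S') and S \equiv S' give
     S' \vdash \mathrm{Sound}(S').
  2. S' \vdash \mathrm{Sound}(S') \rightarrow \mathrm{Con}(S'),
     hence S' \vdash \mathrm{Con}(S').
  3. By the second incompleteness theorem, S' \vdash \mathrm{Con}(S')
     forces S' to be inconsistent.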

Yes, he ultimately argues for that.

> His whole Godelian argument is based on the idea that for any computational
> theorem-proving machine, by examining its construction we can use this
> "understanding" to find a mathematical statement which *we* know must be
> true, but which the machine can never output--that we understand something
> it doesn't.

I'd say this is rather Lucas's argument. Penrose's is like this:

1. Mathematicians are not using a knowably sound algorithm to do math.
2. If they were using any algorithm whatsoever, they would be using a
knowably sound one.
3. Ergo, they are not using any algorithm at all.
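
In schematic form (symbols mine), with U(A) for "mathematicians use
algorithm A" and K(A) for "A is knowably sound", this is the valid
inference

  $\forall A\,(U(A) \rightarrow \neg K(A)),\ \forall A\,(U(A) \rightarrow K(A)) \ \vdash\ \forall A\,\neg U(A)$

so the dispute is entirely over the truth of the two premises.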

> compatible with ultimate determinism (see http://en.wikipedia.org/wiki/Compatibilism ), not the philosophical idea of
> "libertarian free will" (see http://en.wikipedia.org/wiki/Libertarianism_(metaphysics) ) which requires


> determinism to be false. If we had some unerring procedure for predicting
> whether other people or even ourselves would make a certain decision in the
> future, it's hard to see how we could still have the same subjective sense
> of making choices whose outcomes aren't certain until we actually make them.
>
> Jesse
>


LauLuna

unread,
Jul 6, 2007, 9:02:38 AM7/6/07
to Everything List
As I see it, compatibilism changes the definition of free will from a
metaphysical to a psychological one. So, I am probably a compatibilist
according to the compatibilist definition of free will, which is not
my own.

Regards

On Jul 6, 2:00 pm, Jason <jasonre...@gmail.com> wrote:
> On Jul 5, 2:14 pm, LauLuna <laureanol...@yahoo.es> wrote:
>
>
>
> > I don't see how to reconcile free will with computationalism either.
>
> It seems like you are an incompatibilist concerning free will.
> Free will can be reconciled with computationalism (or any deterministic

> system) if one accepts compatibilism ( http://en.wikipedia.org/wiki/Free_will#Compatibilism

Bruno Marchal

unread,
Jul 6, 2007, 10:47:21 AM7/6/07
to everyth...@googlegroups.com

On 6 Jul 2007, at 14:00, Jason wrote:


Like comp already says. At least with QM we know that the futures are
weighted, and free will corresponds to choosing among normal worlds.
With comp, there are only promising results in that direction (which
could lead to a refutation of comp).
John Bell (the physicist, not the quantum logician) has also criticized
the MWI with respect to free will, but this does not follow from the
SWE. The SWE does not say all futures are equal. It says that all
futures are realized, but some have negligible probability, and this
leaves room for genuine free will. For example, I can choose the
stairs, the lift or the window to go outside, but only with the stairs
and the lift can I stay in relatively normal worlds. By going outside
through the window, I take the risk of surviving only in a white-rabbit
world, and of then living in a world that is normal only relative to
that abnormal one. This is why I think quantum immortality is a form of
terrifying thinking ... if you think twice and take it seriously. Of
course reality (with or without QM or comp) is more complex in any
case, so it is most plausibly premature to panic over such theoretical
elaborations. Actually computer science predicts possible unexpected
jumps ...
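
A toy numeric illustration of "all futures are realized, but weighted"
(a sketch in Python; the branch names and amplitudes are invented for
illustration, not derived from the SWE):

    # Every branch is realized, but the Born weights |amplitude|^2
    # differ enormously; the window branch is a white-rabbit world.
    amplitudes = {
        "stairs": 0.9,
        "lift": 0.43,
        "jump out the window and survive": 1e-6,
    }
    weights = {k: abs(a) ** 2 for k, a in amplitudes.items()}
    total = sum(weights.values())
    for branch, w in weights.items():
        print(f"{branch}: relative weight {w / total:.2e}")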
Is it worth exploring the possible comp-hell, to search for the limit
of the "unbearable"? Well, the news indicates humans have some
inclination in that direction. That could be the price of "free will".
Have you read the delicious text by Smullyan (in The Mind's I, I think)
about the guy who asks God to take away his free will (and its
associated guilt feeling)?

Bruno


http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Jul 6, 2007, 1:43:01 PM7/6/07
to everyth...@googlegroups.com
Bruno Marchal wrote:
...

> Now, all (sufficiently rich) theories/machines can prove their own
> Godel's theorem. PA can prove that if PA is consistent then PA cannot
> prove its consistency. A somewhat weak (compared to ZF) theory like PA
> can even prove the corresponding theorem for the richer ZF: PA can
> prove that if ZF is consistent then ZF can prove its own consistency.

Of course you meant "..then ZF cannot prove its own consistency."
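
In the usual notation (the formalization is mine, not Bruno's), the two
corrected facts read:

  $PA \vdash Con(PA) \rightarrow \neg Prov_{PA}(\ulcorner Con(PA) \urcorner)$
  $PA \vdash Con(ZF) \rightarrow \neg Prov_{ZF}(\ulcorner Con(ZF) \urcorner)$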

Brent Meeker

> So, in general a machine can find its own godelian sentences, and can
> even infer their truth in some abductive way from very minimal
> inductive inference abilities, or from assumptions.
>
> No sound (or just consistent) machine can ever prove its own godelian
> sentences; in particular no machine can prove its own consistency (but
> then a machine can bet on them, or "know" them serendipitously). This
> is comparable with consciousness. Indeed it is easy to manufacture
> thought experiments illustrating that no conscious being can prove it
> is conscious, except that "consciousness" is more truth-related, so
> that machines cannot even define their own consciousness (by Tarski's
> undefinability-of-truth theorem).

But this is within an axiomatic system - whose reliability already depends on knowing the truth of the axioms. ISTM that concepts of consciousness, knowledge, and truth that are relative to formal axiomatic systems are already too weak to provide fundamental explanations.

Brent Meeker

Bruno Marchal

unread,
Jul 7, 2007, 6:59:34 AM7/7/07
to everyth...@googlegroups.com

On 6 Jul 2007, at 14:53, LauLuna wrote:


> But again, for any set of such 'physiological' axioms there is a
> corresponding equivalent set of 'conceptual' axioms. There is all the
> same a logical impossibility for us to know the second set is sound.
> No consistent (and strong enough) system S can prove the soundness of
> any system S' equivalent to S: otherwise S' would prove its own
> soundness and would be inconsistent. And this is just what is odd.


It is odd indeed. But it is.


> I'd say this is rather Lucas's argument. Penrose's is like this:
>
> 1. Mathematicians are not using a knowably sound algorithm to do math.
> 2. If they were using any algorithm whatsoever, they would be using a
> knowably sound one.
> 3. Ergo, they are not using any algorithm at all.


Do you agree that, given what you say above, "2." is already invalidated?

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jul 7, 2007, 7:17:16 AM7/7/07
to everyth...@googlegroups.com

On 6 Jul 2007, at 19:43, Brent Meeker wrote:

>
> Bruno Marchal wrote:
> ...
>> Now, all (sufficiently rich) theories/machines can prove their own
>> Godel's theorem. PA can prove that if PA is consistent then PA cannot
>> prove its consistency. A somewhat weak (compared to ZF) theory like PA
>> can even prove the corresponding theorem for the richer ZF: PA can
>> prove that if ZF is consistent then ZF can prove its own consistency.
>
> Of course you meant "..then ZF cannot prove its own consistency."


Yes. (Sorry).

>
>> So, in general a machine can find its own godelian sentences, and can
>> even infer their truth in some abductive way from very minimal
>> inductive inference abilities, or from assumptions.
>>
>> No sound (or just consistent) machine can ever prove its own godelian
>> sentences; in particular no machine can prove its own consistency (but
>> then a machine can bet on them, or "know" them serendipitously). This
>> is comparable with consciousness. Indeed it is easy to manufacture
>> thought experiments illustrating that no conscious being can prove it
>> is conscious, except that "consciousness" is more truth-related, so
>> that machines cannot even define their own consciousness (by Tarski's
>> undefinability-of-truth theorem).
>
> But this is within an axiomatic system - whose reliability already
> depends on knowing the truth of the axioms. ISTM that concepts of
> consciousness, knowledge, and truth that are relative to formal
> axiomatic systems are already too weak to provide fundamental
> explanations.


With UDA (the Universal Dovetailer Argument) I ask you to involve
yourself in a "thought experiment". Obviously I bet, hope, pray, that
you will reason reasonably and soundly.
With AUDA (the Arithmetical version of UDA, or Plotinus now) I ask
the Universal Machine to involve herself in a formal reasoning. As a
mathematician, I limit myself to sound (and thus self-referentially
correct) machines, for the same reason that I pray you are sound.
Such a restriction is provably non-constructive: there is no algorithm
to decide if a machine is sound or not ... But note that the comp
assumption, and even just the coherence of Church's thesis, rely on
non-constructive assumptions at the start.
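
For illustration, that non-constructivity is of the same diagonal kind
as the halting problem, to which deciding soundness reduces. A minimal
sketch in Python (the total oracle `halts` is hypothetical, which is
exactly the point):

    def halts(program, arg):
        # Hypothetical total decider for halting; no such function exists.
        raise NotImplementedError

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts for
        # program applied to itself.
        if halts(program, program):
            while True:
                pass      # loop forever if predicted to halt
        else:
            return        # halt at once if predicted to loop

    # If `halts` were total and correct, diagonal(diagonal) would halt
    # if and only if it does not halt: a contradiction.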


Bruno


http://iridia.ulb.ac.be/~marchal/

LauLuna

unread,
Jul 7, 2007, 10:39:01 AM7/7/07
to Everything List

On Jul 7, 12:59 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 6 Jul 2007, at 14:53, LauLuna wrote:
>
> > But again, for any set of such 'physiological' axioms there is a
> > corresponding equivalent set of 'conceptual' axioms. There is all the
> > same a logical impossibility for us to know the second set is sound.
> > No consistent (and strong enough) system S can prove the soundness of
> > any system S' equivalent to S: otherwise S' would prove its own
> > soundness and would be inconsistent. And this is just what is odd.
>
> It is odd indeed. But it is.

No, it is not necessarily so; the alternative is that such an algorithm
does not exist. I will endorse the existence of that algorithm only
when I find reason enough to do so. I haven't yet, and the oddities
its existence implies count, obviously, against its existence.


> > I'd say this is rather Lucas's argument. Penrose's is like this:
>
> > 1. Mathematicians are not using a knowably sound algorithm to do math.
> > 2. If they were using any algorithm whatsoever, they would be using a
> > knowably sound one.
> > 3. Ergo, they are not using any algorithm at all.
>
> Do you agree that from what you say above, "2." is already invalidate?

Not at all. I still find it far likelier that if there is a sound
algorithm ALG and an equivalent formal system S whose soundness we can
know, then there is no logical impossibility for our knowing the
soundness of ALG.

What I find inconclusive in Penrose's argument is that he refers not
just to actual human intellectual behavior but to some idealized
(forever sound and consistent) human intelligence. I think the
existence of such an ability has to be argued for.

If someone asked me: 'do you agree that Penrose's argument does not
prove there are certain human behaviors which computers can't
reproduce?', I'd answer: 'yes, I agree it doesn't'. But if someone
asked me: 'do you agree that Penrose's argument does not prove human
intelligence cannot be simulated by computers?' I'd reply: 'insofar
as that abstract intelligence you speak of exists at all as a real
faculty, I'd say it is far more probable that computers cannot
reproduce it'.

I.e. some versions of computationalism assume, exactly like Penrose,
the existence of that abstract human intelligence; I would say those
formulations of computationalism are nearly refuted by Penrose.

I hope I've made my point clear.

Best


Bruno Marchal

unread,
Jul 8, 2007, 9:58:30 AM7/8/07
to everyth...@googlegroups.com

On 7 Jul 2007, at 16:39, LauLuna wrote:

>
>
>
> On Jul 7, 12:59 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 6 Jul 2007, at 14:53, LauLuna wrote:
>>
>>> But again, for any set of such 'physiological' axioms there is a
>>> corresponding equivalent set of 'conceptual' axioms. There is all the
>>> same a logical impossibility for us to know the second set is sound.
>>> No consistent (and strong enough) system S can prove the soundness of
>>> any system S' equivalent to S: otherwise S' would prove its own
>>> soundness and would be inconsistent. And this is just what is odd.
>>
>> It is odd indeed. But it is.
>
> No, it is not necessarily so; the alternative is that such an algorithm
> does not exist. I will endorse the existence of that algorithm only
> when I find reason enough to do so. I haven't yet, and the oddities
> its existence implies count, obviously, against its existence.


If the algorithm exists, then the knowably sound algorithm does not
exist. We can only bet on comp, not prove it. But it is refutable.

>
>
>>> I'd say this is rather Lucas's argument. Penrose's is like this:
>>
>>> 1. Mathematicians are not using a knowably sound algorithm to do
>>> math.
>>> 2. If they were using any algorithm whatsoever, they would be using a
>>> knowably sound one.
>>> 3. Ergo, they are not using any algorithm at all.
>>
>> Do you agree that from what you say above, "2." is already invalidate?
>
> Not at all. I still find it far likelier that if there is a sound
> algorithm ALG and an equivalent formal system S whose soundness we can
> know, then there is no logical impossibility for our knowing the
> soundness of ALG.


We do agree. You are just postulating not-comp. I have no trouble with
that.

>
> What I find inconclusive in Penrose's argument is that he refers not
> just to actual numan intellectual behavior but to some idealized
> (forever sound and consistent) human intelligence. I think the
> existence of a such an ability has to be argued.


A rather good approximation for such a machine could be given by the
transfinite set of effective and finite sound extensions of a Lobian
machine, like those proposed by Turing. They all locally obey G and
G* (as shown by Beklemishev). The infinite and the transfinite do not
help the machine with regard to the incompleteness phenomenon, except
if the infinite is made very highly non-effective. But in that case you
tend toward the "One", or truth (a very non-effective notion).
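
A schematic sketch of those Turing-style iterated consistency
extensions (the Python representation is mine, purely illustrative):
T_0 = PA, T_{n+1} = T_n + Con(T_n). Each stage proves the previous
stage's godelian sentence, yet acquires a new one of its own.

    def extend(theory, steps):
        # Iterate the consistency extension T |-> T + Con(T).
        for _ in range(steps):
            theory = f"({theory}) + Con({theory})"
        return theory

    print(extend("PA", 2))
    # -> ((PA) + Con(PA)) + Con((PA) + Con(PA))
    # Each stage names, but cannot prove, its own consistency.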


>
> If someone asked me: 'do you agree that Penrose's argument does not
> prove there are certain human behaviors which computers can't
> reproduce?', I'd answered: 'yes, I agree it doesn't'. But if someone
> asked me: 'do you agree that Penrose's argument does not prove human
> intelligence cannot be simulated by computers?' I'd reply: 'as far
> as that abstract intelligence you speak of exists at all as a real
> faculty, I'd say it is far more probable that computers cannot
> reproduce it'.


Why? All you need to do is provide more and more "time-space-memory"
to the machine. Humans are "universal" by extending their minds with
pictures on walls, ... magnetic tape ....
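
A toy sketch of that "memory on demand" move (Python, details mine): a
finite device whose tape grows whenever the head walks past the end,
which is all that potential universality requires.

    class Tape:
        # An unbounded tape simulated by a list that grows on demand.
        def __init__(self):
            self.cells = [0]

        def read(self, i):
            self._grow(i)
            return self.cells[i]

        def write(self, i, value):
            self._grow(i)
            self.cells[i] = value

        def _grow(self, i):
            while i >= len(self.cells):
                self.cells.append(0)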

>
> I.e. some versions of computationalism assume, exactly like Penrose,
> the existence of that abstract human intelligence; I would say those
> formulations of computationalism are nearly refuted by Penrose.

There is a lobian abstract intelligence, but it can differentiate into
many kinds, and cannot be defined *effectively* (with a program) by any
machine. It corresponds loosely to the first non-effective or
non-nameable ordinal (the Church-Kleene ordinal OMEGA_1^CK).


>
> I hope I've made my point clear.


OK. Personally I am just postulating the comp hyp and studying the
consequences. If we are machines, or sequences of machines, then we
cannot know which machine we are, still less which sequence of machines
we belong to ... (introducing eventually verifiable 1-person
indeterminacies). I argue that the laws of observability (physics)
emerge from that comp-indeterminacy. I think we agree on Penrose.

Bruno


http://iridia.ulb.ac.be/~marchal/
