Philosophical Implications of Gödel and Turing/Church


Michael DeBellis

Sep 18, 2024, 9:47:37 AM
to ontolog-forum
I just finished a book called Incompleteness: The Proof and Paradox of Kurt Gödel, which was a very enjoyable read. Not for any of the theory: the author has a philosophy degree and a decent understanding of Gödel's work, but that wasn't the main gist of it, and even I found one error (she refers to ordinality where it should be cardinality); I'm sure those of you who understand this better than I do (which is almost everyone in the group) would find more. Her focus is on the social issues: the kind of person Gödel was, his psychiatric problems, stories about the Institute for Advanced Study at Princeton, stories about Wittgenstein (always entertaining), the Logical Positivists, etc.

But one thing she came back to a few times is something that has always really puzzled me. Here's a quote:
-------------------------------------------------------------------------------------------------------------
Roger Penrose in his two bestselling books, The Emperor’s New Mind and Shadows of the Mind, has made the incompleteness theorems central to his argument that our minds, whatever they are, cannot be digital computers. What Gödel’s theorems prove, he argues, is that even in our most technical, rule-bound thinking—that is, mathematics—we are engaging in truth-discovering processes that can’t be reduced to the mechanical procedures programmed into computers.
Goldstein, Rebecca (2006). Incompleteness: The Proof and Paradox of Kurt Gödel (Great Discoveries), Kindle Locations 200-204. W. W. Norton & Company, Kindle Edition.
--------------------------------------------------------------------------------------------------------------

Now Penrose is a brilliant guy, but I have never understood this argument. It isn't that I disagree; it's that I don't even see a coherent argument. The argument seems to be:
"Gödel (and I've heard the same in regard to Turing/Church; I'll refer to them as GT/C from now on) proved that a formal system/computer can't do X, so that means the brain can't be modeled as a formal system/computer."

This reminds me of an early episode of South Park where gnomes were stealing some kid's underpants. When they tracked down the underpants gnomes, they showed the South Park kids their business model:
1) Steal underpants
2)
3) Profit 

(Also reminds me of some startup business plans.) But Penrose's argument (and of course he's not the only one) seems as coherent as the underpants-gnome business model. There is something missing, namely the claim that the brain can do X. (I'm avoiding discussing the content of the proofs because I'll probably get them wrong, and I think they aren't relevant to my question.) But of course, as far as we now know, the brain can't do the things that these proofs showed were limitations of logic either. There have been discussions with lots of hand waving about "intuition" (even Gödel himself believed that there might eventually be ways around his proof), and perhaps some day someone will somehow prove things that are unprovable according to GT/C. But no one ever has, and to my knowledge no sane logician has even tried.
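
Spelled out as premises (my paraphrase of the standard Lucas-Penrose reconstruction, with the missing step made explicit):
1) By Gödel, any consistent formal system F that includes arithmetic has a sentence G(F) that is true but not provable in F.
2) [The missing step] For any such F, a human mathematician can recognize that G(F) is true.
3) Therefore, human mathematical ability is not captured by any formal system F.
Premise 2) is exactly what needs independent support, since recognizing G(F) as true presupposes knowing that F is consistent.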

I'm just curious to know what others think. Is there something to these arguments I'm missing?

Also, IMO the brilliance of GT/C, and also of Heisenberg, is that there are inevitably going to be limitations on our knowledge (because we are finite beings whose brains didn't evolve to do science but to allow Late Pleistocene hunter-gatherers to maximize reproductive success), and the mark of a truly mature discipline is that we begin to understand those limitations. That isn't an argument against reason or logic but rather evidence that our understanding of logic as the theoretical foundation for mathematics is quite mature.

Cheers,
Michael

Barry Smith

Sep 18, 2024, 12:56:36 PM
to ontolo...@googlegroups.com
On Wed, Sep 18, 2024 at 9:47 AM Michael DeBellis <mdebe...@gmail.com> wrote:
I just finished a book called Incompleteness: The Proof and Paradox of Kurt Gödel, which was a very enjoyable read. Not for any of the theory: the author has a philosophy degree and a decent understanding of Gödel's work, but that wasn't the main gist of it, and even I found one error (she refers to ordinality where it should be cardinality); I'm sure those of you who understand this better than I do (which is almost everyone in the group) would find more. Her focus is on the social issues: the kind of person Gödel was, his psychiatric problems, stories about the Institute for Advanced Study at Princeton, stories about Wittgenstein (always entertaining), the Logical Positivists, etc.

But one thing she came back to a few times is something that has always really puzzled me. Here's a quote:
-------------------------------------------------------------------------------------------------------------
Roger Penrose in his two bestselling books, The Emperor’s New Mind and Shadows of the Mind, has made the incompleteness theorems central to his argument that our minds, whatever they are, cannot be digital computers. What Gödel’s theorems prove, he argues, is that even in our most technical, rule-bound thinking—that is, mathematics—we are engaging in truth-discovering processes that can’t be reduced to the mechanical procedures programmed into computers.
Goldstein, Rebecca (2006). Incompleteness: The Proof and Paradox of Kurt Gödel (Great Discoveries), Kindle Locations 200-204. W. W. Norton & Company, Kindle Edition.
--------------------------------------------------------------------------------------------------------------

Now Penrose is a brilliant guy, but I have never understood this argument. It isn't that I disagree; it's that I don't even see a coherent argument. The argument seems to be:
"Gödel (and I've heard the same in regard to Turing/Church; I'll refer to them as GT/C from now on) proved that a formal system/computer can't do X, so that means the brain can't be modeled as a formal system/computer."

This reminds me of an early episode of South Park where gnomes were stealing some kid's underpants. When they tracked down the underpants gnomes, they showed the South Park kids their business model:
1) Steal underpants
2)
3) Profit 

(Also reminds me of some startup business plans.) But Penrose's argument (and of course he's not the only one) seems as coherent as the underpants-gnome business model. There is something missing, namely the claim that the brain can do X

You can find a thorough statement of this argument, which goes way beyond Penrose,

 
(I'm avoiding discussing the content of the proofs because I'll probably get them wrong, and I think they aren't relevant to my question.) But of course, as far as we now know, the brain can't do the things that these proofs showed were limitations of logic either.

I think you need to read a bit more on the implications of Gödel's theorem, also a bit more on the limits of Turing computability.

There have been discussions with lots of hand waving about "intuition" (even Gödel himself believed that there might eventually be ways around his proof), and perhaps some day someone will somehow prove things that are unprovable according to GT/C. But no one ever has, and to my knowledge no sane logician has even tried.

I'm just curious to know what others think. Is there something to these arguments I'm missing?

In creating the proof of his theorem, Gödel showed how it is always possible, given any provably consistent axiom system, to identify a proposition which a human can recognize as true but which cannot be proven in that system.
BS

Also, IMO the brilliance of GT/C, and also of Heisenberg, is that there are inevitably going to be limitations on our knowledge (because we are finite beings whose brains didn't evolve to do science but to allow Late Pleistocene hunter-gatherers to maximize reproductive success), and the mark of a truly mature discipline is that we begin to understand those limitations. That isn't an argument against reason or logic but rather evidence that our understanding of logic as the theoretical foundation for mathematics is quite mature.

Cheers,
Michael

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/dde814b4-2c34-455e-89dc-0ceee488ff1fn%40googlegroups.com.

Ravi Sharma

Sep 18, 2024, 7:13:19 PM
to ontolo...@googlegroups.com

Michael
For those of us brought up learning Eastern philosophies, including Buddhism, uncertainty is a basic concept, and mind is no exception. The brain is also not a static thing: as an organ, the brain normally changes gradually rather than at the speed of thought.
If Penrose and others are now questioning the computability aspects of mind, there is a lot hidden in those archives about the wandering nature of mind, and some of it is in practice as well.

I recall a lecture by my Jain professor, Dr. D. S. Kothari, who was a student of Rutherford; he took us (150 or so science and engineering professionals) through the logic from Gödel that proved something like "Truth is not Truth," or a dilemma like this.
Hence computation of mind and thoughts is complex!
Regards.

Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary, ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member



Michael DeBellis

Sep 18, 2024, 8:54:53 PM
to ontolo...@googlegroups.com
Barry said:  
In creating the proof of his theorem, Gödel showed how it is always possible, given any provably consistent axiom system, to identify a proposition which a human can recognize as true but which cannot be proven in that system.

I could well be wrong, but that isn't how I understand it. My understanding is that there are theorems that can't be proven, although we can never be sure whether a given theorem is one of those or not. So, for example, with Fermat's Last Theorem: for a long while we didn't know if it was provable, so it was possible that it was such a theorem, but then someone proved it. Goldbach's conjecture is something that most people think is probably true, but there is no proof for it, and it may be (I thought) one of those unprovable theorems.

In the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/goedel-incompleteness/ 
they say about the first incompleteness proof: 
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

Which is consistent with what I thought. There are statements which can't be proven (or disproven) either by a computer or a human, although we can't know whether any given statement is such a statement; we can only know that it isn't (when someone proves or disproves it). I know that's an oversimplification because, as the article says, there may be other systems where the statement can be proven (e.g., where the statement is an axiom), but the Stanford article goes on to say:

Now there are, by Gödel’s first theorem, arithmetical truths that are not provable even in ZFC. Proving them would thus require a formal system that incorporates methods going beyond ZFC. There is thus a sense in which such truths are not provable using today’s “ordinary” mathematical methods and axioms, nor can they be proved in a way that mathematicians would today regard as unproblematic and conclusive.

What is an example of a proposition that a human can recognize as true but that can't be proven in an axiomatic system? If it can't be proven, how can we say that a human knows it is true?

Michael


Michael DeBellis

Sep 18, 2024, 9:03:04 PM
to ontolo...@googlegroups.com
Ravi said: 
uncertainty is a basic concept and mind is no exception.

I agree. 

On Wed, Sep 18, 2024 at 4:13 PM Ravi Sharma <drravi...@gmail.com> wrote:

Ravi Sharma

Sep 18, 2024, 9:19:16 PM
to ontolo...@googlegroups.com
Michael
Our understanding of science, and our notion of Truth, is today different from what some believed in the past; more facts and "hows" are answered!

So unproven (to others) but self-internalized visualizations, such as those of Gell-Mann on the Eightfold Way and the Standard Model of particle physics, and some of Ramanujan's then-unproven results, have come to be true?
Hence, regarding "If it can't be proven, how can we say that a human knows it is true?", there is a whole background on this in Eastern thought.
Also, particle physics today rests on probabilities rather than deterministic truth!
Regards.

Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary, ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member


Barry Smith

Sep 18, 2024, 9:23:25 PM
to ontolo...@googlegroups.com
On Wed, Sep 18, 2024 at 8:54 PM Michael DeBellis <mdebe...@gmail.com> wrote:
Barry said:  
In creating the proof of his theorem, Gödel showed how it is always possible, given any provably consistent axiom system, to identify a proposition which a human can recognize as true but which cannot be proven in that system.

I could well be wrong, but that isn't how I understand it. My understanding is that there are theorems that can't be proven, although we can never be sure whether a given theorem is one of those or not. So, for example, with Fermat's Last Theorem: for a long while we didn't know if it was provable, so it was possible that it was such a theorem, but then someone proved it. Goldbach's conjecture is something that most people think is probably true, but there is no proof for it, and it may be (I thought) one of those unprovable theorems.


These are real theorems.
The unprovable propositions Gödel talked about would not be classed as theorems. But given any axiom system with certain properties, Gödel showed how to construct a proposition which humans could recognise as true, even though it is not provable in that system.

In the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/goedel-incompleteness/ 
they say about the first incompleteness proof: 
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.

Exactly. That is the negative side of Gödel's proof.
Which is consistent with what I thought. There are statements which can't be proven (or disproven) either by a computer or a human, although we can't know whether any given statement is such a statement; we can only know that it isn't (when someone proves or disproves it).

I think you get the wrong end of the stick re Gödel.
Hilbert, perhaps, is more relevant to the question of statements which can't be proven (or disproven) either by a computer or a human.

 
I know that's an oversimplification because, as the article says, there may be other systems where the statement can be proven (e.g., where the statement is an axiom), but the Stanford article goes on to say:

Now there are, by Gödel’s first theorem, arithmetical truths that are not provable even in ZFC. Proving them would thus require a formal system that incorporates methods going beyond ZFC. There is thus a sense in which such truths are not provable using today’s “ordinary” mathematical methods and axioms, nor can they be proved in a way that mathematicians would today regard as unproblematic and conclusive.

What is an example of a proposition that a human can recognize as true but that can't be proven in an axiomatic system? If it can't be proven, how can we say that a human knows it is true?
The crowning glory of Gödel's proof is that he showed how to construct, for each axiom system S, a proposition which (simplifying somewhat) is a numerical counterpart of the assertion 'I cannot be proved in S'.
BS
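
In standard notation, the construction Barry describes runs as follows (a compact sketch: F is any formal system satisfying Gödel's conditions, and Prov_F is its arithmetized provability predicate):

\[ G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) \qquad \text{(diagonal lemma)} \]
\[ \mathrm{Con}(F) \;\Rightarrow\; F \nvdash G_F, \qquad\qquad F \nvdash \mathrm{Con}(F) \quad \text{(second theorem)} \]

Note that recognizing G_F as true is conditional: what one actually sees is that if F is consistent, then G_F is true.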

Alex Shkotin

Sep 19, 2024, 6:39:52 AM
to ontolo...@googlegroups.com

Michael,


Slightly but permissibly simplifying:

Rebecca writes that Roger has a proof and it is somehow based on Gödel's incompleteness theorem.

Are you trying to reconstruct Penrose's proof?

Wouldn't it be easier to find and cite the proof itself?


Alex



Wed, Sep 18, 2024 at 16:47, Michael DeBellis <mdebe...@gmail.com>:
--

Michael DeBellis

Sep 19, 2024, 8:34:06 AM
to ontolo...@googlegroups.com
Ravi, I get that. My point is that I still don't see any explanation that makes the argument from Penrose (and I've heard various versions of this going back to the early days of AI) a coherent argument. As far as I can tell, nothing that you described can't in principle (and, more and more with LLMs and other ML systems, in reality) be done by a computer. ML is all about probabilistic reasoning rather than deduction. I've heard the name Gell-Mann but don't know his work, so perhaps that is an example of something that could not be done with a computer... but I'm skeptical.

One thing that I think is important is to distinguish between objective truth and subjective truth. So, for example, I find some works of literature and music intensely beautiful and inspiring in terms of their ideas. That is a kind of truth as well, but not scientific truth. It could be that a computer could never appreciate that kind of "truth," but if that is the case it has nothing to do with Gödel or Turing/Church.

I'm also not claiming that this is a settled issue. I'm willing to admit that it could very well be that at some point we will come to understand that there is something about a biological brain that is fundamentally different from an engineered computer. I'm just saying that the arguments based on the Gödel and Turing/Church results about the limits of formal systems do not seem at all convincing to me, because I've never seen a convincing argument (although I haven't read Barry's latest comment yet, so perhaps he has one) that there is some kind of mathematical truth that a formal system/computer can't understand but a human can. For example, in that Stanford Encyclopedia article they talk about how other formal systems can be used to prove some of the truths that fail in ZFC due to Gödel. But if an alternative formal system can prove some of these truths, there is no reason that alternative formal system couldn't be run on a computer. As I said in a previous comment, the only thing we know with certainty IMO about the mind/brain is that we know virtually nothing with certainty. There is a brilliant neuroscientist at Berkeley named Jack Gallant. I audited his Intro to Neuroscience class many years ago, and I loved the way he started the first lecture: "Everything you will hear in this class is wrong; hopefully some of it is a little less wrong."

Cheers,
Michael

Michael DeBellis

Sep 19, 2024, 8:45:15 AM
to ontolo...@googlegroups.com
Alex said:
Rebecca writes that Roger has a proof and it is somehow based on Gödel's incompleteness theorem.
Are you trying to reconstruct Penrose's proof?
Wouldn't it be easier to find and cite the proof itself?
What she says is:

 For example, Roger Penrose in his two bestselling books, The Emperor’s New Mind and Shadows of the Mind, has made the incompleteness theorems central to his argument that our minds, whatever they are, cannot be digital computers.
Goldstein, Rebecca (2006). Incompleteness: The Proof and Paradox of Kurt Gödel (Great Discoveries), Kindle Locations 200-202. W. W. Norton & Company, Kindle Edition.

She talks about an "argument," not a proof. A while ago I read one of Penrose's books, I think it was The Emperor's New Mind, and I never saw any such proof. I just tried going to his Wikipedia page (https://en.wikipedia.org/wiki/Roger_Penrose) and did a search for "proof"; the word never occurs in the article. Same with the article about The Emperor's New Mind (https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind): searching for "proof," no occurrences. Also, in the SEP article on quantum consciousness (https://plato.stanford.edu/entries/qt-consciousness/), same result: searched for "proof," no occurrences.


John F Sowa

Sep 19, 2024, 4:53:12 PM
to ontolo...@googlegroups.com
Barry, Michael, Ravi, List,

I have not had a chance to do much email for the past few days, but I'll make a few late remarks on this topic. In general, I have a high regard for the books by Penrose, but I have very little regard for anybody's claim that undecidability makes certain puzzles stated in ordinary language unanswerable.

In general, nobody but an expert logician knows how to state an unprovable theorem in logic. And even an expert logician would find it difficult to state a problem in English or another NL that could be mapped to an undecidable statement in logic. Therefore, I do not believe that anybody who is not an expert logician could ever state an undecidable statement in ordinary language.

There are much more mundane reasons why a certain question is difficult to answer: (1) lack of information; (2) lack of vocabulary/ontology for stating the problem; (3) confusion about continuous vs. discrete issues in the ontology; and (4) most often, vagueness in the statement of the problem.

I apologize for not having the time to analyze the various positions in this discussion to determine which of the above issues may be relevant.

John

Daniel Arista

Sep 19, 2024, 5:13:38 PM
to ontolo...@googlegroups.com
I haven't read through the entire thread, so apologies in advance if this was already posted, but this may be of interest:


A refutation of Penrose's Gödelian case against artificial intelligence

Abstract

Having, as it is generally agreed, failed to destroy the computational conception of mind with the Gödelian attack he articulated in his The Emperor's New Mind, Penrose has returned, armed with a more elaborate and more fastidious Gödelian case, expressed in Chapters 2 and 3 of his Shadows of the Mind. The core argument in these chapters is enthymematic, and when formalized, a remarkable number of technical glitches come to light. Over and above these defects, the argument, at best, is an instance of either the fallacy of denying the antecedent, the fallacy of petitio principii, or the fallacy of equivocation. More recently, writing in response to his critics in the electronic journal Psyche, Penrose has offered a Gödelian case designed to improve on the version presented in SOTM. But this version is yet again another failure. In falling prey to the errors we uncover, Penrose's new Gödelian case is unmasked as the same confused refrain J.R. Lucas initiated 35 years ago.




Alex Shkotin

Sep 20, 2024, 4:57:45 AM
to ontolo...@googlegroups.com

Michael,


I am sorry my "Slightly but permissibly simplifying" turned out to be unacceptable.

So let me share my findings and points.

With such a subtle topic, we need the text of Penrose himself.

So on my e-bookshelf I have a Russian translation of his book The Emperor's New Mind, and I searched for "Гедел" to allow for the different endings of the Russian word "Гедель".

There are 94 occurrences in maybe 10 places.

And luckily, in the Introduction before the first chapter, I found the first appearance of the argument [rus], translated by Google Translate as:


"Let us assume that we have some computational procedure P that enables us to formulate mathematical statements (for the sake of definiteness, let us agree that these will be statements of some one kind, analogous, say, to Fermat's famous theorem (see Chapter 2: "The Unsolvability of Hilbert's Problem"). Then, if we are prepared to regard the rules of procedure P as reliable - in the sense that we will assume that every mathematical statement obtained by means of this procedure is indisputably true - then we must equally accept the indisputable validity of some statement G(P) that lies outside the scope of the rules of procedure P (see Chapter 4: "Formal Mathematical Systems"). Thus, as soon as we have learned to automate some part of our mathematical thinking, we immediately have an understanding of how to go beyond it. In my view, this clearly indicates that mathematical understanding contains certain elements that cannot be completely reduced to computational methods.


We may discuss this proposition and argumentation. But of course the perfect way is to find the original English text by RP.

What follows this citation in the book is a detailed argumentation, including Goodstein's theorem as an example.

And there are some interesting ideas there. Let me share one [rus2]: "conscious perception is an essentially 'non-computable' process".


For me, a discussion on this topic between RP and SW (Stephen Wolfram) would be great 🙌


Alex



[rus]

"Пусть мы располагаем какой‑нибудь вычислительной процедурой Р, позволяющей нам формулировать математические утверждения (для определенности договоримся, что это будут утверждения какого‑то одного вида, аналогичные, допустим, знаменитой теореме Ферма (см. гл.2: «Неразрешимость проблемы Гильберта»). Тогда, если мы готовы считать правила процедуры Р надежными – в том смысле, что мы будем полагать всякое математическое утверждение, полученное при помощи этой процедуры, неоспоримо  верным, – то равным образом мы должны принимать и неоспоримую справедливость некоторого утверждения G(P), которое лежит за пределами действия правил процедуры Р (см. гл.4: «Формальные математические системы»). Таким образом, как только мы научились автоматизировать некоторую часть нашего математического мышления, у нас сразу же появляется понимание, как выйти за его границы. В моем представлении это однозначно свидетельствует о том, что математическое понимание содержит определенные элементы, которые не могут быть полностью сведены к вычислительным методам."

[rus2] 

"сознательное восприятие – процесс существенно «невычислимый»"


Alex Shkotin

Sep 20, 2024, 5:04:03 AM
to ontolo...@googlegroups.com
Daniel, thanks 🏋️

Fri, Sep 20, 2024 at 00:13, Daniel Arista <daniel...@gmail.com>:

Alex Shkotin

Sep 20, 2024, 5:56:56 AM
to ontolo...@googlegroups.com

John,


Indeed, we should differentiate "to state" from "to prove". There are cases where it is easy to state and very hard to prove: Fermat's Last Theorem we can all state.


Nowadays, if all the expert logicians are busy, there is a chance to talk with an AI like this.

I2Gemini:

Is there any example of an unprovable theorem in logic?

G2I (extract)[G2I]:

A famous example of an unprovable statement is the Gödel sentence:

  • Gödel sentence: "This sentence is unprovable."  

If the sentence is true, then it is indeed unprovable, confirming its truth. If the sentence is false, then it is provable, which means it is true. Either way, we have a contradiction. This demonstrates the inherent limitations of formal systems.

◼️

And Gemini refers to Gödel's Incompleteness Theorems, where we do have examples of unprovable statements, but not "This sentence is unprovable."

`Trust but verify.`(proverb)


Alex


[G2I]

Gödel's Incompleteness Theorems

Gödel's Incompleteness Theorems are two foundational results in mathematical logic that demonstrate the inherent limitations of formal axiomatic systems. They show that any consistent formal system capable of expressing basic arithmetic must also contain statements that are true but cannot be proven within the system.  

Here's a simplified explanation:

  1. Consistency: A system is consistent if it doesn't contain contradictions (e.g., "A and not A").

  2. Expressiveness: A system is expressive if it can represent basic arithmetic.

  3. Incompleteness: If a system is consistent and expressive, then there must be true statements that cannot be proven within the system.

In essence, Gödel's theorems show that no matter how powerful or complex a formal system may be, there will always be truths that lie beyond its grasp.  

A famous example of an unprovable statement is the Gödel sentence:

  • Gödel sentence: "This sentence is unprovable."  

If the sentence is true, then it is indeed unprovable, confirming its truth. If the sentence is false, then it is provable, which means it is true. Either way, we have a contradiction. This demonstrates the inherent limitations of formal systems.  

While these theorems may seem abstract, they have profound implications for mathematics, philosophy, and computer science, as they highlight the limits of formal reasoning and the potential for undecidable problems.



Thu, Sep 19, 2024 at 23:53, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Sep 20, 2024, 6:52:16 AM
to ontolo...@googlegroups.com
IN ADDITION:
Gemini is a bit talkative, but for the end of the week this may be acceptable.
I2G:  What about undecidable statement in logic?
G:

Undecidable statements are those that cannot be proven true or false within a given formal system. They are closely related to Gödel's Incompleteness Theorems.  

Key points about undecidable statements:

  • Within the system: Undecidable statements are not provable or disprovable within the specific formal system being considered.  
  • Beyond the system: However, it's possible that the statement could be proven or disproven using additional axioms or a different system.
  • Examples:
    • Halting problem: Given a computer program and an input, will the program eventually halt or run forever? This is a classic example of an undecidable problem.  
    • Consistency of mathematics: The question of whether mathematics is consistent (free of contradictions) is another undecidable statement.

In essence, undecidable statements represent the limits of what can be proven or disproven within a given formal framework. They highlight the inherent limitations of formal reasoning and the possibility of truths that lie beyond the reach of our current systems.
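
The halting problem mentioned above can be stated as a short diagonal argument. A sketch in Python (halts is the hypothetical total decider that the argument rules out, so this is illustrative rather than runnable as a real decider):

def halts(program, data):
    """Hypothetical decider: returns True iff program(data) eventually halts."""
    raise NotImplementedError  # the argument below shows none can exist

def diagonal(program):
    # Do the opposite of whatever halts() predicts about program run on itself.
    if halts(program, program):
        while True:
            pass               # predicted to halt, so loop forever
    # predicted to loop forever, so halt immediately

# Consider diagonal(diagonal): if halts(diagonal, diagonal) returns True,
# then diagonal(diagonal) loops forever; if it returns False, it halts.
# Either way halts() is wrong, so no total halting decider exists (Turing, 1936).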


Fri, Sep 20, 2024 at 12:56, Alex Shkotin <alex.s...@gmail.com>:

Michael DeBellis

Sep 23, 2024, 5:01:20 PM
to ontolo...@googlegroups.com
John, no problem. If you do have time, though, I would like your feedback on this (even if off-list), because some of the things people are saying aren't consistent with what I thought I knew, and I would like to be sure I'm understanding this. So when you say:

nobody but an expert logician knows how to state an unprovable theorem in logic

I agree, but I don't think that's relevant to (my understanding of) Gödel's proof that there exist certain theorems that are true but not provable. My understanding is that what Gödel proved is that there are theorems that are true but unprovable and, what is more, that we can never know whether a specific unproven theorem is one of those theorems or not. For example, the Goldbach Conjecture. According to the Wikipedia article, it has been verified that the conjecture holds for all integers less than 4 × 10^18. But despite many efforts it hasn't been proven. So it is possible that the Goldbach Conjecture is one of those Gödel theorems that is true but not provable.
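
As an aside, that finite verification is itself a purely mechanical procedure. A toy checker (a sketch; the published verification projects use far more efficient sieves than this):

def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n):
    """Return a prime pair (p, n - p) for an even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# No counterexample below 10,000; the conjecture is that none exists at all.
assert all(goldbach_witness(n) for n in range(4, 10_000, 2))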

So then my question remains: why do smart people like Penrose think this says anything about whether or not the brain can be modeled as a formal system? The argument seems to be that the brain can do something that Gödel proved computers can't do, but I've not seen anyone give an example of this. Indeed, when Gödel created his proof, computers didn't even exist. What he was doing was showing that the logicist goal of a provably complete and consistent axiomatic system as the foundation for mathematics is unachievable. It can be consistent (which is the more important thing, and according to his second proof, proving this requires a metasystem; you can't prove consistency from within the system) but not complete (because there will be true theorems that can't be proven).

Anyone who wants to support the Penrose argument: that's my request to you. Give an example of something that a human can prove to be true but a computer can't. Or restate the argument in a way that corrects whatever I'm not understanding, to show why providing such an example isn't required and why the argument nevertheless says something meaningful about what can't be proven by a formal system but can be by a human brain.

Michael




Michael DeBellis

Sep 23, 2024, 5:13:38 PM
to ontolo...@googlegroups.com
Alex, 

Then, if we are prepared to regard the rules of procedure P as reliable, in the sense that we will assume that every mathematical statement obtained by means of this procedure is indisputably true, then we must equally accept the indisputable validity of some statement G(P) that lies outside the scope of the rules of procedure P

Why do we have to accept the "indisputable validity" of these statements that lie outside the scope of P? Also, we need to be clear that there are different kinds of reasoning and different kinds of truth. So of course there are scientific models that we arrive at by induction and can't prove. IMO this is completely irrelevant to whether the mind can be modeled as a formal system, because formal systems clearly are capable of learning by induction just as humans can.

My point is that yes, there are statements that lie outside P and are true. But they are not true in the same sense in which proven statements are true.

Actually, the more I read about this, the more I'm becoming convinced that the problem here is a very common one in modern philosophy: philosophers tend to use common-sense terms like "truth" and "validity" without giving them formal definitions. So, for example, if we are talking about linguistics and saying "what Joe said was true," that is a completely different meaning of truth than saying some theorem is provably true.

Michael


Nadin, Mihai

Sep 24, 2024, 1:37:20 PM
to ontolo...@googlegroups.com

The challenge from Michael DeBellis:

Give an example of something that a human can prove to be true but a computer can’t  

is tricky. If you know that a computer is the unity of a language over an alphabet of 2 letters and Boolean logic, it follows that everything related to the distinction between TRUE and FALSE is by necessity definitory of this KIND of computation. Therefore no example can be given.

Mihai Nadin  


Alex Shkotin

Sep 25, 2024, 4:11:25 AM
to ontolo...@googlegroups.com

Michael, 


As we know, there is a large and subtle discussion around the Penrose thesis. I am not in it. I am sure that there should be a forum for this topic. And your question

"Why do we have to accept the "indisputable validity" of these statements that lie outside the scope of P?"

is for RP, not for me. Sorry.


And any discussion that is not grounded in some concrete, explicitly identified theory is just mind-to-mind games.

In Linguistics there are a lot of competing theories. For example, D. Everett has one theory and N. Chomsky has another.

Outside the scope of any theory we are just exercising the mind. Why not!


But the challenge is to formalize one or another existing theory 🎯


For example, D. Hilbert wrote the axiomatic theory of Euclid's geometry. Do we have a formalization? No 😂


How many theories outside math are formalized? 0.


But our robots are waiting for them.


By the way, the topic of truth values is one of the subtlest in mathematical logic 👍


Everyone who has created a formal ontology has formalized some theoretical knowledge. From what theory? Where is this theory expressed? How can the formalization be justified by this theory?

We can always verbalize a formalization and then must find a justification in this particular theory, or ask experts.


Theory first, robots second 🏋️


Alex



Tue, Sep 24, 2024 at 00:13, Michael DeBellis <mdebe...@gmail.com>:

Alex Shkotin

Sep 25, 2024, 5:23:10 AM
to ontolo...@googlegroups.com
IN ADDITION:
I have generalized a little from here:
But math isn’t just useful for introducing or developing a new theory. Long after scientists have established a theory, mathematicians comb over it with the rigor that their field demands, intent on placing it on more solid logical footing. This cleanup work can take decades, but it’s necessary to establish deeper trust in scientific ideas.


Wed, Sep 25, 2024 at 11:11, Alex Shkotin <alex.s...@gmail.com>:

Michael DeBellis

Sep 26, 2024, 11:41:12 AM
to ontolo...@googlegroups.com
As we know, there is a large and subtle discussion around the Penrose thesis. I am not in it. I am sure that there should be a forum for this topic. And your question
"Why do we have to accept the "indisputable validity" of these statements that lie outside the scope of P?"
is for RP, not for me. Sorry.

Alex, no need to apologize. Although I'm starting to wonder how subtle the discussion really is. I'm starting to suspect this is a case of the emperor having no clothes (ironic, given the title of one of Penrose's books). Specifically, 95% of the people who read Penrose's books don't have the capability to understand Gödel at all; of the 5% who can at least somewhat grasp Gödel (most of us), even fewer are real experts (like Paolo Mancosu, the guy at Berkeley who taught a class I audited a long time ago on Gödel, Turing, etc.), and the real experts aren't interested enough to point out the obvious errors.

For example, D. Hilbert wrote the axiomatic theory of Euclid's geometry. Do we have a formalization? No

I don't quite understand that. Isn't an "axiomatic theory" a formalization? 

But our robots are waiting for them.

One of the ironies of this whole discussion, IMO, is that in some circumstances (LLMs) it is now easier to communicate with software agents using natural language than formal language. I still need to do a lot more work on this, but so far that is what I'm finding in my work with ChatGPT: it says it can understand Turtle, but it seems to understand natural-language definitions better. One of the things I plan to work on is a simple generator that produces basic NL descriptions of classes, properties, and axioms (a sketch of the idea follows below). It shouldn't be hard. If something like this already exists, I would appreciate a pointer.
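
A minimal sketch of such a generator in Python using rdflib (assuming an input file onto.ttl; labels fall back to the IRI's local name, and real axiom verbalization, e.g. of OWL restrictions, needs much more than this):

from rdflib import Graph, BNode
from rdflib.namespace import RDF, RDFS, OWL

def local_name(term):
    # Crude fallback label: the fragment or last path segment of the IRI.
    s = str(term)
    return s.split("#")[-1].rstrip("/").split("/")[-1]

def verbalize(path="onto.ttl"):
    g = Graph()
    g.parse(path, format="turtle")
    sentences = []
    for cls in g.subjects(RDF.type, OWL.Class):
        if isinstance(cls, BNode):            # skip anonymous class expressions
            continue
        name = g.value(cls, RDFS.label) or local_name(cls)
        for parent in g.objects(cls, RDFS.subClassOf):
            if isinstance(parent, BNode):     # restrictions need real handling
                continue
            sentences.append(f"Every {name} is a kind of {local_name(parent)}.")
    for prop in g.subjects(RDF.type, OWL.ObjectProperty):
        dom, rng = g.value(prop, RDFS.domain), g.value(prop, RDFS.range)
        if dom is not None and rng is not None:
            sentences.append(f"The property {local_name(prop)} relates a "
                             f"{local_name(dom)} to a {local_name(rng)}.")
    return "\n".join(sentences)

if __name__ == "__main__":
    print(verbalize())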

How many theories outside math are formalized?

One of the philosophers in the Vienna Circle, I think it was Carnap, thought that we SHOULD formalize other disciplines of science, and he attempted to do that for physics. I've always thought one of the reasons he failed was that creating such a complex model requires some kind of tool. For a while I even thought it might be possible to do with OWL. I actually tried, but it was soon apparent that 1) I don't know physics well enough, and 2) even if I did (here I'm in violent agreement with John Sowa), OWL isn't powerful enough for this kind of model.

Michael

John F Sowa

Sep 26, 2024, 1:18:09 PM
to ontolo...@googlegroups.com, CG
Michael,

The word 'understand' is a vague word that has no formal definition of any kind.  In any specific example, it's essential to replace it with a statement of the operations that had been performed.

MDB:  ChatGPT ... can understand Turtle but it seems to understand natural language definitions better.

Short summary:   LLMs process patterns of symbols.  Their most reliable applications map strings of symbols (in natural or artificial languages) to strings of symbols in other languages.  Those are not perfect, but they are the most reliable because they do the least amount of processing.  But they do nothing that remotely resembles what humans do when they "understand" the source and target languages.

The more steps in the processing, the more unreliable the results.  Translation is the most reliable because the source and target patterns are very closely related.  LLMs can also be used to find information when a pattern in a question has a more distant relation to a pattern in the target that is found.

That kind of search is more reliable the closer the search pattern is to the pattern that is found.  But LLMs don't do any evaluation.  If a search pattern has a closer match to bad data than good data, the bad data will be retrieved.

The most spectacular, but also the most unreliable applications of LLMs search for a pattern that does some kind of transformation and then apply that transformation to some source data to produce some target data.  People often call these transformations "reasoning".  But the kind of reasoning should be called "guessing" or "hypothesis" or "abduction".  

Humans who understand what LLMs do can often find them very useful because they are sufficiently knowledgeable about the subject  that (1) they recognize bad guesses and ignore them; and (2) they do further tests and checks to evaluate the answers before using them.

There is much more to say about the details.  But never, ever use the words 'reasoning' or 'understanding' for what LLMs do.  However, it may be permissible to use the word 'reasoning' for a hybrid system that uses symbolic methods for deduction and evaluation of the results that LLMs find or generate.

And most of all, it's best to reject, discard, or ignore any publication that claims LLMs are approaching a human level of understanding or reasoning. I have never seen any article about artificial general intelligence (AGI) that has any insight whatsoever into human intelligence.

John
 
PS: Penrose wrote some excellent books on physics. When he wanders outside his level of expertise, his ideas may be interesting, or not.

From: "Michael DeBellis" <mdebe...@gmail.com>
Sent: 9/26/24 11:41 AM

Kingsley Idehen

Sep 26, 2024, 1:44:32 PM
to ontolo...@googlegroups.com, Michael DeBellis

Hi Michael,

I’m starting a new discussion thread, hence the title change.

On 9/26/24 at 11:40 AM, Michael DeBellis wrote:

One of the ironies of this whole discussion, in my opinion, is that in some circumstances (LLMs), it is now easier to communicate with software agents using natural language than formal language. I still need to do a lot more work on this, but so far, that’s what I’m finding in my work with ChatGPT. It says it can understand Turtle, but it seems to comprehend natural language definitions better.

Yes, these LLMs indeed understand natural language better than formal languages like Turtle, JSON, etc. I’ve started a newsletter [1] where I share my findings on these topics, alongside a GitHub repository provided by OpenLink Software [2] to host actual examples.

My latest example is an OPML, RSS, and Atom Reader Assistant that provides a productive workflow for tracking feeds from preferred sources and then acting on them in ways that ultimately produce knowledge, persisted for flexible reuse. Naturally, I’m leveraging functionality long provided by our Virtuoso platform, which is now much easier to understand and utilize, thanks to natural language and the interaction interface (as opposed to command lines and GUI widgets).

[Image: OPML and RSS Agent Depiction]

Links:

[1] AI & Data Driven Enterprise Newsletter

[2] OpenLink Assistant GitHub Repository

-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
Weblogs (Blogs):
Company Blog: https://medium.com/openlink-software-blog
Virtuoso Blog: https://medium.com/virtuoso-blog
Data Access Drivers Blog: https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers

Personal Weblogs (Blogs):
Medium Blog: https://medium.com/@kidehen
Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
              http://kidehen.blogspot.com

Profile Pages:
Pinterest: https://www.pinterest.com/kidehen/
Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
Twitter: https://twitter.com/kidehen
Google+: https://plus.google.com/+KingsleyIdehen/about
LinkedIn: http://www.linkedin.com/in/kidehen

Web Identities (WebID):
Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
        : http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this

John F Sowa

Sep 26, 2024, 2:42:50 PM
to ontolo...@googlegroups.com, Michael DeBellis
Kingsley,

As you know, I consider your system  to be a good example of the use of LLMs to develop a much better user interface to a rather complex computer system.

But as I mentioned in a previous note, we should avoid any suggestion that the computer system "understands" what it is doing. 

Kingsley:   these LLMs indeed understand natural language better than formal languages like Turtle, JSON, etc 

Instead of 'understand', there are more neutral and accurate terms for what LLMs do: process, interpret, translate, compute, respond to... The word 'understand' is vague, and it implies a wide range of human capabilities that computer systems do not have.

And there is an important reason why LLMs interpret natural languages better than those computer languages: NLs have a lot of redundancy, which makes the NL input considerably longer than the computer form. Many variations will map to equivalent computational forms.

But computer languages tend to be shorter and more precise.  Therefore, a tiny mistake in choice of terms or their ordering can have a major -- and sometimes disastrous -- effect on the result.

John
 


From: "Kingsley Idehen' via ontolog-forum" <ontolo...@googlegroups.com>

Hi Michael,

I’m starting a new discussion thread, hence the title change.

On 9/26/24 at 11:40 AM, Michael DeBellis wrote:

One of the ironies of this whole discussion, in my opinion, is that in some circumstances (LLMs), it is now easier to communicate with software agents using natural language than formal language. I still need to do a lot more work on this, but so far, that’s what I’m finding in my work with ChatGPT. It says it can understand Turtle, but it seems to comprehend natural language definitions better.

Yes, these LLMs indeed understand natural language better than formal languages like Turtle, JSON, etc. I’ve started a newsletter [1] where I share my findings on these topics, alongside a GitHub repository provided by OpenLink Software [2] to host actual examples.

My latest example is an OPML, RSS, and Atom Reader Assistant that provides a productive workflow for tracking feeds from preferred sources and then acting on them in ways that ultimately produce knowledge, persisted for flexible reuse. Naturally, I’m leveraging functionality long provided by our Virtuoso platform, which is now much easier to understand and utilize, thanks to natural language and the interaction interface (as opposed to command lines and GUI widgets).

Links:

-- 
Regards,

Kingsley Idehen

Kingsley Idehen

Sep 26, 2024, 4:08:19 PM
to ontolo...@googlegroups.com

Hi John,

On 9/26/24 2:42 PM, John F Sowa wrote:
Kingsley,

As you know, I consider your system  to be a good example of the use of LLMs to develop a much better user interface to a rather complex computer system.

But as I mentioned in a previous note, we should avoid any suggestion that the computer system "understands" what it is doing.


Yes, I agree!



Kingsley:   these LLMs indeed understand natural language better than formal languages like Turtle, JSON, etc 

Instead of 'understand', there are more neutral and accurate terms for what LLMs do:  process, interpret, translate, compute, respond to...  The word 'understand' is vague, and it implies a wide range of human capabilities that computer systems do note have.


Yes, I should have said:

They are better at decoding natural language than formal languages like Turtle, JSON, etc., which is clearly what's actually going on.



And there is an important reason why LLMs interpret natural languages better than those computer languages:  NLs have a lot of redundancy, which makes the NL input to be considerably longer than the computer form.   Many variations will map to equivalent computational forms.


Yes.



But computer languages tend to be shorter and more precise.  Therefore, a tiny mistake in choice of terms or their ordering can have a major -- and sometimes disastrous -- effect on the result.


Yes, totally agree!


Kingsley



Adrian Medina

Sep 26, 2024, 5:31:17 PM
to ontolo...@googlegroups.com
I suppose this reflects how difficult it is to objectively evaluate LLMs: I have found that GPT-4-class models are far more precise, both in comprehending input and in producing semantically rigorous output, when forced to communicate using RDF in various formats, including Turtle, via system messages. Natural language can be ambiguous. OWL restrictions, in my experimentation, produce superior results in interactive LLM use when you want semantically coherent models as output, but it does take work on the part of the human to enforce this and keep them on task.

Alex Shkotin

Sep 27, 2024, 6:32:20 AM
to ontolo...@googlegroups.com

MDB: I don't quite understand that. Isn't an "axiomatic theory" a formalization?

Alex:

An axiomatic theory has a beautiful structure: primary sorts, predicates and functions, axioms, rules of inference, definitions, theorems, proofs.

All these wonderful units of knowledge can be expressed in any natural language and in a suitable formal language.

So Euclidean geometry was first axiomatized in Greek, then more precisely in German by D. Hilbert; but we do not have this theory in Isabelle, Coq, HOL4, etc., i.e., fully formalized, as far as I know.

This is the idea of my proposal for a framework of the theory: we should store a concrete theory there (each unit of knowledge mentioned above) in as many different languages (natural and formal) as we need.

I formalized the beginning of the theory of undirected graphs, just to show the structure in action. For example, here is one unit of knowledge (a definition):

rus

Пусть g есть граф, v есть вершина, такая что v принадлежит g. степень [вершины] v в [графе] g есть количество простых рёбер графа g инцидентных v плюс умноженное на два количество петель графа g инцидентных v.

eng

Let g be a graph, v is a vertex such that v belongs to g. The degree of a vertex v in a graph g is the number of simple edges of the graph g that are incident with v plus the number of loops of the graph g that are incident with v multiplied by two.

yfl

declaration d func(Nat graph vertex) (g v) ≝ {vertex(g)(v)} (#e:edge(g) simple(e) & (e inc v)) + 2*(#e:edge(g) loop(e) & (e inc v)).

Here, in three languages (Russian, English, and YAFOLL), we have the definition of the "degree of a vertex v in a graph g" 👏
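
In standard mathematical notation the same definition reads:

\[ \deg_g(v) \;=\; \#\{\, e \in E(g) : \mathrm{simple}(e) \wedge e \,\mathrm{inc}\, v \,\} \;+\; 2 \cdot \#\{\, e \in E(g) : \mathrm{loop}(e) \wedge e \,\mathrm{inc}\, v \,\} \]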

This Ugraph theory has one sort, one binary predicate, and two axioms.

To compare with D. Hilbert's Geometry, the latter has:

- 3 sorts (points, lines, planes); abbreviate points to po, lines to li, planes to pl;

- 6 predicates:

Betweenness(po po po) - ternary;

Lies_on - polymorphic: Lies_on(po li), Lies_on(po pl), Lies_on(li pl);

Congruence - polymorphic: Congruence(line_segments line_segments), Congruence(angles angles),

where line_segments and angles are definable sorts;

- 21 axioms: 8 + 4 + 6 + 1 + 2.

And then you begin to guess why no one has taken it upon themselves to formalize all this in, for example, Isabelle or Coq.


This is about full formalization of Geometry. 

But the same holds in Mechanics: we have axiomatic theories (Newton/Velichenko, Lagrange, Truesdell), and no systematic formalization in Isabelle, Coq, etc.


I doubt the axioms of Geometry or Mechanics can be expressed in OWL2.
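One concrete obstacle: OWL object properties are binary, so a ternary predicate such as Betweenness(po po po) cannot be stated directly; the usual workaround is the n-ary relations pattern. A minimal rdflib sketch (all names and the namespace are hypothetical):

    from rdflib import Graph, Namespace, RDF

    EX = Namespace("http://example.org/hilbert#")  # hypothetical namespace
    g = Graph()
    b = EX.b1  # one betweenness fact, reified as an individual
    g.add((b, RDF.type, EX.Betweenness))
    g.add((b, EX.first, EX.p1))
    g.add((b, EX.middle, EX.p2))
    g.add((b, EX.last, EX.p3))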


Let me stop here and ask: do we agree that every axiomatic theory may be formalized in languages like those of Isabelle, Coq, etc.? 🎆

From my point of view this is just another level of accuracy ☀️

And a related question: when you create a formal ontology, what are you formalizing?


Alex



Thu, Sep 26, 2024 at 18:41, Michael DeBellis <mdebe...@gmail.com>:

Alex Shkotin

unread,
Sep 27, 2024, 9:07:23 AM9/27/24
to ontolo...@googlegroups.com

Alex: But our robots are waiting for them.

MDB:

One of the ironies of this whole discussion IMO is that in some circumstances (LLMs) it is now easier to communicate with software agents using natural language than formal language. I still need to do a lot more work on this, but so far that is what I'm finding in my work with ChatGPT: it says it can understand Turtle, but it seems to understand natural-language definitions better. One of the things I plan to work on is a simple generator that produces basic NL descriptions of classes, properties, and axioms. It shouldn't be hard. If something like this exists, I would appreciate a pointer.
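Something like this minimal sketch, handling only subclass axioms via rdflib (the input file is hypothetical, and real axioms would need far more templates):

    from rdflib import Graph, RDFS
    from rdflib.namespace import split_uri

    g = Graph()
    g.parse("ontology.ttl", format="turtle")  # hypothetical input file

    def name(term):
        try:
            return split_uri(term)[1]  # local name of a URI
        except Exception:
            return str(term)

    # One template per axiom type; here, just subclass axioms:
    for s, _, o in g.triples((None, RDFS.subClassOf, None)):
        print(f"Every {name(s)} is a kind of {name(o)}.")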

Alex:

First, what I meant by robots. A robot has a formal mathematical model of the world around it. Of course, in some approximations. How a robot communicates with a person and other robots is a separate question. And speaking from the point of view of a multilingual formal theory (as proposed in the framework of the theory), it will communicate in any language that is in the framework.


Secondly, the task of transforming formal text into natural text is verbalization. This is a long-established line of work, and here, as everywhere, the question is: who will check the result of the LLM's work?

Here is a bit from Google search for "owl2 verbalization tools":

-my favorite Attempto project https://github.com/Kaljurand/owl-verbalizer

-my experience of this year https://www.linkedin.com/pulse/claude-3-sonnet-owl-2-verbalization-alex-shkotin-2i8ke/

and three more URLs:

OWL to English: a tool for generating organised easily-navigated hypertexts from ontologies

Extending OWL Ontology Visualizations with Interactive Contextual Verbalization

Comparisons of OWL verbalisers


MDB:"It shouldn't be hard."

Alex:...to begin ⛲ as always with NL 🏋️


Alex



Thu, Sep 26, 2024 at 18:41, Michael DeBellis <mdebe...@gmail.com>:
As we know there is a large and subtle discussion around the Penrose thesis. I am not in it. I am sure that there should be a forum for this topic. And your question

Alex Shkotin

unread,
Sep 27, 2024, 9:28:40 AM9/27/24
to ontolo...@googlegroups.com

MDB: One of the philosophers in the Vienna Circle, I think it was Carnap, thought that we SHOULD formalize other disciplines of science and attempted to do that for physics. I've always thought one of the reasons he failed was that creating such a complex model required some kind of tool. For a while I even thought it might be possible to do with OWL. I actually tried, but it was soon apparent that 1) I don't know physics well enough and 2) even if I did (here I'm in violent agreement with John Sowa), OWL wasn't powerful enough for this kind of model.

Alex:

Thanks for mentioning Carnap. I asked Gemini a couple of questions. We, formalizers of sciences, should know each other's work.

As for the formalization of specific physical theories, there is one work on the formalization of Lagrange theory in Isabelle.


Best,


Alex



Thu, Sep 26, 2024 at 18:41, Michael DeBellis <mdebe...@gmail.com>:
As we know there is a large and subtle discussion around the Penrose thesis. I am not in it. I am sure that there should be a forum for this topic. And your question

Michael DeBellis

unread,
Sep 27, 2024, 10:17:09 AM9/27/24
to ontolo...@googlegroups.com
John,

And most of all, it's best to reject, discard, or ignore any publication that claims LLMs are approaching a human level of understanding or reasoning. I have never seen any article about artificial general intelligence (AGI) that has any insight whatsoever into human intelligence.

Agree absolutely. A while ago there were some pronouncements from people I would expect to know better (I don't remember who exactly, but execs at DeepMind or OpenAI) about how AGI is just around the corner and we need to appreciate the dangers of AI becoming sentient, and I just thought it was complete nonsense. The cynical side of me thinks perhaps it is a way for people to make their technology seem more important than it is, as well as to divert attention from the real dangers of AI, such as loss of privacy and loss of jobs for people who can't easily find other jobs.

PS:  Penrose wrote some excellent books on physics.   When he wanders outside of his level of expertise, his ideas may be interesting -- or not.

Also agree. 

But they do nothing that remotely resembles what humans do when they "understand" the source and target languages.

Also agree, but I don't think there is anything wrong with using terminology like "ChatGPT seems to understand natural language better than Turtle," especially when talking to people who work in the field, like this group. I assume everyone knows that when I use the word "understand" in this context, all I mean is that the results I get from loading Turtle don't reflect the information in the ontology as well as the results I get when I load the same information in natural language. I don't mean it the same way as if I said, "I tried explaining a message broker to the client but he just didn't understand it."

IMO a good analogy is when I talk to people in Evolutionary Psychology or Biology. We always say things like "nature designed us to do Foo." Does that mean we think nature is some conscious entity and we are all creationists? Of course not. It's just simpler than saying "pressure from reproductive success over billions of years resulted in a change to our genome that has the appearance of being designed to do Foo." Or another example: back when I worked with end users much more than I do now, I would often say things like "it's thinking" (i.e., it's not an error, we just need to make the code more efficient for this specific task) or "it wants you to name the report" (shorthand for "there is an entry box up on the screen and you need to type a name for this report into it before the system will continue").

Michael


Michael DeBellis

unread,
Sep 27, 2024, 10:38:12 AM9/27/24
to ontolo...@googlegroups.com
Alex, first I just have to append to what I said:

MD: "It shouldn't be hard"
MD probably sometime later: "Why do I keep on saying that?!"

Thanks for those links, they look very interesting. I've been focusing on engineering a lot and want to step back a bit, and some of that material looks great. I have to confess I don't know Isabelle or Coq, but I will check them out. There is a very interesting extension of game theory by a guy named Gintis in his excellent book The Bounds of Reason: Game Theory and the Unification of the Social Sciences. There are several classic games where the behavior of subjects seems not to be rational. Gintis shows that when you add knowledge and uncertainty as rigorous parts of the equations, what seemed irrational becomes rational. I've been trying to develop an OWL model of Gintis's extensions to game theory, but it seems to be really stretching what OWL can do, so I'll check those out.
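To be clear, this is not Gintis's formalization, but a standard illustration of the general move: with an inequity-aversion term in the utility function (Fehr-Schmidt), the famously "irrational" rejection of a lowball ultimatum-game offer becomes rational. A minimal sketch with illustrative parameter values:

    # Fehr-Schmidt inequity-aversion utility (alpha/beta values are illustrative):
    def utility(own, other, alpha=2.0, beta=0.5):
        envy = max(other - own, 0)   # disadvantageous inequality
        guilt = max(own - other, 0)  # advantageous inequality
        return own - alpha * envy - beta * guilt

    # Responder facing an 8/2 split of 10 in the ultimatum game:
    accept = utility(2, 8)  # 2 - 2.0*6 = -10.0
    reject = utility(0, 0)  # 0.0 -- rejecting the "free" 2 is now the rational move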

Cheers,
Michael

Chuck Woolery

unread,
Sep 27, 2024, 1:16:18 PM9/27/24
to ontolo...@googlegroups.com

Finally! A topic I can relate to. But only via an in-depth analysis, explained in math that I couldn't follow, of game theory (on YouTube). Its conclusion did resonate with all of my experiences in working on national and global security threats...

It concluded: there are 4 factors that, if consistently followed, will 'win' most of the time (a minimal simulation follows the list).

  1. Be kind
  2. Be forgiving.
  3. Tit for tat!  (I’m taking this to mean demonstrate to your opponent what it’s like being on the receiving end of something that you don’t want to happen again).  A taste of their own medicine?
  4. Clear communication! 
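A minimal simulation of factors 1-3 (not the video's math; the payoff values are the usual illustrative ones):

    # Iterated prisoner's dilemma: 'C' cooperate, 'D' defect.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opponent_history):
        # Be kind (cooperate first), then mirror the opponent's last move:
        # retaliation that is also instantly forgiving.
        return 'C' if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return 'D'

    def play(a, b, rounds=10):
        ha, hb, sa, sb = [], [], 0, 0
        for _ in range(rounds):
            ma, mb = a(hb), b(ha)  # each strategy sees the other's history
            pa, pb = PAYOFF[(ma, mb)]
            ha.append(ma); hb.append(mb)
            sa += pa; sb += pb
        return sa, sb

    print(play(tit_for_tat, always_defect))  # (9, 14): one sucker round, then mutual punishment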

 
#4 is why I joined this group about two months ago. Most of the discussions are way over my head...but in attempting to apply game theory to the real world, to bring some sanity to humanity before we lose everything, my approach is to do my advocacy work using unambiguous words in generating political will, instead of the ambiguous words most politicians, economists, musicians, regular people, and faith leaders usually use when working for "Peace" (a very ambiguous word) or using "Democracy" (another ambiguous word) to sustainably maximize human freedoms and security (according to Thomas Paine in "Common Sense", the only legitimate purpose of government). And all this in a world where our national and global governance systems are based on the illusion of "independence", a concept Einstein believed does NOT exist anywhere in the known Universe. It exists only in our minds and in our words, which often results in tragic consequences. Fortunately, only a few years ago it was informally discredited for the first time by a US federal agency, one that Trump created halfway into his term (but even a broken clock is right twice a day...;-). But in reality:
"Everything is connected, everything is interdependent, so everything is vulnerable.... And that's why this has to be a more than whole of government, a more than whole of nation [effort]. It really has to be a global effort...." Jen Easterly, the Cybersecurity and Infrastructure Security Agency's director, in a speech Oct. 29, 2021. [CISA is our nation's newest federal agency, established by the Trump Administration in 2018]

I'm guessing Ms. Easterly understands that our environment is our most vital infrastructure! And human security everywhere is inherently and irreversibly connected to it everywhere. And security is an illusion. My two new favorite words, "everywhere" and "autological"...I have recently fallen in 'love' with both..;-)

 

cw

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of Michael DeBellis
Sent: Friday, September 27, 2024 10:38 AM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Philosophical Implications of Gödel and Turing/Church

 


Alex Shkotin

unread,
Sep 28, 2024, 5:31:45 AM9/28/24
to ontolo...@googlegroups.com

Michael, 


It is a fact that the expressive power of OWL2, i.e. of DL, is often lacking. We noticed it was at its limit back when we were working with definitions of igneous rocks. It would be cool to try to make an ontology in Isabelle or Coq! But there is a high entry threshold: their approach is not simple, and the documentation is non-trivial.

It is interesting that a few years ago the Coq community discussed how to formalize Einstein's puzzle, but it seems they backed down. And this is despite the fact that it is formalized in OWL2!

When choosing a language for formalization, we first of all look to see if there is a Processor (Interpreter, VM...) for it. I would suggest my language YAFOLL but it is not ready yet 😂

I would start with what hets.eu or the Wolfram Language offer. By the way, usually when OWL2 is not enough, people switch to some kind of FOL++ and write the formula in an annotation. It would be worth searching the OBO ontologies to see what they use.
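For illustration, that annotation pattern might look like this minimal rdflib sketch (the namespace and property choice are hypothetical; to OWL tools the formula is just an opaque string):

    from rdflib import Graph, Literal, Namespace, RDFS

    EX = Namespace("http://example.org/ugraph#")  # hypothetical namespace
    g = Graph()
    # OWL reasoners see only an annotation; the FOL content is for other tools.
    g.add((EX.degree, RDFS.comment,
           Literal("FOL: forall g, v. degree(g, v) = "
                   "#{e : simple(e) & inc(e, v)} + 2 * #{e : loop(e) & inc(e, v)}")))
    print(g.serialize(format="turtle"))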

The number of theories and mathematical methods in the social sciences is boundless. If Gintis has a theory, it would be interesting to try to formalize it. Keep me informed. Maybe you can make a framework right away?


Cheers,


Alex



Fri, Sep 27, 2024 at 17:38, Michael DeBellis <mdebe...@gmail.com>:

John F Sowa

unread,
Oct 4, 2024, 6:46:44 PM10/4/24
to ontolo...@googlegroups.com
Adrian,

I was going through my backlog of email, and I came across your note.

As Kingsley and I (and many others) agree, LLMs are probabilistic methods for finding a close match to some input string in some huge collection of strings derived from texts on the WWW.   As a method based on probabilities, it is biased by the kinds of strings in the collection and the methods of evaluation.  There is a huge amount of research on all these issues.  Discussion about the technical methods can be valuable.

Any talk about  LLMs doing any kind of understanding or intelligence is a metaphor that has no basis in any tested and widely accepted theory of human psychology.  Comparisons of human methods to LLM methods can be useful, but it's important to have careful and well documented methods for interpreting the results.

Any talk that throws around loose words without explaining the methods of evaluation is just loose talk.

John
 

