First off, please forgive the sensational headline. The talk was really good. I see how discovering mathematical analogs for ontological patterns could be really valuable.
But let me explain. Michael promised to tackle two points framed by Hilbert’s sixth problem:
1. ontology for a physical theory
2. ontology as a physical theory
In doing this he brought up an ontology of chemical molecules.
The first point was that this ontology should be adequate to model all the molecules that chemists have found within some area of research. It is also good if the ontology is precise, so as not to include unintended molecules. Sounds good.
To the second point, we can suppose that there are molecules that the ontology suggests are either possible or impossible. Sounds great.
Right there, I was expecting a bold jump into idealism and instead I heard a retreat into pragmatism.
How so? Because when the ontology treads into the unknown, the only recourse he presented was empirical. We go back to the chemists and they tell us whether the molecule exists, doesn’t exist or couldn’t exist and then we place some extension or constraint on the ontology to account for this. Fine.
But given that machine learning models are already being used to predict molecular structures (e.g., AlphaFold), doesn’t this make ontologies like this feel a little meh (to unhiply quote my children)?
Should there not be a spark of idealism in the ontologist that wants to believe that their model has what it takes not just to describe reality but to anticipate it (I would say predict, but AI has already overloaded that term)?
What’s my point? If symbolic AI is to keep up with and inform statistical AI, don’t we need to ask more of our ontologies? Don’t they need to anticipate the molecules that do, don’t yet, and never could exist? Or will ontologizing always be a reactionary extension or constraint here or there, as needed, like Ptolemaic epicycles bound by pragmatic interests? Or shouldn’t we keep searching for the correct, and dare I say ideal, theories?
And where this leads is not to axiomatizing ontologies in first order logic, but rather axiomatizing first order logic in ontologies. The former approach leaves us right where we are in pragmatism. Yeah we can express stuff, but that only gets us so far. How about we crack open some new ideas about what an ontology can do, like anticipate science? Pragmatism pays the bills. I respect that. But a little idealism can change the world. And aren’t we hungry for that?
Greg,
Would the work since the early 2000s building and exploiting the Gene Ontology provide an example of the potential utility of ontology for predictive purposes? One use of that ontology is as a tool to predict the likely functions of proteins based on their structure and other properties. An example of this use in conjunction with more recent machine-learning-based computational protein function prediction is addressed in the citation below.
Mike
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ontolog-forum/a5349840-4537-43fb-ad16-1280756ccc66n%40googlegroups.com.
Greg,
I think it's a kind of misunderstanding, as we are talking here about formal ontologies, a special kind of computer-based artifact.
So it's better to read MG's sentences like this:
1. formal ontology for a physical theory
2. formal ontology as a physical theory
Today, a formal ontology (see, for example, the OBO Foundry) is a sophisticated dictionary of terms, in our case from one or another physical theory.
What is important and unique is that a formal ontology keeps some knowledge formalized, i.e., in the form of formulas.
First, we have a physical theory and we create a formal ontology for it.
Second, we begin to formalize knowledge (axioms, definitions) in the formal ontology, and this means that we now look at the formal ontology as a formal physical theory.
It is hard to formalize a physical theory, even the statics part of mechanics.
A formal ontology is just a formalized theory.
Imagine that one day we open a chemistry textbook and it is written in FOL, or actually HOL.
There are some nuances.
For example, an OWL2 ontology keeps its theory in the TBox and can keep data in the ABox, but usually we keep data in a database.
To check data against a theory, we import the needed part into the ABox.
If the data are accepted by the theory, we call them a model of this theory, following A. Tarski.
If the data also correspond to a part of reality, they are a model of that part as well, provided the theory is good enough for that particular part.
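The TBox/ABox check described above can be sketched in plain Python rather than OWL2. This is only an illustration of the Tarskian idea (data either satisfy the axioms, and are a model, or they violate some axiom); all names and the toy axiom are invented for the example, not taken from any real ontology.

```python
# ABox-style data: individuals with their asserted types and relations.
data = {
    "water":   {"type": "Molecule", "hasAtom": ["H", "H", "O"]},
    "helium":  {"type": "Molecule", "hasAtom": ["He"]},
    "mystery": {"type": "Molecule", "hasAtom": []},
}

# TBox-style axioms: each is a label plus a predicate that an
# individual must satisfy.
axioms = [
    ("every molecule has at least one atom",
     lambda ind: ind["type"] != "Molecule" or len(ind["hasAtom"]) > 0),
]

def check_model(data, axioms):
    """Return the list of (individual, axiom) pairs that fail.

    An empty list means the data satisfy every axiom, i.e. the data
    form a model of the theory in Tarski's sense.
    """
    violations = []
    for name, ind in data.items():
        for label, axiom in axioms:
            if not axiom(ind):
                violations.append((name, label))
    return violations

violations = check_model(data, axioms)
for name, label in violations:
    print(f"{name} violates: {label}")
```

Here "mystery" fails the axiom, so this data set is not a model of the toy theory; a reasoner over a real TBox/ABox performs the same kind of check, just with a far richer logic.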
I hope we get "a spark of idealism" on Track 2
https://ontologforum.com/index.php/OntologySummit2025#Track_2:_Theoretical_Knowledge_and_Reality.
Alex
--
> How so? Because when the ontology treads into the unknown, the only recourse he presented was empirical. We go back to the chemists and they tell us whether the molecule exists, doesn’t exist or couldn’t exist and then we place some extension or constraint on the ontology to account for this
--
Greg, I am not trying to dismiss either machine learning or ontology as offering predictive power. I believe they both should be able to, but your allusion to explanation as a differentiator, I would claim, runs in the opposite direction from the one you suggest when you say: ”I suspect many an ontologist has experienced that sinking dissolution of power once all has been described but nothing has been explained”. Computational ontology is readily able to trace the reasoning path that leads to its conclusions. Machine learning and neural networks are notoriously unable to do that at all.
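The traceability point can be made concrete with a toy forward-chaining rule engine that records how every conclusion was derived. The facts and rules below are invented for illustration; the point is only that a symbolic reasoner can always replay its derivation, which a neural network cannot.

```python
# Asserted facts, as (subject, relation, object) triples.
facts = {("Methane", "isA", "Alkane")}

# Rules of the form: if (x, rel, obj) then (x, rel2, obj2).
rules = [
    (("isA", "Alkane"), ("isA", "Hydrocarbon")),
    (("isA", "Hydrocarbon"), ("isA", "OrganicCompound")),
]

# For each derived fact, remember the premise and rule that produced it.
trace = {}

changed = True
while changed:
    changed = False
    for (p_rel, p_obj), (c_rel, c_obj) in rules:
        for (subj, rel, obj) in list(facts):
            if rel == p_rel and obj == p_obj:
                conclusion = (subj, c_rel, c_obj)
                if conclusion not in facts:
                    facts.add(conclusion)
                    trace[conclusion] = ((subj, rel, obj),
                                         f"{p_obj} => {c_obj}")
                    changed = True

def explain(fact):
    """Walk the trace back from a conclusion to an asserted fact."""
    steps = []
    while fact in trace:
        premise, rule = trace[fact]
        steps.append(f"{fact} because {premise} via rule [{rule}]")
        fact = premise
    steps.append(f"{fact} was asserted")
    return steps

for line in explain(("Methane", "isA", "OrganicCompound")):
    print(line)
```

The `explain` call yields the full chain back to the asserted fact, which is exactly the kind of reasoning path a computational ontology can expose and an LLM, as a bare statistical model, cannot.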
FWD of reply:
John,
Post-training techniques to enhance LLM output can, of course, vary across models and applications in order to refine parameter weightings and augment prompting, but I do not see how that capability touches on any distinction between computational ontology and LLMs. DeepSeek apparently allows more or easier fine-tuning (post-training) than some other LLMs. Also, I do believe there are ways during inference (post-training) in which LLMs can apply some forms of logical or probabilistic reasoning to adjust the context, but none of that is retained or ever affects the underlying model. What would be nice is if generative AI could reliably induce semantically coherent logical axioms and rules from the global corpus that could then be processed under a panel of inference rules within ontology or other symbolic AI systems to build theories and classify propositions (extract knowledge) from a subset or specialized corpus. Then things would become explainable and validatable.
Mike
From: John Bottoms [mailto:jo...@firststarsystems.com]
Sent: Saturday, February 8, 2025 1:38 PM
To: Michael Denny
Subject: Re: [ontolog-forum] Why Michael's talk disappointed me.
Mike,
The difference between ChatGPT and DeepSeek appears to be in the training architecture. ChatGPT, as I read it, uses a human in the loop for critique, while DeepSeek uses digital agents to do the checking. There are design elements that have not been revealed in either case, but the MIT Technology Review article sounds reasonable.
"How DeepSeek ripped up the AI playbook - and why everyone's going to follow its lead" - MIT TECHNOLOGY REVIEW
John Bottoms
* * * * * *
Mike,
I didn't word my view clearly enough. The brute fact is that the companies that participate in the $00B project are going to define how the ontologies will be constructed and used. Most of the time, when writing specifications and standards, there are fields that attempt to meet the needs of users with parameterized approaches.
My view on DeepSeek and related architectures is that they have shown specific efficiencies that drive their designs. It is fortunate that China has shared the basic approaches. In my view, the work of ontology science will be driven by economics, and we should consider what role this group can play in identifying and meeting the needs of all participants.
-John Bottoms
Greg,
True. I expect (or hope) such dissolution may be curbed by building ontologies that exploit the inference power their logic language offers in order to attack well-defined problems, rather than simply serving as catalog systems or knowledge repositories.