Language, Thought, Ontology, and AI


John F Sowa

Nov 17, 2025, 9:05:52 PM
to ontolog-forum, CG
The topics on the subject line are important issues to discuss in Ontolog Forum, CG list, and the 2026 Ontology Summit.  I'd like to mention two sources of topics to consider: (1) an article about the relationship between language and thought; (2) the answer by Google AI to the question "What value can ontology provide to AI applications?"

1. What is the relationship between language and thought? 

This question has long been a topic in cognitive science, philosophy, and linguistics. The question is: Do we think in language, or does language merely serve as a tool to express pre-existing thoughts?  The following excerpts are from https://davidcycleback.com/2025/01/17/the-relationship-between-language-and-thought

The Sapir-Whorf hypothesis

The Sapir-Whorf hypothesis, named after pioneering linguists Edward Sapir and Benjamin Lee Whorf, explores the connection between language and thought. It is commonly divided into two components: linguistic relativity and linguistic determinism.

Linguistic relativity says that the structure of a language shapes how its speakers perceive and interpret the world. In contrast, linguistic determinism is much more rigid and claims that language entirely governs thought, confining cognition within the limits of linguistic expression. Linguistic determinism has been largely dismissed due to insufficient empirical evidence.

The milder form, linguistic relativity, is widely accepted. Research indicates that language structure influences cognitive functions, including how we perceive colors, navigate spatial relationships, and conceptualize time.

Language and perception

Research has shown that speakers of different languages may perceive the same external sensory information differently, due to how their language categorizes those stimuli.

A classic example is the way languages divide the color spectrum. English, for instance, uses basic color terms like “red,” “blue,” and “green,” whereas other languages, such as Russian, distinguish between lighter and darker shades of blue with separate terms. Studies have shown that Russian speakers are faster at distinguishing between these two types of blue compared to English speakers.

Similarly, languages with different ways of conceptualizing spatial relationships can lead to differences in how speakers think about space. For example, some indigenous languages, such as Guugu Yimithirr of Australia, use cardinal directions (north, south, east, west) instead of relative terms (left, right, in front, behind) to describe spatial relationships. Research has found that speakers of such languages are better at keeping track of direction and orientation due to the linguistic emphasis on geographical awareness.

Language and memory

Research suggests that our language influences how we encode, store, and retrieve memories.

One of the most famous studies in this area is the work of University of California, Irvine psychology and law professor Elizabeth Loftus. She demonstrated how the language used in eyewitness testimony shapes people’s recollections of events.

In her study, participants who were asked to describe a car accident using different verbs—such as “smashed” or “contacted”—reported different estimates of the speed of the vehicles and even recalled seeing broken glass when there was none. This finding underscores how language shapes the way we reconstruct our memories, making them susceptible to suggestion and other manipulation.

Language also aids in the organization of memory. For instance, we use language to label and categorize experiences, helping us to recall them more easily. Without language, our ability to structure and make sense of the world around us would be significantly impaired.

Language and problem solving

Cognitive science also shows that language plays a role in problem-solving. When confronted with a challenge, people often verbalize the problem or “talk through” possible solutions, either out loud or internally. This process helps to structure thoughts, clarify the problem, and organize potential strategies. For example, research has shown that people tend to think more clearly and effectively when they use language to break down complex tasks into manageable steps.

In some cases, however, language slows problem-solving. For example, the well-known verbal overshadowing effect refers to when verbalizing a solution interferes with nonverbal tasks, such as remembering the details of an image or solving a spatial puzzle. This shows that, while language is an important tool for organizing thought, it is not always the most efficient medium for certain kinds of cognitive tasks.

Bilingualism and cognitive flexibility

Another fascinating aspect of the relationship between language and thought is bilingualism.

Research has shown that bilingual people often exhibit greater cognitive flexibility and enhanced executive functions compared to monolingual people. The constant switching between languages provides mental exercise, improving the brain’s ability to control attention, inhibit distractions, and solve problems. Bilingual people also tend to have an increased ability to take on different perspectives, as they are accustomed to navigating multiple linguistic and cultural frameworks.

Interestingly, the influence of language on cognition may vary depending on the context. Bilingual people often switch languages based on the social context, and this ability to move back and forth between linguistic systems has broader cognitive benefits, leading to enhanced problem-solving abilities and greater adaptability in novel situations.


2. What value can ontology provide to AI applications?

Answer by Google AI:  Ontologies provide significant value to AI applications by serving as a structured, machine-readable framework of domain knowledge that enables context-aware reasoning, data integration, explainability, and the development of robust, scalable AI systems. 
Key Values of Ontology in AI Applications
  • Semantic Understanding and Context: Ontologies define concepts, their properties, and, crucially, the relationships between them within a specific domain. This gives AI systems a rich, shared understanding (a "common vocabulary") of the real world, allowing them to interpret data with context and meaning, rather than just processing raw data or statistical patterns.
  • Enhanced Data Integration and Interoperability: In large organizations, data is often siloed across disparate systems using inconsistent terminologies. Ontologies act as a "Rosetta stone," mapping these varied data sources into a single, unified semantic layer. This allows different AI systems and applications to communicate and share information seamlessly.
  • Improved Reasoning and Decision-Making: By using the explicit rules and axioms within an ontology, AI systems can perform logical inferences and deduce new knowledge that is not explicitly programmed. This allows them to answer complex, unanticipated questions and make more robust, intelligent decisions, as seen in applications like medical diagnosis or fraud detection.
  • Explainable AI (XAI) and Trust: A major challenge with "black box" AI models is understanding how they arrive at a decision. Ontologies enhance explainability by grounding AI's reasoning in transparent, human-understandable concepts and rules. This auditability is critical for building trust and ensuring compliance in high-stakes domains like finance, healthcare, and law.
  • Reduced Hallucinations in LLMs:  For Large Language Models (LLMs), ontologies can be used in techniques like Retrieval-Augmented Generation (RAG) to provide real-time, factual context from a knowledge graph. This grounds the AI's responses in a verifiable source of truth, significantly reducing the generation of incorrect or fabricated information ("hallucinations").
  • Knowledge Reuse and Scalability: Ontologies offer a modular and reusable framework for representing knowledge. Models and logic developed for one AI project can be easily adapted or extended for others, leading to economies of scale and faster deployment of new AI solutions across an enterprise.
  • Support for Multi-Agent Systems: In complex environments where multiple AI agents need to collaborate (e.g., in a smart grid or supply chain), ontologies provide a common conceptual map and communication framework, ensuring coherence and effective coordination of actions.
  • Summary:  Ontologies transform raw data into a connected, intelligent asset, providing a robust foundation that moves AI beyond pattern recognition to genuinely understand and reason within an operational environment. 
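The "Improved Reasoning and Decision-Making" point above can be made concrete with a minimal sketch. The toy class hierarchy and class names below are illustrative inventions, not drawn from any real ontology; the point is only that a transitive subclass rule lets a system deduce statements that were never explicitly asserted.

```python
# Minimal sketch of ontology-based inference: a toy subclass hierarchy
# plus transitive closure deduces facts that were never stated directly.
# All class names are hypothetical, for illustration only.

def infer_superclasses(subclass_of, cls):
    """Return every direct or inherited superclass of cls."""
    seen = set()
    stack = [cls]
    while stack:
        for parent in subclass_of.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Explicitly asserted axioms: only the direct subclass links.
subclass_of = {
    "MyocardialInfarction": ["HeartDisease"],
    "HeartDisease": ["CardiovascularDisease"],
    "CardiovascularDisease": ["Disease"],
}

# The system deduces "MyocardialInfarction is a Disease" even though
# that statement appears nowhere in the asserted axioms.
print(infer_superclasses(subclass_of, "MyocardialInfarction"))
```

Real systems do this with OWL reasoners or rule engines over knowledge graphs, but the principle is the same: explicit axioms plus inference rules yield knowledge that is not explicitly programmed.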

Alex Shkotin

Nov 18, 2025, 5:17:19 AM
to ontolo...@googlegroups.com, CG

John,


Great topics, interesting sources.

Let me propose changing "Thought" to "Matter" in your subject line, to keep it nearer to applied ontologies.


Alex



On Tue, 18 Nov 2025 at 05:05, John F Sowa <so...@bestweb.net> wrote:
--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ontolog-forum/b3cb32081a724115a36236aa6aaa8759%40f6e3024c309e46789007007b2ee66392.

John F Sowa

Nov 18, 2025, 1:35:23 PM
to ontolo...@googlegroups.com, CG
Alex,

Every item in an ontology is a thought.  Nothing in mathematics is made of matter.  Nothing in language is made of matter.  You can't see, feel, or weigh the number 7, the verb 'think', or the data encoded in bits or bytes of a computer.

You can see, feel, or weigh the hardware, but not the data encoded in it.  You can add or erase billions of bits of data without changing the weight of your computer.

John
 



Alex Shkotin

Nov 19, 2025, 5:49:40 AM
to ontolo...@googlegroups.com, CG

John,


So many thoughts are crowding my head!


You've touched on at least two crucial topics.


vision and video processing

Starting at about three months, when a human infant realizes that if they focus both eyes on the same spot, a wonderful three-dimensional image appears, they begin to process vision and video images. This gradually refines their mind. What language do they use?

Look around you at the surfaces glowing in different colors. Is it possible to describe them in any human language so precisely that someone else can reproduce them even in their imagination?

There is a hypothesis—a postulate, I think, by R. Hubbard—that a person remembers everything they have seen, specifically the full 3D picture. However, many have blocked or partial access to this primary knowledge.

A thought is always a thought about something that is not usually a thought. If we have a thought about a thought, it is already a second-order or higher-order thought.

In our minds, we often work with various images—snapshots from reality. For example, one person tells another how they were awarded the Nobel Prize.

When reading fiction, many people imagine both people and circumstances.


the structure of knowledge, including in formal ontology

Formal ontology consists of theoretical propositions and facts. Advanced formal ontology also contains proofs and other types of reasoning.

Theory, in turn, consists of axioms, definitions, theorems, proofs, and refutations.


And it's true that we usually encounter one or another stream of thoughts. The rest are images: pictures, sounds, and, less often, warmth and touch.


But before a theory can emerge, we need an accurate description of the phenomena that typically repeat themselves.

This is usually a description of luminous surfaces in their movement and change. Less often, we turn to luminous points, for example, when looking at the stars.


And last but not least: 7 is not a thought, it's a math object - the gem of math.

When Weierstrass gave the definition of a continuous real function that is not differentiable at any point, Poincaré is said to have remarked: we don't need such objects.


Alex



On Tue, 18 Nov 2025 at 21:35, John F Sowa <so...@bestweb.net> wrote:

Rogério Lourenco

Nov 19, 2025, 9:50:02 AM
to ontolo...@googlegroups.com, CG
John, Alex, and colleagues,

Your exchange on language, thought, and mathematical objects (e.g., Alex's point that "7 is not a thought, it's a math object") directly intersects with my recent research on the semiotic unity of language, number, and image.

I'm a linguistic anthropologist examining how students reason through hybrid inscriptions that blend text, drawings, and numerical expressions—data from the Brazilian Mathematics Olympiad reveals that cognition operates through alphanumeric integration, not disciplinary isolation.

This builds on Michel Pêcheux's discourse analysis and François Sigaut's technographic approach to argue that alphabetic writing's privileging of sound created the illusion that words and numbers belong to distinct ontological domains.

7-minute video summary: https://www.youtube.com/watch?v=bvGAJ6DYX6g

The video is based on my article The Alphanumeric Image: A Discursive Anthropology of Mathematics Education (2025).

John's point that "nothing in mathematics is made of matter" resonates with my core argument: both mathematics and verbal language are cultural technologies derived from a single semiotic origin. The video shows classroom data demonstrating this in practice.

Would you (and the list) have time to watch? I'd value your specific thoughts on whether this alphanumeric framework could inform formal ontology's treatment of mathematical objects as hybrid semiotic entities rather than pure abstractions.

Best regards,

Rogerio Lourenco



--


► beauty follows dignity | https://metaimagem.wordpress.com


Dima, Alden A. (Fed)

Nov 19, 2025, 11:30:01 AM
to ontolo...@googlegroups.com
John,

That is certainly true from any practical point of view, and I'm guessing that measuring any mass change would exceed the limits of current measurements. But from a theoretical view, doesn't storing and reading information require energy? If that information is stored using energy (for example, electrical or chemical), then doesn't the mass of the memory cell change (Einstein's mass-energy equivalence), even if it is infinitesimal? If a bit can be stored without changing stored energy (say, for instance, by shifting the location of a magnetic bubble), wouldn't there invariably be heat dissipation, which would also temporarily alter the computer's mass (Landauer's Principle)? I believe these principles apply to biological systems, like brains, and their phenomena, like thoughts.

I would argue that, in principle, any ontology as it is conceived and committed to memory or permanent storage has physical effects - it ever so slightly changes the universe.

Alden


do...@foxvog.org

Nov 19, 2025, 11:54:35 AM
to ontolo...@googlegroups.com, CG, Alex Shkotin
Many points here.

* Thoughts can be with or without language.
-- Language-based thoughts can draw on knowledge about the linguistic
terms, which aids reasoning.
-- Language-based thoughts can be easier to remember because connections
to other thoughts have been constructed.

* Thoughts can be with or without images.
-- A mind does not encode images directly, just salient points of the
image
-- In recalling an image one can observe features that one did not
observe earlier.
-- In recalling an image one can observe features that never were in the
original scene viewed (e.g., broken glass).

* Thoughts can be with or without other sensory connections
-- This includes taste, smell, hunger, thirst, pain, sensory pleasure

* Thoughts can be with or without emotional connections
-- This includes love, hate, fear, desire (of various kind), disgust,
etc.

* Thoughts can be about thoughts and mental constructs
-- This includes numbers, shapes, theories, conceptual works, etc.

* Thoughts can combine items of any or all of the 5 above types

One knows how one perceives. One can only guess how another perceives.
One has no way of knowing if one's own perception of a color is the same
as another's perception of the same color. Events such as the debate
whether an image on the web is blue or brown, for example, suggest that
folk perceive colors differently, but also deal with whether folk are
aware or not of internal adjustments for external phenomena which affect
perceived color, such as ambient lighting.

So, no. One cannot describe an unstructured scene in such detail that
another can reproduce it. If it is a scene of simple objects on a
low-resolution screen, yes, one can.

Is a pre-linguistic child using language? Of course not. Are they
thinking? Yes.

Are math objects thoughts? No. But they do not have physical
existence, either.

To the extent anything can be described in language, it can be encoded
in an ontology.

A problem with ontology is that to be formal, an ontological term needs
to have a firm definition, specifying whether a specific newly
considered thing is or is not an instance of the concept the term
encodes. All boundary conditions are not known when the term is
defined. When a newly considered object/event/concept is considered, a
change in definition may be desired to include or exclude the new thing
from the class of things represented by the term. In languages, the
meanings of words constantly change to accommodate such novelty -- often
without people realizing that the meanings of the words are changing.
-- For example, an ontology of #Arm-Weapon from 1790 would not include
modern firearms that use bullets with encased shells, but it would
include crossbows, swords, and pikes.

Nadin, Mihai

Nov 19, 2025, 2:24:41 PM
to ontolo...@googlegroups.com

Dear and respected colleagues,

Measuring is disturbing. Alden's remarks make sense only from the perspective of Heisenberg's uncertainty principle.

 

Data is definitely embodied in matter. Meaning, i.e., data referenced to what it represents, is not. Landauer describes heat dissipation as related to erasing data. But meaning is not embodied.

 

There is no mass-energy equivalence. Let us be a bit more precise in calling up theories when we argue in favor of one viewpoint or another. Is consciousness reducible to matter?

Thought, never mind language, deserves a more nuanced understanding than matter reductionism.

 

Mihai Nadin

John F Sowa

Nov 19, 2025, 3:12:36 PM
to ontolo...@googlegroups.com, CG
Doug, Alden, Rogerio, List,

All your notes make important points.  But as the subject line indicates, we must distinguish the roles of language, thought, ontology, and AI from their representations.  And those representations occur in many forms in language (spoken, written, signed, natural, artificial, diagrammed, computerized), thought (at many levels in many parts of the brain), ontology (formal, informal, in many different kinds of notations), and in the computer, where any aspect of all the earlier items in this list may or may not be represented and processed.

Then we have to consider how any of these things are used -- by human speakers, writers, brains, by computer programs for many different purposes.  Then when we get to ontologies that we can use in computer programs, all that complexity gets represented by statements in some version of formal logic.  That gets into many different questions in many different formats.

When we talk about logic, first order logic (FOL) is the simplest general form that can represent everything and anything that can be represented in a digital computer. It is sometimes called GOFOL (Good Old Fashioned First Order Logic).

There are various subsets of FOL, such as OWL, which is restricted to special-purpose reasoning systems. But FOL is more general, and it has a simpler mapping to and from English and other languages. However, FOL and its subsets cannot do any reasoning about the predicates, relations, and functions that are represented by symbols in FOL. For that, it's necessary to go beyond FOL to second order (SOL) or higher order (HOL) logics.

There is also an important extension to FOL called Common Logic (CL), which supports reasoning about functions and relations without going beyond a first-order style of reasoning.

Moral of this story:  For representing and reasoning about ontology on a digital computer, none of the questions about what happens in the brain are relevant.   The only significant issue is which version of logic is best suited to the intended or expected applications.

John 


Rogério Lourenco

unread,
Nov 19, 2025, 3:29:00 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com, CG

John,

I agree with the need to distinguish phenomena from their representations.

From an anthropological perspective, I would only add one point: the representations are not neutral.

Once we move into FOL, OWL, CL or any formal system, we also inherit the historical segmentation of alphabetic, numeric, and graphic chains that makes those systems possible. Computers reason over logic, but the logic itself carries the organization produced by those symbolic forms.

So maybe the key issue is not brain processes, but how each representational choice shapes what can be expressed and what can be reasoned about.

Thank you, John, for the feedback.

Thanks also to all the other members for sending me your related work.

Rogerio


--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

do...@foxvog.org

unread,
Nov 19, 2025, 3:32:05 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com
Most of this is basic & has been stated numerous times in this forum.
But one clarification.

John, you say both
* "first order logic (FOL) is the simplest general form that can
represent everything and anything that can be represented in a digital
computer"

* "FOL and its subsets cannot do any reasoning about the predicates,
relations, and functions that are represented by symbols in FOL."

Without a strange meaning of "represent" or a restriction on
"everything" in the first assertion, the second sentence does not hold.
To represent assertions about predicates, relations, & functions, one
needs higher order logic. Such assertions include specifying
relationships among the range & domains of different predicates and
relations among subsets of the arguments of a relation with more than
two arguments.

-- doug foxvog

do...@foxvog.org

unread,
Nov 19, 2025, 3:53:07 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com
Responses inline

On 2025-11-19 14:24, Nadin, Mihai wrote:
> Dear and respected colleagues,
>
> Measuring is disturbing. Alden’s remarks make sense only in the
> perspective of Heisenberg’s Uncertainty principle.

> Data is definitely embodied in matter. Meaning, i.e., data referenced
> to what it represents, is not. Landauer describes heat dissipation as
> related to erasing data. But meaning is not embodied.

Data is ENCODED in matter. The matter is not the data. The meaning is
neither the data nor the matter.

> There is no mass-energy equivalence. Let us be a bit more precise in
> calling up theories when we argue in favor of one viewpoint or
> another. Is consciousness reducible to matter?

Consciousness is a process that matter undergoes. It is not the matter.

> Thought, never mind language, deserve a more nuanced understanding
> than the matter reductionism.
>
> Mihai Nadin
>
> From: 'Dima, Alden A. (Fed)' via ontolog-forum
> <ontolo...@googlegroups.com>
> Sent: Wednesday, November 19, 2025 10:30 AM

> John,
>
> That is certainly true from any practical point of view and I’m
> guessing that measuring any mass change would exceed the limits of
> current measurements. But from a theoretical view, doesn’t storing
> and reading information require energy?

Yes, it does.

> If that information is stored
> using energy (for example, electrical or chemical), then doesn't the
> mass of the memory cell change (Einstein's mass-energy equivalence),
> even if it is infinitesimal?

The energy cell may be in a higher energy state during the transition,
but not necessarily after the transition. The energy of transition
would normally be converted into heat, which does not need to remain in
the energy cell.

> If a bit can be stored without changing
> stored energy (say for instance, by shifting the location of a
> magnetic bubble), wouldn’t there invariably be heat dissipation
> which would also temporarily alter the computer’s mass (Landauer’s
> Principle)?

With each clock tick energy pulses through the computer. Except while
the computer is warming up after being turned on, the temperature (and
thus latent heat) of the computer stabilizes. That happens whether or
not data is changing.

> I believe these principles apply to biological systems,
> like brains, and their phenomena, like thoughts.

Biological systems produce heat by metabolism. Whether some of the
changes encode data or not is immaterial.

> I would argue that, in principle, any ontology as it is conceived and
> committed to memory or permanent storage has physical effects - it
> ever so slightly changes the universe.

As the processes occur whether or not what is being considered is an
ontology, the existence of the ontology doesn't have an effect on
the universe -- to the extent it is not considered part of the universe.

-- doug foxvog

> Alden
>
> From: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
> on behalf of John F Sowa <so...@bestweb.net>
> Date: Tuesday, November 18, 2025 at 1:35 PM
> To: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
> Cc: CG <c...@lists.iccs-conference.org>
> Subject: [EXTERNAL] Re: [ontolog-forum] Language, Thought, Ontology,
> and AI
>
> Alex,
>
> Every item in an ontology is a thought. Nothing in mathematics is
> made of matter. Nothing in language is made of matter. You can't
> see, feel, or weigh the number 7, the verb 'think', or the data
> encoded in bits or bytes of a computer.
>
> You can see, feel, or weigh the hardware, but not the data encoded in
> it. You can add or erase billions of bits of data without changing
> the weight of your computer.
>
> John
>
> -------------------------
>
> From: "Alex Shkotin" <alex.s...@gmail.com>
>
> John,
>
> Great topics, interesting sources.
>
> Let me propose to change "Thought" to "Matter" in your subject line.
> To keep it nearer to applied ontologies.
>
> Alex
>
> On Tue, Nov 18, 2025 at 05:05, John F Sowa
> <so...@bestweb.net>:
>
>> The topics on the subject line are important issues to discuss in
>> Ontolog Forum, CG list, and the 2026 Ontology Summit. I'd like to
>> mention two sources of topics to consider: (1) an article about
>> the relationship between language and thought. (2) The answer by
>> Google AI to the question "What value can
>>
>> ontology provide to AI applications?"
>>
>> 1, What is the relationship between language and thought?
>>
>> This question has long been a topic in cognitive science,
>> philosophy, and linguistics. The question is: Do we think in
>> language, or does language merely serve as a tool to express
>> pre-existing thoughts? The following excerpts are from
>>
> https://davidcycleback.com/2025/01/17/the-relationship-between-language-and-thought
>> [1]
>>
>> THE SAPIR-WHORF HYPOTHESIS
>>
>> The Sapir-Whorf hypothesis, named after pioneering linguists Edward
>> Sapir and Benjamin Lee Whorf, explores the connection between
>> language and thought. It is commonly divided into two components:
>> _linguistic relativity_ and _linguistic determinism._
>>
>> _Linguistic relativity_ says that the structure of a language
>> shapes how its speakers perceive and interpret the world. In
>> contrast, _linguistic determinism_ is much more rigid and claims
>> that language entirely governs thought, confining cognition within
>> the limits of linguistic expression. Linguistic determinism has been
>> largely dismissed due to insufficient empirical evidence.
>>
>> The milder form, linguistic relativity, is widely accepted.
>> Research indicates that language structure influences cognitive
>> functions, including how we perceive colors, navigate spatial
>> relationships, and conceptualize time.
>>
>> LANGUAGE AND PERCEPTION
>>
>> Research has shown that speakers of different languages may
>> perceive the same external sensory information differently, due to
>> how their language categorizes those stimuli.
>>
>> A classic example is the way languages divide the color spectrum.
>> English, for instance, uses basic color terms like “red,”
>> “blue,” and “green,” where other languages, such as Russian,
>> distinguish between lighter and darker shades of blue with separate
>> terms. Studies have shown that Russian speakers are faster at
>> distinguishing between these two types of blue compared to English
>> speakers.
>>
>> Similarly, languages with different ways of conceptualizing spatial
>> relationships can lead to differences in how speakers think about
>> space. For example, some indigenous languages, such as Guugu
>> Yimithirr of Australia, use cardinal directions (north, south, east,
>> west) instead of relative terms (left, right, in front, behind) to
>> describe spatial relationships. Research has found that speakers of
>> such languages are better at keeping track of direction and
>> orientation due to the linguistic emphasis on geographical
>> awareness.
>>
>> LANGUAGE AND MEMORY
>>
>> Research suggests that our language influences how we encode,
>> store, and retrieve memories.
>>
>> One of the most famous studies in this area is the work of
>> University of California Irvine psychology and law professor
>> Elizabeth Loftus. She demonstrated how the language used in
>> eyewitness testimony shapes people’s recollections of events. 
>>
>> In her study, participants who were asked to describe a car
>> accident using different verbs—such as “smashed” or
>> “contacted”—reported different estimates of the speed of the
>> vehicles and even recalled seeing broken glass when there was none.
>> This finding underscores how language shapes the way we reconstruct
>> our memories, making them susceptible to suggestion and other
>> manipulation.
>>
>> Language also aids in the organization of memory. For instance, we
>> use language to label and categorize experiences, helping us to
>> recall them more easily. Without language, our ability to structure
>> and make sense of the world around us would be significantly
>> impaired.
>>
>> LANGUAGE AND PROBLEM SOLVING
>>
>> Cognitive science also shows that language plays a role in
>> problem-solving. When confronted with a challenge, people often
>> verbalize the problem or “talk through” possible solutions,
>> either out loud or internally. This process helps to structure
>> thoughts, clarify the problem, and organize potential strategies.
>> For example, research has shown that people tend to think more
>> clearly and effectively when they use language to break down complex
>> tasks into manageable steps.
>>
>> In some cases, however, language slows problem-solving. For
>> example, the well-known _verbal overshadowing_ effect refers to when
>> verbalizing a solution interferes with nonverbal tasks, such as
>> remembering the details of an image or solving a spatial puzzle.
>> This shows that, while language is an important tool for organizing
>> thought, it is not always the most efficient medium for certain
>> kinds of cognitive tasks.
>>
>> BILINGUALISM AND COGNITIVE FLEXIBILITY
>>
>> Another fascinating aspect of the relationship between language and
>> thought is bilingualism.
>>
>> Research has shown that bilingual people often exhibit greater
>> cognitive flexibility and enhanced executive functions compared to
>> monolingual people. The constant switching between languages
>> provides mental exercise, improving the brain’s ability to control
>> attention, inhibit distractions, and solve problems. Bilingual
>> people also tend to have an increased ability to take on different
>> perspectives, as they are accustomed to navigating multiple
>> linguistic and cultural frameworks.
>>
>> Interestingly, the influence of language on cognition may vary
>> depending on the context. Bilingual people often switch languages
>> based on the social context, and this ability to move back and forth
>> between linguistic systems has broader cognitive benefits, leading
>> to enhanced problem-solving abilities and greater adaptability in
>> novel situations.
>>
>> 2. What value can ontology provide to AI applications?
>>
>> Answer by Google AI: Ontologies provide significant value to AI
>> applications by serving as a structured, machine-readable framework
>> of domain knowledge that enables context-aware reasoning, data
>> integration, explainability, and the development of robust, scalable
>> AI systems.
>>
>> Key Values of Ontology in AI Applications
>>
>> * Semantic Understanding and Context: Ontologies define concepts,
>> their properties, and, crucially, the relationships between them
>> within a specific domain. This gives AI systems a rich, shared
>> understanding (a "common vocabulary") of the real world, allowing
>> them to interpret data with context and meaning, rather than just
>> processing raw data or statistical patterns.
>> * Enhanced Data Integration and Interoperability: In large
>> organizations, data is often siloed across disparate systems using
>> inconsistent terminologies. Ontologies act as a "Rosetta stone,"
>> mapping these varied data sources into a single, unified semantic
>> layer. This allows different AI systems and applications to
>> communicate and share information seamlessly.
>> * Improved Reasoning and Decision-Making: By using the explicit
>> rules and axioms within an ontology, AI systems can perform logical
>> inferences and deduce new knowledge that is not explicitly
>> programmed. This allows them to answer complex, unanticipated
>> questions and make more robust, intelligent decisions, as seen in
>> applications like medical diagnosis or fraud detection.
>> * Explainable AI (XAI) and Trust: A major challenge with "black
>> box" AI models is understanding how they arrive at a decision.
>> Ontologies enhance explainability by grounding AI's reasoning in
>> transparent, human-understandable concepts and rules. This
>> auditability is critical for building trust and ensuring compliance
>> in high-stakes domains like finance, healthcare, and law.
>> * Reduced Hallucinations in LLMs: For Large Language Models
>> (LLMs), ontologies can be used in techniques like
>> Retrieval-Augmented Generation (RAG) to provide real-time, factual
>> context from a knowledge graph. This grounds the AI's responses in a
>> verifiable source of truth, significantly reducing the generation of
>> incorrect or fabricated information ("hallucinations").
>> * Knowledge Reuse and Scalability: Ontologies offer a modular and
>> reusable framework for representing knowledge. Models and logic
>> developed for one AI project can be easily adapted or extended for
>> others, leading to economies of scale and faster deployment of new
>> AI solutions across an enterprise.
>> * Support for Multi-Agent Systems: In complex environments where
>> multiple AI agents need to collaborate (e.g., in a smart grid or
>> supply chain), ontologies provide a common conceptual map and
>> communication framework, ensuring coherence and effective
>> coordination of actions.
>> * Summary: Ontologies transform raw data into a connected,
>> intelligent asset, providing a robust foundation that moves AI
>> beyond pattern recognition to genuinely _understand_ and _reason_
>> within an operational environment.
>

John F Sowa

unread,
Nov 19, 2025, 4:08:56 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com, CG
Doug,

That is an excellent question, which is related to the question of how Common Logic can reason about predicates, functions, and relations without going beyond a first order style of reasoning.

The answer is based on the size and specification of the domain of discourse -- the totality of everything that a logic can reason about.

 


Douglas Miles

unread,
Nov 19, 2025, 5:32:32 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com, CG
Perhaps we are using the syntax of first order logic but expect that HOL is operational. 

Douglas Miles

unread,
Nov 19, 2025, 5:41:04 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com, CG
Perhaps we are using the syntax of first order logic but expect that HOL is operational.

The real tension here is the worry that we are over-connecting the form of the representation with the limits of the reasoning that can be carried out. In other words, we often treat the use of FOL notation as if it automatically imposes the semantic restrictions of strict first-order logic.

But in practice, many ontology and AI systems write in first-order syntax while relying on reification strategies, or domain-level assumptions that go beyond what pure FOL can express.

Common Logic is a canonical example: it stays within a first-order style of inference while still supporting reasoning
about functions and relations by pulling them into the domain.

So part of the confusion in this thread may simply come from assuming that the syntactic shape of the representation determines its conceptual scope
    —when in fact the operational semantics may already be quietly doing something closer to second- or higher-order logic.

John F Sowa

unread,
Nov 19, 2025, 6:23:31 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com, CG
Doug,

You're asking an important question:  How is it possible to reason about functions and relations without getting into the uncountable infinities of second or higher order logics?

Short answer:  Limit the domain of functions and relations to a fixed set, not infinities of infinities.

For example, FOL can define and reason about a fixed set R of relations or functions over a possibly infinite domain  D.  Quantifiers can govern things in D, but not things in R.

But true higher-order logic allows R to be the set of all possible functions and relations over D. If D is infinite, R would consist of infinities of infinities.
 
Common Logic (CL) gets around that problem by having just one fixed set D of named things, which may be individuals or functions or relations.   Quantifiers may range over anything in D.

As a result, CL has the same level of complexity as FOL, but it allows quantified variables to refer to functions or relations.
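A minimal Python sketch of that idea (all names invented for illustration) shows a quantified variable ranging over relation names that live in the same finite domain as the individuals:

```python
# Toy model of the Common Logic trick: relation names are ordinary
# elements of the one fixed domain D, so quantifiers may range over them.

# Extension of each named relation, stored as a set of pairs.
extensions = {
    "motherOf": {("eve", "abel")},
    "parentOf": {("eve", "abel"), ("adam", "abel")},
}

# D contains individuals AND relation names -- a single fixed set.
D = {"eve", "adam", "abel"} | set(extensions)

def holds(rel, x, y):
    """Apply a relation by name: still first-order model checking."""
    return (x, y) in extensions.get(rel, set())

# "There exists an r in D such that r(eve, abel)": a quantified
# variable referring to relations, evaluated over the finite D.
exists_r = any(holds(r, "eve", "abel") for r in D)
print(exists_r)  # True: both motherOf and parentOf witness it
```

The point of the sketch is that nothing ranges over the set of *all possible* relations over D, only over the fixed, named ones in D.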

John
 


do...@foxvog.org

unread,
Nov 19, 2025, 8:21:07 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com
On Thu, Nov 20, 2025 at 4:08 AM John F Sowa <so...@bestweb.net> wrote:
> Doug,
>
> That is an excellent question, which is related to the question of
> how Common Logic can reason about predicates, functions, and
> relations without going beyond a first order style of reasoning.
>
> The answer is based on the size and specification of the domain of
> discourse -- the totality of everything that a logic can reason about.

John,
This can be done if the logic uses two symbols for the relations: one a
name for them to reason about, and a second the relation itself. The
symbol that is reasoned about is not the one that performs the
operation. They can actually be textually the same, with the syntax of
the language hiding that symbols in an argument position are of one
class while symbols in operator position are of a different class. Then
one can write (in LISP syntax):
(arg1Isa isMotherOf FemaleAnimal)
(arg2Isa isMotherOf Animal)
(arity isMotherOf 2)

In such a language the two symbols that look the same have nothing to do
with each other. In LISP syntax, one could write (equals equals 3),
which would mean that the symbol "equals", when used in an argument
position is a numeric variable or constant whose (current) value is 3.
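That convention can be sketched in a few lines of Python: two lookup tables, one consulted in operator position and one in argument position, so a token such as "equals" resolves independently in each (the tables and bindings are invented for illustration):

```python
# Sketch of the two-symbol convention: the same token names two
# unrelated things, resolved by position in the expression.

operators = {
    "equals": lambda a, b: a == b,   # the relation itself
    "plus":   lambda a, b: a + b,
}
values = {}                          # argument-position bindings

def ev(expr):
    if isinstance(expr, list):                  # (op arg1 arg2 ...)
        head, *args = expr
        return operators[head](*map(ev, args))  # head -> operator table
    return values.get(expr, expr)               # leaf -> value table

# (equals equals 3): bind the argument-position symbol "equals" to 3.
values["equals"] = 3
# Now the two occurrences of "equals" resolve independently:
print(ev(["equals", "equals", 3]))   # True: the value 3 equals 3
print(ev(["plus", "equals", 1]))     # 4: argument "equals" is just 3
```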

-- doug foxvog
> -------------------------

do...@foxvog.org

unread,
Nov 19, 2025, 9:29:08 PM (4 days ago) Nov 19
to ontolo...@googlegroups.com
On 2025-11-19 18:23, John F Sowa wrote:
> Doug,

> You're asking an important question: How is it possible to reason
> about functions and relations without getting into the uncountable
> infinities of second or higher order logics?

> Short answer: Limit the domain of functions and relations to a fixed
> set, not infinities of infinities.

This does not occur in implemented second order logics, such as Cyc,
because any ontology created in the language is itself finite. The
potential domain is unrestricted, but in any implementation it is
finite.

> For example, FOL can define and reason about a fixed set R of
> relations or functions over a possibly infinite domain D.
> Quantifiers can govern things in D, but not things in R.

This contradicts the definition of FOL, which does not allow relations
and functions as arguments. You say below, that higher-order logics can
operate with the complexity of FOL if they are finite logics. I note
that any implemented logic is finite.

> But true higher-order logic allows R to be the set of all possible
> functions and relations over D, If D is infinite, R would consist of
> infinities of infinities

Note that any implementation of a logic, whether first order or higher
order, is finite. So D & R are finite in any implementation even though
they are potentially infinite.

In Cyc, one can write:
(if (and
      (binaryClosureOf ?R2 ?R1)
      (?R1 ?A ?B)
      (?R1 ?B ?C))
    (?R2 ?A ?C))

?R1 & ?R2 are constrained to be instances of BinaryPredicate since:
(arg1Isa binaryClosureOf BinaryPredicate)
(arg2Isa binaryClosureOf BinaryPredicate)
FWIW, (and ...) is an N-ary predicate whose arguments are constrained
to be truth values. The value of a predicate applied to arguments is a
truth value.
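To make the rule concrete, here is a minimal single-pass application of it in Python, with invented relation names (borders, nearTo) standing in for ?R1 and ?R2; a real engine would iterate rules like this to a fixpoint:

```python
# Sketch of the Cyc-style rule: the relation variables range over
# named binary predicates, which are first-class objects here.

facts = {
    "borders": {("US", "Canada"), ("Canada", "Greenland")},
    "nearTo":  set(),
}
# (binaryClosureOf nearTo borders): composing two borders facts
# contributes a nearTo fact.
binaryClosureOf = {("nearTo", "borders")}

# Apply: (if (and (binaryClosureOf ?R2 ?R1) (?R1 ?A ?B) (?R1 ?B ?C))
#            (?R2 ?A ?C))
for r2, r1 in binaryClosureOf:
    for (a, b) in facts[r1]:
        for (b2, c) in facts[r1]:
            if b == b2:
                facts[r2].add((a, c))

print(sorted(facts["nearTo"]))  # [('US', 'Greenland')]
```

Note that the quantification over ?R1 and ?R2 is just iteration over a finite set of relation names, which is the point made above about finite implementations.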

> Common Logic (CL) gets around that problem by having just one fixed
> set D of named things, which may be individuals or functions or
> relations. Quantifiers may range over anything in D.

In Cyc, functions and relations are also individuals. In CL, the named
things may also be classes. Instances of classes can be restricted to
being individuals, but they can also be classes themselves. In Cyc, a
class whose instances are restricted to being individuals are considered
to be First Order classes. A class whose instances are restricted to
being First Order classes are considered to be Second Order Classes.
The class, BiologicalTaxon, is Second Order. Its instances, such as the
class, Mammal, are First Order. Mammal's instances are individuals.
Note that BiologicalTaxon has a number of subclasses (BiologicalKingdom,
BiologicalPhylum, BiologicalFamily, BiologicalClass, BiologicalGenus,
...) that are also instances of Second Order Class.

-- doug foxvog

> As a result, CL has the same level of complexity as FOL, but it allows
> quantified variables to refer to functions or relations.

> John
>
> -------------------------
>
> From: do...@foxvog.org
>
> Most of this is basic & has been stated numerous times in this forum.
> But one clarification.
>
> John, you say both
> * "first order logic (FOL) is the simplest general form that can
> represent everything and anything that can be represented in a digital
> computer"
>
> * "FOL and its subsets cannot do any reasoning about the predicates,
> relations, and functions that are represented by symbols in FOL."
>
> Without a strange meaning of "represent" or a restriction on
> "everything" in the first assertion, the second sentence does not
> hold.
> To represent assertions about predicates, relations, & functions, one
> needs higher order logic. Such assertions include specifying
> relationships among the range & domains of different predicates and
> relations among subsets of the arguments of a relation with more than
> two arguments.
>
> -- doug foxvog
>

Alex Shkotin

unread,
Nov 20, 2025, 4:55:01 AM (4 days ago) Nov 20
to ontolo...@googlegroups.com, CG
Hi Rogerio,

I'm somehow getting by without semiotics. And I'm having a hard time figuring out why the Weierstrass function is a "hybrid semiotic entity rather than pure abstraction."
And most importantly, what does this add to the truth that a mathematical object is not a thought?

Alex

On Wed, Nov 19, 2025 at 17:49, Rogério Lourenco <metai...@gmail.com>:

Alex Shkotin

unread,
Nov 20, 2025, 5:20:34 AM (4 days ago) Nov 20
to do...@foxvog.org, ontolo...@googlegroups.com, CG

Doug


Our approaches are close. As for:

"A problem with ontology is that to be formal, an ontological term needs to have a firm definition, specifying whether a specific newly considered thing is or is not an instance of the concept the term encodes. All boundary conditions are not known when the term is defined. When a newly considered object/event/concept is considered, a change in definition may be desired to include or exclude the new thing from the class of things represented by the term."

Here we're talking about the construction and application of theory. This is a large and subtle topic that has a simple impact on formal ontologistics: theories need to be formalized.

True, one can also formalize descriptions of phenomena. And that's where a treatise on the phenomenology of matter comes in.


Alex



On Wed, Nov 19, 2025 at 19:54, <do...@foxvog.org>:

Alex Shkotin

unread,
Nov 20, 2025, 5:57:44 AM (3 days ago) Nov 20
to ontolo...@googlegroups.com, CG

John,


You wrote "There are various subsets of FOL, such as OWL…"

But OWL isn't a subset of a first-order language. It's a subset of a higher-order language, which is needed because, as you point out, FOL cannot

"...do any reasoning about the predicates, relations, and functions that are represented by symbols in FOL. For that, it's necessary to go beyond FOL to second-order (SOL) or higher-order (HOL) logics."

This is easily seen in the OWL2 Primer:

"ClassAssertion( :Person :Mary )"

where Person is a unary predicate.

And this shows in detail what part of HOL OWL2 is.

And the beauty is that OWL2 knowledge processing algorithms, i.e., reasoners, operate without FOL.

We could say that OWL2-reasoners demonstrate how to work with texts written in HOL.
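For flavor only, here is a minimal sketch of that style of reasoning, with invented classes (Student, Person, Agent) extending the primer's Mary example: entailed ClassAssertions are computed directly from SubClassOf axioms by reachability, with no translation into FOL clauses. This is not an OWL reasoner, just a hint of how such algorithms operate on the axioms themselves:

```python
# Toy entailment over OWL-style axioms: subclass reachability,
# applied directly, without any encoding into first-order clauses.

subClassOf = {("Student", "Person"), ("Person", "Agent")}
classAssertion = {("Student", "Mary")}   # ClassAssertion( :Student :Mary )

def superclasses(c):
    """All classes reachable from c via SubClassOf (including c itself)."""
    seen, todo = {c}, [c]
    while todo:
        cur = todo.pop()
        for sub, sup in subClassOf:
            if sub == cur and sup not in seen:
                seen.add(sup)
                todo.append(sup)
    return seen

# Entailed class memberships for Mary:
entailed = sorted(c for cls, ind in classAssertion if ind == "Mary"
                  for c in superclasses(cls))
print(entailed)  # ['Agent', 'Person', 'Student']
```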


It's important to distinguish between a First Order language (FOl) and any First Order Logic (FOL) written in FOl.


Alex



On Wed, Nov 19, 2025 at 23:12, John F Sowa <so...@bestweb.net>:

John F Sowa

unread,
4:18 PM (1 hour ago)
to ontolo...@googlegroups.com
Alden,

Any computation by a computer or by a brain is performed by physical processes, which take time and consume some amount of energy.   But the logic or mathematics is purely abstract.  Numbers, alphabets, and propositions stated in any language have no mass or energy or physical space.

An infinite line in geometry has an uncountable infinity of points.  But it has no mass or energy.  And it is a theoretical space that does not exist in the physical universe.

Any representation on paper or in a computer takes up space and has mass. But the total mass of the paper + ink, or other physical medium, does not change when data of any kind is represented in it.

John
 


From: "Dima, Alden A. (Fed)' via ontolog-forum" <ontolo...@googlegroups.com>
