AI and psychological impact


John F Sowa

Aug 30, 2025, 3:56:07 PM
to ontolog-forum
On August 27,  Adam's parents filed a 40-page lawsuit against OpenAI.  The following web page contains a large excerpt from that lawsuit:

The page below states the basic charge: ChatGPT's design prioritized engagement over safety.  Interesting point: they mention the underdeveloped prefrontal cortex of a teenager.  That indicates that they intend to include testimony by a neuroscientist in the trial.

I suspect that OpenAI will settle out of court rather than expose the details of their design flaws to the world.

John
_______________.


John F Sowa

Aug 30, 2025, 4:09:29 PM
to ontolog-forum
The copy of the critical page (below) was lost in my previous note.

John



ch...@codil.co.uk

Aug 30, 2025, 5:49:55 PM
to ontolo...@googlegroups.com

A very sad case, but no one deliberately set out to design an AI system that helped people to commit suicide. Thinking about the incident in terms of legal cases and punishment (huge fines, compensation, etc.) may well not be the way to prevent such suicides from happening again. What is needed is a genuinely independent, airplane-crash-style inquiry whose primary aim is to avoid similar repeat incidents.

I write because I am in a position to understand, to some extent, what the parents are going through. In 1985 one of my daughters killed herself, indirectly because of the events following an incorrect medical diagnosis made at a police station. As a result of the distress her sister became mentally ill, and I was so distressed that I took early retirement, abandoning my research (using CODIL) into AI. In 2001 her sister killed herself after being wrongfully arrested in the SAME police station. Serious mistakes by the police and the local mental health hospital led to her, perhaps unintentionally, killing herself in an attempt to get medical help. She thought the police were planning to throw her in prison, when in reality all that had happened was that a policeman had made a clerical error!

What is clear from our experience in the UK is that if the primary aim is to PUNISH people or organisations who unintentionally made unfortunate decisions, attempts will be made to withhold information and lessons will not be learnt. The result is that parents are put through hell having to deal with the lies and half-truths that emerge, while knowing that the same kind of deaths will probably continue.

If this chatbot case is closed by an "out of court" agreement, the real cause will be kept secret, maximising the chances of other people making the same unintended errors and of further deaths occurring.

Chris.

Alex Shkotin

Aug 31, 2025, 4:17:05 AM
to ontolo...@googlegroups.com

Chris


I absolutely agree that this is the situation with all general-purpose tools. Moreover, the same LLMs have now begun to be used by criminals, for example for phishing and other kinds of cyberattack.

Of course, ChatGPT should answer the question of how far it can prevent giving advice on illegal actions.

And we will return to the topic of an algorithm for checking the result of the chatbot's work.

And yes, this can be another specially trained LLM.

But of course it would be better to have more reliable algorithms.

It's like with a formal proof: the verification algorithm is trivial, but finding a proof is not easy.
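
To make the analogy concrete, here is a small Python sketch (my own illustration, not part of the thread): for propositional formulas, checking a proposed truth assignment is a one-line test, while finding one naively means searching exponentially many candidates.

    # Verification is trivial; search is not.
    from itertools import product

    # A CNF formula as a list of clauses; each literal is (variable, polarity).
    cnf = [[("p", True), ("q", False)],    # (p OR not q)
           [("p", False), ("r", True)],    # (not p OR r)
           [("q", True), ("r", True)]]     # (q OR r)

    def check(assignment, cnf):
        """Trivial verifier: every clause must contain one satisfied literal."""
        return all(any(assignment[v] == pol for v, pol in clause) for clause in cnf)

    def find(cnf, variables):
        """Naive prover: exhaustive search over all 2^n assignments."""
        for values in product([False, True], repeat=len(variables)):
            candidate = dict(zip(variables, values))
            if check(candidate, cnf):
                return candidate
        return None

    print(find(cnf, ["p", "q", "r"]))   # {'p': False, 'q': False, 'r': True}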


Alex




John F Sowa

Aug 31, 2025, 2:25:59 PM
to ontolo...@googlegroups.com, CG
Alex and Chris,

LLMs are very good at finding patterns and at transforming one kind of pattern into another.  That is all they can do.  They cannot do any kind of reasoning.  And they have no memory of where they found any pattern they happened to use.

The only reasoning they do is to find some pattern of reasoning and use it to transform another pattern.  But they have no way to evaluate patterns and test whether they are appropriate.  They also have no memory of which patterns they used to derive anything.

Fundamental principle:  LLMs are STUPID.  They cannot do anything intelligent by themselves.  The only operations that seem to be intelligent are those that they find and use.  But they don't have any method for evaluating whether one pattern is better than another.  

If you're lucky, the best pattern will happen to be the one that it found first.  If not, you'll get garbage or a hallucination.  The worst case is a result that looks good, but happens to be very bad.

Apple worked for two years to use LLMs to derive a better version of Siri.  But they failed.  

Their problem:  Most answers were correct.  But some answers were wrong, and some could cause a disaster:  break your TV or some device connected to it.

A version of Siri that was correct 999 times out of a thousand would seem to be very good.  But if it destroyed your TV on case #1000, it would be very, very bad.  Nobody would buy a TV that broke down a few months after you bought it.

So Apple canceled their project for an LLM-based Siri. 

For any business-critical operations, 99.9% accuracy would be a disaster.

A better method would be 90% accuracy plus a message for the other 10% that says "Please restate that request."
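
To make that fallback concrete, here is a minimal Python sketch of such an abstention policy; the assistant, the confidence score, and the 0.9 threshold are hypothetical illustrations, not anything Apple or OpenAI actually built.

    CONFIDENCE_THRESHOLD = 0.9

    def respond(answer: str, confidence: float) -> str:
        """Return the answer only when confidence is high; otherwise abstain."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return answer
        return "Please restate that request."

    print(respond("Turning the TV off at 11 pm.", 0.97))  # high confidence: answer
    print(respond("Reflashing the TV firmware.", 0.55))   # low confidence: abstain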

Unfortunately, LLMs never give any evaluations about how accurate their advice may be.  To do that, some GOFAI (Good Old Fashioned AI) is necessary.

The solution is HYBRID systems that treat any LLM result as an abduction (educated guess).  Then use symbolic methods of deduction and testing to evaluate the guesses.  GOFAI is essential for reliable AI systems.
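
A minimal Python sketch of that hybrid loop follows; the llm_guess function and the tiny rule set are placeholders of my own (standing in for a real LLM call and real symbolic deduction), not an existing system.

    from typing import Callable, Optional

    def llm_guess(question: str) -> str:
        """Stand-in for an LLM call; in practice this would query a model."""
        return "Aspirin is safe at any dose."          # a deliberately bad guess

    RULES: list[Callable[[str], bool]] = [
        lambda answer: "any dose" not in answer.lower(),   # reject unbounded dosage claims
        lambda answer: len(answer.strip()) > 0,            # reject empty output
    ]

    def hybrid_answer(question: str) -> Optional[str]:
        guess = llm_guess(question)              # abduction: an educated guess
        if all(rule(guess) for rule in RULES):   # deduction/testing: symbolic checks
            return guess
        return None                              # rejected; escalate or re-ask

    print(hybrid_answer("Is aspirin safe?"))     # None -- the guess fails the checks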

Answer for Chris:   A hybrid system that uses symbolic methods to evaluate every answer generated by ChatGPT could detect and reject a very wide range of bad answers -- including all the ones that lured Adam to his death.

Conclusion:  A good hybrid system would be much more reliable and vastly CHEAPER to build than the behemoth on which Elon M is wasting billions of $$$.  He is using it to analyze vastly more data on the WWW.  But it will be analyzing more bad data along with the good data.  That will not avoid the disasters -- it may even create worse ones.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

Alex Shkotin

Sep 1, 2025, 3:49:50 AM
to ontolo...@googlegroups.com, CG
John,

This may be interesting "Our Threat Intelligence report discusses several recent examples of Claude being misused, including a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills. We also cover the steps we’ve taken to detect and counter these abuses."

Alex


ch...@codil.co.uk

Sep 1, 2025, 11:58:56 AM
to ontolo...@googlegroups.com, c...@lists.iccs-conference.org
Alex

It is useful to think of human intelligence and large language models in evolutionary terms. If an animal invents a tool to help it survive, this has little long-term evolutionary advantage unless it can pass details of the tool to the next generation. Human intelligence is based on the invention of a tool, called language, which allows tool-making skills to be passed from one generation to the next.

This involves mapping information about the tool stored in a neural network (a brain) into a sequential string (natural language) and then converting it back to a neural network (in the receiving human brain) with high efficiency and minimal information loss. This process allows each generation to improve the tool, and this includes improving the language tool itself. Thus the human brain "CPU" needs a "compiler" to generate a "high level language" (natural language) and also a "decompiler" to reverse the process.

This raises the question: how far is human intelligence due to the algorithms in the brain's "CPU", and how far is it due to natural language acting as a high-level programming language?

Isaac Newton answered this question when he said, "If I have seen further than others, it is by standing upon the shoulders of giants." Advances in intelligence depend on the exchange of information between generations, and intelligent advances are built on intelligent information embedded in shared cultural information.

Large language models read cultural information and work well enough to recognise repeated patterns, but they clearly do not understand what the information means. In effect, large language models act as if they were decompiling statements in a high-level cultural language called Culture. Hallucination and other LLM limitations arise because there are serious bugs in the "decompiler" algorithms. These bugs arise because the underlying large language model is inappropriate, and I am sure that everyone using the LLM model is aware that the brain does not store information as numbers in a numerically addressed array using a profoundly deep understanding of statistics.

The danger is in using a powerful underlying mathematical model with an indefinitely large number of variables to predict future patterns in a system which generates patterns. This danger is taken for granted by anyone who understands the history of science. Over two thousand years ago the Babylonians and the Greeks recorded the patterns created by the planets in the sky, and archaeologists have shown that over 2000 years ago the Greeks actually built a clockwork "computer" which, to an acceptable degree of accuracy, predicted the future positions of the planets, and also eclipses. It was based on a mathematical model using epicycles, in which the accuracy of the prediction could always be improved by adding more epicycles. Galileo realised, using later technology to measure the positions of the planets, that these earlier predictions were not accurate. He knew that he could have corrected the epicycle model by adding further layers of epicycles, but realised that a very much simpler mathematical model was obtained by putting the sun at the centre of the solar system. Later research by Kepler and Newton showed that ellipses worked better than circles.

It is appropriate to ask whether large language models (which can be "improved" by adding billions more array cells holding probabilities) are making the same mistake that the Greeks made in modelling the planets. One may always get fractionally better results at very much higher computing cost, but the results tell you very little about how the human brain actually evolved to become intelligent, or about the actual algorithms it uses to map intelligent cultural information onto the brain's neural network. The advantage of hyping paradigms which require funding and building ever bigger and more powerful computing systems is that the more money you spend on the hyped research, the more prestige you get and the more influence you have on the direction of future research -- including using the peer review system to anonymously reject draft papers which criticise the currently favoured AI paradigm.

CODIL was based on observation of the average human of the 1960s (most current researchers will be biased because they were taught at school how the stored-program computer works). It assumes information is handled in sets (with an ontology of names chosen by the human users) held in a neural network, and uses a single small, highly recursive algorithm (which could be interpreted in evolutionary terms, where efficiency is significant) to find valid pathways through the associatively addressed network.

To summarise: the Greeks thought models involving the earth at the centre and movement in perfect circles were the answer (however many circles had to be invoked to correct the "errors"), when what was needed was a far simpler model with a minimal number of ellipses round the sun. Could large language models be making a similar error of starting from the wrong basic assumption, and only getting a better-looking approximate result by using modern computer technology to handle a situation where the number of variables needed is trending towards infinity, and the energy required is in danger of exacerbating climate warming?

The important thing about CODIL is that it was based on first-hand, neurodiverse-inspired observations of real people (rather than slavishly following expert views and mathematical models). This involved how people educated before the coming of computers actually processed information in several very different real-life complex task areas. In effect the CODIL research, almost accidentally, modelled the brain's neural network's symbolic assembly language, in a way that is counter-intuitive to people who have been trained to program stored-program computers.

I am sure that one of the reasons CODIL failed to get properly funded is that the language does not include an EXPLICIT "IF". Everyone knows that a procedural programming language MUST contain IF statements -- so CODIL got no funding or support for such a silly idea. But the whole point of CODIL is that there is an "intelligent" network search routine which decides in real time which items of information are to be used as "data", as a "command", or as an open/closed gateway (IF) through the network. Because there is an intelligent decision-making procedure, there is no need for a human programmer (or a clever conventional AI program) to design an explicit procedural program. The reason is that, like the human brain, CODIL is designed to handle dynamic real-life complex tasks which, in parts at least, cannot be "programmed" in advance because the relevant a priori information is not available.

Chris Reynolds


Alex Shkotin

Sep 1, 2025, 12:30:42 PM
to ontolo...@googlegroups.com, c...@lists.iccs-conference.org

Chris


That's why we ask from time to time at our meetings: Has CODIL been restored? When can we try it out?

How is this project progressing?

 
Give me CODIL and I will change the world - as Archimedes said.

Alex


John F Sowa

Sep 1, 2025, 2:43:55 PM
to ontolo...@googlegroups.com, c...@lists.iccs-conference.org
Chris, Alex, List,

CODIL is a good example of GOFAI (Good Old Fashioned AI).  Following is a description:

CODIL was a very early attempt to build a human-friendly language for a computer which could work with humans as a transparent "electronic clerk" -- avoiding the "black box" problems associated with modern large language artificial intelligence systems. The current study suggested that CODIL worked by modelling how the human brain handles complex information processing tasks. The CODIL archives suggest effective ways of building transparent AI systems and modelling how human intelligence evolved.


I completely agree with the opening sentence.   It's an excellent strategy for designing AI systems, and it is close to my reasons for emphasizing the need for HYBRID systems that combine LLMs with well designed symbolic systems that evaluate and control the LLM component.

The second sentence shows that the designers had developed a conversational model that was inspired by various psychological-physiological hypotheses about how the human brain works.  There are many such hypotheses, with various degrees of experimental validation.  In fact, the AGI gang (who I believe are hopelessly misguided) claim that LLMs model the human brain.

I believe that the CODIL model has a better foundation than LLMs by themselves.   It is based on a better hypothesis than the one that inspired the AGI gang.  But neuroscientists admit that every new discovery raises more questions than answers.

Summary:   Ongoing AI research shows that LLMs, by themselves, generate valuable hypotheses, which must be evaluated and tested before they can be used successfully.    For small-scale applications, that evaluation can be done by the human users themselves.  For large-scale applications, some computer systems must use symbolic reasoning methods to do the evaluation and testing.  Those methods are based on 60 years of research on symbolic reasoning systems (GOFAI).

CODIL is an excellent example of the way neuroscience can inspire good AI designs.  But there are other symbolic methods, which are inspired by the six cognitive sciences:  Philosophy, Psychology, Linguistics, Neuroscience, Anthropology, and Artificial Intelligence.  All six of those sciences have contributed valuable ideas about human thinking, and designers of intelligent systems should consider all of them.
 
John
 

John F Sowa

Sep 1, 2025, 10:08:10 PM
to ontolo...@googlegroups.com, CG
Alex,

That is another reason for having symbolic methods for analyzing the output of LLMs.  It could detect attempts to perform fraudulent or illegal operations.  Then it could report that activity to somebody at the company who could check what is being done.

It could be something innocuous, such as somebody analyzing or writing a detective story.  But it could be something illegal or dangerous.  A hybrid system that evaluates the implications of what is being done could detect such activity.
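
As an illustration of that screening idea, here is a minimal Python sketch; the keyword list and the report routine are hypothetical placeholders for what would be a curated symbolic rule base and a real human-review workflow.

    SUSPICIOUS_PATTERNS = ["ransomware", "phishing kit", "extortion"]

    def report(conversation_id: str, reason: str) -> None:
        """Placeholder: route the flagged conversation to a human reviewer."""
        print(f"[flagged] {conversation_id}: {reason}")

    def screen(conversation_id: str, llm_output: str) -> bool:
        """Return True if the output was flagged for review."""
        hits = [p for p in SUSPICIOUS_PATTERNS if p in llm_output.lower()]
        if hits:
            report(conversation_id, f"matched {hits}")   # could be innocuous, e.g. fiction
            return True
        return False

    screen("conv-42", "Here is how to package the ransomware payload ...")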

Alex Shkotin

Sep 2, 2025, 4:35:17 AM
to ontolo...@googlegroups.com, CG
John,

Exactly. And when we discuss any kind of "symbolic methods", I stand for the formalization of theoretical knowledge together with finite models for the formal theories.
In math, models are usually infinite.
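
As a toy illustration of a finite model of a formal theory (my own example, not Alex's framework): the numbers 0..4 under addition mod 5 form a finite structure whose group axioms can be checked exhaustively, which is impossible to do by brute force on the infinite model given by the integers.

    from itertools import product

    CARRIER = range(5)                     # the finite carrier set {0, 1, 2, 3, 4}
    op = lambda a, b: (a + b) % 5          # interpretation of the binary operation
    e = 0                                  # candidate identity element

    associative = all(op(op(a, b), c) == op(a, op(b, c))
                      for a, b, c in product(CARRIER, repeat=3))
    identity    = all(op(e, a) == a and op(a, e) == a for a in CARRIER)
    inverses    = all(any(op(a, b) == e for b in CARRIER) for a in CARRIER)

    print(associative and identity and inverses)   # True: the axioms hold in this finite model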

Alex


John F Sowa

Sep 2, 2025, 3:20:37 PM
to ontolo...@googlegroups.com, CG
Alex,

There are infinitely many finite models.  In fact, it's impossible to observe or compute more than a finite number of instances.  That implies that all our infinite models are purely theoretical. 

However, the theories of the (infinite) integers, real numbers, complex numbers, etc., are much easier to reason about and compute with than theories with finite bounds.  That is why we normally use the infinite theories -- they are much simpler to compute with than the finite theories.

But all observations are finite.  It's impossible to observe every effect of the immense amount of possible observations, even in a single blink of an eye.  Every theory of ontology is just a limited abstraction for some special purpose.

The most accurate ontologies are extremely limited for some special purpose.  A bank, for example, must be accurate in computing every transaction to the smallest unit of currency.  All banks have a variety of accounts -- checking, savings, and many special services.  Some large banks have many branches that offer the same services.

But different banks have very different rules and regulations for their accounts.   When two different banks merge, they never map accounts from one bank to the other.  Instead, they run all the software for both banks -- eventually, they close some accounts and transfer the funds from old versions to the newer versions that have different rules and methods of processing data.

Fundamental principle:   Every large ontology is vague and underspecified.    The only precise ontologies are very small and limited.  The reasoning methods and assumptions (axioms) are specialized.   But the reasoning itself can be done by formal logical methods.  

Small, specialized applications can be processed by computations that are just as precise as the computations for larger systems and theories.  The possible errors arise in the methods of observation and measurement.  For precision, the error bounds must be taken into account by the computing procedures.
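
As a small illustration of carrying error bounds through a computing procedure (the measurements are invented), interval arithmetic propagates the bounds automatically:

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float
        def __mul__(self, other):
            products = [self.lo * other.lo, self.lo * other.hi,
                        self.hi * other.lo, self.hi * other.hi]
            return Interval(min(products), max(products))

    length = Interval(9.95, 10.05)    # a length measured as 10.0 +/- 0.05
    width  = Interval(4.90, 5.10)     # a width measured as 5.0 +/- 0.10
    area = length * width             # the error bounds propagate with the product
    print(f"area is between {area.lo:.3f} and {area.hi:.3f}")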

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>
Sent: 9/2/25 4:35 AM
To: ontolo...@googlegroups.com
Cc: CG <c...@lists.iccs-conference.org>
Subject: Re: [ontolog-forum] Page added to AI and psychological impact

Alex Shkotin

Sep 3, 2025, 8:57:42 AM
to ontolo...@googlegroups.com, CG

John,


For us, the most interesting case is when both the theory and its model are in the computer, and as data, not as hardware.

And we have some algorithms to process the theory as theoretical knowledge, for example, to semi-automatically prove its theorems. And also to process the model itself, which is primarily some finite mathematical structure satisfying the axioms of the theory. But in addition, it is tied to reality. And the calculations that we perform on it tell us something about reality.

The CODIL structure is some graph, i.e. a mathematical representation of knowledge about the world, i.e. a model.

But Chris has not yet described the algorithm for processing this graph in response to a request.

And this is always a question for the author of the language: where is the processor that can work with these language texts?


What theoretical and factual knowledge large ontologies contain is already a question for labor-intensive research, since they are really large.


One of the remarkable testing grounds for formal theories and their models is Geometry. There are about ten theories of mathematical objects in Euclidean space, and each drawing with various geometric figures is a model of these theories.

If we tie the drawing to reality, it will be a very useful model of reality, precisely because of its abstractness, i.e. simplicity.


It is proposed to concentrate theoretical knowledge in frameworks: each theory separately. An example for the theory of undirected graphs is here (PDF) Theory framework - knowledge hub message #1.
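
As a toy illustration (mine, not Alex's actual framework): a finite graph stored as data can be checked against the usual axioms of undirected graphs and then queried by a small "processor".

    NODES = {"a", "b", "c", "d"}
    EDGES = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "b")}   # the edge relation

    # Two usual axioms of the theory of undirected (simple) graphs:
    symmetric   = all((y, x) in EDGES for (x, y) in EDGES)
    irreflexive = all(x != y for (x, y) in EDGES)
    print("is a model of the theory:", symmetric and irreflexive)   # True

    def connected(u, v, edges, visited=None):
        """A tiny 'processor' for the model: is there a path from u to v?"""
        visited = visited or {u}
        if u == v:
            return True
        return any(y not in visited and connected(y, v, edges, visited | {y})
                   for (x, y) in edges if x == u)

    print("path a..c:", connected("a", "c", EDGES))   # True
    print("path a..d:", connected("a", "d", EDGES))   # False: d is isolated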

The framework of Hilbert's theory for Euclidean geometry is just around the corner.

And then statics, as a section of mechanics, will catch up.


Alex




John F Sowa

Sep 4, 2025, 2:41:46 PM
to ontolo...@googlegroups.com, CG
Alex,
 
There is a huge variation in the requirements and options for representing different kinds of subjects.  The simplest are those that are totally and precisely represented in bit strings on a computer.  An important example is banking.  The computer data and the theoretical data can be put into an exact alignment to each other on a  digital computer --  100% accuracy.

But the physical world is a four-dimensional space-time continuum.  Nothing in the world can be represented with precision in any language or notation of any kind.  No animal, human, or extraterrestrial being can represent any aspect of the physical world with perfect accuracy.  Every notation, or drawing, or model is always an approximation of some aspects for some purpose.

That includes all representations inside the heads of people and other beasts.  Some animals represent and reason about some aspects of the world more precisely than humans do.  Dogs, for example, are far superior to humans at smells.  Dolphins are far superior to humans at detecting and communicating complex multidimensional spatial patterns and motions.

The sign languages of the deaf are much more expressive of three- and four-dimensional patterns than the usual languages that humans speak and write. Following is a book (based on a PhD dissertation) that goes into detail about the rich expressivity of human sign languages:

Linda Uyechi (2012) The geometry of visual phonology, https://typo.uni-konstanz.de/csli-konstanz/books/geometry-of-visual-phonology.pdf .

In this thesis I argue for a theoretical framework of visual phonology, the phonology of sign language, that is distinct from current theoretical frameworks of spoken language phonology. The division between sign language and spoken language phonology is motivated by an inherent asymmetry between sight and sound. Whereas a visual image can be seen in a moment, at a discrete point in time, an auditory signal requires an interval of time to hear. I demonstrate that this asymmetry is present in the phonological organization of language and argue, therefore, that language mode must be accounted for in phonological theory. 

Hence, the theoretical frameworks of visual phonology and spoken language phonology are distinct, where by theoretical framework I mean a precise use of language to capture formal properties of a well-defined object of study. I take the domain of visual phonology to be the set of natural signed languages, and the domain of spoken language phonology to be the set of natural spoken languages. Using modality-free language to compare the phonological constructs of the frameworks, I conclude that they share modality-independent properties. Those properties form the basis for articulating a theory of universal phonology, where by theory I mean a set of laws that hold over all objects in the domain of the theory – the domain of universal phonology being the set of all natural languages.

Ravi Sharma

Sep 4, 2025, 6:02:02 PM
to ontolo...@googlegroups.com
John
Great input, and I plan to respond with queries; you might have the key to a broader definition of at least audio-visual "understanding".
Regards

Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member




Alex Shkotin

Sep 5, 2025, 5:33:47 AM
to ontolo...@googlegroups.com, CG

John,


Any mathematical model is good if it describes reality with the required accuracy. Some use four-dimensional space, and some use Hilbert space. 100% accuracy is rarely needed and even more rarely achievable. But, for example, the question of how many people are in a given room right now can usually be answered accurately.


I am glad if Linda Uyechi has developed or is developing a theory of universal phonology. Then we, formal ontologists, only have to formalize this theory.

In the meantime, we have to formalize the axiomatic theory of Euclidean geometry as presented by Hilbert. And on its basis, statics as presented in Landau and Lifshitz, Course of Theoretical Physics, Vol. I, Mechanics.

Takeaway: a formal ontologist does not create new knowledge; he only systematizes and mathematically records knowledge created by others, since in this form the knowledge can be processed by algorithms with 100% accuracy.


Alex




John F Sowa

Sep 5, 2025, 9:58:57 AM
to ontolo...@googlegroups.com, CG
 Alex,

I agree that models are important.   But every formal model is an approximation of some aspect of reality for some particular purpose.  Engineers have an excellent summary of the issue:  "All models are wrong, but some are useful."  Wikipedia has a good article on that point.  Brief excerpt:

The phrase "all models are wrong" was attributed[1] to George Box who used the phrase in a 1976 paper to refer to the limitations of models, arguing that while no model is ever completely accurate, simpler models can still provide valuable insights if applied judiciously.[2]: 792  In their 1983 book on generalized linear models, Peter McCullagh and John Nelder stated that while modeling in science is a creative process, some models are better than others, even though none can claim eternal truth.[3][4] In 1996, an Applied Statistician's Creed was proposed by M.R. Nester, which incorporated the aphorism as a central tenet.[1]


I recommend that article, which has good insights and a list of good references for further reading.  But I want to emphasize several issues:  (1) It's impossible for any model to represent everything about anything in the physical world.  (2) Every model is designed for some particular purpose for some particular aspect of some subject.  (3) Even for the same subject, there will be many different models for different purposes. (4) Different people (or even the same person) will have many different models of the same subject, for different purposes, from different points of view.

As an example, consider an engineer who is designing a bridge across a river.  There must be many models to start:  the locations at each end where the bridge will be attached; the river bottom and upper surface; the kind of mud, sand, and rock beneath the surface; the bedrock at the bottom for the foundation of the piers; the design of the bridge; the many components implied by that design; the composition, strength, tension, or pressure of each component and each part of each component; etc.
    
Then the committee that approves the design will need models that consider the cost, the methods of construction, the views of the bridge by the people driving cars or riding in trains, the views of the bridge by people on both ends of the bridge or by the boats and ships going under it, the possibility that some of them might crash into the piers, the methods for protecting the piers to avoid damage or destruction, etc.

Summary:  Just for one single project, the number of different models for different purposes can be enormous.  All of them will be different, and the mapping from one to the other will be complex -- often too complex to be computed effectively.

There is no such thing as a single general purpose model for a bridge -- or for any large project of any kind.  Even small projects will require multiple models for using or working with something as simple as a knife, a broom, or a chair.

And none of those models will be perfect.  All of them would need to be modified or even rejected when a different purpose is being considered.

John
 


___________

Alex Shkotin

Sep 5, 2025, 1:10:54 PM
to ontolo...@googlegroups.com, CG

John,


Formalization, that is, the construction of formal theories and of mathematical objects and structures as their models, is a very specific activity. Formalizers are not satisfied even with an axiomatic theory, for example in the form in which Hilbert wrote it for geometry. And they will not rest until they have written out all its axioms, definitions, theorems, and proofs in some formal language: Isabelle, Coq, HOL4, etc.
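
For readers who have not seen such a formalization, here is a one-theorem sketch in Lean 4 (a system analogous to the Isabelle, Coq, and HOL4 provers Alex names); the theorem is trivial, but every step is machine-checked:

    -- A machine-checked proof that conjunction commutes.
    theorem and_comm_example (p q : Prop) (h : p ∧ q) : q ∧ p :=
      ⟨h.right, h.left⟩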

Surprisingly, this topic is close to our community, since the formal part is a significant innovation in our ontologies, without which they would be just explanatory dictionaries, reference books.

Usually our ontologies contain mainly theoretical propositions, but often, especially in the form of knowledge graphs, they also contain a bunch of facts, i.e. a model of the theory.

But the formalization approach is that some scientific or engineering text is taken and formalized.

For example, I can formalize your reasoning about building a bridge, or about banking.

However, usually a textbook or an article or an engineering report is taken.


One of the fundamental questions: what mathematical objects, systems of objects do we use as models of our theories and how do we connect them with reality. Of course, this is done with some practical purpose.

There are many interesting topics here.


Alex


