Apple pulled the plug on AI hype


John F Sowa

Jun 23, 2025, 5:00:16 PM
to ontolo...@googlegroups.com, CG
This is what I've been saying for the past two years.  LLMs are very good for generating hypotheses (good guesses), but without symbolic methods for evaluating what they find, the results are just guesses -- and hallucinations.

When I need information, I do my own searching and use Wikipedia and other reliable web sites.  I never trust anything generated by any system that uses GPT or other LLM systems -- UNLESS they use symbolic systems to evaluate anything generated by LLMs AND they provide links to the SOURCES of the information.

Our Permion system does that.

Short excerpt below.

John
__________________________

Apple Just Pulled the Plug on the AI Hype. Here’s What Their Shocking Study Found

We’re living in an era of incredible AI hype. Every week, a new model is announced that promises to “reason,” “think,” and “plan” better than the last. We hear about OpenAI’s o1, o3, and o4 models, Anthropic’s “thinking” Claude models, and Google’s Gemini frontier systems, all pushing us closer to the holy grail of Artificial General Intelligence (AGI). The narrative is clear: AI is learning to think.

But what if it’s all just an illusion?

What if these multi-billion dollar models, promoted as the next step in cognitive evolution, are actually just running a more advanced version of autocomplete?

That’s the bombshell conclusion from a quiet, systematic study published by a team of researchers at Apple. They didn’t rely on hype or flashy demos. Instead, they put these so-called “Large Reasoning Models” (LRMs) to the test in a controlled environment, and what they found shatters the entire narrative.





alex.shkotin

Jun 24, 2025, 3:47:30 AM
to ontolog-forum
John and All,

As the Medium article is paywalled, I asked ChatGPT to clarify the situation: https://chatgpt.com/s/t_685a5600a8888191ab2ccdf750abf51e
It is full of links and, of course, should be read with caution.

There is hype and anti-hype, as you know.

Alex

Tuesday, June 24, 2025 at 00:00:16 UTC+3, John F Sowa:

Alexandre Rademaker

Jun 24, 2025, 6:31:45 AM
to ontolo...@googlegroups.com, CG, ontolo...@googlegroups.com

Hi John,

Where can I read about the Permion system? 

—-
Alexandre Rademaker

On 23 Jun 2025, at 18:00, John F Sowa <so...@bestweb.net> wrote:



John F Sowa

Jun 24, 2025, 9:43:05 AM
to ontolo...@googlegroups.com
Alex,

That is true:  

Alex: There is hype and anti-hype, as you know.

Apple's anti-hype is absolutely correct about the majority of GPT promoters.

Fortunately, there are a fair number who recognize the dangers of "educated guesses" by LLM-based systems.  There are various hybrids that use symbolic methods (both traditional and AI-based) to evaluate the LLM guesses and weed out the errors and hallucinations.

The best of them come from teams that developed successful symbolic AI systems long before LLMs were invented.  They combine the best of both worlds.  Unfortunately, many of the new LLM-based developers don't have sufficient background in symbolic methods to develop the hybrids.

John
 


From: "alex.shkotin" <alex.s...@gmail.com>

John F Sowa

Jun 24, 2025, 10:34:17 AM
to Andras Kornai, ontolo...@googlegroups.com, CG
Andras,

Thanks for those two links.  They emphasize the point I have been making ever since the LLM-based methods became popular for AI.  Unfortunately, they have been so over-hyped that the term "AI" has come to mean "LLM-based technology" in popular reporting.

The symbolic AI methods date to the 1950s.  They include both precise logic-based methods and approximate statistical methods.  But all of them keep track of the sources of information so that checking and evaluation can be done.

Unfortunately, LLMs are a powerful statistical method that combines  information from an open-ended variety of sources, and they do not provide any information about the sources.

In any case, I presented a talk last year that emphasizes the good, bad, and dangerous aspects of LLM-based methods.  In short, every answer produced by LLM-based methods is an abduction (educated guess) justified by statistical reasoning.  There are many methods for improving the likelihood.  In general, the guess must be evaluated by deduction, testing, and induction.

See "Without Ontology, LLMs are Clueless", https://www.youtube.com/watch?v=t7wZbbISdyA .  This talk has had 11k views.

John
 


From: "Andras Kornai" <kor...@ilab.sztaki.hu>

Yet another sensationalized write-up of yet another badly designed naysayer study. For a critique, see https://arxiv.org/html/2506.09250v1


Andras

Kingsley Idehen

Jun 25, 2025, 11:21:01 AM
to ontolo...@googlegroups.com

Hi John and others,

On 6/24/25 9:42 AM, John F Sowa wrote:
Alex,

That is true:  

Alex: There is hype and anti-hype, as you know.

Apple's anti-hype is absolutely correct about the majority of GPT promoters.

Fortunately, there are a fair number who recognize the dangers of "educated guesses" by LLM-based systems.  There are various hybrids that use symbolic methods (both traditional and AI-based) to evaluate the LLM guesses and weed out the errors and hallucinations.

The best of them come from teams that developed successful symbolic AI systems long before LLMs were invented.  They combine the best of both worlds.  Unfortunately, many of the new LLM-based developers don't have sufficient background in symbolic methods to develop the hybrids.

John


Yep!

BTW -- I've published some additional articles about LLMs and their symbiotic relationship with Semantic Web Project related systems (e.g., Knowledge Graphs) that some may not have seen on this list.

[1] https://www.linkedin.com/pulse/large-language-models-llms-knowledge-graph-symbiosis-revisited-ty95e/ -- Large Language Models (LLMs) & Knowledge Graph Symbiosis — Revisited

[2] https://www.linkedin.com/pulse/from-complexity-clarity-how-natural-language-roles-around-idehen-jtm8e/ -- From Complexity to Clarity: How Natural Language is Transforming Software—and the Roles Around It

[3] https://www.linkedin.com/pulse/why-philosophy-eat-ai-kingsley-uyi-idehen-rdd9e/ -- Why Philosophy Will Eat AI

[4] https://www.linkedin.com/pulse/semantic-web-project-didnt-fail-waiting-ai-yin-its-yang-idehen-j01se/ -- The Semantic Web Project Didn’t Fail — It Was Waiting for AI (The Yin of its Yang)


Kingsley

 


From: "alex.shkotin" <alex.s...@gmail.com>

John and All,

As Medium article is moneywalled I asked chatGPT to clarify situation https://chatgpt.com/s/t_685a5600a8888191ab2ccdf750abf51e
It is full of links and of course should be read with caution.

There is a hype and anti-hype. You know.

Alex

вторник, 24 июня 2025 г. в 00:00:16 UTC+3, John F Sowa:
This is what I've been saying for the past two years.  LLMs are very good for generating hypotheses (good guesses), but without symbolic methods for evaluating what they find, the results are just guesses -- and hallucinations.

When I need information, I do my own searching and use Wikipedia and other reliable web sites.  I never trust anything generated by any system that uses GPT or other LLM systems -- UNLESS they use symbolic systems to evaluate anything generated by LLMs AND they provide links to the SOURCES of the information.

Our Permion system does that.

Short excerpt below.

John
__________________________

Apple Just Pulled the Plug on the AI Hype. Here’s What Their Shocking Study Found

We’re living in an era of incredible AI hype. Every week, a new model is announced that promises to “reason,” “think,” and “plan” better than the last. We hear about OpenAI’s o1 o3 o4, Anthropic’s “thinking” Claude models, and Google’s gemini frontier systems, all pushing us closer to the holy grail of Artificial General Intelligence (AGI). The narrative is clear: AI is learning to think.

But what if it’s all just an illusion?

What if these multi-billion dollar models, promoted as the next step in cognitive evolution, are actually just running a more advanced version of autocomplete?

That’s the bombshell conclusion from a quiet, systematic study published by a team of researchers at Apple. They didn’t rely on hype or flashy demos. Instead, they put these so-called “Large Reasoning Models” (LRMs) to the test in a controlled environment, and what they found shatters the entire narrative.



-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com

Social Media:
LinkedIn: http://www.linkedin.com/in/kidehen
Twitter : https://twitter.com/kidehen


Nadin, Mihai

Jun 25, 2025, 1:57:52 PM
to ontolo...@googlegroups.com

Dear and respected colleague,

Allow me to recommend to you the book by Karen Hao: Empire of AI.

Forget the narrative around Sam Altman et al. She is competent. In disclosing the various aspects of data cleaning, she comes very close to recognizing the role of ontology.

Best wishes.

 

Mihai Nadin


Jack Park

Jun 25, 2025, 2:57:56 PM
to ontolo...@googlegroups.com

Alex Shkotin

Jun 26, 2025, 3:38:00 AM
to ontolo...@googlegroups.com

John F Sowa

Jun 27, 2025, 2:54:12 PM
to ontolo...@googlegroups.com
Alex.

The following excerpt shows that ChatGPT and related systems are stupider than Kenyan workers who are paid $2 per hour:

"AI’s backbone depends on low-paid, invisible workers in the Global South (e.g., Kenya, Philippines, Venezuela), who do psychologically taxing and exploitative content moderation and data tagging."  

Those workers are not highly trained and educated people, but they are more INTELLIGENT than ChatGPT.  The fact that ChatGPT cannot do the data tagging is UNDENIABLE PROOF that it is very far from a human level of intelligence.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>
Sent: 6/26/25 3:38 AM

Alex Shkotin

Jun 28, 2025, 5:24:33 AM
to ontolo...@googlegroups.com

John,


This is one of the paradoxes of intelligence: it is based on knowledge of a huge mass of absolutely primitive, simple facts, recording each of which requires neither high qualifications nor education, only conscientiousness. But collected together, they provide the opportunity to "see an elephant" or even "act like an elephant".

I was interested in the ideas from the book. Mihai Nadin could confirm or deny that ChatGPT presented them correctly and did not miss anything. You, by using the quote for further reasoning, confirmed that you consider the idea expressed by ChatGPT to be correct.

This is the situation: the work done by a large number of conscientious, low-paid (and so on) people, plus a universal algorithm, created a system of good intelligence.

Here you can compare the Kenyan workers with the builders who have been building roads in the mountains, bridges, pyramids, and other structures since time immemorial.


Alex



Fri, Jun 27, 2025 at 21:54, John F Sowa <so...@bestweb.net>:

John F Sowa

Jun 28, 2025, 4:23:47 PM
to ontolo...@googlegroups.com, CG
Alex,

No. That is absolutely FALSE:  

Alex:  This is the situation: the work done by a large number of conscientious, low-paid and so on people, plus a universal algorithm created a system of good intelligence.


Those workers were paid to search for only one kind of dangerous material: a large amount of pornography and violence.

They did not weed out the immense amount of false and misleading information on the WWW.  Whenever you ask a question, ChatGPT may find some information from somewhere, but there is no guarantee whether what it found is true or false.  And you have no idea whether it combined parts of two true statements to create a combination that happens to be false.

Technically, what ChatGPT produces are ABDUCTIONS.  They're hypotheses or educated guesses.  But in order to be safe and secure, those guesses must be evaluated by deduction, testing, and induction.

There are some hybrid systems that use symbolic methods, which include 60 years of R & D on logic and symbolic methods of AI.  Without those methods, you cannot trust anything that ChatGPT produces.

For doing that kind of testing and checking, our VivoMind system from 2010 was far ahead of any GPT-based system today.  We did checking for things that are far more difficult to find than just porn and violence.  And our latest Permion Inc. system combines the best of the best.

If you want a second opinion, ask Kingsley.  He also does checking of anything generated by ChatGPT or related systems.

Phuc Gmail

Jun 28, 2025, 4:51:20 PM
to ontolo...@googlegroups.com


Alex Shkotin

Jun 29, 2025, 5:39:18 AM
to ontolo...@googlegroups.com, CG

John,


We don't know the full stack of genAI technologies. But we know, for example, that they use manual labor.

Here we can see something in common between their technologies and Permion technologies: you show us some pieces, but the essence is hidden, classified.

The genAI companies are looking at how far they can advance without databases and algorithms, and nothing prevents them from adding these components at some point, in addition to manual labor.

They lack world models, as Gary Marcus writes https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread.

And the question arises: where is the world model in formal ontology? Is it an A-box?


By the way, Gary mentions abstraction, and it would be worth writing out the full stack of knowledge processing operations. Abduction, deduction, induction, testing are the tip of the iceberg.


Alex



Sat, Jun 28, 2025 at 23:23, John F Sowa <so...@bestweb.net>:

alex.shkotin

Jun 29, 2025, 11:21:40 AM
to ontolog-forum
IN ADDITION

To get a feeling for the real rules of knowledge processing in mathematics, look at "Rules for transforming propositions and phrases" in Specific tasks of Ugraphia on a particular structure (formulations, solutions, placement in the framework). And for natural language rules, look at "knowledge processing - derivation rules" in https://www.researchgate.net/publication/366216531_English_is_a_HOL_language_message_1X.

And this is just the beginning.


Alex



Sunday, June 29, 2025 at 12:39:18 UTC+3, Alex Shkotin:

Ravi Sharma

Jun 30, 2025, 4:33:12 AM
to ontolo...@googlegroups.com
John's comment about Apple and AI led me to the link, and there is an interesting connection with situational awareness: the tool, you the user, and deciphering what you are thinking in order to cast you as a person entity. (Ken, please note: this relates to your favorite topic of SA.)
So does the AI engine watching you create any extraordinary problem, more than when we sign a waiver of privacy, for example?
Regards

Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary, ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member



John F Sowa

Jun 30, 2025, 2:20:40 PM
to ontolo...@googlegroups.com, CG
Ravi,

The Big AI Engine isn't watching people.  It's just gathering as much as it can of everything on the WWW and making it available to anybody and everybody who wants to look for anything they can about anybody, including you.

As those studies show, even Elon Musk, the developer of the biggest system, can't control what it says about himself.

In one sense, that's humorous.  But when you think about it, that could be you who is being discussed in ways that are very bad or dangerous.

John
 


From: "Ravi Sharma" <drravi...@gmail.com>

Ravi Sharma

Jun 30, 2025, 3:06:20 PM
to ontolo...@googlegroups.com, CG
John
Appreciate the analysis.
Thus there should be built-in controls, like we have in various settings, as to what is allowed to be said and what is not, in the sense of civility. Does that mean filtering through NLP?
Does the EU enforce that type of filter today? Or anyone else?
Regards.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary, ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member



John F Sowa

Jun 30, 2025, 9:05:02 PM
to ontolo...@googlegroups.com, CG
Alex,

That statement is absolutely false:

Alex:  and it would be worth writing out the full stack of knowledge processing operations. Abduction, deduction, induction, testing are the tip of the iceberg.

No!!!!!    They are much bigger than the iceberg.  Those four steps include every possible kind of reasoning by humans, computers, or the most advanced extraterrestrials anywhere in the universe.

LLMs, for example, do abduction.  That is the ONLY operation they can perform. 

For machine translation, the abduction is just one short step from the original, and it can be quite accurate.

But any other kinds of reasoning must supplement LLMs with symbolic methods that do various kinds of deduction, testing, and induction.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

John,


Kingsley Idehen

Jul 1, 2025, 6:48:27 PM
to ontolo...@googlegroups.com

Hi John,

On 6/30/25 2:20 PM, John F Sowa wrote:
Ravi,

The Big AI Engine isn't watching people.  It's just gathering as much as it can of everything on the WWW and making it available to anybody and everybody who wants to look for anything they can about anybody, including you.

As those studies show, even Elon Musk, the developer of the biggest system, can't control what it says about himself.

In one sense, that's humorous.  But when you think about it, that could be you who is being discussed in ways that are very bad or dangerous.

John


Absolutely! Unfortunately, this is going to start happening to a variety of innocent people sooner rather than later.

LLMs need to be properly confined to their rightful role—as new and useful multimodal natural language processing components within the UI/UX stack. That’s where their true value lies.

Kingsley

 


From: "Ravi Sharma" <drravi...@gmail.com>

John's comment about Apple and AI led me to the link: and there is the interesting link with situational awareness i.e. tool, you the user and deciphering what you are thinking to cast you as a person entity? (Ken please note on your favorite topic of SA)
So does the AI engine watching you create any extraordinary problem more than when we sign a waiver of privacy for example?
Regards

Thanks.
Ravi

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

Alex Shkotin

Jul 2, 2025, 5:22:26 AM
to ontolo...@googlegroups.com, CG

John,


Reduction, one of the ways of thinking, is a feature of mathematical logic, and in many cases it is useless.

It is very possible that any reasoning can be reduced to "Abduction, deduction, induction, testing", but I doubt it. And anyway, we need to study how.

We should begin from the rules of reasoning we use in everyday life, in physics, and in other sciences and technologies.

To get a feeling for the real rules of knowledge processing in mathematics, look at "Rules for transforming propositions and phrases" in Specific tasks of Ugraphia on a particular structure (formulations, solutions, placement in the framework). [1]

And for natural language rules, look at "knowledge processing - derivation rules" in https://www.researchgate.net/publication/366216531_English_is_a_HOL_language_message_1X. [2]

And this is just the beginning. By the way, generalization is a powerful knowledge processing technique.

We can take any everyday task, such as setting a goal to drink a cup of tea, create a plan (an algorithm for doing so), and look at the rules of knowledge processing we use, step by step, to create this plan.


Abstraction is one of the most important classes of mental processing.


For me your way of thinking is REDUCTIONISM🦉


Alex

[1]

Summary of rules used


rule | short description | use and comment
"quantifier expansion" | finitistic development by run-in ∧, +. | 2: _INC00 CLC8_4
"subtask" | separating a subtask into a separate block | 26: found in almost all solutions.
"substitution" | DEFINITIONS | 4: FCT4_1-1 подзадача
"split" | statements using "and" (2), "equal" (2), "plus" | 5: FCT8_6 FCT4_1 FCT4_1
"reformulation" | to standard form | 2: FCT4_1-1 подзадача
"interpretation" | transition from the terminology of theory to the terminology of structures | FCT4_1-1 подзадача FCT4_1-1 подзадача FCT4_1-1 подзадача
"choice" | it is determined whether there is an element in the set that satisfies a given condition | 2: FCT4_1-1 подзадача
"selection" | subset from set | 2:
"obviously" | mental action with elements, parts of phrase | 3: FCT4_1 FCT4_1-1 подзадача
... | ... | ...
"count" | in the mind | 2:

[2]

knowledge processing - derivation rules

Derivation rules of a particular domain of knowledge should be studied separately.

We have examples of FOL derivations and can study these forms for operational sentences.

For example,

Every human is mortal. Socrates is human. Hence "Socrates is mortal."

((Every human) is mortal) --"Every" - puo,  "is" - bo.

(Socrates is human)

Hence

(Socrates is mortal).

Derivation rule behind "hence".

We need to substitute "Socrates" instead of "(Every human)"

and generalization is

Whenever you have schemas `((every X) is Y)` and `(Z is X)` applied, i.e. X, Y, Z have a specific value, applying the first schema to the second gives `(Z is Y)`:

((every X) is Y),  (Z is X) |= (Z is Y) --algorithm: {substitute Z instead of (every X)}

task FOL - Knowledge Processing

What is the operational form for FOL derivation rules?

comparison with CFG language

If we stay for a while with FOL statements, they have a simple CFG, and the structure behind them is a derivation tree with non-terminals in the internal nodes. But first of all, FOL has additional variables:

((Every human) is mortal) == (Every x:human mortal(x))

(Socrates is human) == human(Socrates)

(Socrates is mortal) == mortal(Socrates)

And we substitute "Socrates" for the second occurrence of "x", eliminating the Every-phrase!
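As an illustration only (not part of the original article), the rule behind "hence" is a one-step, machine-checkable inference; in Lean 4, with Human, Mortal, and Socrates as example names:

-- From (Every human is mortal) and (Socrates is human), infer (Socrates is mortal).
example {α : Type} (Human Mortal : α → Prop) (Socrates : α)
    (everyHumanIsMortal : ∀ x, Human x → Mortal x)
    (socratesIsHuman : Human Socrates) :
    Mortal Socrates :=
  everyHumanIsMortal Socrates socratesIsHuman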



Tue, Jul 1, 2025 at 04:05, John F Sowa <so...@bestweb.net>:

John F Sowa

Jul 2, 2025, 11:12:16 AM
to ontolo...@googlegroups.com, CG
Alex,

That  cycle of Abduction, Deduction, Testing, and Induction is not "my" way of thinking.  It is the UNIVERSAL way of reasoning by every intelligent being everywhere in the universe for all time.   The version I discussed was formulated by Charles Sanders Peirce, and it is the foundation for scientific reasoning on any topic of any kind.

Alex:  It is very possible that any reasoning can be reduced to "Abduction, deduction, induction, testing" but I doubt it. 

Any method you can find or imagine fits into that framework.  Deduction, Testing, and Induction are traditional terms from ancient times.  Peirce introduced the term abduction, but it covers the same way of thinking that is also called hypothesis, educated guess, or invention.

Alex:  look at "Rules for transforming propositions and phrases"

All those rules are versions of deduction.

Alex:  for natural language rules look at "knowledge processing - derivation rules" in https://www.researchgate.net/publication/366216531_English_is_a_HOL_language_message_1X

That is just an article that maps deduction in logic to and from deduction in a version of English.  Aristotle did that for Greek.  Ockham did it for Latin.  Symbolic AI systems have been doing that for the past 60 years.

Alex:  Abstraction is one of the most important classes of mental processing.

Literally, "abstraction" means "taking away".   When you do any kind of reasoning, the first step is to take away the irrelevant details to specify the basic steps in the reasoning process.   Then you determine which steps belong to each of the four stages in the cycle:  Abduction, Deduction, Testing, and Induction.

Another name for those four steps is the Action-learning cycle:  Learn=Abduction, Plan=Deduction, Act=Test, Reflect=Induction.  Other people call it the scientific method, and they may add more substages.  The basic ideas are as old as Aristotle, and they have been rediscovered and renamed in various ways.

Another example is the OODA loop by John Boyd (Observe=Induction,  Orient=Abduction, Decide=Deduction,  Act=Test).  Just google "OODA loop" for many discussions. 

Some ways of thinking, such as LLMs, don't cover all four stages.  In fact, LLMs by themselves just do abduction -- guessing.  That is why they hallucinate -- they don't do the deduction and testing.

For machine translation, LLMs are very good because the first stage (abduction or guessing) stays very close to the original.  Therefore, the translation is very close to the source.  But when multiple guesses are involved, errors are inevitable.  That is why you need deduction and testing.  The fourth stage of induction adds the new information to the knowledge base so that it becomes available for  reuse.

Symbolic AI  systems have been developing versions of all four stages.  Wolfram's system, for example, supports all four stages, but they're expressed in a very formal notation.  He and his gang have used LLMs to support an English language front end that makes it easier to learn and use.  

Alex:  For me your way of thinking is REDUCTIONISM.

Whenever you map an informal description of a problem to some formal method,  it's inevitable that you map phrases or even sentences from the original text to symbols like x, y, z.  You may call that a reduction, but you have to do that with any formal notation for any purpose.  

Challenge:  If you think that some method of reasoning doesn't fit in that four-step cycle, describe it or send a reference, and I'll show exactly how to map it to that cycle or  to some part of the cycle.

John
 


John F Sowa

Jul 2, 2025, 4:15:53 PM
to ontolo...@googlegroups.com, CG
Kingsley,

Thanks for emphasizing that point:

KI:  LLMs need to be properly confined to their rightful role—as new and useful multimodal natural language processing components within the UI/UX stack. That’s where their true value lies.

Yes.  Those systems sound intelligent because they produce answers in clear, syntactically correct sentences.  But clarity does not imply truth.  

There are 60 years of traditional symbolic methods of AI that can do the deduction and testing.  But there are many, many different ways of combining those methods with LLMs to form hybrid systems.

Some methods are better than others, and there is a lot of research & development that is being implemented and tested.

John
 


From: "Kingsley Idehen' via ontolog-forum" <ontolo...@googlegroups.com>

Hi John,

Kingsley Idehen

Jul 2, 2025, 7:18:37 PM
to ontolo...@googlegroups.com

Hi John and others,

On 7/2/25 4:15 PM, John F Sowa wrote:

Kingsley,

Thanks for emphasizing that point:

KI:  LLMs need to be properly confined to their rightful role—as new and useful multimodal natural language processing components within the UI/UX stack. That’s where their true value lies.

Yes.  Those systems sound intelligent because they produce answers in clear, syntactically correct sentences.  But clarity does not imply truth. 

Yes!


There are 60 years of traditional symbolic methods of AI that can do the deduction and testing.  But there are many, many different ways of combining those methods with LLMs to form hybrid systems.

Some methods are better than others, and there is a lot of research & development that is being implemented and tested.

John

LLMs are now in very dangerous territory for untrained (or uninitiated) users. The marketing of LLMs as some kind of human-like intelligence is not just irresponsible—it’s downright dangerous.

As you know, man-made tools are simply that: tools. They exist to overcome operational inefficiencies understood by their creators. I’ve had numerous experiences with LLMs that genuinely terrify me, particularly due to their sleight-of-hand tendencies—subtle changes introduced either by hallucinations or biases in their training data that everyday users will never catch.

The problem? Everyday users are everywhere across the command hierarchies of organizations that impact large groups of people—employees, national citizens, families—you name it.

If LLMs were marketed for what they really are—UI/UX stack additions—the risk would be significantly lower.

My most recent and ironic example of LLM-related dangers happened while exploring the seminal AI conference at Dartmouth College (1956). I’ve captured the full story in a LinkedIn post here: [1].

[1] Comment I posted to a Discussion about AI Hype

I've also attached a recent comic strip I knocked up using ChatGPT where I refer to LLMs as Langulators 🙂

Kingsley

 


From: "Kingsley Idehen' via ontolog-forum" <ontolo...@googlegroups.com>

Hi John,
On 6/30/25 2:20 PM, John F Sowa wrote:
Ravi,

The Big AI Engine isn't watching people.  It's just gathering as much as it can of everything on the WWW and making it available to anybody and everybody who wants to look for anything they can about anybody, including you.

As those studies show, even Elon Musk, the developer of the biggest system, can't control what it says about himself.

In one sense, that's humorous.  But when you think about it, that could be you who is being discussed in ways that are very bad or dangerous.

John

Absolutely! Unfortunately, this is going to start happening to a variety of innocent people sooner rather than later.

LLMs need to be properly confined to their rightful role—as new and useful multimodal natural language processing components within the UI/UX stack. That’s where their true value lies.

Kingsley


From: "Ravi Sharma" <drravi...@gmail.com>
 
John's comment about Apple and AI led me to the link: and there is the interesting link with situational awareness i.e. tool, you the user and deciphering what you are thinking to cast you as a person entity? (Ken please note on your favorite topic of SA)
So does the AI engine watching you create any extraordinary problem more than when we sign a waiver of privacy for example?
Regards

Thanks.
Ravi

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
calculator-and-langulator-2.png

Alex Shkotin

Jul 3, 2025, 6:54:00 AM
to ontolo...@googlegroups.com, CG

John,


From "All those rules are versions of deduction." I got that you are simply classifying any real particular rule of knowledge processing under one of four titles. With a very broad sense for each. Like "Learn=Abduction, Plan=Deduction, Act=Test, Reflect=Induction." or "Observe=Induction,  Orient=Abduction, Decide=Deduction,  Act=Test". This is a separate topic on how to classify real rules of knowledge processing.

Let me just point out that in https://plato.stanford.edu/ we have an article only for abduction https://plato.stanford.edu/entries/abduction/. But even in this case the meaning differs:"In the philosophical literature, the term “abduction” is used in two related but different senses. In both senses, the term refers to some form of explanatory reasoning. However, in the historically first sense, it refers to the place of explanatory reasoning in generating hypotheses, while in the sense in which it is used most frequently in the modern literature it refers to the place of explanatory reasoning in justifying hypotheses. In the latter sense, abduction is also often called “Inference to the Best Explanation.”"

The absence of an article for deduction and induction (sorry, I did not search for "test") means to me that they get their meaning only inside one or another theory.


The topic of what kind of rules of knowledge processing we have in sciences, technologies, and everyday life is much more important for me.


I would be happy if you could classify the rules I have found on my way to knowledge concentration and formalization.


Let's restrict our knowledge processing to text. In this case, a rule is how to take one text and create a new one using only this text and perhaps some parameters.

Usually, a rule has just one sentence as input and one or more sentences as output.

If we look at this sequence of sentences itself, we have something logical (see the third column below). And the rule applied at every step is a way to get this sequence.

For example, let's take a step back to https://www.researchgate.net/publication/374265191_Theory_framework_-_knowledge_hub_message_1 because a proof is the simplest way of knowledge processing.

I took a sequence of sentences from "Theory framework - knowledge hub. message #1 (-:PUBLIC:-)", which is the latest version of the article.


lang | # | sentence | premises | method of inference
eng | 1 | each simple edge contributes 2 to the sum of the degrees of the vertices of the graph. | [simpleE] | "a-priory"
eng | 2 | each loop contributes 2 to the degree of its vertex. | [d] | "a-priory"
eng | 3 | Each edge of the graph makes a contribution equal to two to the sum of the degrees of the vertices of the graph. | [1 2] | "union"
eng | 4 | In any graph g, the sum of the degrees of the vertices of the graph g is equal to twice the number of edges of the graph g. | [3] | "summation"
 

  "In this case, the section of each language, in addition to the language identifier, contains the number of the sentence in this section.

Moreover, now for each sentence, either the identifier of the frame element on which the sentence is based is indicated (usually this is a definition), or the numbers of the sentences of the given proof from which the current sentence follows. This column is called "premises".

The last column indicates how the current sentence is obtained from the sentences specified in the list of premises. This column is called "method of inference". The study of real methods of inference found in proofs is a separate important work, because these are not usually the rules of inference of formal logic.

Rules in action

The statement eng.1 is based on the definition of the term “simple edge”, which has the identifier simpleE in the framework. And in the column “method of inference” it is indicated “a-priory”. In eng.2 parameter [d] refers to the definition of term d from which we can derive our sentence. 

Getting eng.4 from eng.3 is conventionally called “summation”, as a type of reformulation, because it is obvious that both statements are equivalent.

And getting eng.3 from eng.1 and eng.2 is called "union" to emphasize that this is not just LOGICAL AND of two statements but includes some reformulation. 

"

This proof in the framework is here proof Pr1_1__1 Th1_1


I would be happy to get your analysis and classification for these three rules: "a-priory", "union", "summation".


Let me point out that a solution for a task has a tree structure, not a sequence. You may find them in Specific tasks of Ugraphia on a particular structure (formulations, solutions, placement in the framework). Let me cite the descriptions of the rules you put under the deduction umbrella [1].


I am looking forward to aligning our terminology, i.e., understanding each other properly.


Alex


[1] from Specific tasks of Ugraphia on a specific structure(-:PUBLIC:-)

Rules for transforming propositions and phrases

There are many ways to get from one text to another or to several others. It is assumed that the processing of the original text is replaced by the processing of the texts obtained from it, and we know how to process the resulting texts, i.e., having received their values, we can get the value of the original one. We will call such processing methods rules.

The texts resulting from applying the rules are called subtasks.

The following are descriptions of different rules.

"subtask"

A rule named “subtask” only indicates that this subtask is separated into a separate task, i.e. must have its own solution block, and if it exists, it is indicated in the “parameters” column (see below), and if the cell keeps "???", then the corresponding task has not yet been added to the framework! Strictly speaking, the task block has not been completed to the end.

"interpretation"

This rule makes the transition from terms of theory to terms of structures.

For example, in "There exists x in U such that _e1 is incident to x." the term "incident" is introduced in the theories of Biria and used in Ugraphia for inc global variable. Knowing its “binding” with inc, we can interpret the statement as “There exists x in U such that (_e1 x) in inc.” where the term “incident” is not used.

It is important to emphasize that no terms of theories are used in the resulting statement!


"substitution" 

This rule refers to the substitution of the definition of a term at the place of its use. Typically, a definition consists of a precondition formulated in a sentence beginning with the word “Let” and the definition itself, consisting of a phrase using the term, a syntactic connective (for example, “if and only if”) and a determinant - a phrase that specifies the meaning of the term. Definitions of terms are given in one theory or another - in our case, it is Ugraphia. From a programming point of view, a definition is a macro command, and a “substitution” is a macro substitution, and sometimes, the substituted text itself (preconditions and determinant) is modified.

The substitution comes down to the fact that the statements of the precondition form a linear block, and the determinant - a node of the decision tree.

For example, consider the statement

_e1 parallel to _e4.

It uses the term from Ugraphia - “parallel”.

Applying its definition:

eng

Let e1 e2 denote edges. e1 is parallel to e2 if and only if e1 and e2 denote different edges and e1 has the same endpoints as e2.

We get two precondition statements:

_e1 is an edge.

_e4 is an edge.

which need to be checked for truth, 

and a subtask:

_e1 and _e4 are different and _e1 has the same endpoints as _e4.

It is easy to see that the actual parameters of the place where the term is used, _e1 and _e4, are substituted in place of the formal parameters e1, e2 of the “macro command” of the definition.
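A minimal illustrative sketch of this macro substitution in Python (the template strings and function name are invented for illustration, not taken from the framework itself):

# "substitution" as macro expansion: the formal parameters e1, e2 of the
# definition of "parallel" are replaced by the actual parameters at the place of use.

PRECONDITIONS = ["{e1} is an edge.", "{e2} is an edge."]
DETERMINANT = "{e1} and {e2} are different and {e1} has the same endpoints as {e2}."

def substitute_parallel(actual_e1, actual_e2):
    preconditions = [p.format(e1=actual_e1, e2=actual_e2) for p in PRECONDITIONS]
    subtask = DETERMINANT.format(e1=actual_e1, e2=actual_e2)
    return preconditions, subtask

# substitute_parallel("_e1", "_e4") returns
#   (["_e1 is an edge.", "_e4 is an edge."],
#    "_e1 and _e4 are different and _e1 has the same endpoints as _e4.")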


+$+"quantifier expansion"

A quantifier always runs over some set or sequence and is expanded by running into an operation on statements or phrases written for each element of the set - an excellent move to finitism.

For example, consider the expansion of the second quantifier "every" in

"every member of every pair of inc belongs to U."

we need to run by inc and, in our case, see the __inc block of the framework; we will get from inc for the first element:

"every member of (_e1 _v1) belongs to U." 

etc., for each pair in inc.

Notes. For example, the finitistic way for the quantifier expansion in mathematical logic can be found in Esenin-Volpin's works. And it can be stated using an example like this

Let S be {e1 e2} and p() be a unary predicate on S. Then

"∀x:S p(x)" expands into "p(e1)∧p(e2)"

the generalization to the case of any finite number of elements in non-empty S is obvious.

Thus, the quantifier statement is expanded into several more specific statements and the meaning of the original statement is obtained by applying the operation ∧ or +, etc., to the expanded values.
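For illustration, a minimal Python sketch of the same finite expansion (the names are invented for the example):

# "quantifier expansion" as a finite computation:
# "forall x in S, p(x)" expands into p(e1) ∧ p(e2) ∧ ... over the elements of S.

def expand_forall(S, p):
    instances = [p(x) for x in S]   # one instance of the statement per element
    return all(instances)           # the run-in conjunction of the instances

# Example with S = {e1, e2} and p(x) = "x is an edge identifier":
S = ["e1", "e2"]
print(expand_forall(S, lambda x: x.startswith("e")))   # True, i.e. p(e1) ∧ p(e2)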

"split"

This is a situation when a conjunction such as “and” or the words “equals”, “plus”, and the like is applied to two specific statements or phrases in the original statement.

For example, in 

"_e1 and _e4 are different and _e1 has the same endpoints as _e4." 

we split along the second “and”, obtaining two statements - subtasks: 

"_e1 and _e4 are different."

"_e1 has the same end vertices as _e4."



"reformulation"

According to this rule, various free texts in NL are converted into equivalent but more regular ones.

For example, 

"_e1 is incident to some element from U."

becomes

"there exists x in U such that _e1 is incident to x."

"choice"

Execution of a rule consists of SEARCHING in a set and determining the presence or absence of an element that satisfies a given condition.

For example, applying the "choice" rule to a statement

" there is x in U such that (_e1 x) in inc. "

consists of running through U and checking for the current element that it, paired with _e1, is present in inc. If such an element is found, then the statement is considered valid; otherwise - it is false.

"selection"

A subset of elements is selected from a specific set according to some criterion.

for example, in the phrase 

" number of elements of U such that it's an edge and simple and incident _v1."

Before counting the quantity, it is necessary to obtain a subset of U with the described elements, and then the phrase is converted into

" number of elements in list (_e1, _e2, _e4) ".

"obviously"

With text, the mental action of obtaining its value is performed.

For example, it is obvious that the following statements are true:

"_e1 and _e4 are different."

"_e1 is an element of U."

"(_e1 _v1) in inc."

However, you need to look at U and inc in the last two cases, respectively.


"count"

The calculation formulated in the phrase is performed in the mind.

For example, 

counting "number of elements in list (_e1, _e2, _e4)" will give the result 3.


Wed, Jul 2, 2025 at 18:12, John F Sowa <so...@bestweb.net>:

John F Sowa

Jul 3, 2025, 6:37:55 PM
to ontolo...@googlegroups.com, CG
Alex,

There is much much more to say about all of these issues.   I recommend a five-day course that I taught on multiple occasions.  I presented the first version at  a research center in Malaysia, and I reused and revised the slides for other courses elsewhere:

See https://jfsowa.com/talks/patolog1.pdf and parts 2, 3, 4, and 5.  That was for a 5-day short course:  lectures based on the slides in the mornings, and open-ended discussions in the afternoons.

I would have to add a part 6 to deal with the new material on AI.  But those 5 lectures go into much more detail on many more issues than we have been discussing in these emails.  And note that each lecture concludes with a list of references for further information on related topics.  Many of the slides also contain links to additional material on related issues.

As for your note below (which I have drastically shortened), there is much, much more to say.  See those five Patolog lectures and the references in each one.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

John,

Alex Shkotin

Jul 4, 2025, 4:39:43 AM
to ontolo...@googlegroups.com, CG

John,


We all are busy with our own projects and ideas. So I was really pleasantly surprised with your "If you think that some method of reasoning doesn't fit in that four-step cycle, describe it or send a reference, and I'll show exactly how to map it to that cycle or  to some part of the cycle."

On the other hand, a classification is just a classification, but a cigar can be smoked.

Finally, let me give you an example of what solving a problem looks like in a case more complex than proving a theorem. [1]


I am on the way to doing the same for Statics.


Alex


[1] Specific tasks of Ugraphia on a specific structure(-:PUBLIC:-)

FCT4_1. Task z4-1. Fact and "proof" 

In this block, we again encounter lines that do not have a “tree” identifier in the second column of the row. See details below in the next section, “(t)description of the structure of the solution block”.

In addition, the values in the “value” column (#4) of lines 1.2.1 and 1.2.2 are not simple but composite - ordered pairs.

We begin in this block from the YAFOLL statement and in the rule column keep an id of the Interpreter to call (Yp).

Operation ↑

lang | place | statement | value | parameter | rule
yfl | 0. | ?parallel(_e1 _e4)? | TRUE | | Yp
eng | 0.↑ | _e1 is parallel to _e4. | TRUE | | "substitution"
eng | | _e1 is an edge. | TRUE | FCT4_1-1 подзадача | "subtask"
eng | | _e4 is an edge. | TRUE | ??? | "subtask"
eng | 1.∧ | _e1 and _e4 are different and _e1 has the same endpoints as _e4. | | | "split"
eng | 1.1. | _e1 and _e4 are different. | TRUE | | "obviously"
eng | 1.2. = | _e1 has the same endpoints as _e4. | TRUE | | "reformulation"+"subtask"
eng | 1.2.1. | endpoints of _e1 | (_v1 _v2) | ??? | "subtask"
eng | 1.2.2. | endpoints of _e4 | (_v1 _v2) | ??? | "subtask"


(substitution nuances)

"e1 and e2 denote different edges" in the definition turns(!) into "_e1, _e4 are different." i.e. “edges” goes away and “denote” turns into “are”; which of course, strictly speaking, is a “reformulation” within the substitution.

(about the solution in general)

Strictly speaking, the solution has not been completed, because three subtasks do not have a link to a solution block; a filled block in a framework should not normally look like this: in a framework block, each "subtask" will have a link to its block as a parameter. Thus, the presented framework should be considered educational. The fact that, despite the absence of a solution block in the "parameter" cell (#5), there is a value in the "value" cell (#4), indicates that instead of the "subtask" rule, the "obviously" rule was applied, i.e., the meaning was obtained in the mind.

(t)Description of the structure of the solution block

A chain of lines that do not indicate a place in the decision tree represents a linear section of requirements checks from the precondition in the definition of the substituted term. This linear section can most easily be attached to a tree node and located, for example, to the right of the node at the same level.

The first subtask has a link to its solution block, where TRUE was received.



Fri, Jul 4, 2025 at 01:37, John F Sowa <so...@bestweb.net>:

John F Sowa

Jul 4, 2025, 11:47:35 AM
to ontolo...@googlegroups.com, CG
Alex,

Please read my response to the note by Dan Brickley.

The methods you are suggesting won't solve the problems that caused Apple to cancel their plans to develop an upgrade to Siri.

There are various people and companies that have been developing methods that are suitable for the technology they have been using.  But the general problem of detecting ALL the erroneous and dangerous texts on the WWW is extremely important.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

John,

