Causality and Machine Intelligence: Real/Causal AI as an Ontology Engineering Machine


Azamat Abdoullaev

Oct 6, 2022, 4:47:43 PM
to ontolog-forum, ontolog...@googlegroups.com
Who first builds Causal AI rules the world.
“Machines’ lack of understanding of causal relations is perhaps the biggest roadblock to giving them human-level intelligence.”
Judea Pearl, Turing Award winner and AI pioneer

“Causality is very important for the next steps of progress of machine learning.”
Yoshua Bengio, Turing Award winner and “Godfather of Deep Learning”

“Causal AI is a key enabler of the next wave of AI, where AI moves toward greater decision automation, autonomy, robustness and common sense.”
Gartner, Analyst Firm
Fig 1. Gartner includes Causal AI in its 2022 Hype Cycle for Emerging Technologies. Based on deep research, surveys, and conversations with 12,000 organizations globally, Gartner’s analysis is an objective assessment that early adopters of Causal AI can net outsized benefits. 

John F Sowa

Oct 7, 2022, 12:19:39 AM
to ontolog-forum, ontolog...@googlegroups.com
Azamat,
 
I agree that causality is very important.  But in 2019, I commented on talks by Hinton, LeCun, and Bengio, and I explained why they were not going to make any breakthroughs with the technology they were using.  Three years have passed, and they have not shown any progress.  I confidently predict that they won't make any progress by continuing with the methods they were using then and are apparently continuing to use.
 
Following are the slides I presented in 2019 with some additions that I made earlier this year:  https://jfsowa.com/talks/HintonLeCun.pdf
 
John
 
 

From: "Azamat Abdoullaev" <ontop...@gmail.com>
Sent: Thursday, October 6, 2022 4:48 PM
To: "ontolog-forum" <ontolo...@googlegroups.com>, ontolog...@googlegroups.com
Subject: [Ontology Summit] Causality and Machine Intelligence: Real/Causal AI as an Ontology Engineering Machine

Neil McNaughton

Oct 7, 2022, 2:54:31 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com

>> “Gartner’s analysis is an objective assessment”

Really? Who says? And the “hype curve”? A fantasy. A (poor) editorial/puffery in a graph!

 

Best regards,

Neil McNaughton

Editor Oil IT Journal – www.oilit.com

Recent readers’ testimonials

The Data Room SAS

7 Rue des Verrieres

92310 Sevres, France

Landline: +33 1 46 23 95 96

Cell: +33 6 72 71 26 42

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ontolog-forum/CAKK1bf867-_5v3_g6TF11F6Zob3ej8hq__vGKCBTcQqBfdkqVA%40mail.gmail.com.

Azamat Abdoullaev

Oct 7, 2022, 4:17:53 AM
to ontolo...@googlegroups.com

John F Sowa

Oct 8, 2022, 12:30:21 AM
to ontolog...@googlegroups.com, ontolog-forum
Azamat,
 
I read the article by Persianov, and I agree that many people just use the words AI or NNs as buzzwords that make their work sound more impressive -- in the hope that they can get funding from gullible people.  That is why I *never* talk about whether something is AI or not AI.  That question gets bogged down in meaningless arguments about charlatans.  It's a total waste of time.
 
And by the way, I got an offline note from somebody who pointed out that Bengio and others are getting very good results with NNs for reasoning about causality.  I followed the pointers, and what they're doing is using hybrid systems: NNs do the pattern recognition, and good old-fashioned physics handles the causality.
 
Summary:  NNs cannot do causal reasoning, but computational physics can do physics.  By combining the two, you can get a useful HYBRID system that combines the strengths of both.
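[Editorial illustration of the hybrid pattern described above. The code is not from Sowa's slides or Bengio's work; a least-squares fit stands in for the NN "perception" stage, and plain kinematics plays the role of the causal physics stage. All function names are my own.]

```python
import random

G = 9.81  # m/s^2

def perceive_initial_velocity(times, heights, h0):
    """Perception stage: estimate launch velocity v0 from noisy height data.

    Model: h(t) = h0 + v0*t - 0.5*G*t^2, so v0 is the least-squares
    solution of (h - h0 + 0.5*G*t^2) = v0 * t.  A trained NN could fill
    this role; here a closed-form fit keeps the sketch self-contained.
    """
    num = sum(t * (h - h0 + 0.5 * G * t * t) for t, h in zip(times, heights))
    den = sum(t * t for t in times)
    return num / den

def predict_apex(h0, v0):
    """Causal stage: plain kinematics, no learning involved."""
    t_apex = v0 / G
    return h0 + v0 * t_apex - 0.5 * G * t_apex ** 2

# Simulate noisy observations of a ball thrown upward at 10 m/s from 2 m.
random.seed(0)
times = [0.1 * i for i in range(1, 11)]
heights = [2.0 + 10.0 * t - 0.5 * G * t * t + random.gauss(0, 0.01)
           for t in times]

v0_est = perceive_initial_velocity(times, heights, 2.0)
apex = predict_apex(2.0, v0_est)   # true apex = 2 + 10^2/(2*9.81), about 7.10 m
```

The division of labor is the point: the statistical component only estimates parameters; the causal prediction comes from the physics model, which answers "what would happen if" questions the data alone cannot.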
 
And by the way, see  my slides about Hinton and LeCun.  I actually agree with most of what they say.  Where I disagree is with some of their claims.  In particular, see slides 21 to 35 of http://jfsowa.com/talks/HintonLeCun.pdf
 
In those slides, I mostly agree with what Bengio is saying, but I point out that the NNs are only doing the perception, and all the reasoning about causality is done by good old-fashioned computation.
 
> Who first builds Causal AI rules the world
 
That claim really doesn't say much.  Bengio & company are doing causality by combining NNs with physics.  That's useful, but it can only explain physics.  There are many more kinds of explanations that are necessary.  I cited the example of a 3-year-old child who can understand and use more complex causality than any AI system known today.  But I am not making any claims about the need for a biological system.  I am just saying that there is much more to causality than just physics.
 
John

Azamat Abdoullaev

Oct 9, 2022, 10:57:30 AM
to ontolo...@googlegroups.com

Who first builds Real/Causal/Natural AI rules the world

 
JS: That claim really doesn't say much.  Bengio & company are doing causality by combining NNs with physics.  That's useful, but it can only explain physics.  There are many more kinds of explanations that are necessary.  I cited the example of a 3-year-old child who can understand and use more complex causality than any AI system known today.  But I am not making any claims about the need for a biological system.  I am just saying that there is much more to causality than just physics.


Right. Computer scientists have mostly focused on anthropocentric AI algorithms, imitating human intelligence with machines and trying to create artificial human intelligence, or human AI -- a fake or bogus AI, whether symbolic or statistical.

In fact, the scope and scale of a true/genuine AI is the whole world with all its content, not just human minds. What I dubbed Real/Causal/Natural AI embraces reality-inspired algorithms for problem-solving, search, and optimization, as well as perception, learning, reasoning, understanding, action, and interaction.

So, it covers the so-called nature-inspired optimization algorithms (NIOAs).

Many algorithms mimic natural phenomena such as how animals organize their lives, how they use instincts to survive, how generations evolve, how the human brain works, and how we as humans learn.

NIOAs are defined as a group of algorithms that are inspired by natural phenomena, including swarm intelligence, biological systems, physical and chemical systems, etc. NIOAs include bio-inspired algorithms and physics- and chemistry-based algorithms; the bio-inspired algorithms further include swarm intelligence-based and evolutionary algorithms.

NIOAs are an important branch of artificial intelligence (AI), and NIOAs have made significant progress in the last 30 years. Thus far, a large number of common NIOAs and their variants have been proposed, such as genetic algorithm (GA), particle swarm optimization (PSO) algorithm, differential evolution (DE) algorithm, artificial bee colony (ABC) algorithm, ant colony optimization (ACO) algorithm, cuckoo search (CS) algorithm, bat algorithm (BA), firefly algorithm (FA), immune algorithm (IA), grey wolf optimization (GWO), gravitational search algorithm (GSA), and harmony search (HS) algorithm. https://encyclopedia.pub/entry/12212
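[As a concrete instance of the NIOA family listed above, here is a minimal particle swarm optimization (PSO) sketch. It is an illustrative toy, not taken from the cited entry; the parameter values are conventional textbook defaults, and it minimizes the sphere function f(x, y) = x^2 + y^2.]

```python
import random

random.seed(1)

def f(p):
    """Objective to minimize: the sphere function, optimum at the origin."""
    return sum(x * x for x in p)

W, C1, C2 = 0.7, 1.5, 1.5      # inertia, cognitive, and social weights
N, DIM, STEPS = 20, 2, 100     # swarm size, dimensions, iterations

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [p[:] for p in pos]          # each particle's best-seen position
gbest = min(pbest, key=f)[:]         # the swarm's best-seen position

for _ in range(STEPS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity blends momentum, pull toward the particle's own
            # best, and pull toward the swarm's best.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
# gbest now lies very close to the origin.
```

The same update loop, with a different objective function, is the skeleton behind many of the swarm-intelligence variants named above.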

This family is less well known than artificial neural network (ANN) algorithms, machine-learning algorithms that mimic the way human neurons communicate with one another, or reinforcement learning (RL) algorithms, which learn by trial and error with penalties and rewards.

My message is rather simple: No real causality, no real intelligence.

Or, to put it another way: comprehensive and consistent causal models of the world, encompassing a global ontology and science (universal knowledge) together with computer science and engineering, are necessary requirements for true intelligence, human or machine.

Such an intelligence is able to efficiently identify the critical information/insights/knowledge/wisdom/learning/understanding in any dataset, discarding all the irrelevant and misleading correlations, biased data, etc.
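[The point about misleading correlations can be illustrated with a toy structural causal model, in the spirit of Pearl's do-operator; the example is mine, not Azamat's. A hidden confounder Z drives both X and Y, so they correlate strongly even though neither causes the other, and the association vanishes under intervention.]

```python
import random

random.seed(42)

def observe(n):
    """Observational regime: Z causes both X and Y; X does NOT cause Y."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = z + random.gauss(0, 0.1)
        y = z + random.gauss(0, 0.1)
        data.append((x, y))
    return data

def intervene(n):
    """do(X): X is set independently of Z, cutting the confounding path."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = random.gauss(0, 1)
        y = z + random.gauss(0, 0.1)
        data.append((x, y))
    return data

def corr(data):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data) / n
    vx = sum((x - mx) ** 2 for x, _ in data) / n
    vy = sum((y - my) ** 2 for _, y in data) / n
    return cov / (vx * vy) ** 0.5

obs_corr = corr(observe(5000))     # strong spurious correlation
do_corr = corr(intervene(5000))    # near zero under intervention
```

A purely correlational learner would report a strong X-Y association in the first dataset; only a causal model (or an actual intervention) reveals that manipulating X does nothing to Y.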



John F Sowa

Oct 15, 2022, 11:14:03 PM
to ontolog...@googlegroups.com, ontolog-forum
Azamat,
 
There has been 70+ years of research on AI, machine learning, machine translation, and NNs of various kinds (since 1945), with inspiration and influences from all six branches of cognitive science (philosophy, psychology, linguistics, AI, neuroscience, and anthropology).  There have been many talented people specializing in and publishing about useful methods and combinations of methods in all of these areas.
 
The book by Landgrebe & Smith is useful because they go into the issues in depth, with detailed citations and analyses.  They correctly conclude that combinations of the known methods won't produce superintelligence.
 
I agree with them that combinations and extensions of all the currently known methods won't produce anything that can match, let alone surpass, human intelligence during the next 78 years (to the end of this century).
 
Where I disagree with them is that nobody knows what major breakthroughs might be possible in the 22nd century.
 
Where I disagree with your notes about real or fake AI is that you only talk about combinations of known technology.  There have been so many people working on various combinations during the past 70+ years that they have tested the most promising combinations without making any significant breakthroughs.  I don't believe that your vague comments will do anything to overcome the objections discussed by Landgrebe and Smith.
 
John

Azamat Abdoullaev

Oct 16, 2022, 5:46:54 AM
to ontolog...@googlegroups.com, ontolog-forum
JS: "Where I disagree with your notes about real or fake AI is that you only talk about combinations of known technology.  There have been so many people working on various combinations during the past 70+ years that they have tested the most promising combinations without making any significant breakthroughs.  I don't believe that your vague comments will do anything to overcome the objections discussed by Landgrebe and Smith".
Language is inherently vague, being the key means of communicating our thoughts and ideas. There are big gaps between what's thought, what's meant, what's said, what's written, what's heard, what's read and what's done. 
As a result, we often fail to hear each other, and don't know how to, due to natural biases, assumptions, or filters (mental models), aggravated by narrow specialization and limited knowledge.
Here comes Science as Universal Knowledge, with its objective, unambiguous, rigorous, causal language. Today it is the only reliable way to truth and reality, and any meaningful enterprise, research, or innovation makes little sense without Science, with its engineering and technology.
If you go for the hardest problem ever, machine intelligence (its nature, mechanisms, structure, functions, scientific models, implementations, and impact), your major assumptions should have full validity, internal and external, with compelling claims about the world.
Starting from A. Turing's imitation game, rebranded as the Turing Test (TT), through 70+ years and several generations of AI researchers, the whole field has been stuck with the anthropic/subjective/nonscientific assumption and definition that
artificial intelligence is human-like/human-level/subjective intelligence produced artificially, by humans and/or machines.
It is at the level of the geocentric model of the universe, not even reaching the heliocentric theory.
You would then need all sorts and kinds of TTs:
the conversation TT, the perception TT, the reasoning TT, the learning TT, the understanding TT, the emotion TT, the behavior TT, the special-tasks TT, etc. Hegel called such a series the bad infinity.
Among its implications, such a Human-Like AI brings us human-like biases, new cybersecurity risks, deep fakes, privacy infringements, autonomous weapons, and job displacement.
In reality, AI is real/natural/causal/scientific intelligence produced artificially, by humans and/or machines.
This is what I call Real/True/Genuine/Natural/Scientific/Objective AI vs. False/Fake/Spurious/Artificial/Nonscientific/Subjective AI
Again, Real AI as Causal Man-Machine Intelligence and Learning is going to revolutionize our lives over the coming decade in ways we can’t even imagine now. For it’s not about computers being able to perform tasks better than humans, and finally replacing us, but about computers being able to do things that humans can’t do at all, augmenting our power and complementing human intelligence, individual and collective.
Ask yourself why we so highly appreciate the modern machines, equipment, devices, mechanisms and all other physical/chemical/biological/medical/information technologies. They do things that humans can’t do thus complementing and enhancing us.


John Bottoms

Oct 16, 2022, 11:38:32 AM
to ontolo...@googlegroups.com

Azamat, Whose Turing Test?

The key issue with a Turing Test is that it is First-Order-Logic (FOL) based. That means it comes with no metrics, is unlikely to be adaptively context-sensitive, and does not lead us to extensible, collaborative AI.

We need to look at natural selection for some guidelines. Nature prefers diversity. If, when I look 50 or 100 years into the future, I see a static horde of robots who all think in lock-step, then I will know we have failed.

The Brute Fact is that all intelligence is subjective and local. We don't have enough grains of sand or sheets of papyrus to create an all-knowing, all-seeing AI. Systems must be domain-based, and that is why Domain Based Systems are, at least for the immediate future, the favored architecture.

Even with Higher-Order Logic (HOL) we will need probabilistic metrics, and the entailed questions are whose metrics, how we will calculate a metric's confidence factor, and for which context. And a particular HOL is not innately extensible. There is also the issue, true for humans as well as AI, that learning sequences determine the resulting intelligence. So we need an AI that is extensible and collaborative, with metrics that account for collaboration. One of the prime elements lacking here is that we do not teach thinking in our schools. We instead teach how previous solutions are valuable economic solutions, omitting the reasoning and the failed use cases.

-John Bottoms, FirstStar Systems


Azamat Abdoullaev

Oct 16, 2022, 3:47:34 PM
to ontolo...@googlegroups.com

JB: Azamat, Whose Turing Test?

It is rather What's Turing Test?

The key issue with a Turing Test is that it is First-Order-Logic (FOL) based.

That is not the standard interpretation of the TT, which is a man-machine-man language test of a machine's ability to imitate intelligent behavior, in a blind conversation, indistinguishably from a human.

A similar idea was proposed by D. Diderot: "If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation".

The Brute Fact is that all intelligence is subjective and local.

Nope. Human intelligence is just a sample of real/natural/causal intelligence. Its subjectivity and locality are attributes of an individual, which are much reduced in collective intelligence and minimal in objective scientific intelligence.

Where we agree is that the TT/Imitation Game is not a scientific/objective criterion or performance metric of intelligence. Otherwise, we would need an infinite range of them:

Perception TT

Speech TT

Language TT

Learning TT

Planning TT

Creativity TT

Reasoning TT

Action TT, etc.



