When will we achieve Artificial General Intelligence (AGI) and when will we achieve Artificial Super Intelligence (ASI)?


Azamat Abdoullaev

Apr 10, 2023, 1:06:48 PM
to ontolog-forum, ontolog...@googlegroups.com

Azamat Abdoullaev

Apr 10, 2023, 2:23:43 PM
to ontolog-forum, ontolog...@googlegroups.com
I wonder whether a lurker should show his AI illiteracy and/or dissatisfaction.

On Mon, Apr 10, 2023 at 8:16 PM Krzysztof Janowicz <jano...@ucsb.edu> wrote:
Folks, this used to be a great forum for high-quality discussions about ontology and related topics; now it has drifted into odd postings and links of questionable quality.
Krzysztof


On 4/10/23 19:06, Azamat Abdoullaev wrote:


-- 
Krzysztof Janowicz
Professor for Geoinformatics, Director Center for Spatial Studies
Geography Department, University of California, Santa Barbara 
4830 Ellison Hall, Santa Barbara, CA 93106-4060 

Email: ja...@geog.ucsb.edu
Webpage: http://geog.ucsb.edu/~jano/
Semantic Web Journal: http://www.semantic-web-journal.net

John F Sowa

Apr 11, 2023, 12:25:26 AM
to ontolog...@googlegroups.com, Azamat Abdoullaev, ontolog-forum
Krzysztof, I certainly agree. 

The answer to the question "What is AI?" is very simple: AI consists of research and development that builds on and extends the technology that was called AI at the 1956 conference at Dartmouth.

Discussions of Artificial General Intelligence and Artificial Super Intelligence are technically known as BS.  Apparently LinkedIn has a place where BS can be flushed down the drain.  I suggest that all such BS should be dumped at LinkedIn.

Please don't dump it here.

John
 


From: "Krzysztof Janowicz" <jano...@ucsb.edu>

Alex Shkotin

Apr 11, 2023, 3:42:22 AM
to ontolog...@googlegroups.com, Azamat Abdoullaev, ontolog-forum
Krzysztof, 

For me it looks like this: a member of our community shares his thoughts on a topic that is close to us. Many people think that ASI is on the horizon, but AGI is just around the corner.
The lack of fruitful discussions, I think, is partly due to the overwhelming amount of interesting information about projects, from OBO Foundry to Schema.org, that we now receive at the Ontology Summit meetings.
At least I need time to digest it ontologically :-)

Alex

Mon, Apr 10, 2023 at 20:16, Krzysztof Janowicz <jano...@ucsb.edu>:
Folks, this used to be a great forum for high-quality discussions about ontology and related topics; now it has drifted into odd postings and links of questionable quality.
Krzysztof


alex.shkotin

Apr 11, 2023, 4:13:44 AM
to ontolog-forum
IN ADDITION: GPT-4 speaks quite well on general topics. Imagine that it begins to act with the same quality. Of course, I won't let it into the house: it hallucinates by definition. But some believe that all it lacks is a verifier based on a good formal ontology.

Tuesday, April 11, 2023 at 10:42:22 UTC+3, alex.shkotin:

Alex Shkotin

Apr 11, 2023, 4:49:26 AM
to ontolo...@googlegroups.com
just for fun:
[image attachments omitted]

Tue, Apr 11, 2023 at 11:13, alex.shkotin <alex.s...@gmail.com>:

Azamat Abdoullaev

Apr 11, 2023, 7:57:09 AM
to Alex Shkotin, ontolog-forum, ontolog...@googlegroups.com
Alex,
As I mentioned before, it all comes from a new phenomenon of the post-truth age of fragmented science, commercial technology, and deep-fake news: a global illiteracy of various kinds and sorts:

  • Literal illiteracy
  • Cultural illiteracy
  • Civic illiteracy
  • Racial illiteracy
  • Financial illiteracy
  • Numerical illiteracy
  • Statistical illiteracy
  • Factual illiteracy
  • Scientific illiteracy
  • Technological illiteracy
  • Environmental illiteracy

A new sort of illiteracy pandemically affecting the academic world is data and AI illiteracy. Each academic and researcher, engineer and developer, is happy to dig his narrow hole, provided it is well paid.

So, Real AGI is coming to take over the narrow-minded human world to fill the vacuum.  

 https://www.facebook.com/groups/1955664064497065/?multi_permalinks=6287771674619594&notif_id=1681069098535672&notif_t=feedback_reaction_generic&ref=notif

Azamat Abdoullaev

Apr 11, 2023, 9:49:29 AM
to Alex Shkotin, ontolog-forum, ontolog...@googlegroups.com
There was a smart article in Forbes: "Why Is The Public So AI-Illiterate?"
The state of AI understanding by the public, policymakers, journalists and even non-AI technologists is almost comically bad. What is most intriguing, however, is that like today’s social media coverage and yesterday’s VR coverage, non-AI researchers tend to react with considerable dismissiveness and even downright vitriol to discussions of the limitations of today’s correlative approach to deep learning and its reliance on free but bad training data. 
Why is the public so AI-illiterate and how do we help the public better understand the state of modern deep learning? 

alex.shkotin

Apr 11, 2023, 12:30:14 PM
to ontolog-forum
Azamat, 

Today there seems to be no doubt that AGI is technically possible, just as since the time of Isaac Asimov there has been no doubt that it is possible in principle. We will soon see which technology wins.
At https://www.ri.cmu.edu/videos/ there are interesting robots.

Alex

Tuesday, April 11, 2023 at 16:49:29 UTC+3, Azamat Abdoullaev:

Alex Shkotin

Apr 11, 2023, 12:36:18 PM
to ontolog...@googlegroups.com, ontolog-forum
Patrick,

We have a whole group of subtle definitions of AGI; I can search for them, but for now let me give you the basic idea: it acts like a normal person in normal human situations. At first the situations are limited in terms of environments and actions, and eventually it handles any situation.

Alex

Tue, Apr 11, 2023 at 19:27, Patrick Logan <patric...@gmail.com>:
Can anyone point to a semi-formal, actionable definition of AGI? Please don't say the Turing Test.

On Tue, Apr 11, 2023, 9:23 AM Alex Shkotin <alex.s...@gmail.com> wrote:
Azamat, 

Today there seems to be no doubt that AGI is technically possible, just as since the time of Isaac Asimov there has been no doubt that it is possible in principle. We will soon see which technology wins.
At https://www.ri.cmu.edu/videos/ there are interesting robots.

Alex

Tue, Apr 11, 2023 at 14:57, Azamat Abdoullaev <ontop...@gmail.com>:


Azamat Abdoullaev

Apr 11, 2023, 12:42:56 PM
to ontolo...@googlegroups.com
Yep, within 3-6 years. It all depends on your model of AGI:
  • Human AGI: human-level, human-like AI
  • Non-human AGI: Human-Centered AI (HCAI), sometimes referred to as "augmented intelligence," "centaur intelligence," or "human in the loop"


Alex Shkotin

Apr 11, 2023, 1:08:21 PM
to ontolog...@googlegroups.com, ontolog-forum
The line number is 34. 
With "any situation" this is just a dream :-)

Tue, Apr 11, 2023 at 19:46, Patrick Logan <patric...@gmail.com>:
Sure. That's a simple statement that's been used for decades. There's still no measure of how far we are from "any situation", there's still no way of knowing when we've achieved that, and there's still no way of knowing, if we've achieved that, whether any given change to the system will or will not preserve that condition.

People have been predicting "soon" for decades. Predictions mean nothing.


alex.shkotin

Apr 11, 2023, 1:38:38 PM
to ontolog-forum
With regard to "any situation", we must still keep in mind that in an unfamiliar environment, the average person behaves rather awkwardly. If the situation is more or less complicated, then he honestly says "I don't understand" and "I don't know what to do." This is the first sign of AGI :-)

Tuesday, April 11, 2023 at 20:08:21 UTC+3, alex.shkotin:

alex.shkotin

Apr 11, 2023, 1:54:17 PM
to ontolog-forum
One more link to an AGI developer: https://agieng.substack.com/p/agi-from-the-definition-to-the-implementation

Tuesday, April 11, 2023 at 20:38:38 UTC+3, alex.shkotin:

Azamat Abdoullaev

Apr 11, 2023, 3:08:38 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com

John F Sowa

Apr 11, 2023, 4:11:14 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
Alex,

Please note my reply to Krzysztof. I completely agreed with his comment: "Folks, this used to be a great forum for high-quality discussions about ontology and related topics; now it has drifted into odd postings and links of questionable quality."

First, I want to say that I was invited to the founding meeting of AGI.  I knew most of the people there, and they had a wide range of opinions on these issues.  But the people who continued to attend those meetings are those who predicted that their methods would soon produce AGI.  Those of us who understood their methods very well stopped attending.

Although Barry Smith and I have often disagreed on many issues, we both agree on one fundamental issue: neither AGI nor ASI will be possible in the 21st century. My only disagreement is that I won't make, believe, or care about any predictions about the 22nd century.

As for ontology, the structure and design of GPT makes it impossible to incorporate any kind of reasoning that could evaluate the results that are generated. That includes any reasoning based on any ontology.  It's irrelevant how good the ontology may be if the system that uses it cannot reason with or about that ontology.  That makes it impossible for anybody to have any confidence in what any version of GPT produces.

However, the LLMs (Large Language Models) that support machine translation do have an important method that controls the quality of what they generate:  the source language (formal or informal) that is being translated.  Since the result that they generate is just one step away from the source, it is not likely to deviate very much from the source.  Even then, it's important to have human evaluation for subjects that require high precision: finance, legal matters, international treaties, and high-quality publications.  

Therefore, it's possible to have useful applications of the LLM components of GPT to be used in conjunction with systems that do serious reasoning and computation, such as Wolfram's Mathematica.  But GPT technology, by itself, is a toy.  For serious applications, its technology is already obsolete compared to AI systems that have been available for years.

In summary, please send all speculations about AGI and ASI to LinkedIn or other places that do not pollute these email lists. If necessary, send ONE monthly summary of the LinkedIn articles.

John


Barry Smith

Apr 11, 2023, 4:18:23 PM
to ontolo...@googlegroups.com
What is missing in his account is the AGI will. To have AGI, the machine would have to want things, have intentions, ...
See attached extract from 9781003310105.pdf
BS

The-Missing-Machine-Will.pdf

Kingsley Idehen

Apr 11, 2023, 6:18:02 PM
to ontolo...@googlegroups.com
Hi John,

On 4/11/23 4:10 PM, John F Sowa wrote:
> As for ontology, the structure and design of GPT makes it impossible
> to incorporate any kind of reasoning that could evaluate the results
> that are generated. That includes any reasoning based on any ontology.
>  It's irrelevant how good the ontology may be if the system that uses
> it cannot reason with or about that ontology.

You can use an ontology to provide context for GPT reasoning. Naturally,
results are dependent on the knowledge of the prompter :)

Excerpt from an experiment [1] triggered by this thread:

Q: Grandchildren of the entity denoted by
<http://dbpedia.org/resource/Elizabeth_Bowes-Lyon>?
A:
Based on the given dataset, the grandchildren of Elizabeth Bowes-Lyon
(denoted by http://dbpedia.org/resource/Elizabeth_Bowes-Lyon) are:

* Charles, Prince of Wales
* Anne, Princess Royal
* Prince Andrew, Duke of York
* Prince Edward, Earl of Wessex
Please note that these are the children of Elizabeth Bowes-Lyon's
daughter, Queen Elizabeth II, and not her grandchildren. The dataset
provided does not have sufficient information to determine the
grandchildren of Elizabeth Bowes-Lyon.


Q: Redo, bearing in mind that rel:parentOf is an owl:Transitive property.
A:
I apologize for the confusion. Given that rel:parentOf is an
owl:Transitive property, the grandchildren of Elizabeth Bowes-Lyon
(denoted by http://dbpedia.org/resource/Elizabeth_Bowes-Lyon) are as
follows:

* Prince William of Wales
* Prince Harry of Wales
* Peter Phillips
* Zara Phillips
* Princess Beatrice of York
* Princess Eugenie of York

Q: Bearing in mind these new facts, redo:
List all siblings.
A:
Bearing in mind the new facts and considering rel:parentOf as an
owl:Transitive property, the siblings are as follows:

* Prince William and Prince Harry
* Queen Elizabeth II and Princess Margaret
* Charles, Prince of Wales; Anne, Princess Royal; Prince Andrew, Duke of
York; and Prince Edward, Earl of Wessex

Q: Sure?
A:
Apologies for the confusion in my previous response. After reevaluating
the dataset, the list of siblings is as follows:

* Prince William and Prince Harry
* Queen Elizabeth II and Princess Margaret
* Charles, Prince of Wales; Anne, Princess Royal; Prince Andrew, Duke of
York; and Prince Edward, Earl of Wessex
These sibling relationships are accurate based on the given dataset.

Links:

[1]
https://kingsley.idehen.net/DAV/home/kidehen/Public/Linked%20Data%20Documents/Playground/gptChat/prompts/british-royal-family-reasoning-and-inference.txt
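For comparison, a minimal sketch, assuming the rdflib Python library and a small hypothetical subset of the royal-family data quoted above: a SPARQL 1.1 property path answers the grandchild question deterministically, since grandchildren are reached by composing rel:parentOf with itself (no prompting, and no transitivity axiom, is needed for that particular query).

from rdflib import Graph

# Hypothetical subset of the DBpedia-based Turtle used in the experiment above.
turtle_data = """
@prefix rel: <http://purl.org/vocab/relationship/> .

<http://dbpedia.org/resource/Elizabeth_Bowes-Lyon>
    rel:parentOf <http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom> .

<http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom>
    rel:parentOf <http://dbpedia.org/resource/Charles,_Prince_of_Wales> ,
                 <http://dbpedia.org/resource/Anne,_Princess_Royal> .

<http://dbpedia.org/resource/Charles,_Prince_of_Wales>
    rel:parentOf <http://dbpedia.org/resource/Prince_William_of_Wales> ,
                 <http://dbpedia.org/resource/Prince_Harry_of_Wales> .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# rel:parentOf/rel:parentOf means "parent of a parent of", i.e. grandparent-to-grandchild.
query = """
PREFIX rel: <http://purl.org/vocab/relationship/>
SELECT ?grandchild WHERE {
  <http://dbpedia.org/resource/Elizabeth_Bowes-Lyon> rel:parentOf/rel:parentOf ?grandchild .
}
"""
for (grandchild,) in g.query(query):
    print(grandchild)   # Charles and Anne, the only grandchildren in this small subset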


--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
Weblogs (Blogs):
Company Blog: https://medium.com/openlink-software-blog
Virtuoso Blog: https://medium.com/virtuoso-blog
Data Access Drivers Blog: https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers

Personal Weblogs (Blogs):
Medium Blog: https://medium.com/@kidehen
Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
http://kidehen.blogspot.com

Profile Pages:
Pinterest: https://www.pinterest.com/kidehen/
Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
Twitter: https://twitter.com/kidehen
Google+: https://plus.google.com/+KingsleyIdehen/about
LinkedIn: http://www.linkedin.com/in/kidehen

Web Identities (WebID):
Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
: http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this

Alex Shkotin

Apr 12, 2023, 5:27:03 AM
to ontolo...@googlegroups.com
Barry,

I copy-pasted your reply to the MR post. I hope he answers something.
For me, the most important thing is that the robot has goal-setting. Moreover, I would like to be able to easily find out where it is moving and for what purpose, and how to change its goal to another one.
On top of this "life cycle" it is easy to add a process that will randomly put anything into the target register. Then, to the question "why do you water the flowers?", the answer will be "Because that is what is written in my target register, you see!" But I hope MR gives a more interesting answer :-)

Alex


Tue, Apr 11, 2023 at 23:18, Barry Smith <ifo...@gmail.com>:

Alex Shkotin

Apr 12, 2023, 6:00:26 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
John,

You pushed me to investigate my activity this year in our great ontolog-forum. This is a list of the topics I started:

[screenshots of the topic list omitted]

Perhaps it really is too much about AI. Please understand my position: the topic is fascinating.
I'll try to write only very important things about these topics in our forum.

Alex


Tue, Apr 11, 2023 at 23:11, John F Sowa <so...@bestweb.net>:

John F Sowa

Apr 12, 2023, 3:07:43 PM
to ontolo...@googlegroups.com
Kingsley,

As usual, I respect your comments because you ignore the nonsense in the subject line above and focus on the concrete issues about how GPT works, what it does, and how you can make it do what you need to do.

Please note that my comment was about the limitations of GPT.  You were able to get the information you wanted by repeatedly adding information:  (1) use DBpedia as the source of information.  (2) telling it that parentOf is transitive. (3) Bearing in mind these new facts, redo: List all siblings. (4) Redo, bearing in mind that rel:parentOf is an owl:Transitive property 

A good system should have considered those options  from the beginning.  As just one example of a much better system, please download the slides in https://jfsowa.com/talks/cogmem.pdf .   The early slides go into quite a bit of detail about the system, and the references at the end go into more detail.  For now, just consider the three examples in slides 44 to  63.

For starters, just look at the example of legacy re-engineering (slides 47 to 55). It was implemented in 2000, and it ran on a server with multiple CPUs. Today, your cell phone has more computing power. But GPT, even with an ontology, could not do anything with that task -- or with any of the others.

By the way, we showed that example to some companies that were working on legacy re-engineering. Their response:  "You took a job for which Andersen Consulting (now Accenture) estimated 80 person years and did it in 15 person weeks?  Get outta here!"

John

PS: One person I spoke with said "80 person years is Accenture's estimate for any project."
 


From: "Kingsley Idehen' via ontolog-forum" <ontolo...@googlegroups.com>

Alex Shkotin

Apr 13, 2023, 10:40:02 AM
to ontolog-forum
[chatGPT3.5 screenshot omitted]

Tue, Apr 11, 2023 at 19:27, Patrick Logan <patric...@gmail.com>:
Can anyone point to a semi-formal, actionable definition of AGI? Please don't say the Turing Test.

On Tue, Apr 11, 2023, 9:23 AM Alex Shkotin <alex.s...@gmail.com> wrote:
Azamat, 

Today there seems to be no doubt that AGI is technically possible, just as since the time of Isaac Asimov there has been no doubt that it is possible in principle. We will soon see which technology wins.
At https://www.ri.cmu.edu/videos/ there are interesting robots.

Alex

Tue, Apr 11, 2023 at 14:57, Azamat Abdoullaev <ontop...@gmail.com>:
Alex,


Barry Smith

Apr 13, 2023, 12:11:17 PM
to ontolo...@googlegroups.com
Let me know if he/it responds.

Kingsley Idehen

Apr 13, 2023, 3:11:31 PM
to ontolo...@googlegroups.com


On 4/12/23 3:07 PM, John F Sowa wrote:
Kingsley,

As usual, I respect your comments because you ignore the nonsense in the subject line above and focus on the concrete issues about how GPT works, what it does, and how you can make it do what you need to do.

Please note that my comment was about the limitations of GPT.  You were able to get the information you wanted by repeatedly adding information:  (1) use DBpedia as the source of information.  (2) telling it that parentOf is transitive. (3) Bearing in mind these new facts, redo: List all siblings. (4) Redo, bearing in mind that rel:parentOf is an owl:Transitive property 

A good system should have considered those options  from the beginning. 


Yes-ish.

In retrospect, I should have added a triple to the graph indicating the nature of parentOf property.

I'll re-run the test.


As just one example of a much better system, please download the slides in https://jfsowa.com/talks/cogmem.pdf .   The early slides go into quite a bit of detail about the system, and the references at the end go into more detail.  For now, just consider the three examples in slides 44 to  63.


OpenAI offers a web-accessible service, but I can't test the system you recommend if it doesn't offer a similar interaction interface.



For starters, just look at the example of legacy re-engineering (slides 47 to 55). It was implemented in 2000, and it ran on a server with multiple CPUs. Today, your cell phone has more computing power. But GPT, even with an ontology, could not do anything with that task -- or with any of the others.

By the way, we showed that example to some companies that were working on legacy re-engineering. Their response:  "You took a job for which Andersen Consulting (now Accenture) estimated 80 person years and did it in 15 person weeks?  Get outta here!"


Yes, but aren't you missing out on a huge opportunity by this system not being online? You told me about this whole thing eons ago :)


Kingsley


Kingsley Idehen

Apr 13, 2023, 3:53:16 PM
to ontolo...@googlegroups.com

John,

On 4/13/23 3:11 PM, 'Kingsley Idehen' via ontolog-forum wrote:

Yes-ish.

In retrospect, I should have added a triple to the graph indicating the nature of parentOf property.

I'll re-run the test.

In response to the above, I was able to eventually get GPT to pass the test, after a few iterations across new session instantiations.

Transcript:

Q: Given the following dataset, and the nature of relations defined in the TBox section.

## Turtle Start ##
## Entity Relationships
### TBox

@prefix rel: <http://purl.org/vocab/relationship/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

rel:parentOf a owl:TransitiveProperty .

### ABox

<http://dbpedia.org/resource/Prince_William_of_Wales> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince William" ;
rel:siblingOf <http://dbpedia.org/resource/Prince_Harry_of_Wales> .
<http://dbpedia.org/resource/Elizabeth_Bowes-Lyon> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Queen Mother" ;
rel:parentOf <http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom> .
<http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Queen Elizabeth II" ;
rel:parentOf <http://dbpedia.org/resource/Charles,_Prince_of_Wales>, <http://dbpedia.org/resource/Anne,_Princess_Royal>,
<http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York>, <http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex> ;
rel:siblingOf <http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon> .
<http://dbpedia.org/resource/Charles,_Prince_of_Wales> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Charles" ;
rel:parentOf <http://dbpedia.org/resource/Prince_William_of_Wales>, <http://dbpedia.org/resource/Prince_Harry_of_Wales> .
<http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Margaret" .
<http://dbpedia.org/resource/Anne,_Princess_Royal> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Anne" ;
rel:parentOf <http://dbpedia.org/resource/Peter_Phillips>, <http://dbpedia.org/resource/Zara_Phillips> .
<http://dbpedia.org/resource/Zara_Phillips> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Zara Phillips" .
<http://dbpedia.org/resource/Princess_Beatrice_of_York> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Beatrice" .
<http://dbpedia.org/resource/Princess_Eugenie_of_York> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Eugenie" .
<http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Andrew" ;
rel:parentOf <http://dbpedia.org/resource/Princess_Eugenie_of_York>, <http://dbpedia.org/resource/Princess_Beatrice_of_York> .
<http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Edward" .
<http://dbpedia.org/resource/Prince_Harry_of_Wales> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Harry" .
<http://dbpedia.org/resource/Peter_Phillips> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Peter Phillips" .

## Turtle End ##

List all the siblings.
A:

Based on the provided Turtle dataset, here is a list of all the siblings:

Prince William and Prince Harry are siblings.

Prince William: http://dbpedia.org/resource/Prince_William_of_Wales
Prince Harry: http://dbpedia.org/resource/Prince_Harry_of_Wales

Queen Elizabeth II and Princess Margaret are siblings.

Queen Elizabeth II: http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom
Princess Margaret: http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon

Princess Beatrice and Princess Eugenie are siblings.

Princess Beatrice: http://dbpedia.org/resource/Princess_Beatrice_of_York
Princess Eugenie: http://dbpedia.org/resource/Princess_Eugenie_of_York

These sibling relationships are identified based on the “rel:siblingOf” property in the dataset.

Q: List all the aunties.
A:

Based on the provided Turtle dataset, here is a list of all the aunts (aunties):

Princess Anne is the aunt of Prince William and Prince Harry.

Princess Anne: http://dbpedia.org/resource/Anne,_Princess_Royal
Prince William: http://dbpedia.org/resource/Prince_William_of_Wales
Prince Harry: http://dbpedia.org/resource/Prince_Harry_of_Wales

Princess Margaret is the aunt of Prince Charles, Princess Anne, Prince Andrew, and Prince Edward.

Princess Margaret: http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon
Prince Charles: http://dbpedia.org/resource/Charles,_Prince_of_Wales
Princess Anne: http://dbpedia.org/resource/Anne,_Princess_Royal
Prince Andrew: http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York
Prince Edward: http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex

Princess Anne is the aunt of Princess Beatrice and Princess Eugenie.

Princess Anne: http://dbpedia.org/resource/Anne,_Princess_Royal
Princess Beatrice: http://dbpedia.org/resource/Princess_Beatrice_of_York
Princess Eugenie: http://dbpedia.org/resource/Princess_Eugenie_of_York

These aunt relationships are identified based on the “rel:parentOf” and “rel:siblingOf” properties in the dataset, where an aunt is the sibling of a parent.

Q: Redo, grouping aunties in easy to read form.
A:

Certainly! Here is a list of all the aunts (aunties) grouped in an easy-to-read form:

Aunt: Princess Anne (http://dbpedia.org/resource/Anne,_Princess_Royal)

Niece/Nephew: Prince William (http://dbpedia.org/resource/Prince_William_of_Wales)
Niece/Nephew: Prince Harry (http://dbpedia.org/resource/Prince_Harry_of_Wales)
Niece/Nephew: Princess Beatrice (http://dbpedia.org/resource/Princess_Beatrice_of_York)
Niece/Nephew: Princess Eugenie (http://dbpedia.org/resource/Princess_Eugenie_of_York)

Aunt: Princess Margaret (http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon)

Niece/Nephew: Prince Charles (http://dbpedia.org/resource/Charles,_Prince_of_Wales)
Niece/Nephew: Princess Anne (http://dbpedia.org/resource/Anne,_Princess_Royal)
Niece/Nephew: Prince Andrew (http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York)
Niece/Nephew: Prince Edward (http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex)

These aunt relationships are identified based on the “rel:parentOf” and “rel:siblingOf” properties in the dataset, where an aunt is the sibling of a parent.

Q: Redo, but for Uncles.
A:

Certainly! Here is a list of all the uncles grouped in an easy-to-read form:

Uncle: Prince Charles (http://dbpedia.org/resource/Charles,_Prince_of_Wales)

Niece/Nephew: Princess Beatrice (http://dbpedia.org/resource/Princess_Beatrice_of_York)
Niece/Nephew: Princess Eugenie (http://dbpedia.org/resource/Princess_Eugenie_of_York)

Uncle: Prince Andrew (http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York)

Niece/Nephew: Prince William (http://dbpedia.org/resource/Prince_William_of_Wales)
Niece/Nephew: Prince Harry (http://dbpedia.org/resource/Prince_Harry_of_Wales)

Uncle: Prince Edward (http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex)

Niece/Nephew: Prince William (http://dbpedia.org/resource/Prince_William_of_Wales)
Niece/Nephew: Prince Harry (http://dbpedia.org/resource/Prince_Harry_of_Wales)

These uncle relationships are identified based on the “rel:parentOf” and “rel:siblingOf” properties in the dataset, where an uncle is the sibling of a parent.

Comments:

The Uncles response above is incorrect, and I opted not to prompt further.

Conclusion:

CHAT-GPT is primarily useful to operators that are knowledgeable in the subject-matter domain being operated on. Basically, it will dangerously mislead anyone seeking an all-answering oracle for a vacuous mind!
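The incorrect uncle answer above can also be checked deterministically. A minimal sketch, assuming rdflib and a small hypothetical subset of the data (the one-directional siblingOf links are treated symmetrically with an inverse path):

from rdflib import Graph

# Hypothetical mini-dataset in the spirit of the ABox above.
turtle_data = """
@prefix rel: <http://purl.org/vocab/relationship/> .
@prefix ex:  <http://example.org/> .

ex:Charles a ex:MalePerson ;
    rel:parentOf ex:William, ex:Harry ;
    rel:siblingOf ex:Anne, ex:Andrew, ex:Edward .
ex:Andrew a ex:MalePerson ;
    rel:parentOf ex:Beatrice, ex:Eugenie .
ex:Edward a ex:MalePerson .
ex:Anne a ex:FemalePerson .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# An uncle is a male sibling of a parent; siblingOf|^siblingOf covers both link directions.
query = """
PREFIX rel: <http://purl.org/vocab/relationship/>
PREFIX ex:  <http://example.org/>
SELECT DISTINCT ?uncle ?nieceOrNephew WHERE {
  ?parent rel:parentOf ?nieceOrNephew .
  ?parent (rel:siblingOf|^rel:siblingOf) ?uncle .
  ?uncle a ex:MalePerson .
}
"""
for uncle, kid in g.query(query):
    print(uncle, "is an uncle of", kid)

On this toy data the query yields Andrew and Edward as uncles of William and Harry, and Charles as an uncle of Beatrice and Eugenie.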

Nadin, Mihai

Apr 13, 2023, 5:08:57 PM
to ontolo...@googlegroups.com

I take YES-ish for an answer.

And I subscribe to the

Conclusion:

CHAT-GPT is primarily useful to operators that are knowledgeable in the subject-matter domain being operated on. Basically, it will dangerously mislead anyone seeking an all-answering oracle for a vacuous mind!

My attempts were in regard to the distinction between living matter and non-living matter.

 

Best wishes.

 

Mihai Nadin

 

From: 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>


Kingsley Idehen

Apr 13, 2023, 6:00:51 PM
to ontolo...@googlegroups.com

On 4/13/23 3:11 PM, 'Kingsley Idehen' via ontolog-forum wrote:


On 4/12/23 3:07 PM, John F Sowa wrote:
Kingsley,

As usual, I respect your comments because you ignore the nonsense in the subject line above and focus on the concrete issues about how GPT works, what it does, and how you can make it do what you need to do.

Please note that my comment was about the limitations of GPT.  You were able to get the information you wanted by repeatedly adding information:  (1) use DBpedia as the source of information.  (2) telling it that parentOf is transitive. (3) Bearing in mind these new facts, redo: List all siblings. (4) Redo, bearing in mind that rel:parentOf is an owl:Transitive property 

A good system should have considered those options  from the beginning. 


Yes-ish.

In retrospect, I should have added a triple to the graph indicating the nature of parentOf property.

I'll re-run the test.

Re-sending for better context plus cleaning up formatting issues arising from Markdown used in my initial response.

Transcript:

Q: Given the following dataset, and the nature of relations defined in the TBox section.

## Turtle Start ##
## Entity Relationships
### TBox
@prefix rel: <http://purl.org/vocab/relationship/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

rel:parentOf a owl:TransitiveProperty .

### ABox
 a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Anne" ;
rel:parentOf <http://dbpedia.org/resource/Peter_Phillips>, <http://dbpedia.org/resource/Zara_Phillips> .
<http://dbpedia.org/resource/Zara_Phillips> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Zara Phillips" .
<http://dbpedia.org/resource/Princess_Beatrice_of_York> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Beatrice" .
<http://dbpedia.org/resource/Princess_Eugenie_of_York> a <#RoyalPerson>, <#FemalePerson> ;
schema:name "Princess Eugenie" .
<http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York>
 a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Andrew" ;
rel:parentOf <http://dbpedia.org/resource/Princess_Eugenie_of_York>, <http://dbpedia.org/resource/Princess_Beatrice_of_York> .
<http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Edward" .
<http://dbpedia.org/resource/Prince_Harry_of_Wales> a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Harry" .
<http://dbpedia.org/resource/Peter_Phillips>
 a <#RoyalPerson>, <#MalePerson> ;
schema:name "Prince Peter Phillips" .
## Turtle End ##
-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com

John F Sowa

Apr 13, 2023, 8:43:19 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
Barry and Alex,

I mostly agree with Barry about the 21st century. But I am reluctant to say never, because I don't know what innovations may become possible in 2100 or later.

But I disagree with Alex because his criteria for recognizing AGI are far too weak. As we have seen, many people are already confused about something as limited as ChatGPT.

That is why I proposed a much stronger test than the Turing test:  Any system that has a human level of intelligence should be able to explain how and why it has achieved a human level.  And to evaluate its ability, I would recommend a committee of three experts:

1. An AI expert who is familiar with current AI technology and recent publications on new advances.

2. A psychologist or psychoanalyst who is familiar with the full range of abilities and symptoms of normal and exceptional human performance in a wide range of circumstances.

3. A neuroscientist who is familiar with the brains of humans and other animals and their capabilities in normal and abnormal circumstances.

This committee shall be able to interview the claimed AGI system on a daily basis for one month, on as many topics and methods as they consider significant. If the committee cannot reach a unanimous decision after a month, or if other committees wish to continue the investigation, the testing should continue with new committees until everyone is satisfied with the decision.

Of course, the committee is likely to reach a negative answer in much less time than one month. But a positive decision should not be accepted without at least a one-month evaluation. Even then, more committees should have a chance to study the reports of earlier committees before any AGI claims are widely accepted.

John
 


From: "Barry Smith" <ifo...@gmail.com>
Sent: 4/13/23 12:11 PM
To: ontolo...@googlegroups.com
Subject: Re: [ontolog-forum] Re: [Ontology Summit] When will we achieve Artificial General Intelligence (AGI) and when will we achieve Artificial Super Intelligence (ASI)?

Joao Paulo Almeida

Apr 13, 2023, 9:32:41 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
This committee shall be able to interview the claimed AGI system on a daily basis for one month, on as many topics and methods as they consider significant.

I’d start with: 
  • It’s your birthday. Someone gives you a calfskin wallet



--
Um abraço,
João Paulo

Alex Shkotin

Apr 14, 2023, 3:46:10 AM
to ontolo...@googlegroups.com

Alex Shkotin

Apr 14, 2023, 4:08:03 AM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com
John, 

I just watch the progress of those who are involved in AGI, mainly the ongoing seminar in Russian (AGI in Russian), and visit it if a report is interesting. Patrick asked for a definition, and I pulled one out of my mind "as it was written".
The guys who develop AGI are not struggling with how it will communicate, but with how to endow it with other specific abilities of the human mind, such as building a "picture" of the surrounding reality and, based on it, calculating the actions to achieve the goal. So far, this is only in its infancy at Boston Dynamics ...
So their emphasis is not on "to interview the claimed AGI system".
Moreover, and Barry will probably cross them out of AGI for this: a task, a goal, for a robot with AGI is set by a person. Such a robot could be the perfect performer.

A society whose members own robots, as they now own machines, including information ones: that's where we are going :-)

Alex

Fri, Apr 14, 2023 at 03:43, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Apr 14, 2023, 5:48:24 AM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com
If somebody has 2 minutes, have a look at the BoDy (Boston Dynamics) robot in action, https://youtu.be/-e1_QhJ1EhQ, just for fun.
There are two URLs there describing Atlas's computing procedures.
How far is Atlas from AGI?

Let me say this: if its knowledge base were in the cloud, all we would need from the robot itself is precise movements, solo or with things.

Alex


Fri, Apr 14, 2023 at 03:43, John F Sowa <so...@bestweb.net>:
Barry and Alex,

Alex Shkotin

Apr 14, 2023, 8:35:22 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
[image attachment omitted]

Fri, Apr 14, 2023 at 04:32, 'Joao Paulo Almeida' via ontolog-forum <ontolo...@googlegroups.com>:

Alex Shkotin

Apr 14, 2023, 8:46:10 AM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com
Dear Eric,

For ""Ontology" can discuss about "not material things ?"" look at this 
"Computational metaphysics, as we practice it, is the implementation and investigation of formal, axiomatic metaphysics (i.e., the study of metaphysics using formally represented axioms and premises to derive conclusions) in an automated or interactive reasoning environment.https://mally.stanford.edu/cm/

Alex


Fri, Apr 14, 2023 at 08:29, Eric BEAUSSART <eric.be...@orange.fr>:

Dear John and All Ontologists,
Even if discussing what "AI" will do in the next 20 or 80 years is specious and useless, as nearly everyone here says, it seems interesting to "define" more and more precisely what "Intelligence" is.
And, even if "Materialists" deny it, what lets living individuals behave differently from a mere "soup" of minerals? "Soul", "Spirit" ... in Latin, "Anima" ... Whatever the word may be (just as I often say that "Set" in English and "Ensemble" in French are really the same thing, because all mathematicians agree completely about their use; that is not true for all words, though there are also "Software" and "Logiciel", "Hardware" and "Matériel"), "Intelligence" is a "characteristic" of such "softwares" of animals!
And so I finish by asking: do you agree that "Ontology" can discuss "non-material things"? (For my part, I think we can ... but seriously, step by step, and about what, as John says, "Software Experts" together with "Psychologists" and "Neuroscientists" can manage!)

Sent: April 14, 2023 at 02:42
From: John F Sowa <so...@bestweb.net>
To: ontolo...@googlegroups.com, "ontolog...@googlegroups.com" <ontolog...@googlegroups.com>
Subject: Re: [ontolog-forum] Re: [Ontology Summit] When will we achieve Artificial General Intelligence (AGI) and when will we achieve Artificial Super Intelligence (ASI)?


John F Sowa

Apr 14, 2023, 11:11:10 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, ontolo...@googlegroups.com
Alex,

I changed the subject line to a far more important topic for discussion.

But your comment about Boston Dynamics is disappointing -- because I was impressed with their developments of robots, and your comment implies that they are hopelessly confused:  "The guys who develop AGI are not struggling with how it will communicate, but with how to endow it with other specific abilities of the human mind, such as building a "picture" of the surrounding reality and, based on it, calculating the actions to achieve the goal. So far, this is only in its infancy at Boston Dynamics ... "

The reason why they are confused:

1. Building a model or "picture" of reality and using it to guide actions is what every infant of every animal species, from fruit flies to humans, does starting from day 1. And that is also a prerequisite for any AI system that guides a society of robots.

2. All animals of any kind that live together in some kind of society have developed some ways of communicating among themselves and their neighbors about issues related to their models of reality.

3. A society of robots must also have some method of communicating among themselves to relate their models of reality and the actions to take in connection with those models.

4. Points #1 and #2 above are identical to what infants of any animals, from insects to humans, do. All of them have various ways of communicating among themselves and with other species.

5. The LLMs (Large Language Models) can be used to map any language (formal, informal, words, bits, gestures, etc.) to any other.

6. Therefore, it's trivial to develop an LLM that would enable humans to use a human language to communicate with any kind of robots that are able to communicate among themselves.

If Boston Dynamics can't understand that argument, I can see why their plans are "only in their infancy".

In summary, the term AGI is a meaningless acronym that can only cause confusion.  The sooner Boston Dynamics (or any other business that hopes to make a profit) gets rid of it, the more profitable they will be.

John


----------------------------------------------------------------

Alex Shkotin

Apr 14, 2023, 12:25:39 PM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com
John, 

I owe you and Boston Dynamics an apology: I follow their slow improvement with great admiration, but I never go into details, and my judgment of their state of affairs is completely superficial.
I'll just quote part of their blog https://www.bostondynamics.com/resources/blog/picking-momentum:
"At the heart of Atlas's controller is a technique called Model Predictive Control (MPC). Our model is a description of how the robot’s actions will affect its state, and we use that model to predict how the robot's state will evolve over a short period of time. To control the robot, we use optimization: given the robot's measured state, MPC searches over possible actions it can take now and in the near future to best achieve the set task.
If the prediction in MPC is its power, then the need for a model is its curse: a simple model will miss important details about the robot’s dynamics, but a complex model might require too much computation to run in real time. In either case, an inaccurate model will lead to incorrect predictions and actions, which for Atlas usually means falling over."
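A toy sketch of the MPC loop described in that excerpt, using a hypothetical 1-D point-mass model and random-shooting search (assumed details, nothing specific to Atlas or to Boston Dynamics' actual controller): predict how candidate action sequences evolve the state under a simple model, score them against the task, and apply only the first action of the best sequence before re-planning.

import numpy as np

DT, HORIZON, N_CANDIDATES = 0.05, 20, 256
TARGET = 1.0                              # desired position (hypothetical task)
rng = np.random.default_rng(0)

def step_model(state, force):
    """Simple model: a 1 kg point mass; state = (position, velocity)."""
    pos, vel = state
    return np.array([pos + vel * DT, vel + force * DT])

def rollout_cost(state, forces):
    """Predict how the state evolves over the horizon and score the rollout."""
    cost = 0.0
    for f in forces:
        state = step_model(state, f)
        cost += (state[0] - TARGET) ** 2 + 0.01 * f ** 2   # tracking error + effort
    return cost

state = np.array([0.0, 0.0])
for _ in range(100):
    # Search over candidate action sequences (random shooting), keep the best ...
    candidates = rng.uniform(-5.0, 5.0, size=(N_CANDIDATES, HORIZON))
    best = min(candidates, key=lambda seq: rollout_cost(state, seq))
    # ... but apply only its first action, then re-plan from the new measured state.
    state = step_model(state, best[0])

print("final position:", round(float(state[0]), 3))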

And as far as I know they do not use "AGI".


Alex

Fri, Apr 14, 2023 at 18:11, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Apr 14, 2023, 2:41:05 PM
to Eric BEAUSSART, ontolog...@googlegroups.com, ontolo...@googlegroups.com
You are welcome.

Fri, Apr 14, 2023 at 17:39, Eric BEAUSSART <eric.be...@orange.fr>:

Thanks, Alex!
I didn't know Mally, but it sounds quite useful! So, thanks again for giving me this interesting link!!!
Regards.
E. B.

Sent: April 14, 2023 at 14:45
From: Alex Shkotin <alex.s...@gmail.com>
To: ontolog...@googlegroups.com
Cc: ontolo...@googlegroups.com

John F Sowa

Apr 14, 2023, 5:53:35 PM
to ontolog...@googlegroups.com, ontolog-forum
Alex and Azamat,

I changed the subject line to "methods of learning", because all the discussion is about learning. The definition in the link below by Alex just says that an AGI system learns, and maybe someday it will learn enough to become as smart as a human.

In other words, it equates AGI with learning and with the ***HOPE*** that trial and error will eventually lead to AGI (whatever that might be). That does not say anything about what methods of learning are being used.

There have been many different methods of learning implemented over the past 60 years. Discussing and comparing those methods would be an excellent technical discussion. Adding ***AGI*** does nothing but introduce three meaningless and confusing letters.

Let's discuss technical issues rather than blind hopes (or fears) about some undefined future.

John
 


From: "Azamat Abdoullaev" <ontop...@gmail.com>

Alex, 
Behavior-based AI robots lack predictive or causal world models, relying on reactive dynamic planning and action-selection mechanisms: steering and navigating, correcting actions via sensor-motor links, making mistakes, repeating actions, and looking confused.
As with the LLMs/ChatGPT, dynamic/reactive planning methods compute just the one next action at every instant, based on the current context and pre-scripted plans.
The key features are as follows:
"Most behavior-based robots are programmed with a basic set of features to start them off. They are given a behavioral repertoire to work with dictating what behaviors to use and when, obstacle avoidance and battery charging can provide a foundation to help the robots learn and succeed. Rather than build world models, behavior-based robots simply react to their environment and problems within that environment. They draw upon internal knowledge learned from their past experiences combined with their basic behaviors to resolve problems"

Alex Shkotin

Apr 15, 2023, 5:14:54 AM
to ontolog...@googlegroups.com, ontolog-forum
John, 

Yes, "learning" and "self-learning" is one of the mantras of AGI developers. But this topic is not close to me. It is believed that if one knows how, the other will learn. And self-learning is already flowing into invention. Well, there are theorists with a lot of practice, for example, TRIZ [0]. 
I can discuss a topic like this: look this Information System can learn, in the sense "what do you mean by that?"
I'm more in favor of people finding mathematical models for a family of tasks (which we've been doing for centuries) and loading them into robots.

Mykola Rabchevskiy (AGI-dev) has a special post "HOW TO LEARN PERMANENTLY" [1]. With this at the end:
"SUMMATION
-The Achilles heel of ANN, which prevents their use as the basis of AGI, is distributed memory, fused with information processing.
-Modern ANNs can be useful as an AGI component.
-The need to use a symbolic approach to create AGI leads to the increased complexity of the AGI system, but it's worth it."
And look at this point inside: "Thus, the requirement for the possibility of permanent learning is a requirement for the system to be a symbolic system."
It looks like they need ontology!

My way is not AGI; it is creating a formal framework for every particular theory we have (theoretical knowledge) and maintaining each framework collectively, one for all. The first framework would be for the theory of undirected graphs, the second for atomic physics.
These frameworks are a way of concentrating huge amounts of knowledge: we have one framework and many textbooks for learning the particular theory, all referring to the same framework by embedding one or another definition, like this one:

[image of a definition omitted]

And we can add the line for any suitable formal language: CL, OWL2/FS...
And this framework is the place from which we collect the formal theoretical knowledge that we need to work on a particular task, and that is an ontology.
 
Alex

Sat, Apr 15, 2023 at 00:53, John F Sowa <so...@bestweb.net>:

John F Sowa

Apr 15, 2023, 12:19:47 PM
to Dagobert Soergel, ontolog...@googlegroups.com, ontolo...@googlegroups.com
Alex and Dagobert,

I agree with Dagobert.

I also agree with Alex and Mykola about permanent learning. But as Mykola said, the term AGI has no agreed definition. When I attended the original AGI meeting, their goal was more ambitious. But reality has settled in, and there is no agreement about a definition. That is an excellent reason for getting rid of that confused and confusing acronym.

If you must have a term, I recommend "Dynamic Ontology".  That is the title of a talk and paper I presented at an ontology summit conference.   See the slides (19 of them), which summarize the issues and point to an article, which has more detail and references: http://www.jfsowa.com/talks/dynonto.pdf

Some points that I emphasize:  The only universal ontology is one that has a single top level -- you can call it Entity, if you like, but the name is irrelevant.  Symbols are important, but images are even more important.  Human memory for images is enormous, and the only symbols that remain constant for any length of time are the ones that have some image attached.  For symbols that don't have concrete images,  artificial images such as diagrams are good substitutes.  

In any case, see the slides and see the article for more detail:  https://jfsowa.com/pubs/dynonto.pdf

John
 


From: "Dagobert Soergel" <d...@dsoergel.com>

That is exactly what self-driving cars do, at least point #1. To be really safe, they must also be able to communicate with other cars in the vicinity.

DS

At 4/14/2023 11:10 AM, John F Sowa wrote:
Alex,

I changed the subject line to a far more important topic for discussion.

But your comment about Boston Dynamics is disappointing -- because I was impressed with their developments of robots, and your comment implies that they are hopelessly confused:  "The guys who develop AGI are not struggling with how it will communicate, but with how to endow it with other specific abilities of the human mind, such as building a "picture" of the surrounding reality and, based on it, calculating the actions to achieve the goal. So far, this is only in its infancy at Boston Dynamics ... "

The reason why they are confused:

1. Building a model or "picture" of reality and using it to guide actions is what every infant of every animal species, from fruit flies to humans, does starting from day 1. And that is also a prerequisite for any AI system that guides a society of robots.

2. All animals of any kind that live together in some kind of society have developed some ways of communicating among themselves and their neighbors about issues related to their models of reality.

3. A society of robots must also have some method of communicating among themselves to relate their models of reality and the actions to take in connection with those models.

4. Points #1 and #2 above are identical to what infants of any animals, from insects to humans, do. All of them have various ways of communicating among themselves and with other species.

5. The LLMs (Large Language Models) can be used to map any language (formal, informal, words, bits, gestures, etc.) to any other.

6. Therefore, it's trivial to develop an LLM that would enable humans to use a human language to communicate with any kind of robots that are able to communicate among themselves.

If Boston Dynamics can't understand that argument, I can see why their plans are "only in their infancy".

In summary, the term AGI is a meaningless acronym that can only cause confusion.  The sooner Boston Dynamics (or any other business that hopes to make a profit) gets rid of it, the more profitable they will be.

John


----------------------------------------------------------------

From: "Alex Shkotin" <alex.s...@gmail.com>

John,

I just watch the progress of those who are involved in AGI, mainly the ongoing seminar in Russian (AGI in Russian), and visit it if a report is interesting. Patrick asked for a definition, and I pulled one out of my mind "as it was written".
The guys who develop AGI are not struggling with how it will communicate, but with how to endow it with other specific abilities of the human mind, such as building a "picture" of the surrounding reality and, based on it, calculating the actions to achieve the goal. So far, this is only in its infancy at Boston Dynamics ...

Kingsley Idehen

unread,
Apr 15, 2023, 3:27:52 PM
to ontolo...@googlegroups.com

All,

I’ve changed up the reasoning & inference experiment to include co-reference comprehension via owl:sameAs relations comprising DBpedia and Wikidata identifier pairings.
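
As a point of reference, the sibling questions below could be answered mechanically by a SPARQL engine that follows the owl:sameAs links. The query that follows is a minimal sketch, not part of the transcript, and it assumes the full graph also contains rel:siblingOf triples between the DBpedia resources (the Turtle excerpt below shows only the property declarations, a fragment of the ABox, and the sameAs pairings); it takes the entity denoted by http://www.wikidata.org/entity/Q43274 as its example.

## SPARQL Start ##
PREFIX rel: <http://purl.org/vocab/relationship/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>

# Siblings of the person denoted by a Wikidata identifier,
# reported with both DBpedia and Wikidata identifiers.
SELECT DISTINCT ?siblingDBpedia ?siblingWikidata
WHERE {
  # Resolve the Wikidata identifier to its DBpedia co-reference.
  ?person owl:sameAs <http://www.wikidata.org/entity/Q43274> .
  # Follow the (assumed) sibling triples in either direction.
  { ?person rel:siblingOf ?siblingDBpedia }
  UNION
  { ?siblingDBpedia rel:siblingOf ?person }
  # Map each sibling back to a Wikidata identifier, where one is asserted.
  OPTIONAL { ?siblingDBpedia owl:sameAs ?siblingWikidata }
}
## SPARQL End ##

The point of the experiment is to see whether the chat model can carry out that same co-reference step on its own, without a reasoner or query engine behind it.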

Transcript

## Turtle Start ##
## Entity Relationships
### TBox
@prefix rel: <http://purl.org/vocab/relationship/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

rel:parentOf a owl:TransitiveProperty . 
rel:siblingOf a owl:TransitiveProperty . 

### ABox 
 a <#RoyalPerson>, <#MalePerson> ;
    schema:name "Prince Peter Phillips" .

# Wikidata Co-references

<http://dbpedia.org/resource/Prince_William_of_Wales> owl:sameAs <http://www.wikidata.org/entity/Q36812> .
<http://dbpedia.org/resource/Prince_Harry_of_Wales> owl:sameAs <http://www.wikidata.org/entity/Q152316> .
<http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom> owl:sameAs <http://www.wikidata.org/entity/Q9682> .
<http://dbpedia.org/resource/Elizabeth_Bowes-Lyon> owl:sameAs <https://www.wikidata.org/wiki/Q10633> .
<http://dbpedia.org/resource/Charles,_Prince_of_Wales> owl:sameAs <http://www.wikidata.org/entity/Q43274> .
<http://dbpedia.org/resource/Anne,_Princess_Royal> owl:sameAs <http://www.wikidata.org/entity/Q151754> . 
<http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York> owl:sameAs <http://www.wikidata.org/entity/Q153330> . 
<http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex> owl:sameAs <http://www.wikidata.org/entity/Q154920> .
<http://dbpedia.org/resource/Peter_Phillips> owl:sameAs <http://www.wikidata.org/entity/Q344908> .
<http://dbpedia.org/resource/Zara_Phillips> owl:sameAs <http://www.wikidata.org/entity/Q147663> .
<http://dbpedia.org/resource/Princess_Eugenie_of_York> owl:sameAs <http://www.wikidata.org/entity/Q165709> .
<http://dbpedia.org/resource/Princess_Beatrice_of_York> owl:sameAs <http://www.wikidata.org/entity/Q165657> .
## Turtle End ##

Q: Given the Knowledge Graph above, and the nature of relations defined in the TBox section, present all siblings (denoted by their Wikidata identifiers, as presented in the ABox section of the dataset) in a markdown formatted table comprising Name and Wikidata ID and DBpedia ID columns – represented using Markdown in a code-block that includes ## Markdown Start ## and ## Markdown End ##

A:
## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Prince William | http://www.wikidata.org/entity/Q36812 | http://dbpedia.org/resource/Prince_William_of_Wales |
| Prince Harry | http://www.wikidata.org/entity/Q152316 | http://dbpedia.org/resource/Prince_Harry_of_Wales |
| Queen Elizabeth II | http://www.wikidata.org/entity/Q9682 | http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom |
| Princess Margaret | (not provided) | http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon |
| Prince Charles | http://www.wikidata.org/entity/Q43274 | http://dbpedia.org/resource/Charles,_Prince_of_Wales |
| Princess Anne | http://www.wikidata.org/entity/Q151754 | http://dbpedia.org/resource/Anne,_Princess_Royal |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex |
| Peter Phillips | http://www.wikidata.org/entity/Q344908 | http://dbpedia.org/resource/Peter_Phillips |
| Zara Phillips | http://www.wikidata.org/entity/Q147663 | http://dbpedia.org/resource/Zara_Phillips |
| Princess Eugenie | http://www.wikidata.org/entity/Q165709 | http://dbpedia.org/resource/Princess_Eugenie_of_York |
| Princess Beatrice | http://www.wikidata.org/entity/Q165657 | http://dbpedia.org/resource/Princess_Beatrice_of_York |

## Markdown End ##

Q: Repeat, but for the entity denoted by http://www.wikidata.org/entity/Q43274 .
A:
## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Queen Elizabeth II | https://www.wikidata.org/wiki/Q9682 | http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom |
| Princess Margaret | (not provided) | http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon |

## Markdown End ##

Wrong!!

Q: Sure, bearing in mind the equivalence semantics of owl:sameAs attributes?
A:
Apologies for the oversight in considering the owl:sameAs attributes. Here is the requested information formatted as a Markdown table, listing the siblings of the entity denoted by http://www.wikidata.org/entity/Q43274 (Prince Charles) with their names, Wikidata IDs, and DBpedia IDs, taking into account the owl:sameAs attributes:

## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Queen Elizabeth II | https://www.wikidata.org/wiki/Q9682 | http://dbpedia.org/resource/Elizabeth_II_of_the_United_Kingdom |
| Princess Margaret | (not provided) | http://dbpedia.org/resource/Princess_Margaret,_Countess_of_Snowdon |

## Markdown End ##

Wrong!!

Q: Sure?
A:
Apologies for my previous responses. I misunderstood your request. Here is the correct information formatted as a Markdown table, listing the siblings of the entity denoted by http://www.wikidata.org/entity/Q43274 (Prince Charles) with their names, Wikidata IDs, and DBpedia IDs, taking into account the owl:sameAs attributes:

## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Princess Anne | http://www.wikidata.org/entity/Q151754 | http://dbpedia.org/resource/Anne,_Princess_Royal |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex |

## Markdown End ##

Q: Repeat, but for the nieces of the entity denoted by http://www.wikidata.org/entity/Q154920
A:
## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Princess Beatrice | http://www.wikidata.org/entity/Q165657 | http://dbpedia.org/resource/Princess_Beatrice_of_York |
| Princess Eugenie | http://www.wikidata.org/entity/Q165709 | http://dbpedia.org/resource/Princess_Eugenie_of_York |
| Zara Phillips | http://www.wikidata.org/entity/Q147663 | http://dbpedia.org/resource/Zara_Phillips |
## Markdown End ##

Q:
Repeat, but for Aunties – adding an Aunty Of Column for Nephews or Nieces.
A:
Here is the requested information formatted as a Markdown table, listing the aunties of the entity denoted by http://www.wikidata.org/entity/Q154920 (Prince Edward) with their names, Wikidata IDs, DBpedia IDs, and the nieces or nephews they are aunties of:

## Markdown Start ##

| Name | Wikidata ID | DBpedia ID | Auntie Of (Nephews or Nieces) |
| --- | --- | --- | --- |
| Princess Anne | http://www.wikidata.org/entity/Q151754 | http://dbpedia.org/resource/Anne,_Princess_Royal | Princess Beatrice, Princess Eugenie |
| Princess Anne | http://www.wikidata.org/entity/Q151754 | http://dbpedia.org/resource/Anne,_Princess_Royal | Zara Phillips |

## Markdown End ##

** Wrong !!**

Q:
Sure?

A:
** Wrong !!**

Q:
Repeat, but listing all the royal Uncles

A:
## Markdown Start ##

| Name | Wikidata ID | DBpedia ID | Uncle Of (Nephews or Nieces) |
| --- | --- | --- | --- |
| Prince Charles | http://www.wikidata.org/entity/Q43274 | http://dbpedia.org/resource/Charles,_Prince_of_Wales | Prince William, Prince Harry |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York | Princess Beatrice, Princess Eugenie |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex | Zara Phillips, Peter Phillips |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex | Princess Beatrice, Princess Eugenie |
| Prince Charles | http://www.wikidata.org/entity/Q43274 | http://dbpedia.org/resource/Charles,_Prince_of_Wales | Zara Phillips, Peter Phillips |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York | Zara Phillips, Peter Phillips |

## Markdown End ##

** Wrong !!!**

Q: Sure?

A:
## Markdown Start ##

| Name | Wikidata ID | DBpedia ID | Uncle Of (Nephews or Nieces) |
| --- | --- | --- | --- |
| Prince Charles | http://www.wikidata.org/entity/Q43274 | http://dbpedia.org/resource/Charles,_Prince_of_Wales | Prince William, Prince Harry |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York | Prince William, Prince Harry |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex | Prince William, Prince Harry |
| Prince Charles | http://www.wikidata.org/entity/Q43274 | http://dbpedia.org/resource/Charles,_Prince_of_Wales | Zara Phillips, Peter Phillips |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York | Zara Phillips, Peter Phillips |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex | Zara Phillips, Peter Phillips |

## Markdown End ##

** Wrong !!!**

Q: Repeat, but for all Grandchildren of the entity denoted by https://www.wikidata.org/wiki/Q10633 .

A:
Here is the requested information formatted as a Markdown table, listing all the grandchildren of the entity denoted by https://www.wikidata.org/wiki/Q10633 (Elizabeth Bowes-Lyon) along with their names, Wikidata IDs, and DBpedia IDs:

## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Prince Charles | http://www.wikidata.org/entity/Q43274 | http://dbpedia.org/resource/Charles,_Prince_of_Wales |
| Princess Anne | http://www.wikidata.org/entity/Q151754 | http://dbpedia.org/resource/Anne,_Princess_Royal |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex |

## Markdown End ##

Q: Repeat, but for all the Great Grandchildren of the entity denoted by https://www.wikidata.org/wiki/Q10633 .

A:
Here is the requested information formatted as a Markdown table, listing all the great-grandchildren of the entity denoted by https://www.wikidata.org/wiki/Q10633 (Elizabeth Bowes-Lyon) along with their names, Wikidata IDs, and DBpedia IDs:

## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Prince William | http://www.wikidata.org/entity/Q36812 | http://dbpedia.org/resource/Prince_William_of_Wales |
| Prince Harry | http://www.wikidata.org/entity/Q152316 | http://dbpedia.org/resource/Prince_Harry_of_Wales |
| Peter Phillips | http://www.wikidata.org/entity/Q344908 | http://dbpedia.org/resource/Peter_Phillips |
| Zara Phillips | http://www.wikidata.org/entity/Q147663 | http://dbpedia.org/resource/Zara_Phillips |
| Princess Beatrice | http://www.wikidata.org/entity/Q165657 | http://dbpedia.org/resource/Princess_Beatrice_of_York |
| Princess Eugenie | http://www.wikidata.org/entity/Q165709 | http://dbpedia.org/resource/Princess_Eugenie_of_York |

## Markdown End ##

Q: Repeat, but for all the descendants of the entity denoted by http://www.wikidata.org/entity/Q9682 .

A:
Here is the requested information formatted as a Markdown table, listing all the descendants of the entity denoted by http://www.wikidata.org/entity/Q9682 (Queen Elizabeth II) along with their names, Wikidata IDs, and DBpedia IDs:

## Markdown Start ##

| Name | Wikidata ID | DBpedia ID |
| --- | --- | --- |
| Prince Charles | http://www.wikidata.org/entity/Q43274 | http://dbpedia.org/resource/Charles,_Prince_of_Wales |
| Princess Anne | http://www.wikidata.org/entity/Q151754 | http://dbpedia.org/resource/Anne,_Princess_Royal |
| Prince Andrew | http://www.wikidata.org/entity/Q153330 | http://dbpedia.org/resource/Prince_Andrew,_Duke_of_York |
| Prince Edward | http://www.wikidata.org/entity/Q154920 | http://dbpedia.org/resource/Prince_Edward,_Earl_of_Wessex |
| Prince William | http://www.wikidata.org/entity/Q36812 | http://dbpedia.org/resource/Prince_William_of_Wales |
| Prince Harry | http://www.wikidata.org/entity/Q152316 | http://dbpedia.org/resource/Prince_Harry_of_Wales |
| Peter Phillips | http://www.wikidata.org/entity/Q344908 | http://dbpedia.org/resource/Peter_Phillips |
| Zara Phillips | http://www.wikidata.org/entity/Q147663 | http://dbpedia.org/resource/Zara_Phillips |
| Princess Beatrice | http://www.wikidata.org/entity/Q165657 | http://dbpedia.org/resource/Princess_Beatrice_of_York |
| Princess Eugenie | http://www.wikidata.org/entity/Q165709 | http://dbpedia.org/resource/Princess_Eugenie_of_York |

## Markdown End ##

Comments

Inference couldn't accurately determine who was the uncle of whom when the query was based on Wikidata identifiers.
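
For comparison, a relation such as "uncle of" can be made explicit rather than left to the model's reading of the data. The query below is a minimal sketch, not a record of the experiment; it assumes the full graph contains rel:parentOf and rel:siblingOf triples among the DBpedia resources, together with the <#RoyalPerson>/<#MalePerson> typings of which only a fragment appears in the ABox above, and it assumes a BASE matching the source document so that the relative class IRI resolves.

## SPARQL Start ##
PREFIX rel: <http://purl.org/vocab/relationship/>
PREFIX owl: <http://www.w3.org/2002/07/owl#>

# Uncles, derived explicitly: an uncle is a male sibling of a parent.
SELECT DISTINCT ?uncle ?uncleWikidata ?nieceOrNephew
WHERE {
  ?parent rel:parentOf ?nieceOrNephew .
  # Sibling of the parent, in either direction.
  { ?uncle rel:siblingOf ?parent }
  UNION
  { ?parent rel:siblingOf ?uncle }
  # Restrict to males, using the class from the ABox fragment above
  # (assumes a BASE under which <#MalePerson> resolves correctly).
  ?uncle a <#MalePerson> .
  # Guard against reflexive sibling links.
  FILTER (?uncle != ?parent)
  # Report the uncle's Wikidata co-reference, where one is asserted.
  OPTIONAL { ?uncle owl:sameAs ?uncleWikidata }
}
## SPARQL End ##

With the owl:sameAs links materialized (or sameAs reasoning enabled in the store), the Wikidata and DBpedia identifiers become interchangeable, which is precisely the step the transcript shows the chat model getting wrong.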

Alex Shkotin

unread,
Apr 16, 2023, 5:16:51 AM
to ontolog...@googlegroups.com, ontolog-forum

Sat, Apr 15, 2023 at 12:14, Alex Shkotin <alex.s...@gmail.com>: