1 https://www.youtube.com/watch?v=aX7aAauO37U
Dear John and Alex,
Everything you say is, as usual, very good.
But there is something most people forget: the "principle of reality"!
As many others do, Piaget says that intelligence is also built with the body and real things!
Robotics is as fundamental as "algorithms" and "case learning"!
Many people try to "drown the fish" (a French way of speaking!) by saying, for example,
"An ape is almost like a human ... 95% in common ..." Sorry, we have 46 chromosomes, apes have 48!
Also "Women are the same as Men ..." Sorry, but "Y" is different from "X"!
Hmm ... and what about "sex" in robots ...
Besides, to evolve, a "system" with enough "freedom" but also "resilience" must be a combination of three entities:
(We have brains to control: 1. the body, 2. emotions, 3. conscience!)
Last but not least: what can "intelligence" be without "mind" and all the "psychic features"?
Friendly regards,
E. B.

Sent: 5 July 2023 at 15:51
From: John F Sowa <so...@bestweb.net>
To: ontolo...@googlegroups.com, "ontolog...@googlegroups.com" <ontolog...@googlegroups.com>
Cc: Arun Majumdar <ar...@permion.ai>, Steve Cook <sdc...@gmail.com>, Mary Keeler <mke...@uw.edu>
Subject: [Ontology Summit] Re: [ontolog-forum] FYI: Thought Cloning: Learning to Think while Acting by Imitating Human Thinking
On Jul 5, 2023, at 1:37 PM, John F Sowa <so...@bestweb.net> wrote:
If your stuff is so great, where is the traction?
From: Ray Martin <marsa...@gmail.com>
Date: July 5, 2023 at 6:44:12 PM EDT
To: ontolo...@googlegroups.com
Cc: ontolog...@googlegroups.com
Subject: Re: [Ontology Summit] Re: [ontolog-forum] FYI: Thought Cloning: Learning to Think while Acting by Imitating Human Thinking
Alex,

I'll start with your last question: "If you upload a pdf article in your system, is it possible to ask the system about the article content?"

Answer: Absolutely! In fact, our old VivoMind system kept track of every single statement and where it came from, in ***all*** its sources. Please read and study the last three projects (slides 47 to the end) of https://jfsowa.com/talks/cogmem.pdf . Those slides also have many references to sources other than the ones by our company.

I have no idea what those thought-cloning people will produce. Maybe they'll do something outstanding. But right now, all they have is an unfounded claim that they can discover thoughts just by looking at actions. All the spy agencies around the world would love to know how they could do that.

One reader complained that I was "tooting my own horn". But that is a very, very minor point. The main point I was emphasizing is the 60+ years of R & D that has been done in AI. And note that a very large part of AI research has been absorbed into the mainstream of computer science and everyday applications: programming languages, compiler design, database systems (relational, list based, and network based).

The version of the Semantic Web that Tim Berners-Lee proposed (in 2000) was based on the latest AI R & D. The subset of the SW that was implemented in 2005 was a tiny subset of what the AI community had developed and was using in 2000.

The LLMs have made a major contribution to the technology for translating languages, natural and artificial. I give them a huge amount of credit for that. But they cannot do reasoning.

The only dependable way to use LLMs (and GPT-like systems) is to use them as a useful subroutine. The main program has to use precise methods for evaluating and controlling what the LLMs do. That is what Wolfram is doing. And that is also what Kingsley is doing. And that is what Permion.ai is doing.

Anybody who is using GPT systems without strict methods for evaluating and controlling what they do is either (1) playing games or (2) jumping off a cliff without a hang glider or a parachute.
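[Editor's note: as a rough illustration of the "LLM as a subroutine" pattern described above, here is a minimal Python sketch. It is not from VivoMind, Permion, Wolfram, or Kingsley's systems; call_llm and the specific checks are hypothetical placeholders. The point is only that the main program, not the model, decides what output is accepted.]

```python
# Minimal sketch: the LLM proposes, the main program's precise checks dispose.
# call_llm() is a hypothetical wrapper around whatever model API is in use.

import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError

def extract_facts(document: str) -> list[dict]:
    """Use the LLM as a subroutine, then filter its output with strict checks."""
    raw = call_llm(
        "Extract subject/relation/object facts from the text below "
        "as a JSON list of objects with keys s, r, o.\n\n" + document
    )
    try:
        candidates = json.loads(raw)            # check 1: output must be well-formed JSON
    except json.JSONDecodeError:
        return []                               # reject unverifiable output outright
    facts = []
    for c in candidates:
        if not {"s", "r", "o"} <= set(c):       # check 2: required keys present
            continue
        if c["s"] not in document or c["o"] not in document:
            continue                            # check 3: grounded in the source text
        facts.append(c)
    return facts
```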
John,

Exactly. My way is to find out what "train agents to act, but before that to explain a reason for the action" means in this particular project. You understand the statement quoted in your "context"; I am interested in what it means in theirs.

It is the same with your project: if you upload a pdf article in your system, is it possible to ask the system about the article content?

Alex
Hi John,
On 7/7/23 10:48 AM, John F Sowa wrote:
The LLMs have made a major contribution to the technology for translating languages, natural and artificial. I give them a huge amount of credit for that. But they cannot do reasoning.
Yes, because language processors aren't knowledge bases [1].
Links:
[1] ChatGPT and Semantic Web Symbiosis
-- Regards, Kingsley Idehen, Founder & CEO, OpenLink Software, http://www.openlinksw.com
Hi Kingsley,
I skimmed through the article and will try to delve deeper, but I have not yet found anything about "reasoning".
For me an impressive example is here: "Let's have a quick look at the 🤗 Hosted Inference API. Main features: ... Run Classification, NER, Conversational, Summarization, Translation, Question-Answering, Embeddings Extraction tasks ... Get up to 10x inference speedup to reduce user latency ..."
About your point: "LLMs provide a foundation for powerful natural language processing based on their understanding of sentence syntax and semantics; for instance, they comprehend the underlying semantics of multiple variations of the same sentence."

For me, "understanding" is too strong a word; another term is needed. They don't understand anything, or, to put it politely: that remains to be proven :-)

Consider such a thought experiment. Someone creates a database that stores the answer to every question that could ever come to my mind. If I talk to this system, I have the feeling that it understands everything, but it just "remembers". It has a request-response mapping and nothing else.
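[Editor's note: a tiny Python sketch of that thought experiment, with illustrative table contents; it produces fluent answers from nothing but a lookup, which is the distinction Alex is drawing.]

```python
# Sketch of a pure request-response mapping: canned answers, no understanding.
# The table contents are illustrative assumptions.

ANSWERS = {
    "what is the capital of france?": "Paris.",
    "who wrote hamlet?": "William Shakespeare.",
}

def oracle(question: str) -> str:
    """Looks up a stored answer; no parsing, no inference, no model of the world."""
    return ANSWERS.get(question.strip().lower(), "I have no stored answer for that.")

print(oracle("What is the capital of France?"))  # -> "Paris."
```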
The topic of Semantic Web, LOD, and LLM integration is very interesting. You know my point: any data may be read aloud as text, as the answer to the question "What does it mean?" Let's take an example: "<#albert> fam:child <#brian>, <#carol>." from here.

Q: What does it mean?
A: Albert is a child of Brian and Carol.

So any LOD can be converted to NL sentences and uploaded to an LLM :-) Who is responsible for the pattern of conversion? The LOD author :-)
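[Editor's note: a minimal sketch of that conversion pattern in Python with rdflib. The fam: namespace IRI and the sentence template are illustrative assumptions standing in for the "pattern of conversion" that, as Alex says, the LOD author would supply.]

```python
# Sketch: verbalize RDF triples as English sentences that an LLM could ingest.
# The fam: namespace and the template below are illustrative assumptions.

from rdflib import Graph

turtle_data = """
@prefix fam: <http://example.org/family#> .
<#albert> fam:child <#brian>, <#carol> .
"""

# Template chosen by the LOD author: how fam:child should be read aloud.
TEMPLATES = {
    "http://example.org/family#child": "{s} is a child of {o}.",
}

def local_name(term) -> str:
    """Use the IRI fragment as a readable name, e.g. '...#albert' -> 'Albert'."""
    return str(term).rsplit("#", 1)[-1].capitalize()

g = Graph()
g.parse(data=turtle_data, format="turtle")

for s, p, o in g:
    template = TEMPLATES.get(str(p), "{s} {p} {o}.")
    print(template.format(s=local_name(s), p=local_name(p), o=local_name(o)))
# Prints sentences such as "Albert is a child of Brian." and
# "Albert is a child of Carol." -- plain text that can be fed to an LLM.
```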
Alex
Alex,
Understanding the linguistic structure of a sentence doesn't imply an understanding of what the sentence is about. My point is that LLM-based natural language processors understand sentence structure and semantics. That's why they perform so well at language generation and translation related tasks. A master linguist would understand the sentence structure of the sentences in a biology textbook, but none of that makes them a biologist, let alone a master biologist :)
Hi Alex,
Kingsley,
And yet ChatGPT 3.5 should not be used in precise activities. A programming activity is a precise one.
Again, my example about the linguist not being a biologist applies here. I only advocate the use of LLM-based natural language processors for sentence-comprehension-related exercises, e.g., providing a new modality for UI/UX.
I have Support & Sales Agents that have been constructed using ChatGPT for UI/UX plus fine-tuning offered by FAQ and How-To Guide Knowledge Graphs. It works!
As of today, you can fine-tune GPT along the following lines:
1. Declarative query languages, e.g., SQL and SPARQL
2. External function invocation -- a new feature released by OpenAI 3 weeks ago
3. Plugins
The Agents I described above leverage all three methods. We even
plan to release this work to the public in the next week or so.
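[Editor's note: for readers who have not seen it, a minimal sketch of item 2, "external function invocation", in the June 2023 style of the OpenAI Python library (openai 0.27.x). The get_weather function and its schema are hypothetical illustrations, not the Agents Kingsley describes; an API key must be configured separately.]

```python
# Sketch of OpenAI function calling: the model may request a function call,
# but the calling program executes it and stays in control of the result.

import json
import openai  # openai-python 0.27.x style API; openai.api_key must be set

def get_weather(city: str) -> dict:
    """Hypothetical local function the model is allowed to request."""
    return {"city": city, "forecast": "sunny", "temp_c": 24}

messages = [{"role": "user", "content": "What is the weather in Paris?"}]
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613", messages=messages, functions=functions)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # The model asked for a call; run it locally and hand back the result.
    args = json.loads(message["function_call"]["arguments"])
    result = get_weather(**args)
    messages.append(message)
    messages.append({"role": "function", "name": "get_weather",
                     "content": json.dumps(result)})
    final = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
```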
David, Alex, Kingsley, List,

I don't disagree with your recent notes, but it's important to clarify the terminology.

David> Is "artificial language" the label you (industry?) uses for the flip side of NLP?

People who use the term 'natural language' use many different terms to describe other kinds of notations that have been defined by people: logic, programming language, controlled natural language, database language, ... Each one of those terms is highly specialized. English happens to have the word 'artificial' that covers things that (a) are not natural, and (b) are designed by people. Can you think of any other common English word that would be more precise or easier to understand?

Alex> And yet ChatGPT 3.5 should not be used in precise activities. A programming activity is a precise one.

Digital computers are designed to be precise. That's their major strength. The reason why Wikipedia is so widely used is that the people who control its development emphasize accuracy and precision. They insist on citations for every claim, and they put warning flags on points that are unclear and on claims without citations. Other than playing games, what kind of applications would not require precision or accuracy?

JFS> LLMs ... cannot do reasoning.

Kingsley> Yes, because language processors aren't knowledge bases [1].

That's true. But in the good old 20th century, AI systems were expected to do reasoning of some kind. A system that cannot do reasoning may be a useful component or subroutine of an AI system, but not an independent intelligent system.

John