FYI: Thought Cloning: Learning to Think while Acting by Imitating Human Thinking


alex.shkotin

Jul 4, 2023, 2:31:55 PM
to ontolog-forum
" This repository provides the official implementation for Thought Cloning: Learning to Think while Acting by Imitating Human Thinking. Thought Cloning (TC) is a novel imitation learning framework that enhances agent capability, AI Safety, and Interpretability by training agents to think like humans. This repository implements TC on a simulated partially observable 2D gridworld domain BabyAI with a synthetic human thought dataset. Also check the introduction tweet thread.  "

Alex

John F Sowa

Jul 4, 2023, 6:10:57 PM
to ontolo...@googlegroups.com
Alex,

This thread is about the same article I sent to Ontolog Forum two days ago.   The title of that thread was "Response to Hinton by two of his colleagues: Yann LeCun and Yoshua Bengio". 
 
If you want to comment on the same article, you should continue the same thread.  For the record, the following is a copy of my previous note about that article:

LeCun says that GPT is not as smart as a dog.  I agree.  It talks a lot better than a dog, but so does C3PO -- the shiny robot in Star Wars.  But R2D2, the little guy that beeps, is much smarter.  The two of them together make a good pair.

One point I disagree with:  At the end, they talk about embedding world models into LLMs.  I believe that's Bass Ackwards.  It's much more important to put world modeling (perception and action) in charge and use LLMs for what they do best:  communication.  Just note how C3PO and R2D2 interact in Star Wars:  R2D2 leads the way.  C3PO is always confused, but he tags along when he's needed for communication.

The LLMs do one thing very well:  translate from one language, natural or artificial, to another.  But they don't think like humans -- or even like dogs.  It's irrelevant whether you get the input from sentences or from images.  The only thing that LLMs can do is to record, save, and imitate.  They cannot reason at a human level or even at a dog level.

World modeling, by the way, is a subject that AI has been working on for many years.  The usual title of that subject is "Ontology".

John

From: "alex.shkotin" <alex.s...@gmail.com>
Sent: 7/4/23 2:32 PM
To: ontolog-forum <ontolo...@googlegroups.com>
Subject: [ontolog-forum] FYI:Thought Cloning: Learning to Think while Acting by Imitating Human Thinking

" This repository provides the official implementation for Thought Cloning: Learning to Think while Acting by Imitating Human Thinking. Thought Cloning (TC) is a novel imitation learning framework that enhances agent capability, AI Safety, and Interpretability by training agents to think like humans. This repository implements TC on a simulated partially observable 2D gridworld domain BabyAI with a synthetic human thought dataset. Also check the introduction tweet thread.  "

Alex


--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.

Alex Shkotin

Jul 5, 2023, 3:49:12 AM
to ontolo...@googlegroups.com
John,

This thread is about a particular open-source project, as is usual on GitHub.
It would be great to discuss this direction in detail.
Additional materials, which I took from fb.ru, are [1] [2] [3].

Alex

1 https://www.youtube.com/watch?v=aX7aAauO37U

2 https://bdtechtalks.com/2023/07/03/ai-thought-cloning/

3 https://arxiv.org/abs/2306.00323


Wed, Jul 5, 2023 at 01:10, John F Sowa <so...@bestweb.net>:

John F Sowa

Jul 5, 2023, 9:51:50 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, Arun Majumdar, Steve Cook, Mary Keeler
Alex,

I read those references, and those people are just making the same mistakes they made with GPT. 

They admit that learning the patterns of words that people use does not enable GPT to do correct  reasoning.  Therefore, they are hoping that learning patterns of actions will lead to learning correct reasoning.  They are just repeating the same mistakes over and over again.

Major failure:  they won't admit that there is something called ***thinking*** -- sitting down and wondering what went wrong and why --  then going to your parents or your local guru or somebody who is an expert  -- and then asking the most fundamental question:  "Why?"

The question word "How?" can be answered by looking at behavior.  But the answer to a why-question involves much more analysis and discussion.  And you can't understand a discussion just by looking at the words.  You must ***understand*** the words, the language, and what that combination ***means***.  

In fact, that question word "Why?" is what pre-school children, starting around age 3, are constantly asking their parents.  But the parents sometimes get so frustrated that they just say "Because I said so!"  That is the worst possible thing to say.  It just shuts down the most critical period of learning.  And that's why some kind of early teaching & learning activity is important.

Methods of reasoning and understanding are the fundamental issues that AI has been analyzing and programming for over 60 years.  Then the NN guys threw away 60 years of fundamental research for their hot new idea.  When that failed, they didn't try to find their mistakes.   Instead, they're just repeating them.

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

Alex Shkotin

Jul 5, 2023, 12:15:28 PM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com
John,

Thank you for your opinion.

Alex

Wed, Jul 5, 2023 at 16:51, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Jul 5, 2023, 12:28:13 PM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com
Colleagues,

I do not have my own opinion about this project. I received positive remarks about it, in Russian, on fb.ru.
So far it has been an interesting project for me.
Thank you for your points. 

Alex


Wed, Jul 5, 2023 at 18:51, Eric BEAUSSART <eric.be...@orange.fr>:

Dear John and Alex,
Everything you say is, as usual, very good.
But there is something most people forget: the "Principle of reality"!
So, like many others, Piaget says that intelligence is also built with the body and real things!
Robotics is as fundamental as "Algorithms" and "case learning"!
So many people try to "drown the fish" (a French way of speaking!) by saying, for example,
"An ape is almost like a human ... 95% in common ..." "Sorry, we have 46 chromosomes, apes 48!"
Also "Women are the same as Men ..." ... "Sorry, but "Y" is different from "X"!!!
Hum ... about "Sex" in Robots ...
Besides, a "System" with enough "freedom" but also "resilience" to evolve must be a combination of three entities!:
(We have brains to control: 1) Body, 2) Emotions, 3) Conscience!)
Last but not least: What can "Intelligence" be without "Mind" and all "Psychic features"?
Friendly,
E. B.

Sent: July 5, 2023 at 15:51
From: John F Sowa <so...@bestweb.net>
To: ontolo...@googlegroups.com, "ontolog...@googlegroups.com" <ontolog...@googlegroups.com>
Cc: Arun Majumdar <ar...@permion.ai>, Steve Cook <sdc...@gmail.com>, Mary Keeler <mke...@uw.edu>
Subject: [Ontology Summit] Re: [ontolog-forum] FYI: Thought Cloning: Learning to Think while Acting by Imitating Human Thinking


John F Sowa

Jul 5, 2023, 1:37:41 PM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com
Alex,

That comment is an insult:  "Thank you for your opinion."

You have just rejected  60+  years of research & development in AI, much of which has been adopted as the foundation for database systems (both SQL and NoSQL), for all the work on the Semantic Web, and for all the work on ontologies.   Those projects usually aren't called AI because the relational DB community adopted them from AI in the 1970s.  Tim Berners-Lee adopted the AI projects of the 1990s for his proposal for the Semantic Web.  And the work on ontology and world models is still an active topic in AI.

Nobody who has been working on ontology would attempt to derive an ontology by the methods in those papers.  Why did you take them seriously?  All they have is a couple of papers and a hope of getting some clueless funders to give them $$$.

My comments are based on comparing what those people are claiming to the current research in neuroscience.  I am not a neuroscientist, but I have read enough of their writings to know that they have highly sophisticated technology and experimental techniques. They would just laugh (or cry) about those people who claim to do "thought cloning".

Please note that those people you cited are claiming that they can get to the inside of  what goes on in the brain just by looking at people from the outside.   Anybody who knows anything about neuroscience would laugh at them, if it weren't for the fact that they'll probably get and waste  $$$ on clueless research projects.

Ray Martin

Jul 5, 2023, 6:44:16 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
What is insulting is your continual horn tooting.
If your stuff is so great, where is the traction?

On Jul 5, 2023, at 1:37 PM, John F Sowa <so...@bestweb.net> wrote:



David Eddy

Jul 5, 2023, 7:00:48 PM
to 'Kingsley Idehen' via ontolog-forum
Ray -


On Jul 5, 2023, at 6:44 PM, Ray Martin <marsa...@gmail.com> wrote:

If your stuff is so great, where is the traction?

When life was simple….


Back in the day Sir Isaac Newton’s (& others) alchemy was received wisdom.  Base metals + some secret sauce ==> gold!

It took some 150+ years to progress from Newton to Mendeleev’s periodic table.  

Excellent reads: 

"Lonely Ideas," Loren Graham 
"A Well-Ordered Thing: Dmitrii Mendeleev & the Shadow of the Periodic Table," Michael D. Gordin




Today we’re 130 years into computing (arbitrarily picking the 1890 census, which used Hollerith cards, as the starting point) & as yet we have no means of actually measuring what we’re doing.

I leave it to the motivated student to connect the dots.

- David


John F Sowa

Jul 6, 2023, 12:13:43 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
Ray,

When you have a great horn, it's a good idea to toot it.  For starters, look at a sample of the results from 2000 to 2010 by our company, VivoMind LLC:  https://jfsowa.com/talks/cogmem.pdf

For three major applications, see slides 47 to the end.  (There were others that are also impressive, but for proprietary reasons, we cannot discuss them in public.)   For a description of the technology and what it can do, see slides 3 to 30.  For a discussion of various issues in AI and cognitive science, see slides 31 to 46.

Note the information extraction project in slides 46 and 47.  That was in a competition among the top dozen companies and university-corporation coalitions that had NLP systems in 2010.   VivoMind came in first with a score of 96% correct.  (We would have had 100%, but there was an error in the PDFs they gave to the competitors.  Nobody got that one.)

In any case, the closest competitor (which was a large corporation) only got 73%.  Two others were above 50%, and the other eight were below 50%.  The big company told the US Dept. of Energy (which sponsored the project) that they could not trust a rinky-dink company like VivoMind.  They insisted on another test, which they specified.  The sponsors agreed.  We beat that big company on their own test and were awarded the contract.  Unfortunately, Newt Gingrich shut down the US gov't that year.  When the gov't got back in business, many projects, including ours, were canceled.  So we won a contract for $0.00.

As for current GPT technology, they could not begin to do anything with any of the projects in those slides.  However, the LLM techniques, when under the strict control of logic-based systems, could do those projects.  That is what our current company, Permion.ai, is doing.  See the talk that Arun Majumdar and I presented on May 31st.  For the slides of my talk, see https://ontologforum.s3.amazonaws.com/General/EvaluatingGPT--JohnSowa_20230531.pdf

For the complete audio/video of both talks plus a long Q/A session, google "YouTube Sowa Majumdar"
 
John
___________________________________

Alex Shkotin

Jul 6, 2023, 4:01:23 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com
John,

It is mostly a misunderstanding. My task is very simple: to study this project in detail, since it is open source.
For example, I intend to find its training datasets, which should be very interesting material. The training datasets of another open-source project, BLOOM, are here [1] and are very interesting!
Studying projects is an engineering job.
You gave your evaluation. I wrote you "thank you."
What attracts me in this project: they train agents to act, but first to explain the reason for the action.

I asked you and Arun: what is the name (just to refer to it) and the functionality of your great closed system? You answered nothing. What a pity!
It would make a very interesting thread on our forum. For example, on May 31, 2023, Arun showed us some queries against an article and a web page loaded into your system. Is it possible to ask about the content of the paper, xor only about some text features and a summary?

We discussed AI history from time to time. Let's do it in another thread.

Alex



Wed, Jul 5, 2023 at 20:37, John F Sowa <so...@bestweb.net>:

Alex Shkotin

Jul 7, 2023, 2:08:15 PM
to ontolog...@googlegroups.com, Arun Majumdar, ontolog-forum
John,

Super! And let me write to Arun: IMHO, this is the first point to show during his demo: the proto-ontology keeps every sentence from the uploaded text. You convert text to a structure. What a pity you did not offer this service to the public.
Is it possible to just showcase: this is a scientific article, and this is a structure (CG-like) keeping the same knowledge?

As far as I know, nobody can do this for an arbitrary article.
The rumor is that to convert texts properly from a particular domain, we need a lot of adjustments to the text2structure engine.

But why a scientific article! You could participate in a competition that the Japanese are running (I hope they still are): the text of a detective novel is loaded (without the exposé page), and the system has to act as Poirot. Hype guaranteed!

And of course GPT is a component of a more sophisticated Information System.
To rephrase one famous point: all algorithms make mistakes, but some of them are useful.

Alex


Fri, Jul 7, 2023 at 17:48, John F Sowa <so...@bestweb.net>:
Alex,

I'll start with your last question:  " If you upload a pdf article in your system, is it possible to ask the system about the article content?"

Answer:  Absolutely!  In fact, our old VivoMind system kept track of every single statement and where it came from, in ***all*** its sources.  Please read and study the last three projects (slides 47 to the end) of https://jfsowa.com/talks/cogmem.pdf .  Those slides also have references to many sources other than the ones by our company.

I  have no idea what those thought-cloning people will produce.  Maybe they'll do something outstanding.  But right now, all they have is an unfounded claim that they can discover thoughts just by looking at actions.  All the spy agencies around the world would love to know how they could do that.

One reader complained that I was "tooting my own horn".    But that is a very, very minor point.  The main point I was emphasizing is the 60+ years of R & D that has been done in AI.   

And note that a very large part of AI research has been absorbed into the mainstream of computer science and everyday applications:  programming languages, compiler design, and database systems (relational, list based, and network based), among others.

The version of the Semantic Web that Tim Berners-Lee proposed (in 2000) was based on the latest AI R & D.  The subset of the SW that was implemented in 2005 was a tiny subset of what the AI community had developed and was using in 2000. 

The LLMs have made a major contribution to the technology for translating languages, natural and artificial. I give them a huge amount of credit for that.  But they cannot do reasoning. 

The only dependable way to use LLMs (and GPT-like systems) is to use them as a useful subroutine.  The main program has to use precise methods for evaluating and controlling what the LLMs do.  That is what Wolfram is doing.  And that is also what Kingsley is doing.  And that is what Permion.ai is doing.

Anybody who is using GPT systems without strict methods for evaluating and controlling what they do is either (1) playing games or (2) jumping off a cliff without a hang glider or a parachute.
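To make that concrete, here is a toy sketch of the subroutine discipline in Python; llm_propose_sql is a placeholder for any LLM call, and the table is invented for illustration.  The main program validates whatever the model proposes before executing it.

import sqlite3

def llm_propose_sql(question: str) -> str:
    # Placeholder: imagine an LLM translating the question into SQL here.
    return "SELECT name FROM employees WHERE dept = 'AI'"

def controlled_query(conn: sqlite3.Connection, question: str):
    sql = llm_propose_sql(question).strip()
    # Rule-based gate: accept only a single read-only SELECT statement.
    if not sql.upper().startswith("SELECT") or ";" in sql.rstrip(";"):
        raise ValueError("Rejected proposed SQL: " + sql)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
conn.execute("INSERT INTO employees VALUES ('Ada', 'AI')")
print(controlled_query(conn, "Who works in AI?"))  # [('Ada',)]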

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>

John,

Exactly: my way is to find out what "train agents to act, but first to explain the reason for the action" means in this particular project.
You understand the statement as quoted in your "context".  I am interested in what it means in theirs.
It is the same with your project: if you upload a PDF article to your system, is it possible to ask the system about the article's content?

Alex


Alex Shkotin

Jul 8, 2023, 5:26:43 AM
to ontolog...@googlegroups.com, ontolog-forum
Hi Kingsley,

I skimmed through the article and will try to delve deeper, but I have not yet found anything about "reasoning".
For me, an impressive example is here: "Let’s have a quick look at the 🤗 Hosted Inference API.
Main features:
...
Run Classification, NER, Conversational, Summarization, Translation, Question-Answering, Embeddings Extraction tasks
Get up to 10x inference speedup to reduce user latency
...
"

About your point:
"LLMs provide a foundation for powerful natural language processing based on their understanding of sentence syntax and semantics; for instance, they comprehend the underlying semantics of multiple variations of the same sentence."
For me, "understanding" is too strong a word; another term is needed.  They don't understand anything, or, to put it politely: it has to be proven :-)
Consider this thought experiment.  Someone creates a database that stores the answer to every question that could ever come to my mind.  If I talk to this system, I have the feeling that it understands everything, but it just "remembers".  It has a request-response mapping and nothing else.
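That thought experiment fits in a few lines of Python; nothing below reasons, it only looks answers up.

qa_store = {
    "What is the capital of France?": "Paris.",
    "Why is the sky blue?": "Rayleigh scattering of sunlight.",
}

def chat(question: str) -> str:
    # A pure request-response mapping: no understanding, only "remembering".
    return qa_store.get(question, "I have no stored answer for that.")

print(chat("What is the capital of France?"))  # feels like understanding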

The topic of Semantic Web, LOD, and LLM integration is very interesting.
You know my point: any data may be read aloud as text, as an answer to the question: What does it mean?
Let's take an example: "<#albert> fam:child <#brian>, <#carol>." from here.
Q: What does it mean?
A: Albert has the children Brian and Carol.
So any LOD can be converted to NL sentences and uploaded to an LLM :-)  Who is responsible for the conversion pattern?  The LOD author :-)
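A minimal sketch of that conversion with rdflib; the fam:child template and the base IRI are assumptions for illustration, and the verbalization pattern is indeed supplied by the data author.

from rdflib import Graph

turtle = '''
@prefix fam: <http://example.org/family#> .
<#albert> fam:child <#brian>, <#carol> .
'''

# One NL template per property, chosen by the data author.
templates = {"http://example.org/family#child": "{s} has child {o}."}

g = Graph()
g.parse(data=turtle, format="turtle", publicID="http://example.org/doc")
for s, p, o in g:
    tpl = templates.get(str(p), "{s} {p} {o}.")
    print(tpl.format(s=s.split("#")[-1], p=str(p), o=o.split("#")[-1]))
# -> "albert has child brian."  "albert has child carol."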

Alex


Fri, Jul 7, 2023 at 20:17, 'Kingsley Idehen' via ontology-summit <ontolog...@googlegroups.com>:

Hi John,

On 7/7/23 10:48 AM, John F Sowa wrote:
The LLMs have made a major contribution to the technology for translating languages, natural and artificial. I give them a huge amount of credit for that.  But they cannot do reasoning. 



Alex Shkotin

Jul 8, 2023, 12:46:43 PM
to ontolog...@googlegroups.com, ontolog-forum
Kingsley, 

And yet ChatGPT 3.5 should not be used for precise activities.  Programming is a precise activity.

Alex



Kingsley Idehen

Jul 8, 2023, 11:20:30 PM
to ontolo...@googlegroups.com


Alex,

Understanding the linguistic structure of a sentence doesn't imply an understanding of what the sentence is about.

My point is that LLM-based natural language processors understand sentence structure and semantics.  That's why they perform so well at language generation and translation-related tasks.  A master linguist would understand the structure of the sentences in a biology textbook, but none of that makes them a biologist, let alone a master biologist :)

Kingsley Idehen

Jul 8, 2023, 11:30:59 PM
to ontolo...@googlegroups.com

Hi Alex,


Again, my example about the linguist not being a biologist applies here.

I only advocate the use of LLM-based natural language processors for sentence-comprehension exercises, e.g., providing a new modality for UI/UX.

I have Support & Sales Agents that have been constructed using ChatGPT for UI/UX plus fine-tuning offered by FAQ and How-To Guide Knowledge Graphs. It works!

As of today, you can fine-tune GPT along the following lines:

1. Declarative Query Languages e.g., SQL and SPARQL
2. External function invocation -- a new feature released by OpenAI 3 weeks ago
3. Plugins

The Agents I described above leverage all three methods; a sketch of the second one follows below.  We even plan to release this work to the public in the next week or so.
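For illustration, a minimal sketch of the second method, using the function-calling parameters OpenAI added to the ChatCompletion API in June 2023; the lookup_faq function and its schema are invented for this example, not the actual agents described above.

import json
import openai

openai.api_key = "sk-..."  # placeholder

functions = [{
    "name": "lookup_faq",
    "description": "Look up an answer in the FAQ knowledge graph",
    "parameters": {
        "type": "object",
        "properties": {"question": {"type": "string"}},
        "required": ["question"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "How do I reset my password?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call the function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    print("Model requested lookup_faq with:", args)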

Alex Shkotin

Jul 9, 2023, 3:46:09 AM
to ontolog...@googlegroups.com, ontolog-forum
John,

You: "Other than playing games, what kind of applications would not require precision or accuracy?"
I am not talking about applications, but about activities, like natural-number multiplication, code writing, or NL2data and data2NL processing.
But the point was that people mostly ask questions that have been asked before, and for which we already know an answer. In this case an LLM is just a huge QA knowledge base that is able to chat.
It works for programming also, but in the sense of "we have programmed this function before."

Alex


Sun, Jul 9, 2023 at 00:15, John F Sowa <so...@bestweb.net>:
David, Alex, Kingsley, List,

I don't disagree with your recent notes, but it's important to clarify the terminology.

David> Is “artificial language” the label you (industry?) uses for the flip side of NLP? 

People who use the term 'natural language' use many different terms to describe other kinds of notations that have been defined by people:  logic, programming language, controlled natural language, database language. ...  Each one of those terms is highly specialized.

English happens to have the word 'artificial' that  covers things that (a) are not natural, and (b) are designed by people.  Can you think of any other common English word that would be more precise or easier to understand?

Alex> And yet chatGPT3.5 should not be used in precise activities. A programming activity is a precise one.

Digital computers are designed to be precise.  That's their major strength.  The reason why Wikipedia is so widely used is that the people who control the development emphasize accuracy and precision.  They insist on citations for every claim, and they put warning flags on points that are unclear and claims without citations.   Other than playing games, what kind of applications would not require precision or accuracy?

JFS> LLMs ... cannot do reasoning.
Kingsley> Yes, because language processors aren’t knowledgebases [1].

That's true.  But in the good old 20th century, AI systems were expected to do reasoning of some kind.  A system that cannot do reasoning may be a useful component or subroutine of an AI system, but not an independent intelligent system.

John


Alex Shkotin

Jul 9, 2023, 4:07:55 AM
to ontolo...@googlegroups.com
Kingsley,

OK, it is not very important that you use the term "understand" for LLMs. But for me, and I hope for JFS and others, to understand means to be able to explain :-)
And I am with you about syntax and semantics. In the simple case of a CNL like ACE, we can build the syntactic structure of the text on its own, sentence by sentence.
And then, in phase II, we create a structure partially isomorphic to a part of reality, and this structure may be far from the input text.
This is why I asked JFS and Arun to give us an example with a web page as the input text and the proto-ontology as the structure created. Arun demonstrated this in action on May 31, 2023.

Alex

Sun, Jul 9, 2023 at 06:20, 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>:

Alex Shkotin

Jul 9, 2023, 5:47:45 AM
to ontolo...@googlegroups.com
Hi Kingsley,

Very interesting. When it opens to the public, please let us know.
Just my opinion: we don't know about ChatGPT's training datasets. If you train your GPT yourself, you can at least check that it remembers what it was trained on :-)
Everything beyond the training datasets is under suspicion of hallucination :-)

Alex

Sun, Jul 9, 2023 at 06:30, 'Kingsley Idehen' via ontolog-forum <ontolo...@googlegroups.com>: