
ChatG)T is BS


John F Sowa

Jul 5, 2024, 6:27:33 PM
to ontolo...@googlegroups.com, CG
I received the following reply in an offline note:

Anonymous:  ChatGPT is BS.  It says what is most likely to come next in our use of language without regard to its truth or falsity. That seems to me to be its primary threat to us. It can BS so much better than we can, more precisely and more effectively using statistics with a massive amount of "test data," than we can ever do with our intuition regarding a relatively meager amount of learning. 

That is partly true.  LLMs generate text by using probabilities derived from a massive amount of miscellaneous texts of every kind: books, articles, notes, messages, etc.  They have access to a massive amount of true information -- more than any human could learn in a thousand years.  But they also contain a massive amount of false, misleading, or simply irrelevant data.
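As a toy illustration of that generation process, here is a minimal sketch in Python (a bigram model over a made-up corpus -- not how any production LLM is actually built, but the same principle of choosing a likely next word from observed text):

import random
from collections import defaultdict

# Tiny stand-in for the "massive amount of miscellaneous texts"
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which (a bigram table)
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate text by repeatedly sampling a likely next word
word, output = "the", ["the"]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:                  # no continuation seen in the corpus
        break
    word = random.choice(candidates)    # sampled in proportion to observed frequency
    output.append(word)

print(" ".join(output))                 # fluent-looking text, with no notion of truth or falsity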

Even worse, they have no methods for determining what is true, false, or irrelevant.  Furthermore, they don't keep track of where the data comes from.  That means they can't use information about the source(s) as a basis for determining reliability.

As I have said repeatedly, whatever LLMs generate is a hypothesis -- I would call it a guess, but the term BS is just as good.  Hypotheses (guesses or BS) can be valuable as starting points for new ways of thinking.  But they need to be tested and evaluated before they can be trusted.

The idea that LLM-based methods can become more intelligent by using massive amounts of computation is false.  They can generate more kinds of BS, but at an enormous cost in hardware and in the electricity to run that massive hardware.  And without methods of evaluation, it becomes less and less likely that random mixtures of data are true, useful, or worth the cost of generating them.

Conclusion:  Without testing and evaluation, the massive amounts of computer hardware and the electricity to run them are a massive waste of money and resources.

John
 

John F Sowa

Jul 5, 2024, 6:43:30 PM
to ontolo...@googlegroups.com, EG, CG, Peirce List
I sent the previous message to both Ontolog Forum and CG list.

I received a copy from both, but CG list inserted a warning in the Subject line.  They probably keep a list of dubious words.

My mistake:  When I typed the P in GPT, my finger went too high, and I hit ")".  Sorry.

John
 


From: "John F Sowa" <so...@bestweb.net>
Sent: 7/5/24 6:28 PM
To: "ontolo...@googlegroups.com" <ontolo...@googlegroups.com>, "CG" <c...@lists.iccs-conference.org>
Subject: [CG] {Suspected Spam?} ChatG)T is BS



James Davenport

Jul 9, 2024, 10:12:56 PM
to ontolo...@googlegroups.com, CG

There’s a good article today in the Financial Times, showing that, while ChatGPT can solve well-known puzzles (Monty Hall etc.), that’s because it has seen the solution, and it can’t even solve alpha-converted variants. The conclusion is good.

 

A computer that is capable of seeming so right yet being so wrong is a risky tool to use. It’s as though we were relying on a spreadsheet for our analysis (hazardous enough already) and the spreadsheet would occasionally and sporadically forget how multiplication worked.

 

Not for the first time, we learn that large language models can be phenomenal bullshit engines. The difficulty here is that the bullshit is so terribly plausible. We have seen falsehoods before, and errors, and goodness knows we have seen fluent bluffers. But this? This is something new.

 

https://www.ft.com/content/7cb55561-8315-487a-a904-d5a92f37551d?desktop=true&segmentId=7c8f09b9-9b61-4fbb-9430-9208a9e233c8#myft:notification:daily-email:content

 

 

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of John F Sowa
Sent: Friday, July 5, 2024 11:27 PM
To: ontolo...@googlegroups.com; CG <c...@lists.iccs-conference.org>
Subject: [ontolog-forum] ChatG)T is BS

 


John F Sowa

Jul 10, 2024, 1:39:25 PM
to ontolo...@googlegroups.com, CG
James Davenport found an article that shows how simple-minded ChatGPT happens to be.  If it can find an appropriate reasoning method in its immense volume of stored data, it can seem to be a genius.  But if the problem requires a simple transformation of that reasoning method, it can be very stupid or horribly wrong.

Observation:  There are three safe and dependable ways of using LLMs:

1. Translate languages (including computer notations) to and from equivalent forms in other languages.  As we have seen,  Wolfram, Kingsley Idehen, and others have successfully used LLMs to provide English-like front ends to their systems.

2. Use LLMs with a relatively small corpus of closely related data, such as user manuals for some equipment or the complete corpus of a single author to support Q/A sessions about what that author said or wrote.

3. Use LLMs with a larger amount of data about a fairly large field to generate hypotheses (guesses) about topics in that field, and then use the 70+ years of work in AI and Computer Science to test, evaluate, and correct whatever the LLMs generate.
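A minimal sketch of the third pattern, with the LLM call replaced by a hypothetical stub (no real API or model is assumed); the point is that the guess is accepted only after an exact, deterministic check:

import math

def llm_guess_square_root(n: int) -> float:
    # Hypothetical stand-in for an LLM's answer to "what is the square root of n?"
    # A real system would call a language model here.
    return round(math.sqrt(n)) + 0.4    # deliberately imperfect guess

def verified(n: int, guess: float, tol: float = 1e-9) -> bool:
    # Deterministic evaluation step: test the hypothesis, don't trust it
    return abs(guess * guess - n) < tol

n = 1024
guess = llm_guess_square_root(n)
if verified(n, guess):
    print(f"accepted: sqrt({n}) = {guess}")
else:
    # Reject the guess and fall back to a reliable method
    print(f"rejected LLM guess {guess}; computed sqrt({n}) = {math.sqrt(n)}")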

All three of these methods can be run on a good laptop computer with a disk drive that holds the data (a couple of terabytes would be sufficient).  The laptop could be extended to a larger system to support the workload of a large corporation.  But the monstrous computational systems used by Google, OpenGPT, and others are an irresponsible waste of hardware, electricity, water, and other resources.

The European Union is already putting restrictions on companies that are trying to emulate Google, OpenGPT, and other wasteful systems.  And by the way, there are hints coming from Google employees who are becoming disillusioned about the value of processing more and bigger volumes of data.

When a system cannot do simple reasoning and generalization, it can never be truly intelligent.  Adding more power to a stupid system generates larger volumes of stupidity.

John
 


From: "James Davenport' via ontolog-forum" <ontolo...@googlegroups.com>
Sent: 7/9/24 10:13 PM

There’s a good article today in the Financial Times, showing that, while ChatGPT can solve well-known puzzles (Monty Hall etc.), that’s because it has seen the solution, and it can’t even solve alpha-converted variants. The conclusion is good.

A computer that is capable of seeming so right yet being so wrong is a risky tool to use. It’s as though we were relying on a spreadsheet for our analysis (hazardous enough already) and the spreadsheet would occasionally and sporadically forget how multiplication worked.

Not for the first time, we learn that large language models can be phenomenal bullshit engines. The difficulty here is that the bullshit is so terribly plausible. We have seen falsehoods before, and errors, and goodness knows we have seen fluent bluffers. But this? This is something new.

 https://www.ft.com/content/7cb55561-8315-487a-a904-d5a92f37551d?desktop=true&segmentId=7c8f09b9-9b61-4fbb-9430-9208a9e233c8#myft:notification:daily-email:content

_________________

Jack Park

Jul 10, 2024, 1:43:22 PM
to ontolo...@googlegroups.com, CG


Pushkar Sawant

Jul 11, 2024, 11:03:19 AM
to ontolo...@googlegroups.com, CG



Nadin, Mihai

Jul 11, 2024, 12:35:21 PM
to ontolo...@googlegroups.com
It is syntactic. I made this claim repeatedly. It is a mimicking machine of high performance (brute computation).
Mihai Nadin
Sent from planet earth

On Jul 11, 2024, at 8:03 AM, Pushkar Sawant <pushkar...@gmail.com> wrote:


<43EDA6F7-E6DC-4530-AF74-CCBFBFA880E9.png>

Pushkar Sawant

Jul 11, 2024, 1:37:31 PM
to ontolo...@googlegroups.com
Sorry Dr. Nadin - I visited this group after a long time and completely missed yours and others’ posts in between.

Pushkar.

te...@earthlink.net

Jul 11, 2024, 2:49:40 PM
to ontolo...@googlegroups.com

Truth is contextual – the truth is in the context.  The next probable word is contextless; not only that, it mixes and mashes up disparate, disregarded contexts with aplomb, a Waring blender of truths and not.

 

That’s kind of addressed in the ‘context’ below…

 

Bobbin Teegarden

CTO/Chief Architect, OntoAge

bob...@ontoage.com

206.979.0196 cell

425.378.0131 home/fax

Bellevue, WA

John F Sowa

Jul 11, 2024, 3:12:59 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG
That is certainly true.   The people who designed and developed GPT and related LLM-based systems admit that fact:

Mihai Nadin:  [ChatGPT] is syntactic. I made this claim repeatedly. It is a mimicking machine of high performance (brute computation).

But the proponents of Generative AI  confuse the issues with a large cloud of highly technical terminology (AKA human generated BS).   They claim that if they increase the amount of data to some immense amount, they will have covered all the options so that the probability of finding a correct answer will auto-magically converge to 0.9999999....

They have persuaded Elon Musk and other gullible investors that by pouring more billions and even trillions of $$$ into building ultra-massive computer systems, they will magically become ultra-intelligent.

Unfortunately, the WWW has huge amounts of false, fraudulent, mistaken, misleading, social-media, espionage, counter-espionage, dangerous, and disastrous data.  Detecting and deleting all that garbage is extremely difficult.  People have tried to use LLM-based technology to find, evaluate, and erase such data -- and they have failed, miserably.

As I have repeatedly said, anything LLMs generate is a hypothesis (AKA abduction or guess).  Before any abduction can be accepted, it must be evaluated by deduction (AKA reliable reasoning methods).  There are almost 80 years of reliable methods developed by AI and computer science.  They are essential for reliable computation.

All commercial computing systems that require high reliability (banking, engineering, scientific research, aeronautics, space exploration, etc.) require extremely high precision.  They also use statistical methods for many purposes, but they use statistics with precise error bounds.
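For illustration, a small sketch of what "statistics with precise error bounds" means in practice: a 95% confidence interval for a mean, using the normal approximation (the sample numbers are made up):

from statistics import mean, stdev
from math import sqrt

measurements = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1]   # hypothetical readings

m = mean(measurements)
se = stdev(measurements) / sqrt(len(measurements))   # standard error of the mean
z = 1.96                                             # 95% two-sided normal quantile

print(f"estimate = {m:.3f} +/- {z * se:.3f} (95% confidence)")
# The estimate carries an explicit, quantified error bound --
# an LLM's answer comes with no such guarantee.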

Those high precision methods control the world economy and support human life.   None of those computations can be replaced by LLM-based methods.  Many of them can benefit from LLM-based computations -- but ONLY IF those computations are EVALUATED by traditional deductive methods.
 
John


From: "Nadin, Mihai" <na...@utdallas.edu>

Jack Park

Jul 11, 2024, 4:11:11 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG


John F Sowa

Jul 11, 2024, 5:12:43 PM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com, CG
For all practical purposes,  Gödel's theorem is irrelevant.  It shows that certain very complex propositions stated in first-order logic are undecidable.  However, the only people who can state an undecidable proposition are professional logicians who have studied  Gödel's proof.

 Gödel stated that theorem about 50 years after Frege and Peirce specified FOL.   In those 50 years, no logician had ever written or encountered an undecidable proposition.

For the Cyc system, which uses a superset of FOL, Doug Lenat said that in over a thousand person-years of development, nobody who wrote knowledge representations in Cyc had ever written an undecidable statement.

The designers of OWL made a terrible mistake in restricting its expressive power to avoid undecidable propositions.  That made OWL more complex and more difficult to learn and use.  The people who made that mistake were professional logicians.  I have a high regard for their theoretical knowledge, but very little regard for their practical knowledge.

John
__________________________________________
 
From: "Wartik, Steven P "Steve"" <swa...@ida.org>
Sent: 7/11/24 4:30 PM

Thanks for posting that. It makes me angry my licentious youth didn’t lead to any divine revelations. 

I’m surprised the author didn’t mention Kurt Gödel. He provides the proof that a perfect logic machine isn’t possible, right?

John F Sowa

Jul 13, 2024, 1:11:29 PM
to Lyle Anderson, ontolo...@googlegroups.com, CG
Lyle,

Many readers of this list have been using  versions of ChatGPT and related systems extensively.   The most critical articles about ChatGPT and related systems that have been posted are not by people on this list.  They come from publications by developers and users who have worked with LLMs and related tools since the early days.

As for machine translation, that is the original application of LLMs by Google.  Everybody agrees that those are the best available machine translations.  They are not perfect, but they are a good starting point for anybody who needs to do detailed checking and corrections for high precision.

When you read a critical article cited by anyone on this list, please check the source.  The most critical ones are not by subscribers to this list.  They're by developers who are frustrated with the limitations of LLMs.

John
 


From: "Lyle Anderson" <lyle...@cox.net>

I find it very amusing that you folks are discussing articles about ChatGPT-4 but not actually using it and discussing his relative strengths and weaknesses. I have been using him extensively to translate and discuss scientific German papers. I ran the translation by a native German speaker who also knows the subject matter and he said Chat did really well. We recently worked through the calculation of the age of the universe when the Hubble expansion rate was 0.866c. It took some guidance on my part but he finally got an answer I could verify as correct. 
I found I could eliminate wokeness by instructing Chat to ignore any conclusion based on ex falso quodlibet, i.e., False implies True is True.
Best regards,
Lyle Anderson 

John F Sowa

Jul 14, 2024, 2:40:56 PM
to ontolog-forum, ontolog...@googlegroups.com, CG
Peter,

Thanks for that link.  That cartoon is a precise characterization of how LLMs process data.  It was drawn in the 1990s when linear algebra usually meant something computed with matrices.  LLMs go one step farther by using tensors, but the results are in the same ballpark (or sewer).


Fundamental principle:  any machine learning system must be used with a system for evaluating or checking the answers.  For simple factual questions, a database can be used.  For more complex questions, logical deduction is necessary.  For any kind of system, ontology can detect obvious hallucinations, but ontology by itself is insufficient to detect incorrect details that happen to be in the correct category.
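A minimal sketch of the simplest case, checking a factual claim from an LLM against a database before accepting it (the table and the claims are hypothetical):

import sqlite3

# Hypothetical reference database of verified facts
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE capitals (country TEXT PRIMARY KEY, capital TEXT)")
db.executemany("INSERT INTO capitals VALUES (?, ?)",
               [("France", "Paris"), ("Japan", "Tokyo"), ("Canada", "Ottawa")])

def check_claim(country: str, llm_answer: str) -> bool:
    # Accept the LLM's answer only if it matches the curated database
    row = db.execute("SELECT capital FROM capitals WHERE country = ?",
                     (country,)).fetchone()
    return row is not None and row[0] == llm_answer

print(check_claim("France", "Paris"))   # True  -- accepted
print(check_claim("France", "Lyon"))    # False -- flagged as a hallucination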

John
_____________________________________________
 
From: "Peter Ludemann" <peter.l...@gmail.com>
Sent: 7/14/24 12:26 PM



Alex Shkotin

Jul 15, 2024, 4:46:44 AM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG

John,


Sure! Theoretical knowledge can be checked only by a system that handles theoretical knowledge.

Meanwhile, there is an interesting LMM++ advancement here: https://arxiv.org/abs/2407.04903?fbclid=IwZXh0bgNhZW0CMTEAAR0oph0y

"The rapid advancement of Large Language Models (LLMs) and Large Multimodal Models (LMMs) has heightened the demand for AI-based scientific assistants capable of understanding scientific articles and figures. Despite progress, there remains a significant gap in evaluating models' comprehension of professional, graduate-level, and even PhD-level scientific content. Current datasets and benchmarks primarily focus on relatively simple scientific tasks and figures, lacking comprehensive assessments across diverse advanced scientific disciplines. To bridge this gap, we collected a multimodal, multidisciplinary dataset from open-access scientific articles published in Nature Communications journals. This dataset spans 72 scientific disciplines, ensuring both diversity and quality. We created benchmarks with various tasks and settings to comprehensively evaluate LMMs' capabilities in understanding scientific figures and content. Our evaluation revealed that these tasks are highly challenging: many open-source models struggled significantly, and even GPT-4V and GPT-4o faced difficulties. We also explored using our dataset as training resources by constructing visual instruction-following data, enabling the 7B LLaVA model to achieve performance comparable to GPT-4V/o on our benchmark. Additionally, we investigated the use of our interleaved article texts and figure images for pre-training LMMs, resulting in improvements on the material generation task. The source dataset, including articles, figures, constructed benchmarks, and visual instruction-following data, is open-sourced."

They don't lose hope.


Alex




Sun, Jul 14, 2024 at 21:40, John F Sowa <so...@bestweb.net>:

John F Sowa

Jul 15, 2024, 3:07:40 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG
Alex,

The article you cited is interesting, and I recommend it as a useful technique for limiting the possible hallucinations of LLMs.  But the cartoon I copy below is just as applicable to this article as it is to any other application of LLMs.  There is no breakthrough here.

Furthermore, the authors' claim that their system understands or comprehends anything is absurd.  What their system generates is "a large pile of linear algebra" that does not differ from the pile in the cartoon in any essential way.   The only reason why their system performs better than the huge pile generated by OpenGPT is that it's restricted to peer-reviewed scientific articles.  The main reason why the results are fairly good is that different scientific disciplines use very different terminology.  Therefore, the texts from different articles do not mix or interfere or pollute one another.

Please note what the authors have done:  They created a collection of LLMs from a collection of published scientific articles from multiple disciplines and used LLMs to represent both English text and diagrams in those texts.   

Mixing the two different kinds of syntax is a useful enhancement, but there is nothing new in the underlying technology.  You can get the same or better enhancement by mixing data in three very different linear syntaxes: English, SQL, and OWL.  It's useful to use the same spelling for the same concepts.  But if there are enough examples, the LLMs are able to detect the similarities and do the equivalent translations.

The information in the diagrams is expressed in the same words as the English text, but the two-dimensional syntax of the diagrams represents a second language.  There is nothing new there, since LLMs can relate languages with different syntax.  For any language processor, the difference between a linear string and a 2-D diagram is trivial.  When you send the diagram to another system, the 2-D syntax is mapped to a 1-D syntax that uses the same kinds of syntactic markers that an English description of a 2-D diagram would use.

Fundamental principle:  For the LLMs, it's irrelevant whether the source is a linear language or a system of diagrams that were mapped to a linear string.  The result is a pile of linear algebra.  See the cartoon.
 
John 

From: "Alex Shkotin" <alex.s...@gmail.com>

John,


Sure! Theoretical knowledge can be checked only by a system that handles theoretical knowledge.

Meanwhile, there is an interesting LMM++ advancement here: https://arxiv.org/abs/2407.04903?fbclid=IwZXh0bgNhZW0CMTEAAR0oph0y


They don't lose hope.   [JFS:  More precisely, they have no hope.  They are just confusing the issues.]


Alex



Kingsley Idehen

Jul 15, 2024, 5:06:01 PM
to ontolo...@googlegroups.com

Hi John and others,

On 7/11/24 3:12 PM, John F Sowa wrote:
That is certainly true.   The people who designed and developed GPT and related LLM-based systems admit that fact:

Mihai Nadin:  [ChatGPT] is syntactic. I made this claim repeatedly. It is a mimicking machine of high performance (brute computation).

But the proponents of Generative AI  confuse the issues with a large cloud of highly technical terminology (AKA human generated BS).   They claim that if they increase the amount of data to some immense amount, they will have covered all the options so that the probability of finding a correct answer will auto-magically converge to 0.9999999....

They have persuaded Elon Musk and other gullible investors that by pouring more billions and even trillions of $$$ into building ultra-massive computer systems, they will magically become ultra-intelligent.

Unfortunately, the WWW has huge amounts of false, fraudulent, mistaken, misleading, social-media, espionage, counter-espionage, dangerous, and disastrous data.  Detecting and deleting all that garbage is extremely difficult.  People have tried to use LLM-based technology to find, evaluate, and erase such data -- and they have failed, miserably.

As I have repeatedly said, anything LLMs generate is a hypothesis (AKA abduction or guess).  Before any abduction can be accepted, it must be evaluated by deduction (AKA reliable reasoning methods).  There are almost 80 years of reliable methods developed by AI and computer science.  They are essential for reliable computation.

All commercial computing systems that require high reliability (banking, engineering, scientific research, aeronautics, space exploration, etc.) require extremely high precision.  They also use statistical methods for many purposes, but they use statistics with precise error bounds.

Those high precision methods control the world economy and support human life.   None of those computations can be replaced by LLM-based methods.  Many of them can benefit from LLM-based computations -- but ONLY IF those computations are EVALUATED by traditional deductive methods.
 
John


Yes!

Here’s a stark reminder of the problem with LLMs when used beyond their natural language processing box: They aren’t sure about their responses. If you pose the question “sure?” after every response, you end up in an endless loop!

Watch: https://linkeddata.uriburner.com/chat/?chat_id=s-8M3HARk2FcNH2inGeGw7nndqTe2Wh876aHZutUAW1MC8&t=120#usr-rxpbsjd5mpd

Or just go to the following page and scroll manually: https://linkeddata.uriburner.com/chat/?chat_id=s-8M3HARk2FcNH2inGeGw7nndqTe2Wh876aHZutUAW1MC8#usr-rxpbsjd5mpd

Imagine how much computing is being gobbled up by these interaction cycles, let alone what subscribers are paying for such uncertainty.

As you’ve repeatedly outlined, LLMs are a piece of the puzzle that need to operate within a controlled system; they cannot be the entire system.

-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
Weblogs (Blogs):
Company Blog: https://medium.com/openlink-software-blog
Virtuoso Blog: https://medium.com/virtuoso-blog
Data Access Drivers Blog: https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers

Personal Weblogs (Blogs):
Medium Blog: https://medium.com/@kidehen
Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
              http://kidehen.blogspot.com

Profile Pages:
Pinterest: https://www.pinterest.com/kidehen/
Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
Twitter: https://twitter.com/kidehen
Google+: https://plus.google.com/+KingsleyIdehen/about
LinkedIn: http://www.linkedin.com/in/kidehen

Web Identities (WebID):
Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
        : http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this

poole

Jul 15, 2024, 11:23:23 PM
to 'James Davenport' via ontolog-forum, ontolog...@googlegroups.com, CG
Goedel's theorem does not "show that certain very complex propositions stated in first-order logic are undecidable".

The proposition is "this statement cannot be proved".
If it is true, the logic is incomplete. If it is false, the logic is unsound.
(It doesn't look very complex to me. I doubt that "no logician had ever written or encountered" this proposition, as other similar "paradoxes" were common.)

The only way to get around Goedel's theorem is to make the logic too weak to state this. What Goedel proved was that any logic that can represent arithmetic can represent this. His proof was complicated because he had to invent programming and theorem proving. Now the proof should be straightforward, since we can assume computers and theorem provers exist, and a computer is just a big arithmetic operation (the memory of a computer is just a large integer).
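In standard textbook notation, the construction is: for a consistent, effectively axiomatized theory $T$ that can represent arithmetic, the diagonal lemma yields a sentence $G_T$ with

  $T \vdash \; G_T \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)$

from which $T \nvdash G_T$, and (assuming omega-consistency, or by Rosser's refinement) $T \nvdash \neg G_T$.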

I agree with the comment on OWL. Restricting a logic to be decidable does not make it efficient; it just means you can state less. There are things you just cannot state.

David

On Jul 11, 2024, at 2:12 PM, John F Sowa <so...@bestweb.net> wrote:


Alex Shkotin

Jul 16, 2024, 3:52:34 AM
to ontolog...@googlegroups.com, ontolo...@googlegroups.com, CG

John,


For enthusiasts who are studying LMM++ systems, it is important that "The source dataset, including articles, figures, constructed benchmarks, and visual instruction-following data, is open-sourced."

This is a crucial difference from the Hyper-Huge-Hidden LM we know.


Alex



Mon, Jul 15, 2024 at 22:07, John F Sowa <so...@bestweb.net>:

Nadin, Mihai

Jul 16, 2024, 1:16:32 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG

 

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> On Behalf Of poole
Sent: Monday, July 15, 2024 10:23 PM
To: 'James Davenport' via ontolog-forum <ontolo...@googlegroups.com>
Cc: ontolog...@googlegroups.com; CG <c...@lists.iccs-conference.org>
Subject: Re: [ontolog-forum] ChatG)T is BS

 

Goedel’s theorem does not “show that certain very complex propositions stated in first-order logic are undecidable”.

Alex Shkotin

Jul 16, 2024, 1:30:24 PM
to ontolog-forum, CG
Hello Eric!

Let me take your question in the form "How can 'Intelligence', artificial or natural, come without 'Psychic' and 'Body control' systems?"
And ask it to Claude 3.5 Sonnet - my favorite Huge LM.
The C3S answer is this:
"This is an interesting and complex question that touches on fundamental issues in cognitive science, philosophy of mind, and artificial intelligence. Let's break it down:

1. Intelligence without "Psychic" systems:
If by "psychic" we mean consciousness or subjective experience, this relates to the philosophical problem of consciousness and the "hard problem" posed by David Chalmers. Many AI researchers and philosophers argue that functional intelligence can exist without consciousness. Current AI systems demonstrate problem-solving, learning, and decision-making capabilities without any clear evidence of conscious experience.

2. Intelligence without "Body control" systems:
This touches on debates about embodied cognition - the idea that intelligence is fundamentally shaped by the body and its interactions with the environment. While embodied cognition has gained traction, there are differing views on how essential physical embodiment is for intelligence:

   a) Some argue true intelligence requires a body to interact with the world.
   b) Others contend that simulated or virtual environments can provide sufficient grounding for intelligence.
   c) Abstract reasoning and problem-solving may not necessarily require direct physical interaction.

Current AI systems demonstrate forms of intelligence without direct body control, though they often rely on data derived from physical world interactions.

It's worth noting that natural intelligence (in humans and animals) did evolve in conjunction with sensory and motor systems. However, the question of whether artificial intelligence can exist independently of such systems remains open and debated.

This is a complex topic with no definitive consensus. Would you like me to elaborate on any specific aspect?"

Alex


Tue, Jul 16, 2024 at 18:24, 'Eric BEAUSSART' via ontology-summit <ontolog...@googlegroups.com>:

Hello All !
"Garbege un, garbage out !" ... a long ago this was told !
Anyway, <How "Intelligence", artificial or natural  can come without ""Psychic" and "Body control" sytems" (even if the "body" is "Data center" !) ! ???>.  "Conscience" can only come with "internal "mirrors"" sytems ! And real "Learning" in living bodies can only come other individuals of the same specie (or close enough !) (especially parents of course !) !!!
Even "rats" (maybe "fishes also !), without "social" interactions develop psychic anomalies !
Regards.
E. B.

Sent: July 14, 2024 at 20:40
From: John F Sowa <so...@bestweb.net>
To: ontolog-forum <ontolo...@googlegroups.com>, "ontolog...@googlegroups.com" <ontolog...@googlegroups.com>
Cc: CG <c...@lists.iccs-conference.org>
Subject: [Ontology Summit] Machine learning (was ChatGPT is ...


Alex Shkotin

Jul 16, 2024, 1:43:37 PM
to ontolog-forum, CG
And just one more question to C3S.
Q

Consider this proposition: "Conscience can only come with internal mirrors systems." What does "internal mirrors" mean here?

A

The proposition "Conscience can only come with internal mirrors systems" is intriguing and touches on some complex ideas in cognitive science and neuroscience. Let's explore what "internal mirrors" might mean in this context:


1. Mirror Neurons: 

This could be referring to mirror neurons, which are a type of brain cell that responds both when an individual performs an action and when they observe someone else performing the same action. These neurons are thought to play a crucial role in empathy, social cognition, and possibly the development of self-awareness.


2. Self-reflection:

"Internal mirrors" might metaphorically represent the ability to reflect on one's own thoughts, actions, and mental states. This self-reflective capacity is often considered a key component of consciousness and conscience.


3. Internal Models:

In cognitive science, "internal models" refer to mental representations of the world and oneself. These models allow for prediction, simulation, and understanding of actions and consequences. "Internal mirrors" could be referring to these models as they relate to understanding oneself and others.


4. Feedback Loops:

The concept might also relate to internal feedback systems in the brain that allow for self-monitoring and adjustment of behavior. These systems could be seen as "mirroring" one's own actions and their consequences back to oneself.


5. Theory of Mind:

This term could be alluding to the ability to attribute mental states to oneself and others, which is crucial for social cognition and moral reasoning.


6. Embodied Simulation:

In theories of embodied cognition, our understanding of others is thought to involve internally simulating or "mirroring" their actions and experiences.


The proposition suggests that conscience - our sense of right and wrong and our ability to reflect on and judge our own actions - might be dependent on these "internal mirror" systems. This idea aligns with theories that link moral reasoning and behavior to our ability to understand and simulate the experiences of others, as well as reflect on our own mental states.


It's an intriguing concept that bridges neuroscience, psychology, and philosophy. Would you like to explore any of these interpretations further, or discuss how they might relate to the development of conscience?

◻️

Alex


Tue, Jul 16, 2024 at 18:24, 'Eric BEAUSSART' via ontology-summit <ontolog...@googlegroups.com>:

Hello All !
"Garbege un, garbage out !" ... a long ago this was told !
Anyway, <How "Intelligence", artificial or natural  can come without ""Psychic" and "Body control" sytems" (even if the "body" is "Data center" !) ! ???>.  "Conscience" can only come with "internal "mirrors"" sytems ! And real "Learning" in living bodies can only come other individuals of the same specie (or close enough !) (especially parents of course !) !!!
Even "rats" (maybe "fishes also !), without "social" interactions develop psychic anomalies !
Regards.
E. B.

Sent: July 14, 2024 at 20:40
From: John F Sowa <so...@bestweb.net>
To: ontolog-forum <ontolo...@googlegroups.com>, "ontolog...@googlegroups.com" <ontolog...@googlegroups.com>
Cc: CG <c...@lists.iccs-conference.org>
Subject: [Ontology Summit] Machine learning (was ChatGPT is ...


--

John F Sowa

Jul 16, 2024, 3:52:02 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG
David,

Goedel showed that there are infinitely many undecidable propositions in first-order logic. 

The example you cited, "this statement cannot be proved", cannot be stated in FOL because it requires metalanguage -- a method for talking about statements.  FOL, as it is usually defined, does not contain any operator or method that would enable any statement to talk about any statement of any kind, not even about itself.  Therefore, that is not one of the statements that Goedel was proving theorems about.

Puzzles that contain metalanguage were debated by the Greeks thousands of years ago.  A famous example is "All Cretans are liars."  That statement was uttered by a Cretan. 

If that statement is false because it was uttered by a Cretan, it would imply that the Cretan who said it was not a liar.  But that would imply that the statement is true.

John
 


From: "poole" <po...@cs.ubc.ca>

Philip Jackson

Jul 16, 2024, 4:12:17 PM
to ontolo...@googlegroups.com
John,

Of course, the puzzle disappears if we allow that liars may sometimes tell the truth, and sometimes tell lies.

In that case, a Cretan who says "All Cretans are liars" could be telling the truth or could be lying.

Phil

From: ontolo...@googlegroups.com <ontolo...@googlegroups.com> on behalf of John F Sowa <so...@bestweb.net>
Sent: Tuesday, July 16, 2024 3:51 PM
To: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
Cc: ontolog...@googlegroups.com <ontolog...@googlegroups.com>; CG <c...@lists.iccs-conference.org>
Subject: Re: [ontolog-forum] ChatG)T is ...
 

John F Sowa

Jul 16, 2024, 4:22:09 PM
to ontolo...@googlegroups.com
Phil,

Yes, yes.  I agree.

All I'm trying to do is to steer this thread away from thousands of years of endless debate.

As my grandmother used to say (in Polish), "How much bread can you bake with that?"

John
 


From: "Philip Jackson" <philipcj...@hotmail.com>

Philip Jackson

Jul 16, 2024, 4:27:45 PM
to ontolo...@googlegroups.com
Thanks, John. That's a very good question, in any language.

Phil

Sent: Tuesday, July 16, 2024 4:21 PM
To: ontolo...@googlegroups.com <ontolo...@googlegroups.com>

Subject: Re: [ontolog-forum] ChatG)T is ...

John F Sowa

Jul 16, 2024, 4:40:29 PM
to ontolo...@googlegroups.com, ontolog-forum, CG
Alex and Eric,

Re intelligence, general or not:

As I replied to Phil in another branch of this thread,  my grandmother asked the right question:  "How much bread can you bake with that?"

For a more scientific answer, please note the cartoon that Peter L. found (copy below).   

Everything that LLMs do is generate and process a pile of linear algebra.  That is a very useful operation for many kinds of problems.  It can indeed bake bread (among millions of other tasks).  But there is no magic intelligence in it.   The cartoon below says it all.
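As a concrete illustration of that "pile", here is a minimal numpy sketch of one scaled dot-product attention step of the kind stacked hundreds of times inside an LLM (the sizes and values are made up):

import numpy as np

rng = np.random.default_rng(0)
d = 4                          # tiny embedding dimension, for illustration only
X = rng.normal(size=(3, d))    # three token embeddings

# Learned projection matrices (random stand-ins here)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # matrix multiplications
scores = Q @ K.T / np.sqrt(d)                     # more linear algebra
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # softmax
output = weights @ V                              # and still more

print(output.shape)   # (3, 4): new token representations -- algebra all the way down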

John
_______________________________________
 
From: "Alex Shkotin" <alex.s...@gmail.com>
And just one more question to C3S.
Q

Consider this proposition: "Conscience can only come with internal mirrors systems." What does "internal mirrors" mean here?

Tue, Jul 16, 2024 at 18:24, 'Eric BEAUSSART' via ontology-summit <ontolog...@googlegroups.com>:

Hello All !
"Garbege un, garbage out !" ... a long ago this was told !


Anyway, how can "Intelligence", artificial or natural, come without "Psychic" and "Body control" systems (even if the "body" is a data center)? "Conscience" can only come with "internal mirrors" systems! And real "Learning" in living bodies can only come from other individuals of the same species (or close enough!), especially parents, of course!
Even rats (and maybe fish too!) develop psychic anomalies without "social" interactions!
Regards.
E. B.

_____________________________




Alex Shkotin

Jul 17, 2024, 5:04:05 AM
to ontolo...@googlegroups.com, CG

John,


The use of difference schemes for calculating physical processes of any complexity to a given accuracy is a well-developed mathematical approach. The same goes for using triangles to represent any surface. And if we talk about linear algebra (LA), for me it's better to talk about the algebra of piecewise-linear functions. It is a very powerful tool.

Let me put it this way: LA is used to analyze ANNs to reach conclusions like this one (2001): "We show also that neural networks are equivalent to iterative versions of standard statistical and optimization models such as multiple regression analysis and principal component analysis."
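A small illustration of that kind of equivalence (an iterative one-layer linear "network" trained by gradient descent converges to the same coefficients as ordinary least-squares regression; the data here are synthetic):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -3.0])
y = X @ true_w + 0.1 * rng.normal(size=200)   # noisy linear data

# Closed-form multiple regression (ordinary least squares)
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same model as an iterative one-layer linear network
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)     # gradient of the mean squared error
    w -= lr * grad

print(w_ols)   # approximately [ 2.0, -3.0]
print(w)       # converges to essentially the same coefficients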


Nowadays, when somebody touches a topic on the far border of our community of practice, it may be a good idea to involve AI in the discussion.


Eric, what do you think?

 

Alex




Tue, Jul 16, 2024 at 23:40, John F Sowa <so...@bestweb.net>: