There's a good article today in the Financial Times showing that, while ChatGPT can solve well-known puzzles (Monty Hall, etc.), it does so only because it has seen the solutions; it cannot solve even alpha-converted variants (the same puzzle with names and surface details changed). The conclusion is good.
A computer that is capable of seeming so right yet being so wrong is a risky tool to use. It’s as though we were relying on a spreadsheet for our analysis (hazardous enough already) and the spreadsheet would occasionally and sporadically forget how multiplication worked.
Not for the first time, we learn that large language models can be phenomenal bullshit engines. The difficulty here is that the bullshit is so terribly plausible. We have seen falsehoods before, and errors, and goodness knows we have seen fluent bluffers. But this? This is something new.
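For anyone who wants to check the standard puzzle deductively rather than trust a pattern-matcher, here is a minimal Monte Carlo sketch in Python (my illustration, not from the FT article):

import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of standard Monty Hall; returns True on a win."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Staying wins ~1/3 of the time; switching wins ~2/3.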
From: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
On Behalf Of John F Sowa
Sent: Friday, July 5, 2024 11:27 PM
To: ontolo...@googlegroups.com; CG <c...@lists.iccs-conference.org>
Subject: [ontolog-forum] ChatGPT is BS
On Jul 11, 2024, at 8:03 AM, Pushkar Sawant <pushkar...@gmail.com> wrote:
[image attachment]
Truth is contextual – the truth is in the context. The next probable word is contextless; not only that, it mixes and mashes up disparate, disregarded contexts with aplomb, a Waring blender of truths and not.
That’s kind of addressed in the ‘context’ below…
Bobbin Teegarden
CTO/Chief Architect, OntoAge
206.979.0196 cell
425.378.0131 home/fax
Bellevue, WA
Thanks for posting that. It makes me angry my licentious youth didn’t lead to any divine revelations.
I'm surprised the author didn't mention Kurt Gödel. He provided the proof that a perfect logic machine isn't possible, right?
John,
Sure! Theoretical knowledge can be checked only by a theoretical-knowledge-handling system.
Meanwhile, there may be an interesting LMM++ advancement here: https://arxiv.org/abs/2407.04903?fbclid=IwZXh0bgNhZW0CMTEAAR0oph0y
"The rapid advancement of Large Language Models (LLMs) and Large Multimodal Models (LMMs) has heightened the demand for AI-based scientific assistants capable of understanding scientific articles and figures. Despite progress, there remains a significant gap in evaluating models' comprehension of professional, graduate-level, and even PhD-level scientific content. Current datasets and benchmarks primarily focus on relatively simple scientific tasks and figures, lacking comprehensive assessments across diverse advanced scientific disciplines. To bridge this gap, we collected a multimodal, multidisciplinary dataset from open-access scientific articles published in Nature Communications journals. This dataset spans 72 scientific disciplines, ensuring both diversity and quality. We created benchmarks with various tasks and settings to comprehensively evaluate LMMs' capabilities in understanding scientific figures and content. Our evaluation revealed that these tasks are highly challenging: many open-source models struggled significantly, and even GPT-4V and GPT-4o faced difficulties. We also explored using our dataset as training resources by constructing visual instruction-following data, enabling the 7B LLaVA model to achieve performance comparable to GPT-4V/o on our benchmark. Additionally, we investigated the use of our interleaved article texts and figure images for pre-training LMMs, resulting in improvements on the material generation task. The source dataset, including articles, figures, constructed benchmarks, and visual instruction-following data, is open-sourced."
They don't lose hope.
Alex
They don't lose hope. [JFS: More precisely, they have no hope. They are just confusing the issues.]
Alex
Hi John and others,
That is certainly true. The people who designed and developed GPT and related LLM-based systems admit that fact:
Mihai Nadin: [ChatGPT] is syntactic. I made this claim repeatedly. It is a mimicking machine of high performance (brute computation).
But the proponents of Generative AI confuse the issues with a large cloud of highly technical terminology (AKA human-generated BS). They claim that if they increase the training data to some immense amount, they will have covered all the options, so that the probability of finding a correct answer will auto-magically converge to 0.9999999....
They have persuaded Elon Musk and other gullible investors that by pouring more billions and even trillions of dollars into ultra-massive computer systems, those systems will magically become ultra-intelligent.
Unfortunately, the WWW has huge amounts of false, fraudulent, mistaken, misleading, social-media, espionage, counter-espionage, dangerous, and disastrous data. Detecting and deleting all that garbage is extremely difficult. People have tried to use LLM-based technology to find, evaluate, and erase such data -- and they have failed, miserably.
As I have repeatedly said, anything LLMs generate is a hypothesis (AKA abduction or guess). Before any abduction can be accepted, it must be evaluated by deduction (AKA reliable reasoning methods). There are almost 80 years of reliable methods developed by AI and computer science. They are essential for reliable computation.
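A minimal sketch of that generate-then-verify loop (illustrative Python; llm_propose is a hypothetical stand-in, not any particular API):

from typing import Callable, Optional

def llm_propose(question: str) -> str:
    """Hypothetical stand-in for an LLM call; a real system would query a model."""
    return "12192"  # pretend the model answered this

def accept_if_verified(question: str,
                       verify: Callable[[str], bool],
                       retries: int = 3) -> Optional[str]:
    """Treat each LLM answer as an abduction; accept it only if deduction agrees."""
    for _ in range(retries):
        candidate = llm_propose(question)
        if verify(candidate):          # independent, reliable check
            return candidate
    return None                        # no verified answer: report failure, not a guess

# Deductive verifier for an arithmetic question: recompute the answer exactly.
print(accept_if_verified("What is 127 * 96?",
                         verify=lambda s: s.strip() == str(127 * 96)))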
All commercial computing systems that require high reliability (banking, engineering, scientific research, aeronautics, space exploration, etc.) require extremely high precision. They also use statistical methods for many purposes, but they use statistics with precise error bounds.
Those high precision methods control the world economy and support human life. None of those computations can be replaced by LLM-based methods. Many of them can benefit from LLM-based computations -- but ONLY IF those computations are EVALUATED by traditional deductive methods.

John
As you've repeatedly outlined, LLMs are one piece of the puzzle and need to operate within a controlled system; they cannot be the entire system.
--
Regards,
Kingsley Idehen
Founder & CEO, OpenLink Software
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com
Weblogs (Blogs):
  Company Blog: https://medium.com/openlink-software-blog
  Virtuoso Blog: https://medium.com/virtuoso-blog
  Data Access Drivers Blog: https://medium.com/openlink-odbc-jdbc-ado-net-data-access-drivers
Personal Weblogs (Blogs):
  Medium Blog: https://medium.com/@kidehen
  Legacy Blogs: http://www.openlinksw.com/blog/~kidehen/
                http://kidehen.blogspot.com
Profile Pages:
  Pinterest: https://www.pinterest.com/kidehen/
  Quora: https://www.quora.com/profile/Kingsley-Uyi-Idehen
  Twitter: https://twitter.com/kidehen
  Google+: https://plus.google.com/+KingsleyIdehen/about
  LinkedIn: http://www.linkedin.com/in/kidehen
Web Identities (WebID):
  Personal: http://kingsley.idehen.net/public_home/kidehen/profile.ttl#i
            http://id.myopenlink.net/DAV/home/KingsleyUyiIdehen/Public/kingsley.ttl#this
John,
For enthusiasts studying LLM++ systems, it is important that "The source dataset, including articles, figures, constructed benchmarks, and visual instruction-following data, is open-sourced."
This is a crucial difference from the Hyper-Huge-Hidden LMs we know.
Alex
From: ontolo...@googlegroups.com <ontolo...@googlegroups.com>
On Behalf Of poole
Sent: Monday, July 15, 2024 10:23 PM
To: 'James Davenport' via ontolog-forum <ontolo...@googlegroups.com>
Cc: ontolog...@googlegroups.com; CG <c...@lists.iccs-conference.org>
Subject: Re: [ontolog-forum] ChatGPT is BS
Goedel's theorem does not "show that certain very complex propositions stated in first-order logic are undecidable". First-order logic itself is complete: by Goedel's completeness theorem, every valid formula is provable. What the incompleteness theorems show is that any consistent, effectively axiomatized theory strong enough to express arithmetic contains sentences it can neither prove nor refute; the undecidability of validity in first-order logic is a separate result, due to Church and Turing.
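For reference, one standard formulation of the first incompleteness theorem (the precise hypotheses vary by presentation; with Rosser's refinement, plain consistency suffices):

\textbf{Theorem (G\"odel--Rosser).} Let $T$ be a consistent, effectively
axiomatizable first-order theory that interprets elementary arithmetic.
Then there is a sentence $G_T$ in the language of $T$ such that
$T \nvdash G_T$ and $T \nvdash \lnot G_T$.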
Hello All!
"Garbage in, garbage out!" ... that was said long ago!
Anyway, how can "intelligence", artificial or natural, come without "psychic" and "body control" systems (even if the "body" is a data center)? "Conscience" can only come with internal "mirrors" systems! And real learning in living bodies can only come from other individuals of the same species (or close enough!), especially parents, of course!
Even rats (and maybe fish too!), without social interactions, develop psychic anomalies!
Regards.
E. B.

Sent: 14 July 2024 at 20:40
From: John F Sowa <so...@bestweb.net>
To: ontolog-forum <ontolo...@googlegroups.com>, "ontolog...@googlegroups.com" <ontolog...@googlegroups.com>
Cc: CG <c...@lists.iccs-conference.org>
Subject: [Ontology Summit] Machine learning (was ChatGPT is ...
Consider this proposition: "Conscience can only come with internal mirrors systems." What does "internal mirrors" mean here?
The proposition "Conscience can only come with internal mirrors systems" is intriguing and touches on some complex ideas in cognitive science and neuroscience. Let's explore what "internal mirrors" might mean in this context:
1. Mirror Neurons:
This could be referring to mirror neurons, which are a type of brain cell that responds both when an individual performs an action and when they observe someone else performing the same action. These neurons are thought to play a crucial role in empathy, social cognition, and possibly the development of self-awareness.
2. Self-reflection:
"Internal mirrors" might metaphorically represent the ability to reflect on one's own thoughts, actions, and mental states. This self-reflective capacity is often considered a key component of consciousness and conscience.
3. Internal Models:
In cognitive science, "internal models" refer to mental representations of the world and oneself. These models allow for prediction, simulation, and understanding of actions and consequences. "Internal mirrors" could be referring to these models as they relate to understanding oneself and others.
4. Feedback Loops:
The concept might also relate to internal feedback systems in the brain that allow for self-monitoring and adjustment of behavior. These systems could be seen as "mirroring" one's own actions and their consequences back to oneself.
5. Theory of Mind:
This term could be alluding to the ability to attribute mental states to oneself and others, which is crucial for social cognition and moral reasoning.
6. Embodied Simulation:
In theories of embodied cognition, our understanding of others is thought to involve internally simulating or "mirroring" their actions and experiences.
The proposition suggests that conscience - our sense of right and wrong and our ability to reflect on and judge our own actions - might be dependent on these "internal mirror" systems. This idea aligns with theories that link moral reasoning and behavior to our ability to understand and simulate the experiences of others, as well as reflect on our own mental states.
It's an intriguing concept that bridges neuroscience, psychology, and philosophy. Would you like to explore any of these interpretations further, or discuss how they might relate to the development of conscience?
_____________________________
John,
The use of difference schemes for calculating physical processes of any complexity to a given accuracy is a well-developed mathematical approach. The same goes for using triangles to represent any surface. And as for linear algebra (LA), for me it's better to talk about the algebra of piecewise-linear functions. A very powerful tool.
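As a toy illustration of "a given accuracy" (my example, not Alex's): the central difference approximates a derivative with error O(h^2), so halving the step cuts the error roughly fourfold.

import math

def central_diff(f, x, h):
    """Second-order central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 1.0
exact = math.cos(x)  # d/dx sin(x) = cos(x)
for h in (0.1, 0.05, 0.025):
    err = abs(central_diff(math.sin, x, h) - exact)
    print(f"h={h:<5}  error={err:.2e}")  # error shrinks ~4x per halving of h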
Let me put it this way: LA is used to analyze ANNs, yielding conclusions like this (2001): "We show also that neural networks are equivalent to iterative versions of standard statistical and optimization models such as multiple regression analysis and principal component analysis."
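That equivalence can be checked in a small experiment (a sketch, assuming only numpy): a one-layer linear autoencoder trained by gradient descent recovers the same subspace that PCA finds via SVD.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated data
X -= X.mean(axis=0)

# PCA via SVD: the top-2 principal directions.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pca_basis = Vt[:2]

# Linear autoencoder x -> W2 @ (W1 @ x), trained to reconstruct X.
k, lr = 2, 5e-3
W1 = rng.normal(scale=0.1, size=(k, 5))
W2 = rng.normal(scale=0.1, size=(5, k))
for _ in range(3000):
    Z = X @ W1.T                     # encode
    G = Z @ W2.T - X                 # gradient of 0.5*||reconstruction - X||^2
    gW2 = (G.T @ Z) / len(X)
    gW1 = ((G @ W2).T @ X) / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

# Principal angles near 0 mean the learned span matches the PCA subspace.
Q1, _ = np.linalg.qr(pca_basis.T)
Q2, _ = np.linalg.qr(W2)
cosines = np.clip(np.linalg.svd(Q1.T @ Q2)[1], -1.0, 1.0)
print("principal angles (deg):", np.round(np.degrees(np.arccos(cosines)), 2))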
Nowadays, when somebody touches a topic at the far border of our community of practice, it may be a good idea to involve AI in the discussion.
Eric, what do you think?
Alex