Using LLMs for an interface to a chess problem solver (was: LLM Chess Chase)


John F Sowa

Jul 24, 2023, 7:10:43 PM
to ontolo...@googlegroups.com
Alex,

I changed the subject line to emphasize that there is nothing new in that application.

What they demonstrated is obvious:  LLMs can support a dialog with a system that solves chess problems.  That is a tiny subset of what Wolfram does.

The chess program is in charge in exactly the same sense that the Wolfram system is in charge of doing mathematics.  That does not demonstrate any new principle. 

John
 


From: "Alex Shkotin" <alex.s...@gmail.com>
Sent: 7/24/23 2:40 PM
To: ontolog-forum <ontolo...@googlegroups.com>
Subject: [ontolog-forum] LLM Chess Chase

See how boldly Google-Palm talks about [1]!
[image attachment: image.png]

Alex


Alex Shkotin

Jul 25, 2023, 4:01:27 AM
to ontolo...@googlegroups.com
John,

This particular LLM does not use any chess problem solver; it reasons on its own, boldly and wrongly.
I study and compare the reasoning of different LLMs.
And you are right: they just need to add a new kind of command, "Use XXX to answer ...", where XXX may be any GPT-4 plugin.
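
To make that concrete, here is a minimal sketch of such a routing command in Python. Everything in it is made up for illustration: the "Use XXX to answer ..." syntax, the solver registry, and the stub functions solve_chess and solve_math stand in for real plugin or engine calls (e.g., a chess engine over UCI, or Wolfram); this is not any actual GPT-4 plugin API.

    def solve_chess(question: str) -> str:
        # Stand-in for a real chess engine call (e.g., Stockfish over UCI).
        return "a chess engine would analyse: " + question

    def solve_math(question: str) -> str:
        # Stand-in for a call to a system like Wolfram Alpha.
        return "a math system would compute: " + question

    SOLVERS = {"chess": solve_chess, "wolfram": solve_math}

    def route(command: str) -> str:
        """Parse 'Use XXX to answer ...' and delegate to solver XXX if one is registered."""
        prefix, marker = "Use ", " to answer "
        if command.startswith(prefix) and marker in command:
            name, question = command[len(prefix):].split(marker, 1)
            solver = SOLVERS.get(name.strip().lower())
            if solver is not None:
                return solver(question)
        # No recognised solver: fall back to the LLM's own (possibly wrong) reasoning.
        return "fallback: let the LLM answer on its own"

    print(route("Use chess to answer Is 1. e4 a reasonable opening move?"))

The LLM's only job here is to emit the command; all the actual chess reasoning happens in the solver, which is exactly the sense in which the chess program, like Wolfram, is in charge.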

Alex

Tue, Jul 25, 2023 at 02:10, John F Sowa <so...@bestweb.net>:

alex.shkotin

Jul 25, 2023, 8:50:33 AM
to ontolog-forum
Let me add that fighting for faithful reasoning is one direction of LLM research: https://twitter.com/AnthropicAI/status/1681341063083229189

Alex

Tuesday, July 25, 2023 at 02:10:43 UTC+3, John F Sowa:

alex.shkotin

Jul 25, 2023, 12:35:40 PM
to ontolog-forum
This Medium post is unfortunately member-only, but the title is impressive: "LLMs and Memory is Definitely All You Need: Google Shows that Memory-Augmented LLMs Can Simulate Any Turing Machine"
I think this is another example of Anatoly's point: nobody will insist on a pure LLM once they find a new, useful feature.


And the research PDF is here: https://arxiv.org/pdf/2301.04589.pdf "Memory Augmented Large Language Models are Computationally Universal"
Abstract
We show that transformer-based large language models are computationally universal when augmented with an external memory. Any deterministic language model that conditions on strings of bounded length is equivalent to a finite automaton, hence computationally limited. However, augmenting such models with a read-write memory creates the possibility of processing arbitrarily large  inputs and, potentially, simulating any algorithm. We establish that an existing large language model, Flan-U-PaLM 540B, can be combined with an associative read-write memory to exactly simulate the execution of a universal Turing machine, U15,2. A key aspect of the finding is that it does not require any modification of the language model weights. Instead, the construction relies solely on designing a form of stored instruction computer that can subsequently be programmed with a specific set of prompts.
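
To see the general idea behind the construction, here is a very rough sketch in Python: a bounded-context "model" acts as the finite control of a Turing machine, while an external associative memory plays the unbounded tape. The transition table below is hand-written rather than produced by Flan-U-PaLM, and the machine is a toy unary incrementer, not the universal machine U15,2 from the paper; all names are my own for illustration.

    TRANSITIONS = {
        # (state, symbol) -> (next_state, symbol_to_write, head_move)
        ("scan", "1"): ("scan", "1", +1),   # skip over the existing 1s
        ("scan", "_"): ("halt", "1", 0),    # append one more 1, then stop
    }

    def model_step(state, symbol):
        """Stand-in for one prompted LLM call: bounded input, one transition out."""
        return TRANSITIONS[(state, symbol)]

    def run(tape_string):
        memory = dict(enumerate(tape_string))   # external associative read-write memory
        state, head = "scan", 0
        while state != "halt":
            symbol = memory.get(head, "_")      # read from memory
            state, write, move = model_step(state, symbol)
            memory[head] = write                # write back to memory
            head += move
        return "".join(memory[i] for i in sorted(memory))

    print(run("111_"))   # prints 1111: unary 3 incremented to 4

The controller only ever sees one (state, symbol) pair at a time; it is the external read-write memory that removes the bounded-context limitation, which is the point of the paper.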

Alex

Tuesday, July 25, 2023 at 15:50:33 UTC+3, alex.shkotin: