an alternative

Nadin, Mihai

Jul 10, 2024, 9:36:56 PM
to Ontolog Forum

https://arxiv.org/pdf/2403.02164

No endorsement. Just sharing. I interacted with one of the authors.

 

Mihai Nadin

https://www.nadin.ws

https://www.anteinstitute.org

Google Scholar

 

Ravi Sharma

Jul 11, 2024, 2:31:50 AM
to ontolo...@googlegroups.com
Mihai
Excellent paper.
Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary, ISRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member




John F Sowa

Jul 11, 2024, 4:26:13 PM
to ontolo...@googlegroups.com, ontolog...@googlegroups.com, CG
Just after I sent my previous note, I saw the reference to a 63-page article with a title and direction that I strongly endorse: Cognition is All You Need, The Next Layer of AI Above Large Language Models.

Mihai Nadin:  No endorsement. Just sharing. I interacted with one of the authors.

Since I only had time to flip the pages, look at the diagrams, and read some explanations, I can't say much more than I strongly endorse the direction as an important step beyond LLMs.   I would ask the questions in my previous note:  Can their cognitive methods do the evaluation necessary to avoid the failures (hallucinations) of generative AI?  

Since I have recommended the term 'neuro-cognitive' as an upgrade to 'neuro-symbolic', I believe that future research along the lines that the authors discuss is a promising direction. Even more important, their methods can use ontologies to check and evaluate whatever LLMs produce. Since ontology is the primary theme of this forum, it's good to see that ontology will still be needed for a long time to come.
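As a concrete illustration of what such a check could look like, here is a minimal sketch that uses an ontology's domain and range constraints to reject type-inconsistent statements extracted from LLM output. This is not VivoMind or Permion code; the toy ontology and all names are invented:

# Toy ontology: a subclass chain plus a type signature per relation.
SUBCLASS_OF = {
    "Employee": "Person",
    "Person": "Agent",
    "Company": "Agent",
}

RELATION_SIGNATURES = {
    # relation: (required subject class, required object class)
    "worksFor": ("Person", "Company"),
}

def is_a(cls, ancestor):
    """Walk the subclass chain to test class membership."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)
    return False

def check_triple(subj_cls, relation, obj_cls):
    """Reject a triple whose types violate the relation's signature."""
    sig = RELATION_SIGNATURES.get(relation)
    if sig is None:
        return False  # unknown relation: flag for human review
    dom, rng = sig
    return is_a(subj_cls, dom) and is_a(obj_cls, rng)

# An LLM asserts "Acme worksFor Alice": rejected by the ontology.
print(check_triple("Company", "worksFor", "Person"))    # False
print(check_triple("Employee", "worksFor", "Company"))  # True

Even this small amount of ontological structure gives a deterministic check that the LLM by itself cannot provide.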

As I said in my previous note, generative AI without something that can evaluate the results cannot be trusted as a foundation for advanced AI systems. Cognitive AI is a good term for that something. This article is more of a promise than a finished solution. But the directions they recommend are related to the Cognitive Memory System of our old VivoMind company. See https://jfsowa.com/talks/cogmem.pdf.

In fact, our current Permion.ai company is developing tools that combine LLMs with an extension of the cognitive methods of VivoMind. And by the way, the Permion methods can use as much computational power as is available, but they can also run very effectively on just a large laptop with a disk drive of a couple of terabytes. All the old VivoMind technology was developed on the laptops of that generation, and its performance scaled linearly to the very large computer systems of our customers.

John
___________________________________

John Bottoms

Jul 11, 2024, 5:16:51 PM
to ontolo...@googlegroups.com

JohnS,

I suggest care with this approach. In effect, "All you need is Cognition" is a semantic tautology. It's like saying "To make AI, all you need is AI".

-John Bottoms


John F Sowa

Jul 11, 2024, 5:40:59 PM
to ontolo...@googlegroups.com
John B,

I agree with your comment. For the title line of my note, I quoted the title of the article I was discussing.

The full title + subtitle was too long: Cognition is All You Need, The Next Layer of AI Above Large Language Models. Perhaps I should have quoted the subtitle.

I recommend the 63-page article, which was cited by Mihai Nadin: https://arxiv.org/pdf/2403.02164

I agree with their goal and directions, but a lot more R&D is necessary to implement them.

In any case, I believe that cognition can be simulated with much, much less hardware than the multi-billion $$$ machines used to support GPT.   For a much more powerful implementation, look in a mirror.

John 
_________________________________________

John Bottoms

Jul 11, 2024, 6:26:07 PM
to ontolo...@googlegroups.com

Agreed, this is a fog-of-war situation. Onomastics would be of use. We have too many "AIs" running around without any consensus on which intelligence we are trying to simulate. Perhaps we need to make some suggestions about which AI we are trying to build. I'll start: CAI for "Contractual AI". Lord knows jurisprudence is important and yet is still quite scrambled.

The paper sounds interesting. I'll take a look.

-John Bottoms, FirstStarSystems.com


Michael DeBellis

Jul 12, 2024, 4:59:06 PM
to ontolog-forum
  Therefore, our response to the foundational paper of Conversational AI, “Attention is All You Need” is no, in fact, Cognition is all you need.  

This is a cheap shot. The authors of the paper were never claiming that the Transformer architecture is all that is needed for AI. They were making a point about the difference between their transformer approach and previous approaches to neural nets such as convolutional neural networks. 
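For reference, the mechanism their title referred to is scaled dot-product attention, which replaced recurrence and convolution in sequence models (this is the standard formula from their paper, stated here for context):

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]

where Q, K, and V are the query, key, and value matrices and d_k is the key dimension. The "all you need" was a claim about that substitution, not about cognition.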

I think it is interesting to contrast the Google paper and the paper in this thread.

 What I find in the Google paper:

1) Architecture diagrams that have very clear, specific meanings and have been implemented in code. The motivation behind the diagrams is clear, and the benefits, such as increased parallelism and performance, are clearly explained.
2) Mathematical formulas and reasoning.
3) Performance benchmarks comparing their new approach favorably to alternatives.

All I've found in this paper are box-and-arrow diagrams that are vague; as far as I can tell there is no clear definition of what exactly is going on in the boxes, how they communicate, etc.

For example, figure 4 is a poor model of how a Retrieval Augmented Generation (RAG) system works. Nowhere do they make clear that the user's question is itself turned into a vector by one of the modules of the LLM, and that that vector is then compared with the vectors for strings in the documents of the corpus. In fact, they don't even make the point that an LLM is a system of modules (not a monolithic black box as in their diagrams) and that the individual modules can be communicated with via an API, which is the capability that makes the RAG architecture possible.
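Here is a minimal sketch of that retrieval step, assuming a hypothetical embed() function standing in for the LLM's embedding module; the toy corpus and the 384-dimensional vectors are likewise invented for illustration:

import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for the LLM's embedding module (normally an API call)."""
    seed = int.from_bytes(text.encode("utf-8"), "little") % (2**32)
    v = np.random.default_rng(seed).normal(size=384)
    return v / np.linalg.norm(v)   # unit length, so dot product = cosine

corpus = [
    "Metformin is a first-line therapy for type 2 diabetes.",
    "Patients with type 2 diabetes should monitor HbA1c quarterly.",
]
doc_vectors = np.stack([embed(doc) for doc in corpus])

def retrieve(question: str, k: int = 1):
    """Embed the question, then rank documents by cosine similarity."""
    q = embed(question)
    scores = doc_vectors @ q
    top = np.argsort(scores)[::-1][:k]
    return [(corpus[i], float(scores[i])) for i in top]

With a real embedding model the scores would be semantically meaningful; the point is that the question and the documents live in the same vector space, and retrieval is a nearest-neighbor comparison in that space.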

Also, regarding RAG they say: "This makes these circuits able to incorporate new information that is not in the original training of the underlying LLMs." This is a poor description of what a RAG system does. It sounds as if the RAG system just improves the LLM, but that's not the case. The RAG architecture fundamentally changes the way an LLM is used. It substitutes a specific curated group of documents for the previous training of the LLM. The LLM uses its training to generate natural language, but it only uses the documents in the RAG corpus for the content of the answers.

This eliminates hallucinations because part of the architecture is a threshold for how close in semantic space the question vector and the matching documents must be. If no documents are found within that threshold, the system will reply that it can't answer the question, thus preventing hallucinations. This comes at a cost, though: the RAG is not a general tool the way an LLM is. It can't give you recipes for chocolate chip cookies if your domain is healthcare. But unlike with a bare LLM, you can strictly control what information is used to answer questions in your domain. This is also a solution to problems such as bias in LLMs, because the documents can be curated with standards such as bias in mind.
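Here is a sketch of that refusal behavior, with invented names and an invented 0.75 cutoff (a real system would tune this threshold):

from typing import Callable, List, Tuple

SIMILARITY_THRESHOLD = 0.75  # illustrative; real systems tune this

def gated_answer(
    question: str,
    retrieve: Callable[[str, int], List[Tuple[str, float]]],
    generate: Callable[[str, str], str],
) -> str:
    """Answer only from documents that clear the similarity threshold."""
    hits = [(doc, score) for doc, score in retrieve(question, 3)
            if score >= SIMILARITY_THRESHOLD]
    if not hits:
        # Nothing in the corpus is close enough in semantic space:
        # refuse instead of letting the LLM improvise an answer.
        return "I can't answer that from the documents I have."
    context = "\n".join(doc for doc, _ in hits)
    return generate(question, context)  # LLM writes prose from the context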

Or take Figure 15: it has two different curves, but there aren't even tick marks or numbers on either axis. What are some data points on these curves? Are there any? How did they measure problem complexity and quality of response?

Unlike in the Google article, I couldn't find any benchmarks. They claim that:

Cognitive AI's unique combination of deterministic and non-deterministic reasoning processes establishes a new benchmark for what artificial intelligence systems can achieve in terms of autonomous reasoning and cognitive collaboration.

However, I could find no actual benchmarks, or references to benchmarks, for their system. Nor could I find any description of real-world problems that their approach has solved. If I missed those and anyone can point me to them, I would appreciate it.

Finally, discussion of topics like Super Intelligence is, IMO, appropriate for science fiction, not science. They define Super Intelligence as "a form of intelligence that transcends the limitations of finite organisms and organizations in both space and time." Why "finite organisms"? Are there also infinite organisms? Don't current software and the Internet already "transcend limitations... in time and space"?

Michael