Following is a note I sent to another email list, and I thought it would also be useful for the Ontolog and CG lists. In fact, the links below are to talks and slides that have been discussed on these lists. But a reminder may be useful, since these issues are still hot topics.
Katie,
That point about hallucinations is true for AI systems that are based ONLY on LLMs. But many AI systems have been producing excellent results based on the 60+ years of symbolic AI that reached a high level of sophistication long before LLMs were invented.
A combination of LLMs + symbolic AI can support the best of both worlds. There are quite a few such systems, but the news about them is drowned out by people who are promoting LLM technology by itself without using symbolic (AKA logic-based) methods.
After my 30 years with IBM, I cofounded two AI startup companies: VivoMind LLC, from 2000 to 2010, and more recently Permion Inc. For a talk that describes that technology and several very powerful applications, see the slides for "Cognitive Memory for Language, Learning, and Reasoning,"
https://www.jfsowa.com/talks/cogmem.pdf .
For three sample applications, skip to slide 44. None of those applications (and many others) could be implemented by LLMs alone, because all three of them require high precision, absolute accuracy, and no hallucinations. All of them accept natural language input in English and several other languages -- the usual European languages plus Russian, Chinese, and Arabic -- as well as artificial languages such as the names and notations of organic chemistry, computer notations, and any specialized notation for which a grammar can be written.
Those examples are beyond the reach of LLMs by themselves. But our new Permion Inc. combines LLMs with an upgraded version of the VivoMind Cognitive Memory technology. It can do the kinds of applications that use LLMs, the kinds of applications that used Cognitive Memory, and applications that combine both.
Most importantly, Permion avoids hallucinations by using LLMs ONLY for two purposes: (1) translation from one language to another (either or both may be natural or artificial); and (2) abduction (educated guessing). Abduction is the most creative product of LLMs, but it is also the process that produces hallucinations (stupid guesses). The reasoning methods of deduction and induction -- the great strength of the symbolic methods -- can detect and correct the stupid guesses that LLMs often produce.
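To make that division of labor concrete, here is a minimal sketch in Python. It is my illustration of the general pattern, not Permion's actual code: an LLM is called only to propose candidate answers by abduction, and a symbolic deductive check rejects any candidate that the knowledge base does not entail. The llm_propose stub and the toy fact base are hypothetical placeholders.

    from typing import List, Set, Tuple

    # Toy knowledge base: ground facts as subject-relation-value triples.
    FACTS: Set[Tuple[str, str, str]] = {
        ("water", "boils_at_sea_level_C", "100"),
        ("ethanol", "boils_at_sea_level_C", "78"),
    }

    def llm_propose(question: str) -> List[Tuple[str, str, str]]:
        # Stand-in for an LLM call (hypothetical). Real output would vary
        # and may include hallucinated triples, like the second one here.
        return [
            ("water", "boils_at_sea_level_C", "100"),  # correct abductive guess
            ("water", "boils_at_sea_level_C", "90"),   # hallucination
        ]

    def deductively_verified(candidate: Tuple[str, str, str],
                             facts: Set[Tuple[str, str, str]]) -> bool:
        # Symbolic check: accept a candidate only if the knowledge base
        # entails it (here, trivially, by membership in the fact set).
        return candidate in facts

    def answer(question: str) -> List[Tuple[str, str, str]]:
        # Abduction proposes; deduction disposes.
        return [c for c in llm_propose(question)
                if deductively_verified(c, FACTS)]

    print(answer("At what temperature does water boil at sea level?"))
    # prints [('water', 'boils_at_sea_level_C', '100')] -- the 90 is rejected

In a realistic system the membership test would be replaced by inference over rules and facts, but the control structure is the point: the LLM never writes to the knowledge base; it only suggests, and symbolic reasoning decides.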
That talk got more than 10K downloads. For another talk with my colleague Arun Majumdar in 2025, see
https://www.youtube.com/watch?v=zRYJE6QJZx0&t=45s . In that talk, Arun discusses the ways our methods for using LLMs are similar to and different from the methods used by DeepSeek.
In the Q/A discussion at the end of that talk, Arun and other participants discuss issues of human interpretation that cannot be resolved by LLMs or by a search of data on the WWW. Humans are not obsolete.
John