Groups keyboard shortcuts have been updated
Dismiss
See shortcuts

Khanmigo, AI tutor

27 views
Skip to first unread message

John F Sowa

unread,
Dec 14, 2024, 4:55:51 PM12/14/24
to ontolo...@googlegroups.com, CG
On Sunday, CBS 60 Minutes presented a segment about Khanmigo, an AI tutor that is powered by LLM technology.  It has shown some very impressive results, and the teachers who use it in their classes have found it very helpful.  It doesn't replace teachers.  It helps them by offloading routine testing and tutoring,


As I have said many times. there are serious limitations to the LLM technology, which requires evaluation to avoid serious errors and hallucinogenic disasters.   Question;  How can Khanmigo and related systems avoid those disasters?

I do not know the details of the Khanmigo implementation.  But from the examples they showed, I suspect that they avoid mistakes by (1) Starting with a large text that was written, tested, and verified by humans (possibly with some computer aid); (2) For each topic, the system does Q/A primarily by translation; (3) And the LLM technology was first developed for translation and Q/A; (4) if the source text is tested and verified, a Q/A system that is based on that text can usually be very good and dependable.

But the CBS program did show an example where the system made some mistakes.

Summary:  This example shows great potential for the LLM technology.  But it also shows the need for evaluation by the traditional AI symbolic methods.  Those methods have been tried and tested for over 50 years, and they are just as important today as they ever were.

As a reminder:  LLMs can be used with a large volume of sources to find information and to generate hypotheses.  But if the source is very large and unverified for accuracy, it can and does find and generate erroneous or even dangerously false information.  That is why traditional AI methods are essential for evaluating what they find in a large volume of source data.

Danger;  The larger the sources, the more likely that the LLMs will find bad data.  Without evaluation, bigger is definitely not better.  I am skeptical about attempts to create super large volumes of LLM data.  Those systems consume enormous amounts of electricity with a diminishing return on investment.  

There is already a backlash by employees of Google and Elon M. 

John




John F Sowa

unread,
Dec 14, 2024, 5:14:15 PM12/14/24
to ontolo...@googlegroups.com, CG
Since my previous note was very positive about LLMs, I'll follow it with a warning.  See below.

Summary:  LLMs and related technology are very good for what they do.  But beware of the hype.  They most definitely do not achieve "human level intelligence",   Their only intelligence comes from finding and combining what some humans had already done -- good, bad, indifferent, or disastrous.  Without evaluation, you never know.

John
_________________

Did artificial intelligence begin to plateau in 2024, following a boom year in 2023? It depends on who you ask. Some say AI models that were released this year and can apparently reason more effectively than their predecessors show that the lofty goal of artificial general intelligence (AGI) is still on track, but not everyone is convinced.

Certainly, tech firms have continued to talk up the hype. When it launched GPT-4 in 2023, OpenAI boasted that the model had “human-level” performance on professional tests…


Kingsley Idehen

unread,
Dec 15, 2024, 1:42:11 PM12/15/24
to ontolo...@googlegroups.com

Hi John,

Yes, the fundamental rule of thumb should be:

Never trust, always verify.

This principle must be at the core of any serious AI agent that incorporates LLMs.

SeeAlso: https://blog.jonudell.net/2024/01/14/7-guiding-principles-for-working-with-llms/


-- 
Regards,

Kingsley Idehen	      
Founder & CEO 
OpenLink Software   
Home Page: http://www.openlinksw.com
Community Support: https://community.openlinksw.com

Social Media:
LinkedIn: http://www.linkedin.com/in/kidehen
Twitter : https://twitter.com/kidehen


John Bottoms

unread,
Dec 15, 2024, 2:05:04 PM12/15/24
to ontolo...@googlegroups.com

Yes KI, agreed.

One book I was reading discussed the number of LLM/ChatGPT courses that are offered. And, I came away with two conclusions from that writer.

First, there are very few good courses, and the best one tops out at $ 13,000. And, by the time you get to the bottom of the barrel they are less that useful..

Second, the value of a LLM application is useful in part; but the real value is in the connection to a specific field such as Fin-Tech, Medicine or Education. Soft skill studies work is getting into focus these days. If you have not seen the episode of 60 minutes on the Khan academy tool, "KhanMigo" then I recommend it. A copy is on YouTube.

If you have any recommendations on courses or materials on the various LLM apps, I'm interested. I am planning on teaching a course at a local Maker Space. My current count is about 50 different LLM's. And an area of interest is for those for your own document library at no cost. "Quill" might be an example.

-John Bottoms

* * * * * * *

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info
---
You received this message because you are subscribed to the Google Groups "ontolog-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ontolog-foru...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ontolog-forum/d3fd5a2a-93ec-4f4f-ba97-631080f0f9c7%40openlinksw.com.

Virus-free.www.avg.com

Kingsley Idehen

unread,
Dec 15, 2024, 4:35:51 PM12/15/24
to ontolo...@googlegroups.com

Hi Everyone,

Here's a recent lecture/presentation that's relevant to all of this.

https://www.youtube.com/watch?v=1yvBqasHLZs

Kingsley

Michael DeBellis

unread,
Dec 20, 2024, 4:42:07 PM12/20/24
to ontolog-forum
This playlist on YouTube is the best I've found for understanding the basics of what neural networks and then on LLMs and how they model meaning with vectors: https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

Ravi Sharma

unread,
Dec 23, 2024, 12:18:20 PM12/23/24
to ontolo...@googlegroups.com
Yes Michael
I have found it eucative and just viewed only one neural net based youTube video.

Thanks.
Ravi
(Dr. Ravi Sharma, Ph.D. USA)
NASA Apollo Achievement Award
Former Scientific Secretary iSRO HQ
Ontolog Board of Trustees
Particle and Space Physics
Senior Enterprise Architect
SAE Fuel Cell Standards Member



Reply all
Reply to author
Forward
0 new messages