fw: What GPT-3 "thinks" of GPT-3


John F Sowa

Dec 10, 2022, 12:56:00 AM
to ontolog-forum
See below for a criticism of GPT-3 by GPT-3.
 
The person who asked the question and received the answer below wrote that GPT-3 keeps getting better and better.
 
That is true.  But the only thing it does is to search, find, and compile information, which it then converts to a nicely written English summary.  It does not do any kind of reasoning, evaluation, or explanation of what it finds.
 
For serious applications, it should attach footnotes with links for each point it makes. Those footnotes and links would enable reporters and even scientific researchers to use it for their own work.
 
In fact, I would like to use such a system for my own R & D.  I usually start with a considerable body of material that I have accumulated on my own.  But I always appreciate links to relevant material I may have missed.
 
However, I would *never* accept uncited contributions from anybody, certainly not from a mindless thing like GPT-N.
 
John
 
 
 

From: An offline note by an Ontolog subscriber

In answer to the question (something like) "What are the problems of GPT-3?", it responded:

As with any technology, there are potential limitations and drawbacks to using GPT-3. One potential problem with GPT-3 is that it is a large and complex model, which can make it difficult to use and deploy in some applications. GPT-3 requires a significant amount of computational resources to run, which can make it challenging to use in real-time applications or on low-powered devices. In addition, GPT-3 is a "black box" model, which means that it is difficult to interpret how it makes decisions and generates text. This lack of interpretability can make it difficult to understand and control the output of the model, which can be a problem in applications where the generated text needs to be highly accurate and reliable. Finally, GPT-3, like other language processing models, can generate biased or offensive text if it is trained on biased data. This can be a problem if the generated text is used in sensitive applications, such as in healthcare or education, where it is important to avoid bias and promote inclusivity.

alex.shkotin

Dec 10, 2022, 12:34:17 PM
to ontolog-forum
There are two interesting Q&A exchanges here: one in the post and one in the comment.

Alex

Saturday, December 10, 2022 at 08:56:00 UTC+3, John F Sowa:

dr.matt...@gmail.com

Dec 11, 2022, 5:07:38 AM
to ontolo...@googlegroups.com

Dear John,

Did you ask it to provide footnotes and sources?
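
For instance, you can simply ask for them in the prompt. Here is a rough, untested sketch against the current OpenAI Python completions API (the model name, prompt wording, and key handling are only illustrative):

import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Ask the model to attach a source for every claim it makes.
prompt = (
    "What are the main problems of GPT-3? "
    "After each point, add a footnote with a link to a source "
    "that supports it."
)

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative choice of completion model
    prompt=prompt,
    max_tokens=400,
    temperature=0,  # keep the answer as repeatable as possible
)
print(response.choices[0].text)

Of course, every link it returns would still have to be checked by hand.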

Regards

Matthew West


Jon Awbrey

Dec 11, 2022, 3:40:56 PM
to ontolo...@googlegroups.com, dr.matt...@gmail.com
Matthew, John, ...

I've already seen discussion of this point in other forums.
Apparently, the GrandPlagiarismTheft Automaton is just as
capable of making up fake sources as making up fake facts.

Jon