Why ChatGPT does not trust itself as an author


John F Sowa

Jun 22, 2024, 4:39:00 PM
to ontolo...@googlegroups.com, CG, ontolog...@googlegroups.com
I recommend the following article as an appendix to the Communiqué on the recent Ontology Summit.  It is a succinct summary of the issues.

John
____________________________

Why ChatGPT Should Not Be Used to Write Academic Scientific Manuscripts for Publication

The Annals of Family Medicine, March 2023, 2958; DOI: https://doi.org/10.1370/afm.2982

Editor Caroline Richardson asked ChatGPT-4, OpenAI’s advanced language processing tool, about authoring scientific manuscripts.

Prompt:   Write a short editorial about why ChatGPT should not be used to write academic scientific manuscripts for publication.

Response:  A language model trained by OpenAI, ChatGPT is a remarkable tool that can generate coherent and grammatically correct text on a wide range of topics.  However, when it comes to writing academic scientific manuscripts for publication, ChatGPT falls short in several critical ways.  First and foremost, ChatGPT lacks the domain-specific knowledge and expertise that is essential for producing high quality scientific manuscripts.  While it may be able to generate text that sounds plausible and coherent, it cannot provide the kind of nuanced and sophisticated analysis that is required for scientific research.  This is especially true for research that involves complex statistical analyses, experimental design, and interpretation of results.
Furthermore, ChatGPT cannot ensure the accuracy, validity, and reliability of scientific claims and findings.  Scientific research depends on the rigorous collection and analysis of data, and the careful interpretation of results.  This requires not only expertise in the relevant field of study, but also a keen eye for detail and a commitment to the highest standards of scientific rigor.

Another key limitation of ChatGPT is its inability to engage in the iterative and collaborative process that is characteristic of scientific research.  Writing a scientific manuscript is not a one-time event, but rather a process of continuous refinement and improvement.  This process involves multiple rounds of feedback and revision, as well as collaboration with colleagues and reviewers to ensure that the final product meets the highest standards of quality.

In conclusion, while ChatGPT may be a useful tool for generating ideas and basic drafts of text, it should not be relied upon for writing academic scientific manuscripts for publication.  The complex and nuanced nature of scientific research requires the expertise and input of human researchers who can ensure the accuracy, validity, and reliability of scientific claims and findings, as well as engage in the iterative and collaborative process that is essential for producing high quality scientific manuscripts.

Submitted March 20, 2023; accepted March 20, 2023.

Disclaimer:   This is the only time a chatbot will be allowed or cited as an author in Annals of Family Medicine.  See our Policy on Authorship.
 

Mike Bergman

Jun 22, 2024, 5:22:28 PM
to ontolo...@googlegroups.com, John F Sowa, CG, ontolog...@googlegroups.com

Hi All,

In the interest of fairness, and to provide an alternative viewpoint, I prompted ChatGPT 4o (as of today) with the inverse question. I am not personally endorsing the practice, and I further believe that any LLM used to support an academic (or other) manuscript should be disclosed, including how it was used, even if the publisher allows it.

Best, Mike

Prompt: Write a short editorial about why ChatGPT should be used to write academic manuscripts for publication.

ChatGPT: Using ChatGPT to assist in writing academic manuscripts for publication offers several compelling advantages that can enhance the research and writing process. First and foremost, ChatGPT can significantly increase efficiency, allowing researchers to draft, revise, and polish their manuscripts more quickly. This acceleration can lead to more rapid dissemination of findings, fostering a more dynamic and responsive academic environment.

ChatGPT's ability to process and synthesize large volumes of information can also be an asset in the literature review phase, helping researchers identify relevant studies, summarize key findings, and highlight gaps in existing research. This can lead to more comprehensive and well-informed manuscripts, enriching the academic discourse with thorough and diverse perspectives.

Additionally, ChatGPT can aid non-native English speakers in articulating their ideas more clearly and fluently, leveling the playing field in global academia. By providing support in grammar, syntax, and style, AI can help researchers focus on the substance of their work rather than linguistic barriers, promoting inclusivity and diversity in academic publishing.

Furthermore, ChatGPT can serve as a valuable tool for brainstorming and generating ideas, offering novel approaches or perspectives that researchers might not have considered. This can stimulate creativity and innovation, driving forward the boundaries of knowledge in various fields.

While concerns about originality and integrity are valid, these can be addressed through proper oversight and ethical guidelines. Researchers should use ChatGPT as a supplementary tool rather than a replacement for their intellectual contributions. By ensuring transparency about the use of AI in the writing process and maintaining rigorous standards for verification and citation, the academic community can harness the benefits of ChatGPT while preserving the integrity of scholarly work.

In conclusion, when used responsibly, ChatGPT has the potential to enhance the academic writing process, making it more efficient, inclusive, and innovative. Embracing this technology can help researchers produce high-quality manuscripts that contribute meaningfully to their fields, while upholding the principles of academic integrity and originality.

--
All contributions to this forum are covered by an open-source license.
For information about the wiki, the license, and how to subscribe or
unsubscribe to the forum, see http://ontologforum.org/info/

John F Sowa

Jun 22, 2024, 10:49:03 PM
to Mike Bergman, ontolo...@googlegroups.com, CG, ontolog...@googlegroups.com
Mike,

I agree with your comments below.  As I keep repeating, LLMs are an extremely valuable ADDITION to the AI toolkit.  They have a wide variety of applications, especially for translating languages, natural and artificial.  But they are an addition, not a replacement.  Since "ontology" is the focus of this forum, I would emphasize the role of ontology in evaluating, testing, revising, and enhancing the output generated by LLMs.

For reasoning, LLMs are an excellent method for abduction (guessing).  But by themselves, they cannot do deduction, testing, and evaluation.  They are able to find and apply certain patterns of deduction, and if their source data is limited to a single set of consistent statements, the results are usually correct.

But "consistent" and "usually" are problematic.  That is why we need methods that control how the results LLMs generate are tested, evaluated, and used.  Kingsley does that.  Wolfram does that.  Our Permion.ai company does that.  In fact, you do that when you work with LLM-based software and make your own decisions about what to use or ignore.

There are many more options and combinations to explore.  But it's important to remember that somebody or something must test and evaluate what to do with the LLM output.  GOFAI (Good Old Fashioned AI) is not obsolete.
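[Editor's note]  The test-and-evaluate loop described above can be sketched in a few lines.  This is a toy illustration, not any particular system's method: the candidate statements stand in for LLM output (abduction), and a small deductive checker accepts only what follows from a consistent set of axioms.  All names and facts below are hypothetical.

```python
# Toy "generate, then verify" loop: candidate triples (standing in for
# LLM output) are kept only if a deductive check derives them from a
# small, consistent set of axioms.

FACTS = {
    ("Cat", "subclass_of", "Mammal"),
    ("Mammal", "subclass_of", "Animal"),
}

def entailed(triple, facts):
    """Deductive test: is the triple in the transitive closure of subclass_of?"""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(closure):
            for (c, r2, d) in list(closure):
                if r1 == r2 == "subclass_of" and b == c:
                    derived = (a, "subclass_of", d)
                    if derived not in closure:
                        closure.add(derived)
                        changed = True
    return triple in closure

# Plausible-sounding candidates, as an LLM's abduction might propose them.
candidates = [
    ("Cat", "subclass_of", "Animal"),   # follows deductively: keep
    ("Animal", "subclass_of", "Cat"),   # not entailed: discard
]

verified = [t for t in candidates if entailed(t, FACTS)]
print(verified)  # [('Cat', 'subclass_of', 'Animal')]
```

The division of labor is the point: the LLM (or, here, the stubbed list) supplies guesses cheaply, and a symbolic component that cannot hallucinate decides what survives.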

John
________________________________________