another profession bites the dust


Glenn Hampson

Feb 7, 2023, 6:24:21 PM
to osi20...@googlegroups.com

I finally succumbed to curiosity and sicked an AI text bot on my writing. Wow. Granted, I give it plenty to work with, but I can’t see why anyone would ever pay for editing work again with tools available like this. Another profession bites the dust. Full disclosure, I used Quillbot (Quillbot.com) to rewrite key sections in the latest OSI paper that I just sent you---the executive summary, introduction, conclusion, and a few tough paragraphs. Quillbot provided several different output styles ---simple, creative, formal, etc.---and also created different rewrite versions at the touch of a button, plus the ability to make more granular changes (swapping out adjectives here and there as needed, for example). I’ve credited Quillbot’s editorial help in the acknowledgements section.

 

On a sort of related note, Jason Steinhauer’s History Communication Institute held an in-depth conversation on Monday on the ramifications of this technology for history. Here’s a link to the video from this event: https://youtu.be/Eu0ARkdZ2X0

 

And in case you didn’t see it, today’s New York Times has an article about combining this technology with deepfake videos, so that news (fake news too) can be created with the tap of a button and presented by a realistic looking bot (customizable) in numerous languages. See How Deepfake Videos Are Used to Spread Disinformation - The New York Times (nytimes.com).

 

 

Here’s Quillbot’s elegant rewrite of the above email:

 

I finally gave in to curiosity and used an AI text bot to edit my writing. Wow. Granted, I give it plenty to work with, but I can't imagine anyone ever paying for editing work again with tools like this available. Another profession has died. To be completely honest, this morning I used Quillbot (Quillbot.com) to rewrite key sections of the latest OSI paper that I sent you yesterday, including the executive summary, introduction, conclusion, and a few difficult paragraphs. Quillbot offered a variety of output styles (simple, creative, formal, etc.) as well as the ability to create different rewrite versions at the touch of a button, as well as the ability to make more granular changes (swapping out adjectives here and there as needed, for example). I acknowledged Quillbot's editorial assistance in the acknowledgements section.

 

On a related note, Jason Steinhauer's History Communication Institute hosted an in-depth discussion on the implications of this technology for history on Monday. Here's a link to the event's video: https://youtu.be/Eu0ARkdZ2X0

 

And, in case you missed it, today's New York Times has an article about combining this technology with deepfake videos, so that news (including fake news) can be created with the click of a button and presented by a realistic-looking (customizable) bot in a variety of languages. The New York Times examines how deepfake videos are used to spread misinformation (nytimes.com).

 

Gluck, Michelle

Feb 8, 2023, 10:25:54 AM
to Glenn Hampson, osi20...@googlegroups.com

I’ll grant you that eliminating the hyphens created a more “elegant” text, but IMHO “bites the dust” says something that “has died” does not.

 

Michelle Gluck

Associate General Counsel

Office of the General Counsel

The Pennsylvania State University

227 W. Beaver Ave, Suite 507

(814) 863-2657

 

This electronic mail transmission may contain privileged or confidential information. If you believe you have received this message in error, please notify the sender by reply transmission and delete the message without copying or disclosing it.


Glenn Hampson

Feb 8, 2023, 11:02:25 AM
to Gluck, Michelle, osi20...@googlegroups.com, Margaret Winker

Hi Michelle,

 

Any talk in the legal field yet about how tools like this might be used? Research, writing briefs, even writing draft opinions?

 

Jason has the history field covered.

 

Maggie---what about medicine (making diagnoses, researching treatments, etc.)?

 

And for folks in the know, what is the root requirement here? That the databases being used are searchable? Or is there more to it (like good metadata, etc.)?

 

Best,

 

Glenn

Gluck, Michelle

Feb 8, 2023, 11:07:49 AM
to Glenn Hampson, osi20...@googlegroups.com, Margaret Winker

Since I am in higher ed, so far all of the discussion has been about academic uses and detection.  I’ll keep an ear out.

 

Michelle

Bryan Alexander

Feb 8, 2023, 11:35:52 AM
to Glenn Hampson, Gluck, Michelle, osi20...@googlegroups.com, Margaret Winker
Since ChatGPT appeared at the end of November we've held three Future Trends Forum sessions on the topic.  A bunch of bright guests and tons of community questions from academics:





Margaret Winker

Feb 8, 2023, 12:53:25 PM
to Glenn Hampson, osi20...@googlegroups.com, Gluck, Michelle
Glenn, glad you found the editing software useful, and thank you for your transparency. Thanks for asking about medicine (I hope you don't regret it when you see the length of my response ;).

AI has been used for some time in the medical field, for example to develop algorithms. However, to illustrate its challenges: electronic medical records (EMRs) are generally used as the data for this research, and it's much easier to use billing codes than patient history, which is often (necessarily) unstructured text. This article explains what happened when the EMR company Epic (the largest EMR provider in the US) developed a sepsis algorithm using that approach. TL;DR: it had many false positives and found few cases that doctors had not already identified. (False positives are a big problem in medicine. EMRs already generate so many alerts that health care practitioners experience click fatigue, e.g., clicking through multiple alerts about lab tests that are slightly out of the normal range just to access a record, potentially missing important alerts and diverting attention from more urgent issues.) STAT News has an assessment of the issues related to Epic's algorithm and others like it.
Another company (Bayesian Health) used machine learning tools to analyze the entire patient record in the EMR, and that was more effective. So perhaps with enough attention to the quality of the input data, and adequate testing before implementation (maybe even FDA review someday!), AI will someday play an effective role in improving patient outcomes.

In terms of health information searches by the public, incorporating ChatGPT in a search could be misleading or even catastrophic, given ChatGPT's many issues: difficulty identifying the source material, the trouble AI has interpreting information with negation (e.g., "not"), and ChatGPT's ability to make up facts, including research, out of thin air.

Perhaps of greater relevance to this group, there has been a firestorm of reaction to ChatGPT in scholarly journals, from listing ChatGPT as an author to banning its use altogether. While, like you, many authors might find its editing features useful, especially those for whom English isn't a first language, there are many concerns. The World Association of Medical Editors (for which I am a Trustee) issued a statement last month that addresses many of the problems with ChatGPT and chatbots in general, and recommends the following (more specifics here):
1. Chatbots cannot be authors (ethically or legally)
2. Authors should be transparent when chatbots are used and provide information about how they were used
3. Authors are responsible for the work performed by a chatbot in their paper (including the accuracy of what is presented, and the absence of plagiarism) and for appropriate attribution of all sources (including for material produced by the chatbot)
4. Editors need appropriate tools to help them detect content generated or altered by AI, and these tools must be available regardless of a journal's ability to pay. (Organizations like STM are developing methods to detect AI use and other issues in manuscripts, such as image manipulation, indications that a manuscript is from a paper mill, etc. WAME calls for such tools to be available to all journals, both for the accuracy of the scientific record and for the health of the public.)

We expect to revise the recommendations over time and have asked for feedback on WAME's statement; I encourage anyone with feedback to contact me directly. AI's impact on scholarly publishing (and medicine) is in its infancy, and it's too early to say where it will wind up.

Best wishes,
Maggie

Margaret Winker, MD

Trustee, World Association of Medical Editors

***

wame.org

@WAME_editors

www.facebook.com/WAMEmembers





Glenn Hampson

Feb 8, 2023, 1:00:56 PM
to Bryan Alexander, Gluck, Michelle, osi20...@googlegroups.com, Margaret Winker

Very cool. So, it turns out there's a ChatGPT-powered YouTube extension for Chrome that instantly processes transcripts: https://chrome.google.com/webstore/detail/youtube-summary-with-chat/nmmicjeknamkfloonkhhcjmomieiodli. Attached are three transcripts, one for each of Bryan's seminars.

 

The next step should be to plop these into Quillbot (or some other tool) and try to get a summary. I did this and kept tweaking the output to make it shorter and cleaner, but there's a lot of extra stuff in transcripts (time stamps, casual language, hellos and goodbyes, etc.), so I ended up overloading the system. For now, anyway, it's pretty cool to at least get an instant written record from which you can extract quotes, key ideas, etc.
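For anyone who wants to skip the Chrome extension and the copy-paste step, here is a minimal Python sketch of that same pipeline: pull a video's transcript, drop the timestamps and filler, and chunk the prose so a summarizer isn't overloaded. It assumes the third-party youtube-transcript-api package (pip install youtube-transcript-api), and the video ID is just the HCI event video linked earlier in this thread, used as an example.

import re

from youtube_transcript_api import YouTubeTranscriptApi

# Spoken-language filler that adds noise without meaning.
FILLERS = re.compile(r"\b(um|uh|you know|sort of|kind of)\b", re.IGNORECASE)


def clean_transcript(video_id: str) -> str:
    """Fetch a YouTube transcript and return plain prose, no timestamps."""
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    text = " ".join(seg["text"] for seg in segments)  # timestamps dropped here
    text = FILLERS.sub("", text)
    return re.sub(r"\s+", " ", text).strip()


def chunk(text: str, max_chars: int = 8000) -> list[str]:
    """Split on sentence-ish boundaries so each piece fits a tool's input
    limit (auto-captions often lack punctuation, so this is rough)."""
    pieces, current = [], ""
    for sentence in re.split(r"(?<=[.?!])\s+", text):
        if current and len(current) + len(sentence) > max_chars:
            pieces.append(current.strip())
            current = ""
        current += sentence + " "
    if current.strip():
        pieces.append(current.strip())
    return pieces


if __name__ == "__main__":
    prose = clean_transcript("Eu0ARkdZ2X0")  # the HCI event video above
    for i, piece in enumerate(chunk(prose), 1):
        print(f"chunk {i}: {len(piece)} chars")

Each chunk can then be pasted into Quillbot (or fed to any summarizer) separately, which sidesteps the "overloading the system" problem.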

future trends transcript 1.docx
future trends transcript 2.docx
future trends transcript 3.docx

Bryan Alexander

Feb 8, 2023, 1:48:08 PM
to Glenn Hampson, Gluck, Michelle, osi20...@googlegroups.com, Margaret Winker
Thanks for that, Glenn!

David Wojick

Feb 8, 2023, 2:44:58 PM
to Margaret Winker, Glenn Hampson, osi20...@googlegroups.com, Gluck, Michelle
I am more interested in the good these new systems might be able to do than the harm. Most research fields, even small ones, produce many more papers a year than a human can read, or even know about. (I developed an algorithm that finds them all.)

So it might be very useful to get overviews of what is going on in such cases. I actually wrote about this 30 years ago, after mapping the structure of Naval R&D: answering a question like "What are we doing in laser research?", for example, because that work is scattered all over the place.

It is basically a search tool that summarizes the results, answering questions like "What is going on in X research?"
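A rough sketch of what such a search-and-summarize tool could look like today, using arXiv's public query API as a stand-in corpus (David's own algorithm and the Naval R&D mapping aren't public, so this is only an illustration). It assumes the feedparser package and simply assembles the digest you would hand to a chatbot or other summarizer:

import urllib.parse
import urllib.request

import feedparser

ARXIV = "http://export.arxiv.org/api/query?search_query={q}&start=0&max_results={n}"


def field_overview(topic: str, n: int = 20) -> str:
    """Pull recent titles/abstracts for a topic and build a digest that a
    summarizer could turn into 'What is going on in <topic> research?'"""
    query = urllib.parse.quote(f"all:{topic}")
    with urllib.request.urlopen(ARXIV.format(q=query, n=n)) as resp:
        feed = feedparser.parse(resp.read())
    lines = [f"What is going on in {topic} research? Recent papers:"]
    for entry in feed.entries:
        abstract = " ".join(entry.summary.split())  # flatten whitespace
        lines.append(f"- {entry.title}: {abstract[:300]}")
    return "\n".join(lines)


if __name__ == "__main__":
    # Printing the digest keeps the sketch self-contained; the last step
    # would be feeding it to whatever summarizer you prefer.
    print(field_overview("laser"))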

David




Danny Kingsley

Feb 8, 2023, 3:41:07 PM
to David Wojick, Margaret Winker, Glenn Hampson, osi20...@googlegroups.com, Gluck, Michelle
Hi all,

What strikes me is the implication for ‘authorship’ in the academic sense. There are some interesting angles there - see


Danny


Dr Danny Kingsley
Visiting Fellow, Australian National Centre for the Public Awareness of Science, ANU
Member, Board of Directors, FORCE11
Executive Committee, Open Access Australasia
---------------------------------------
e: da...@dannykingsley.com
m: +61 (0)480 115 937
t:@dannykay68
o: 0000-0002-3636-5939

Margaret Winker

Feb 8, 2023, 10:40:05 PM
to Danny Kingsley, David Wojick, Glenn Hampson, osi20...@googlegroups.com, Gluck, Michelle
For anyone interested, below is WAME's rationale for why chatbots can't be authors.

But don't take WAME's word for it -- here's what ChatGPT said when I asked it:
Can chatgpt be an author of a research paper?  
No, ChatGPT is a language model developed by OpenAI and does not have the ability to conduct original research or author a research paper in the traditional sense. However, it can assist in writing and generating text based on information and patterns it has been trained on. For example, it can summarize a research paper or provide a brief overview of a scientific concept, but it cannot produce original research or findings. It's important to note that any information generated by ChatGPT should always be reviewed and fact-checked by a human expert before being used for academic or other purposes.
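(For anyone who wants to reproduce that exchange programmatically rather than through the chat interface, below is a minimal sketch using OpenAI's official Python client, v1 or later. It assumes an OPENAI_API_KEY in the environment, and the model name is a placeholder, not necessarily what the ChatGPT web interface runs.)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "user",
         "content": "Can ChatGPT be an author of a research paper?"},
    ],
)
print(response.choices[0].message.content)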

1. Chatbots cannot be authors. Chatbots cannot meet the requirements for authorship as they cannot understand the role of authors or take responsibility for the paper. Chatbots cannot meet ICMJE authorship criteria, particularly “Final approval of the version to be published” and “Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.” (21) A chatbot cannot understand a conflict of interest statement, or have the legal standing to sign a statement. Chatbots have no affiliation independent of their creators. They cannot hold copyright. Authors submitting a manuscript must ensure that all those named as authors meet the authorship criteria, which clearly means that chatbots should not be included as authors.
21. Who Is an Author?, Defining the Role of Authors and Contributors. ICMJE. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html. Accessed January 18, 2023.

Explanation in the text re legal standing: 
Chatbots are not legal entities, and do not have a legal personality. One cannot sue, arraign in court, or punish a chatbot in any way. The terms of use and accepted responsibilities for the results of using the software are set out in the license documentation issued by the company making the software available. Such documentation is similar to that produced for other writing tools, such as Word, PowerPoint, etc. Just as Microsoft accepts no responsibility for whatever one writes with Word, ChatGPT’s creator OpenAI accepts no responsibility for any text produced using their product: their terms of use include indemnity, disclaimers, and limitations of liability. (13) Only ChatGPT’s users would be potentially liable for any errors it makes. Thus, listing ChatGPT as an author, which is already happening (14, 15) and even being encouraged (16), may be misdirected and not legally defensible. 

Cited references:

13. Terms of Use. OpenAI. December 13, 2022. https://openai.com/terms/ Accessed January 18, 2023.

14. O'Connor S, ChatGPT. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ Pract. 2023 Jan;66:103537. doi: 10.1016/j.nepr.2022.103537. Epub 2022 Dec 16. PMID: 36549229.

15. ChatGPT Generative Pre-trained Transformer; Zhavoronkov A. Rapamycin in the context of Pascal's Wager: generative pre-trained transformer perspective. Oncoscience. 2022 Dec 21;9:82-84. doi: 10.18632/oncoscience.571. PMID: 36589923; PMCID: PMC9796173.

16. Call for Case Reports Contest Written with the Assistance of ChatGPT. Cureus. January 17, 2023. https://www.cureus.com/newsroom/news/164. Accessed January 20, 2023.

Margaret 

Margaret Winker, MD

Trustee, World Association of Medical Editors

***

wame.org

@WAME_editors

www.facebook.com/WAMEmembers





Glenn Hampson

Feb 8, 2023, 11:15:56 PM
to Margaret Winker, Danny Kingsley, David Wojick, osi20...@googlegroups.com, Gluck, Michelle, Jason Steinhauer, Bryan Alexander

Very interesting; thanks, Maggie. Jason, did any parallels come out of your history group conversation (as far as you recall)? I was flipping through the transcripts from Bryan's first roundtable on this topic and came across this great (and relevant) quote from writer John Warner (I think). The conversation up to that point had covered the pros and cons of AI writing tools and emphasized that what we have here is no different than before, when students could plagiarize essays or pay someone to write for them. The fact that this tool is fast and free (for now) makes things interesting, but not necessarily different, and these tools are for sure going to be used in the real world (writing business reports, sales brochures, etc.), so educators should teach students how to use these tools responsibly rather than shy away from them altogether. That's my summary, not the computer's, and I may be way off base here. In any case, John's quote went something like this (edited for clarity because the AI transcript was a big jumble of words):

 

“What I lectured about, or what we think is important in this sort of stuff, is to not miss the aspect of using writing to learn. One of my mantras that I use in all my books and talks and all that kind of stuff is that writing is thinking: writing is both the expression of an idea and the exploration of an idea. The act of writing causes the writer to process the material both consciously and subconsciously, and I swear to G-d, even sometimes unconsciously stuff will come to me. I have no idea where it came from; I didn't even know I knew it, and it rises up and winds up on the page. And that is the kind of ability that makes us human. That is the kind of experience that makes us human, and while I am, like everybody else, messing around with ChatGPT (like, how can I get all this stuff I have to do that I don't want to do done for me? and it does an okay job), ultimately what I realized when I was experimenting with it is that it's actually me denying myself an important part of my own thinking process about the stuff that I'm involved in. It is a great shortcut to content, to a product. It may be a shortcut. I asked it (I still occasionally write humor pieces for my old employer McSweeney's) and I gave it a prompt to write a speech by Jordan Peterson explaining the importance of stuffing live weasels down your pants. And it gave me a decent start, and it gave me a little bit of a primer around the way he speaks and his rhythms and his word choice, but I tried to prompt it with three or four other additive elements and it didn't help at all. It was really like, “okay, I need to take this,” and I put it to one side and I opened my word processing program and I just started typing my own thing. It was generative. But ultimately in the final version its language wound up … so it became a kind of brainstorming tool, not a great writing tool for something that actually does ultimately require a kind of inspiration or intuition and that kind of stuff. So I just like to remind people that writing is a skill that we demonstrate through making products, but it's also a living experience that, at least for me, is part of what reminds me that I'm human.”
