I finally succumbed to curiosity and sicced an AI text bot on my writing. Wow. Granted, I give it plenty to work with, but I can’t see why anyone would ever pay for editing work again with tools available like this. Another profession bites the dust. Full disclosure, I used Quillbot (Quillbot.com) to rewrite key sections in the latest OSI paper that I just sent you---the executive summary, introduction, conclusion, and a few tough paragraphs. Quillbot provided several different output styles---simple, creative, formal, etc.---and also created different rewrite versions at the touch of a button, plus the ability to make more granular changes (swapping out adjectives here and there as needed, for example). I’ve credited Quillbot’s editorial help in the acknowledgements section.
On a sort of related note, Jason Steinhauer’s History Communication Institute held an in-depth conversation on Monday on the ramifications of this technology for history. Here’s a link to the video from this event: https://youtu.be/Eu0ARkdZ2X0
And in case you didn’t see it, today’s New York Times has an article about combining this technology with deepfake videos, so that news (fake news too) can be created with the tap of a button and presented by a realistic-looking bot (customizable) in numerous languages. See How Deepfake Videos Are Used to Spread Disinformation - The New York Times (nytimes.com).
Here’s Quillbot’s elegant rewrite of the above email:
I finally gave in to curiosity and used an AI text bot to edit my writing. Wow. Granted, I give it plenty to work with, but I can't imagine anyone ever paying for editing work again with tools like this available. Another profession has died. To be completely honest, this morning I used Quillbot (Quillbot.com) to rewrite key sections of the latest OSI paper that I sent you yesterday, including the executive summary, introduction, conclusion, and a few difficult paragraphs. Quillbot offered a variety of output styles (simple, creative, formal, etc.) as well as the ability to create different rewrite versions at the touch of a button, as well as the ability to make more granular changes (swapping out adjectives here and there as needed, for example). I acknowledged Quillbot's editorial assistance in the acknowledgements section.
On a related note, Jason Steinhauer's History Communication Institute hosted an in-depth discussion on the implications of this technology for history on Monday. Here's a link to the event's video: https://youtu.be/Eu0ARkdZ2X0
And, in case you missed it, today's New York Times has an article about combining this technology with deepfake videos, so that news (including fake news) can be created with the click of a button and presented by a realistic-looking (customizable) bot in a variety of languages. The New York Times examines how deepfake videos are used to spread misinformation (nytimes.com).
I’ll grant you that eliminating the hyphens created a more “elegant” text, but IMHO “bites the dust” says something that “has died” does not.
Michelle Gluck
Associate General Counsel
Office of the General Counsel
The Pennsylvania State University
227 W. Beaver Ave, Suite 507
Hi Michelle,
Any talk in the legal field yet about how tools like this might be used---research, writing briefs, even writing draft opinions?
Jason has the history field covered.
Maggie---what about medicine (making diagnoses, researching treatments, etc.)?
And for folks in the know, what is the root requirement here? That the databases being used are searchable? Or is there more to it (like good metadata, etc.)?
Best,
Glenn
Since I am in higher ed, all of the discussion I’ve heard so far has been about academic uses and detection. I’ll keep an ear out.
Michelle
Margaret Winker, MD
Trustee, World Association of Medical Editors
***
@WAME_editors
On Feb 8, 2023, at 10:07 AM, Gluck, Michelle <mvg...@psu.edu> wrote:
Very cool. So, it turns out there’s a ChatGPT YouTube extension for Chrome that instantly processes transcripts: https://chrome.google.com/webstore/detail/youtube-summary-with-chat/nmmicjeknamkfloonkhhcjmomieiodli. Attached are three transcripts, one for each of Bryan’s seminars.
The next step should be to plop these into Quillbot (or some other tool) and try to get a summary. I did this, and kept tweaking the output to try to make it shorter and cleaner, but there’s a lot of extra stuff in transcripts (time stamps, casual language, hellos and goodbyes, etc.) so I ended up overloading the system. But for now, anyway, it’s pretty cool to at least get an instant written record from which you can extract quotes, key ideas, etc.
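For anyone who wants to script that cleanup step rather than doing it by hand, here is a minimal Python sketch of the general idea: strip the time stamps out of a transcript and split what’s left into smaller chunks so the summarizer doesn’t choke on the length. The file name and the 6,000-character chunk size are just illustrative assumptions; wire each chunk to whatever summarizing tool you prefer.

import re

def clean_transcript(raw: str) -> str:
    """Strip time stamps and collapse whitespace in an auto-generated transcript."""
    # Drop time stamps such as 0:00, 12:34, or 1:02:03.
    text = re.sub(r"\b\d{1,2}:\d{2}(?::\d{2})?\b", " ", raw)
    # Collapse the runs of whitespace left behind.
    return re.sub(r"\s+", " ", text).strip()

def chunk_text(text: str, max_chars: int = 6000) -> list[str]:
    """Split the cleaned transcript into pieces small enough for a summarizer."""
    words, chunks, current, length = text.split(), [], [], 0
    for word in words:
        if length + len(word) + 1 > max_chars and current:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks

if __name__ == "__main__":
    # "seminar_transcript.txt" is a hypothetical file name for one of the attached transcripts.
    with open("seminar_transcript.txt", encoding="utf-8") as f:
        cleaned = clean_transcript(f.read())
    for i, chunk in enumerate(chunk_text(cleaned), start=1):
        # Paste or send each chunk to the summarizer of your choice.
        print(f"--- chunk {i}: {len(chunk)} characters ---")

Summarizing each chunk separately and then merging the partial summaries should avoid the overload problem described above.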
No, ChatGPT is a language model developed by OpenAI and does not have the ability to conduct original research or author a research paper in the traditional sense. However, it can assist in writing and generating text based on information and patterns it has been trained on. For example, it can summarize a research paper or provide a brief overview of a scientific concept, but it cannot produce original research or findings. It's important to note that any information generated by ChatGPT should always be reviewed and fact-checked by a human expert before being used for academic or other purposes.
1. Chatbots cannot be authors. Chatbots cannot meet the requirements for authorship as they cannot understand the role of authors or take responsibility for the paper. Chatbots cannot meet ICMJE authorship criteria, particularly “Final approval of the version to be published” and “Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.” (21) A chatbot cannot understand a conflict of interest statement, or have the legal standing to sign a statement. Chatbots have no affiliation independent of their creators. They cannot hold copyright. Authors submitting a manuscript must ensure that all those named as authors meet the authorship criteria, which clearly means that chatbots should not be included as authors.
21. Who Is an Author? Defining the Role of Authors and Contributors. ICMJE. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html. Accessed January 18, 2023.
Explanation in the text re legal standing:
Chatbots are not legal entities, and do not have a legal personality. One cannot sue, arraign in court, or punish a chatbot in any way. The terms of use and accepted responsibilities for the results of using the software are set out in the license documentation issued by the company making the software available. Such documentation is similar to that produced for other writing tools, such as Word, PowerPoint, etc. Just as Microsoft accepts no responsibility for whatever one writes with Word, ChatGPT’s creator OpenAI accepts no responsibility for any text produced using their product: their terms of use include indemnity, disclaimers, and limitations of liability. (13) Only ChatGPT’s users would be potentially liable for any errors it makes. Thus, listing ChatGPT as an author, which is already happening (14, 15) and even being encouraged (16), may be misdirected and not legally defensible.
Margaret

Cited references:
13. Terms of Use. OpenAI. December 13, 2022. https://openai.com/terms/. Accessed January 18, 2023.
14. O'Connor S, ChatGPT. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ Pract. 2023 Jan;66:103537. doi: 10.1016/j.nepr.2022.103537. Epub 2022 Dec 16. PMID: 36549229.
15. ChatGPT Generative Pre-trained Transformer; Zhavoronkov A. Rapamycin in the context of Pascal's Wager: generative pre-trained transformer perspective. Oncoscience. 2022 Dec 21;9:82-84. doi: 10.18632/oncoscience.571. PMID: 36589923; PMCID: PMC9796173.
16. Call for Case Reports Contest Written with the Assistance of ChatGPT. Cureus. January 17, 2023. https://www.cureus.com/newsroom/news/164. Accessed January 20, 2023.
Margaret Winker, MD
Trustee, World Association of Medical Editors
***
@WAME_editors
On Feb 8, 2023, at 2:41 PM, Danny Kingsley <da...@dannykingsley.com> wrote:
Hi all,
Very interesting---thanks Maggie. Jason---did any parallels come out of your history group conversation (as far as you recall)? I was flipping through the transcripts from Bryan’s first roundtable on this topic and came across this great (and relevant) quote from writer John Warner (I think). The conversation up to this point covered the pros and cons of AI writing tools and emphasized that what we have here is no different than before, when students could plagiarize essays or pay someone to write for them. The fact that this tool is fast and free (for now) makes things interesting, but not necessarily different, and these tools are for sure going to be used in the real world (writing business reports, sales brochures, etc.), so educators should teach students how to use these tools responsibly rather than shy away from them altogether. That’s my summary, not the computer’s, and I may be way off base here. In any case, John’s quote went something like this (edited for clarity because the AI transcript was a big jumble of words):
“What I lectured about, or what we think is important to this sort of stuff, is to not miss the aspect of using writing to learn. One of my mantras that I use in all my books and talks and all that kind of stuff is that writing is thinking---writing is both the expression of an idea and the exploration of an idea. The act of writing causes the writer to process the material both consciously and subconsciously, and I swear to G-d, even sometimes unconsciously, stuff will come to me. I have no idea where it came from; I didn’t even know I knew it, and it rises up and winds up on the page. And that is the kind of ability that makes us human. That is the kind of experience that makes us human, and while I am, like everybody else, messing around with ChatGPT---like, how can I get all this stuff I have to do that I don’t want to do, have it do it for me, and it does an okay job---ultimately what I realized when I was trying to experiment with it is that it’s actually me denying myself an important part of my own thinking process about the stuff that I’m involved in. It is a great shortcut to content, to a product. It may be a shortcut. I asked it---I still occasionally write humor pieces for my old employer McSweeney’s---and I gave it a prompt to write a speech by Jordan Peterson explaining the importance of stuffing live weasels down your pants. And it gave me a decent start---it gave me a little bit of a primer around the way he speaks and his rhythms and his word choice---but ultimately I tried to prompt it with three or four other additive elements and it didn’t help at all. It was really like “okay, I need to take this,” and I put it to one side, opened my word processing program, and just started typing my own thing. It was generative. But ultimately, in the final version, its language wound up … so it became a kind of brainstorming tool, not a great writing tool for something that actually does ultimately require a kind of inspiration or intuition and that kind of stuff. So I just like to remind people that writing is a skill that we demonstrate through making products, but it’s also a living experience that, at least for me, is part of what reminds me that I’m human.”