RE: [OPENCAFE-L] The 'one shot' scholarly communication talk


Glenn Hampson

Feb 8, 2024, 10:26:56 AM
to OPENC...@listserv.byu.edu, osi20...@googlegroups.com

Wow. Living on the West coast of the US can be rough. By the time your day gets started, listserv conversations can be almost over! If I may, there are a couple of issues here that I see differently than my esteemed colleagues.

 

First, to this whole notion introduced by my friends Rick and Lisa that peer review is highly effective at weeding out garbage and allowing good scholarship to get published: this is certainly true of the editorial process in general (like the desk rejection process), but it isn’t true of peer review. The peer review process is highly regarded by researchers, seen as a signal of quality (see https://bit.ly/3otwKRs), and highly valued by funders and institutions, but many studies have complained over the years that the evidence is unclear on whether peer review actually improves research (beyond making articles more readable).

 

This process also varies by journal (see note below) and is highly subject to bias, as Daniel mentions---by idea, gender, nationality, etc.

 

Here’s a link to a presentation I gave a few years ago on this topic. There’s too much detail to bore you with in a listserv email but the presentation has references included if you want to dig deeper: BRISPE-presentation-final-Hampson.pdf (osiglobal.org). In particular, I suggest you read Melinda Baldwin’s great paper on the history of peer review (Baldwin, Melinda. 2018. Scientific Autonomy, Public Accountability, and the Rise of “Peer Review” in the Cold War United States. Isis, volume 109, number 3). The peer review system we use today is essentially a byproduct of US Congressional oversight in the mid-1970s; it took decades thereafter for this process to become widely used throughout the world.

 

So what do you tell your students in your one-shot, Melissa? I don’t know. Maybe that peer review is a quality-control process we invented to help “monitor” science, and now it’s kind of an institution unto itself, with a mythology larger than its actual value to science?

 

Regarding Pooja’s story about gatekeeping, I know this might not make your colleague feel better, but most papers are rejected at least once, for any number of reasons (as Jean-Claude explained, like a bad fit with the journal’s focus). Across all kinds of journals, the average rejection rate of articles is a whopping 60-65% (https://doi.org/10.3145/epi.2019.jul.07). Individual rates vary widely by journal, ranging from 0-90% and higher. About 20% of papers get rejected before peer review for being out of scope, among other reasons (see https://bit.ly/2YnYoVv). All this said, most papers eventually get published somewhere. Two-thirds of preprints posted before 2017 were later published in peer-reviewed journals within 12-18 months (see https://doi.org/10.7554/eLife.45133). Also, if your colleague is submitting to 65 different journals, it seems like they might be casting a net that is too wide (and too unfocused), which might not be the best approach.

 

And finally, to Toby and Danny’s big-picture thinking, here’s an infographic OSI created a few years ago to show how review and publishing fit into (and feed into) the full idea lifecycle: OSI-Infographic-1.0: The Idea Lifecycle (osiglobal.org). There’s a lot more to research than just publishing, obviously, but to Jean-Claude’s point, publishing still plays a critical role (and it always has throughout the history of science). What form this takes in the future is where so much of the attention and effort in the OA reform space has been directed.

 

Best regards,

 

Glenn

 

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)

 

 

Note: Generally speaking, specialty and prestige journals provide high quality peer review; even some preprint servers are experimenting with new forms of peer review. Regional journals don’t always provide the kind of peer review required by specialty journals; peer review quality here varies widely.

 

 

 

From: OpenCafe-l <OPENC...@LISTSERV.BYU.EDU> On Behalf Of Daniel Kulp
Sent: Thursday, February 8, 2024 6:32 AM
To: OPENC...@LISTSERV.BYU.EDU
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

At the end of the day, peer review is run by people (editors, reviewers, etc.), and people are susceptible to bias. Is peer review perfect? No, it’s not. It is likely the best we have at the moment. I certainly support experiments in the publishing industry, but I have yet to see a process that is consistently better and able to be applied at scale. That is how I would frame peer review to students.

 

Daniel Kulp, PhD


Founder, PIE Consulting
Publicationintegrity.com

 

 



On Feb 8, 2024, at 5:21 AM, Jean-Claude Guédon <jean.clau...@UMONTREAL.CA> wrote:

 

My own take on peer review is that it is part of a larger process which I have sometimes described as the "Great Conversation" that stands behind knowledge production. Voltaire says somewhere - and I am paraphrasing - it is difficult to live without certainty, but believing that certainty exists is ridiculous. Knowledge production works exactly at that level, and that is where it differs from believing or convictions. Peer review is part of the process one can use to allow the best forms of human thinking to percolate to the surface and become reference points for further evolution of knowledge. Knowledge can only claim reliability, not certainty.

Rick is right when he says that, when executed competently and honestly, peer review is highly effective. The main problem is that parts of the process can remain quite opaque. For example, that "desk rejection" Rick mentions generally involves one person only. That person - the editor - may have two divergent objectives in his/her mind: on the one hand, his/her notion of quality, and, on the other, the effect of the article on the general standing of the journal, especially in a tightly controlled competition system such as the impact-factor driven mechanism.

Imagine yourself in the following situation: you have room (i.e. resources) to publish one article. You have two submissions. One article is on a hot topic but its quality is ho-hum. The other one appears stellar but on a topic that is more marginal in the present development of knowledge (perhaps it is not yet well understood, or whatever). Which principle will be used at the desk rejection level? The first article is bound to improve your impact factor; the second article may lower your impact factor. This is because the relationship of the impact factor to quality is both tenuous and ambiguous.

More generally, how is peer review affected by the fact that scientific articles and journals are two different kinds of objects but have been entangled with each other since the advent of print? And this leads to another question: in a digital world, do journals still matter, and, if so, how can they matter? How do journals relate to platforms? To communities? Etc.

Jean-Claude Guédon

On 2024-02-07 23:23, Rick Anderson wrote:

Here’s how I explain peer review to students:

 

When an author submits her paper to a peer-reviewed journal, the journal’s editor gives it a first look. If it doesn’t appear to be up to scratch (which can mean any number of things: obviously poor methodology, illegibility, irrelevance, etc.) then the editor rejects it -- we call this “desk rejection.” If it looks like it has promise, the editor sends it out to one or more (usually at least two) reviewers. They’re called “peer” reviewers because they work in the same field as the author, or a closely adjacent one, so they’re in a good position to evaluate the scholarship. The reviewers are asked to read the paper more closely and evaluate it on its scholarly merits: is its methodology sound; do the conclusions proceed from the data; is it well organized and cogently written; do the cited works actually support the arguments in support of which they’re cited; etc. The reviewers submit reviews with recommendations as to whether the article should be rejected, or returned for revision, or published as is. This process may involve two or three rounds before the paper is finally published or rejected.

 

It's by no means a fail-safe system, but when executed competently and honestly, it’s highly effective at weeding out garbage and allowing good scholarship to get published. Unfortunately, the competence and honesty of journals is highly variable. Sometimes they get into bed with corporations who want to see certain things published; sometimes editors of different journals collude with each other to require authors to cite each other’s publications; and in recent years, there’s been a growing industry of journals that dishonestly claim to carry out peer review when in fact they will publish anything submitted to them as long as the author pays a publication fee. So before submitting to a journal, it’s really important to do your due diligence.

 

--- 

Rick Anderson

University Librarian

Brigham Young University

 

 

From: OpenCafe-l <OPENC...@LISTSERV.BYU.EDU> on behalf of Danny Kingsley <da...@DANNYKINGSLEY.COM>
Reply-To: Danny Kingsley <da...@DANNYKINGSLEY.COM>
Date: Wednesday, February 7, 2024 at 8:03 PM
To: "OPENC...@LISTSERV.BYU.EDU" <OPENC...@LISTSERV.BYU.EDU>
Subject: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

Hi everyone,

 

I’m picking up in a new thread something Melissa noted:

 

As a librarian, I need to be able to stand in front of a class of freshmen, as I am about to do tonight, to explain what peer-review is and why it's the gold standard for what they cite in their papers, and to be able to say it with a straight face without feeling like a liar. For those of you who know what a "one-shot" is, you know we do NOT have time to explain the intricacies of the scholarly publishing industry, its good and bad financial incentives, etc., even if we understand them fully ourselves. We don't even have time to explain all that to graduate students.

 

This is a really good point for discussion.

 

How do people approach this type of explanation? I am thinking there is a parallel with the difference between what is written in textbooks and what is happening in the scholarly literature. Textbooks tend to present information as ‘decided’; information published in the literature is the ongoing debate. Textbooks change perspective and ideas slowly, while a paper can get shot down in weeks or months.

 

So, do we provide the ‘textbook' version to students: “This is how science works, a research team find something out, write it up, send it to a journal, it gets sent to experts in the field, they comment, amendments are made and then it is published”.

 

Or do we bring in some of the broader picture: “Researchers don’t get paid to publish. Publication is the way researchers get ‘prestige’ - the better their paper and (more commonly) the place they publish it in ‘counts’ towards their academic standing. There are systems that count how many papers people have published, where they have published and how many other people have subsequently cited their work. These numbers are fed into most decision making in research - whether someone gets a promotion, whether they get a grant, how an institution fares in national ‘research excellence’ exercises and how universities get ranked."

 

Or do we lay it down: “The very narrow focus on what constitutes 'success’ in research has unfortunately resulted in some very poor behaviour…..”

 

 

I am conscious that when this is new to people it can seem overwhelming. A comment at last year’s AIMOS conference (which consisted of multiple presentations about research on research, uncovering a swathe of issues) was that it was very depressing and it meant it was hard to believe anything that was published. To be honest, when you read articles like this https://www.theguardian.com/science/2024/feb/03/the-situation-has-become-appalling-fake-scientific-papers-push-research-credibility-to-crisis-point (which refers to activity all over the world) you can get depressed.

 

My response is that it is good we are lifting the lid on this - these are the steps we make towards fixing the problems.

 

But we want our community to ‘be alert not alarmed’.

 

How do people approach this discussion in their own institutions?

 

Danny

 

 

 

Dr Danny Kingsley

Scholarly Communication Consultant
Visiting Fellow, Australian National Centre for the Public Awareness of Science, ANU

Adjunct Senior Lecturer, Charles Sturt University
Member, Board of Directors, FORCE11
Member, Australian Academy of Science National Committee for Data in Science
---------------------------------------
e: da...@dannykingsley.com
m: +61 (0)480 115 937
t:@dannykay68

b: @dannykay68.bsky.social
o: 0000-0002-3636-5939

 

 

 

 

 

 


Access the OPENCAFE-L Home Page and Archives

To unsubscribe from the OPENCAFE-L send an email to: OPENCAFE-L-si...@LISTSERV.BYU.EDU

 



Rick Anderson

Feb 8, 2024, 10:29:18 AM
to Glenn Hampson, OPENC...@listserv.byu.edu, osi20...@googlegroups.com

> but many studies have complained over the years that the evidence is unclear whether peer

> review actually improves research (beyond making articles more readable).

 

Glenn, could you share links to a few of these?

--
As a public and publicly-funded effort, the conversations on this list can be viewed by the public and are archived. To read this group's complete listserv policy (including disclaimer and reuse information), please visit http://osinitiative.org/osi-listservs.
---
You received this message because you are subscribed to the Google Groups "The Open Scholarship Initiative" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osi2016-25+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/osi2016-25/DM4PR17MB60640E4C6121008C690CBB07C5442%40DM4PR17MB6064.namprd17.prod.outlook.com.

Glenn Hampson

Feb 8, 2024, 10:52:33 AM
to Rick Anderson, OPENC...@listserv.byu.edu, osi20...@googlegroups.com

Gladly Rick. Here are the citations from my 2020 BRISPE presentation (some studies, some articles). There are many others before and since---this is just a sample:

 

Anthony Watkinson

Feb 8, 2024, 11:20:51 AM
to Glenn Hampson, Rick Anderson, OPENC...@listserv.byu.edu, osi20...@googlegroups.com

Hi Glenn, Rick and others.

The strange thing is that when one does peer review, the impression for both reviewer and reviewed is that this is a worthwhile enterprise leading to a better result - at least, that is my experience.

Anthony

 

------ Original Message ------
From: gham...@nationalscience.org
To: rick_a...@byu.edu; OPENC...@LISTSERV.BYU.EDU
Cc: osi20...@googlegroups.com
Sent: Thursday, February 8th 2024, 15:52
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk
 

Gladly Rick. Here are the citations from my 2020 BRISPE presentation (some studies, some articles). There are many others before and since---this is just a sample:

 

 

From: Rick Anderson <rick_a...@byu.edu>
Sent: Thursday, February 8, 2024 7:29 AM
To: Glenn Hampson <gham...@nationalscience.org>; OPENC...@LISTSERV.BYU.EDU
Cc: osi20...@googlegroups.com

 

Rick Anderson

Feb 8, 2024, 11:38:49 AM
to Glenn Hampson, OPENC...@listserv.byu.edu, osi20...@googlegroups.com

Thanks, Glenn. To your knowledge, did any of these studies find a way to evaluate articles that are rejected through peer review?

 

In other words, one of the key functions of a journal is to reject. The effectiveness of rejection is hugely important, which means that studying the record of published articles necessarily means ignoring a core function of peer review. With apologies for not having the time to read all of these (but sincere appreciation to you for sharing them), can you tell us whether, and if so how, any of these studies might have accounted for that?

 

Rick

 

---

Rick Anderson

University Librarian

Brigham Young University

(801) 422-4301

rick_a...@byu.edu

 

 

From: Glenn Hampson <gham...@nationalscience.org>
Date: Thursday, February 8, 2024 at 7:52 AM
To: Rick Anderson <rick_a...@byu.edu>, "'OPENC...@LISTSERV.BYU.EDU'" <OPENC...@LISTSERV.BYU.EDU>
Cc: "'osi20...@googlegroups.com'" <osi20...@googlegroups.com>
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

Gladly Rick. Here are the citations from my 2020 BRISPE presentation (some studies, some articles). There are many others before and since---this is just a sample:

 

- Kelly, J, T Sadeghieh, and K Adeli. 2014. Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide. EJIFCC 25(3):227-43. PMID: 27683470; PMCID: PMC4975196.

- Willis, Michael. 2020. Peer review quality in the era of COVID. https://www.wiley.com/en-us/network/publishing/research-publishing/trending-stories/peer-review-quality-in-the-era-of-covid-19

- Smith, Richard. 2006. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine 99(4):178-82. doi:10.1258/jrsm.99.4.178

- Horbach, SPJM, and W Halffman. 2019. The ability of different peer review procedures to flag problematic publications. Scientometrics 118:339-373.

- Tennant, JP, and T Ross-Hellauer. 2020. The limitations to our understanding of peer review. Res Integr Peer Rev 5, 6. doi:10.1186/s41073-020-00092-1

- Open Scholarship Initiative. 2016. Report from the OSI2016 Peer Review Workgroup. doi:10.13021/G8K88P
  - “Peer review is the worst form of evaluation except all those other forms that have been tried from time to time.” (with apologies to Winston Churchill)

Glenn Hampson

Feb 8, 2024, 12:25:11 PM
to Rick Anderson, OPENC...@listserv.byu.edu, osi20...@googlegroups.com

I’m out of my depth, Rick, and will defer to others here (or not here yet) who are peer review experts---the esteemed Mark Ware comes to mind.

 

But to take a swipe at the answer anyway, I think peer review might best be described as part of a process that weeds out papers (for various reasons, good and bad), at which point they are put back into the submission pipeline, with or without correction, and most will eventually get published elsewhere. At the same time, this process often provides constructive feedback that can improve papers but not necessarily guarantee they are factually correct or otherwise free from substantive defect.

 

The surveys cited by Ware in his 2008 paper (Ware, M. 2008. “Peer Review: Benefits, Perceptions and Alternatives.” PRC Summary Papers 4:4-20) show an average rejection rate of about 50 percent---20% desk rejections and 30% rejections through peer review. Of the 50% accepted, most (80-ish percent of this total) are accepted on the condition they be revised. Again citing Ware (and similar stats show up elsewhere), most academics say they are satisfied with this system and believe it helps improve their work.

 

So---I think the questions we’re asking are what is the true function of peer review, and what are the limits of this process? We imbue it with so much authority and ability, but it may not deserve these. It is, more accurately, an editorial review system---not really a “gold standard” of anything. If we can grapple with this reality, we can then work on designing the other review processes we need to address the wide variety of other issues peer review cannot effectively address---everything from plagiarism to fake data to bad stats.

 

Does this answer your question? (And please, others with more expertise in this area please do jump in.)

 

Best regards,

 

Glenn

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)

 

 

 

Rick Anderson

Feb 8, 2024, 12:29:07 PM
to Glenn Hampson, OPENC...@listserv.byu.edu, osi20...@googlegroups.com

That is helpful, Glenn, thanks.

 

For me, the issue isn’t so much whether we should use the term “gold standard” to characterize peer review – I don’t care much how we characterize it. I do care whether we understand what it does and whether it’s effective for its intended purpose.

 

I’ll stop posting on this thread now so as to leave more room for other voices.

Lisa Hinchliffe

Feb 9, 2024, 6:40:29 AM
to Glenn Hampson, OPENC...@listserv.byu.edu, The Open Scholarship Initiative
With no blame to Glenn as the emails were coming in fast and furious, I don't say that peer review is the gold standard. In fact, I specifically think that librarians can teach how to find peer reviewed articles without expressing any view at all on why one might want them.

In most sessions for first year students where this is one topic among many for an hour of instruction, the answer truly is "bc your professor says so" ... and ask your professor if you want to know why. (I doubt, btw, it's a full-throated defense of peer review as the gold standard in scholarship, and instead an attempt to keep students in a walled garden that's better than the free-for-all of reddit threads and 4chan, even if it isn't perfect.)

Anyway, if of interest, here's an excerpt from a piece I did on peer review a while back:

"A few years ago I tried to work out a statement of meaning and found it quite difficult. Through feedback on drafts and with various interlocutors, I came to this: “This article has been certified by peer review, which means scholars in the field advised at the time of review it was worth this journal investing resources in editing, typesetting, disseminating, and preserving it so that it can be read by current and future scholars in the field.” I am confident this could be further improved. Nonetheless, unless it is radically understating the value in some way (e.g., that peer review identifies what is true, what can be relied upon as the consensus of the field, etc.), I also can’t help but think that it takes an awful lot of resources to deliver that value. It also points to why it may be that publishers who adopt a more “lightweight” approach to peer review are confident in claiming to do peer review even while others may question that fact. If the question for peer review is whether it is worth a particular journal investing resources in publishing an article — given the data that almost all manuscripts get published somewhere and so it is a matter of where not whether — the threshold for making that judgement of investment-worthiness is not uniform across publishers." (https://scholarlykitchen.sspnet.org/2023/09/22/ask-the-chefs-what-is-the-single-most-pressing-issue-for-the-future-of-peer-review/)

But, that's not what I'm going to get into in a one- hour intro to library research session with first year college students!

Best, Lisa 

Lisa Janicke Hinchliffe
lisali...@gmail.com

--

Johanna Havemann

Feb 9, 2024, 7:02:29 AM
to osi20...@googlegroups.com, Glenn Hampson, Rick Anderson
Dear all, 

I am looking at this a bit late, but just to let you know that imho, peer review is nothing more and nothing less than collegial feedback on someone else’s work, where the reviewer has decent enough insight and expertise on the research topic.

It should never have turned into a bottleneck.

The one-shot best practice in my view is: write a review like you would like to receive one: informative, supportive, acknowledging and highlighting the good parts, and humbly suggesting additions/changes/shared or differing viewpoints.

In essence: don’t be a jerk. Simple as that.

Having said that, Gareth Dyke, Maria Machado and I will be hosting a linked audio call today to propose a community-sourced mapping of existing peer review resources, and from there distill a minimum understanding and must-haves as best practices, so that we can reach a globally inclusive understanding of what peer review is and have a guideline that is easy to follow and open to change and adoption on regional, research-topic, and other bases.

We are working on a proposal to be publicly shared in due course - will make sure to also send it through this list :)

Best wishes
Jo.

Johanna Havemann, PhD
ORCID: https://orcid.org/0000-0002-6157-1494

Email: j...@access2perspectives.org
Tel: +49-30-24617688  
Mobile: +49-176-22882437
LinkedIn: https://linkedin.com/in/johavemann
Twitter: https://twitter.com/openscicomm
Mastodon: @opens...@mastodon.world
Facebook: https://facebook.com/access2perspectives

Book a meeting: https://calendly.com/access2perspectives


Access 2 Perspectives
Open Science & International Cooperation
VAT-ID: DE286577286

Website: https://access2perspectives.org
Our website and email server are carbon neutrally powered by https://windcloud.de

Email: in...@access2perspectives.org 





Glenn Hampson

Feb 12, 2024, 7:12:12 PM
to Bob Henkel, OPENC...@listserv.byu.edu, osi20...@googlegroups.com

Hi Bob,

 

As far as I’m aware anyway, the scholarly publishing industry doesn’t have any official stats that track fraud in all its various forms, from citation rings, to paying rewards for publishing in high-impact-factor journals, to plagiarism, fake reviews, duplicate ink blots, forged data, and more. So it’s probably futile to single out which field has more fraud, because the answer likely depends at least in part on what kind of fraud you’re looking for.

 

Still, the folks in our community who are experts in predatory publishing and retractions might be willing/able to give us some helpful insight into the “profile” of fraud as they’ve seen it (at least with regard to field), so I’m bcc’ing three of them in case they aren’t already on this list:

 

  1. Simon Linacre, formerly of Cabell’s (and now with Digital Science), who recently wrote a book on predatory publishing (Cabell’s publishes a subscription-based tracker for predatory publishing---not that predatory is synonymous with fraud, of course, but I suspect Simon can provide some insight here),
  2. Ivan Oransky, Editor-in-Chief of Retraction Watch (his database has records of 50k+ retractions; again, although retractions aren’t always because of fraud, knowing the subjects where most retractions occur might make for a good proxy?), and
  3. Amy Koerber, the lead author of a recent study on predatory publishing.

 

Cheers,

 

Glenn

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)

 

From: OpenCafe-l <OPENC...@LISTSERV.BYU.EDU> On Behalf Of Bob Henkel
Sent: Monday, February 12, 2024 1:26 PM
To: OPENC...@LISTSERV.BYU.EDU
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

“There are people making literally $100,000s a year by organizing fraudulent Editors, Reviewers, and Authors. It's a gigantic industry especially in several countries (China, Iran, Saudi Arabia, Egypt, Pakistan, etc). Happy to discuss in more details off-list.”

 

Curious if people are seeing this type of fraudulent activity in particular specialties, or is it being seen across the spectrum of medical research?

 

 

Bob Henkel

Senior Director, Publishing

American Society of Nephrology

1401 H Street NW #900

Washington, DC 20005

p. 202-557-8360

f. 202-403-3615

www.asnjournals.org

 

From: OpenCafe-l <OPENC...@LISTSERV.BYU.EDU> On Behalf Of Kaveh Bazargan
Sent: Friday, February 9, 2024 7:48 AM
To: OPENC...@LISTSERV.BYU.EDU
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

Daniel, I would say the problem of bad faith peer review is increasing, as is, more recently, bad faith Editors and Authors. I am looking into Research Integrity quite closely. There are people making literally $100,000s a year by organizing fraudulent Editors, Reviewers, and Authors. It's a gigantic industry especially in several countries (China, Iran, Saudi Arabia, Egypt, Pakistan, etc). Happy to discuss in more details off-list. ;-)

 

Regards
Kaveh

 

On Fri, 9 Feb 2024 at 12:25, Daniel Kulp <danie...@gmail.com> wrote:

I would also like to add one additional point, which can be considered a weakness of peer review if you are looking to exclude fraudulent papers: one basic premise for many peer reviewers is that the authors are acting in good faith. This approach gives the benefit of the doubt to authors, which generally works. It's the minority of bad players that makes it bad for everyone else.

 

This may have changed over the last decade, or so, since I have been out of editorial and more into publishing.

 


 

 




 

--

Kaveh Bazargan PhD

Director

Accelerating the Communication of Research


 



 



 



Simon Linacre

unread,
Feb 13, 2024, 12:43:36 AMFeb 13
to Glenn Hampson, Bob Henkel, OPENC...@listserv.byu.edu, osi20...@googlegroups.com
By its very nature, the kind of fraud we see in scholarly communications is very hard to estimate in scale, but I have done some back-of-the-envelope calculations for predatory publishing here (https://blog.cabells.com/2020/04/08/the-price-of-predatory-publishing/), and I would very much recommend looking at the original FTC judgement on which they are based (linked in the article).

Best wishes, Simon

As a public and publicly-funded effort, the conversations on this list can be viewed by the public and are archived. To read this group's complete listserv policy (including disclaimer and reuse information), please visit https://groups.google.com/g/osi2016-25.

---
You received this message because you are subscribed to the Google Groups "The Open Scholarship Initiative" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osi2016-25+...@googlegroups.com.


--
Simon Linacre

5 Otley Mount
East Morton
Keighley
West Yorkshire BD20 5TD
UK

Twitter: @slinacre

Ivan Oransky

unread,
Feb 14, 2024, 6:04:59 PMFeb 14
to Glenn Hampson, Bob Henkel, OPENC...@listserv.byu.edu, osi20...@googlegroups.com
Responding briefly as I'm jammed at the moment: Any field that hasn't found fraud in its ranks isn't looking very hard. Happy to walk through various lines of evidence but won't have time to do that by email so please follow up off-list to schedule a time.

Ivan Oransky, MD
Editor in Chief, The Transmitter
Distinguished Journalist in Residence, New York University's Arthur Carter Journalism Institute
Co-Founder, Retraction Watch

David Wojick

unread,
Feb 14, 2024, 7:11:30 PMFeb 14
to Ivan Oransky, Glenn Hampson, Bob Henkel, OPENC...@listserv.byu.edu, osi20...@googlegroups.com
That there is some degree of fraud is surely predictable. Many, perhaps most, researchers are in effect independent small businesses competing with one another, for publication among other things. That some will be dishonest is pretty certain.

But serious dishonesty is difficult, and I see no evidence that it extends beyond a few percent of the population. As for data, I suspect that a lot of cases are handled quietly within the guilty party's organization, as with a lot of nonviolent crime.

David
David Wojick, Ph.D. (Philosophy of Science)
Freelance analyst

PS I tried to join Opencafe but it did not work so this is for OSI.

On Feb 14, 2024, at 7:04 PM, Ivan Oransky <ivan-o...@erols.com> wrote:


Responding briefly as I'm jammed at the moment: Any field that hasn't found fraud in its ranks isn't looking very hard. Happy to walk through various lines of evidence but won't have time to do that by email so please follow up off-list to schedule a time.

Ivan Oransky, MD
Editor in Chief, The Transmitter
Distinguished Journalist in Residence, New York University's Arthur Carter Journalism Institute
Co-Founder, Retraction Watch


On Mon, Feb 12, 2024 at 7:12 PM Glenn Hampson <gham...@nationalscience.org> wrote:

Hi Bob,

 

As far as I’m aware anyway, the scholarly publishing industry doesn’t have any official stats that track fraud in all its various forms: citation rings, rewards for publishing in high-impact-factor journals, plagiarism, fake reviews, duplicated blots, forged data, and more. So trying to identify which field has the most fraud is probably somewhat arbitrary, since the answer depends at least in part on what kind of fraud you’re looking for.

 

Still, the folks in our community who are experts in predatory publishing and retractions might be willing/able to give us some helpful insight into the “profile” of fraud as they’ve seen it (at least with regard to field), so I’m bcc’ing three of them in case they aren’t already on this list:

 

  1. Simon Linacre, formerly of Cabell’s (and now with Digital Science), who recently wrote a book on predatory publishing (Cabell’s publishes a subscription-based tracker for predatory publishing---not that predatory is synonymous with fraud, of course, but I suspect Simon can provide some insight here),
  2. Ivan Oransky, Editor-in-Chief of Retraction Watch (his database has records of 50k+ retractions; again, although retractions aren’t always due to fraud, knowing the fields where most retractions occur might make for a good proxy?), and
  3. Amy Koerber, the lead author of a recent study on predatory publishing.

 

Cheers,

 

Glenn

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)


 


Access the OPENCAFE-L Home Page and Archives


 



 


