RE: [OPENCAFE-L] On behalf of Christine Borgman (FW: How to assess science scientifically? was: 'one shot'...)


Glenn Hampson

Feb 8, 2024, 2:02:01 PM
to OpenCafe-l, osi20...@googlegroups.com, cbor...@g.ucla.edu

Hi Christine!

 

For my money, Holly Falk-Krzesinski is hugely experienced in this area (at least in medical research). She wrote a great paper a few years ago that might be helpful for you: How Do I Review Thee? Let Me Count the Ways: A Comparison of Research Grant Proposal Review Criteria Across US Federal Funding Agencies (available via PMC, nih.gov)

 

If you want to go really wild, here’s the reference section from a chapter on “Impact” that I wrote for the Routledge Handbook of Science Communication (2022). (P.S.: I hope this doesn’t overload your intake system, Toby!)

 

  1. Adam, D. (2019) Science funders gamble on grant lotteries. Nature news. https://go.nature.com/3svELIa
  2. Archambault, E. (2018) Universalisation of OA scientific dissemination
  3. Ari M.D., Iskander, J., Araujo, J., et al. (2020) A science impact framework to measure impact beyond journal metrics. PLoS ONE 15(12): e0244407. doi: 10.1371/journal.pone.0244407
  4. Baker, M. (2015) Over half of psychology studies fail reproducibility test. Nature blog post. http://go.nature.com/3bH1Hxb
  5. Baldwin, M. (2018) Scientific Autonomy, Public Accountability, and the Rise of “Peer Review” in the Cold War United States. Isis, volume 109, number 3
  6. Berg, J. (2020) Modeling Research Project Grant Success Rates from NIH Appropriation History: Extension to 2020. bioRxiv. doi: 10.1101/2020.11.25.398339
  7. Bollen, J., Van de Sompel, H., Hagberg, A., et al. (2009) A Principal Component Analysis of 39 Scientific Impact Measures. PLOS ONE 4(6): e6022. doi: 10.1371/journal.pone.0006022
  8. Bornmann, L. (2012) Measuring the societal impact of research: research is less and less assessed on scientific impact alone--we should aim to quantify the increasingly important contributions of science to society. EMBO Rep. 2012;13(8):673-676. doi:10.1038/embor.2012.99
  9. Dolgin, E. (2017) Why we left academia: Corporate scientists reveal their motives. Nature Careers. http://go.nature.com/3kqt6Y6
  10. Etzioni, O. (2019) AI Academy Under Siege. Inside Higher Ed blog post. http://bit.ly/3q3gS91
  11. Fabbri, A., Lai, A., Grundy, Q., et al. (2018) The Influence of Industry Sponsorship on the Research Agenda: A Scoping Review. Am J Public Health. 2018;108(11):e9-e16 doi:10.2105/AJPH.2018.304677
  12. Falk-Krzesinski, H.J., and Tobin, S.C. (2015) How Do I Review Thee? Let Me Count the Ways: A Comparison of Research Grant Proposal Review Criteria Across US Federal Funding Agencies. J Res Adm. 2015;46(2):79-94
  13. Fang, F.C., and Casadevall, A. (2016) Research Funding: The Case for a Modified Lottery. mBio editorial. doi: 10.1128/mBio.00422-16
  14. Gordon, M., Viganola, D., Bishop, M., et al. (2020) Are replication rates the same across academic fields? Community forecasts from the DARPA SCORE programme. Royal Society Open Science, Vol 7;7. doi: 10.1098/rsos.200566
  15. Gugerty, M.K., and Karlan, D. (2018) Ten Reasons Not to Measure Impact---and What to Do Instead. Stanford Social Innovation Review
  16. Hampson, G., DeSart, M., Kamerlin, L., et al. (2021) OSI Policy Perspective 4: Open Solutions: Unifying the meaning of open and designing a new global open solutions policy framework. Open Scholarship Initiative. January 2021 edition. doi: 10.13021/osi2020.2930
  17. Holbrook, J.B., and Frodeman, R. (2011). Peer review and the ex ante assessment of societal impacts. Research Evaluation 20:3, pp. 239-246. doi: 10.3152/095820211X12941371876788
  18. Larivière, V., Macaluso, B., Mongeon, P., et al. (2018) Vanishing industries and the rising monopoly of universities in published research. PLOS ONE 13(8): e0202120. doi: 10.1371/journal.pone.0202120
  19. Lok, C. (2010). Science funding: Science for the masses. Nature 465, 416-418 (2010). doi:10.1038/465416a
  20. Mallapaty, S. (2020) China bans cash rewards for publishing papers. Nature news. https://go.nature.com/3qPbB66
  21. National Institutes of Health (NIH). (2013) Additional scoring guidance for research applications. https://bit.ly/3uJpsO3
  22. National Research Council (NRC). (2014) Furthering America's Research Enterprise. Washington, DC: The National Academies Press. doi: 10.17226/18804.
  23. National Science Foundation (NSF). (2013) Chapter II - Proposal Preparation Instructions. http://bit.ly/2O0C2Yx
  24. OSI. (2018) Comment on proposed rule, “Strengthening Transparency in Regulatory Science,” EPA-HQ-OA-2018- 0259. Open Scholarship Initiative. https://bit.ly/3bHgN5O
  25. OSI. (2021) How do researchers decide where to publish? OSI Infographic 3. Open Scholarship Initiative. http://bit.ly/37VVvk1
  26. Packalen, M., and Bhattacharya, J. (2020) NIH funding and the pursuit of edge science. PNAS Jun 2020, 117 (22) 12011-12016; doi: 10.1073/pnas.1910160117
  27. Plume, A., and van Weijen, D. (2014) Publish or perish? The rise of the fractional author. Research Trends 38
  28. PricewaterhouseCoopers (PWC). (2020) Pharma 2020: Challenging business models. White paper. http://pwc.to/38codwT
  29. Priyadarshini, S. (2018). India targets universities in predatory-journal crackdown. Nature. doi: 10.1038/d41586-018-06048-2
  30. Ravenscroft J., Liakata, M., Clare, A., et al. (2017) Measuring scientific impact beyond academia: An assessment of existing impact metrics and proposed improvements. PLOS ONE 12(3): e0173152. doi: 10.1371/journal.pone.0173152
  31. Reale, E., Avramov, D., Canhial, K., et al. (2018) A review of literature on evaluating the scientific, social and political impact of social sciences and humanities research. Research Evaluation, 27(4), 298–308
  32. Research Excellence Framework (REF). (2021). https://www.ref.ac.uk
  33. SAGE Publishing. (2019) The latest thinking about metrics for research impact in the social sciences (White paper). Thousand Oaks, CA: Author. doi: 10.4135/wp190522
  34. Schimanski, L.A., and Alperin, J.P. (2018) The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future. F1000Res. 2018;7:1605. doi:10.12688/f1000research.16493.1
  35. Science & Engineering Indicators (SEI). (2020) US National Science Board
  36. Sinatra, R., Wang, D., Deville, P., et al. (2016) Quantifying the evolution of individual scientific impact. Science, Vol. 354, Issue 6312, aaf5239. doi: 10.1126/science.aaf5239
  37. Taylor & Francis. (2019) Taylor & Francis Researcher Survey. https://bit.ly/3koHgrX
  38. UNESCO. (2021) UNESCO Institute for Statistics (UIS) dataset. http://data.uis.unesco.org
  39. Wahls, W. (2018) High cost of bias: Diminishing marginal returns on NIH grant funding to institutions. bioRxiv 367847. doi: 10.1101/367847
  40. Wang, D., and Barabási, A.L. (2021) The Science of Science. Cambridge University Press
  41. Weisshaar, K. (2017) Publish and Perish? An Assessment of Gender Gaps in Promotion to Tenure in Academia, Social Forces, Volume 96, Issue 2, Pages 529–560, doi: 10.1093/sf/sox052
  42. Wootton, D. (2015) The Invention of Science: A New History of the Scientific Revolution. HarperCollins
  43. Wu, J. (2018) Why U.S. Business R&D Is Not as Strong as It Appears. Information Technology & Innovation Foundation. White paper. http://www2.itif.org/2018-us-business-rd.pdf

 

 

From: OpenCafe-l <OPENC...@LISTSERV.BYU.EDU> On Behalf Of Rick Anderson
Sent: Thursday, February 8, 2024 10:05 AM
To: OPENC...@LISTSERV.BYU.EDU
Subject: [OPENCAFE-L] On behalf of Christine Borgman (FW: How to assess science scientifically? was: 'one shot'...)

 

Listers –

 

Due to the vagaries of her local email system, Christine Borgman’s message below was rejected by the list platform. I’m forwarding it at her request.

 

---

Rick Anderson

University Librarian

Brigham Young University

(801) 422-4301

rick_a...@byu.edu

 

 

From: CHRISTINE L BORGMAN <cbor...@g.ucla.edu>
Date: Thursday, February 8, 2024 at 9:57 AM
To: "OPENC...@listserv.byu.edu" <OPENC...@LISTSERV.BYU.EDU>
Cc: Rick Anderson <rick_a...@byu.edu>
Subject: How to assess science scientifically? was: 'one shot'...

 

Thanks to all for a most interesting discussion and history of peer review!

 

I’m attempting to start a new thread to build upon the ‘one shot’ scholarly communication thread:

 

Let's take a few steps earlier in the cycle of scholarly inquiry and ask how to assess the quality of research proposals and projects. Put simply, how do we evaluate scientific missions scientifically?

 

Only successful projects lead to the science that produces the journal articles subject to peer review, hence our interest in tracing further back in the process.

 

With colleagues in astronomy, we are studying the processes by which major observatory proposals (on the scale of Keck, Hubble, JWST, etc.) are evaluated at the initial stage of competition, at interim reviews, and at continuing reviews for further funding or cancellation. We are finding little written about these kinds of peer reviews, which too often devolve into simple citation metrics and opaque expert judgments. Some of the evaluation reports are proprietary due to concerns about intellectual property, intelligence issues, and so on. The process is far from transparent.

 

The lit review by Mayernik et al. (below) is among the few to address these questions.

 

All thoughts (and references) on how to apply peer review mechanisms to ‘big science’ appreciated. 

 

Mayernik, M. S., Hart, D. L., Maull, K. E., & Weber, N. M. (2017). Assessing and tracing the outcomes and impact of research infrastructures. Journal of the Association for Information Science and Technology, 68(6), 1341–1359. https://doi.org/10.1002/asi.23721

Christine

 

On Feb 8, 2024, at 09:29, Rick Anderson <rick_a...@BYU.EDU> wrote:

 

That is helpful, Glenn, thanks. 

 

For me, the issue isn’t so much whether we should use the term “gold standard” to characterize peer review – I don’t care much how we characterize it. I do care whether we understand what it does and whether it’s effective for its intended purpose.

 

I’ll stop posting on this thread now so as to leave more room for other voices.

 

Rick

 

--- 

Rick Anderson

University Librarian

Brigham Young University

 

 

From: Glenn Hampson <gham...@nationalscience.org>
Date: Thursday, February 8, 2024 at 9:25 AM
To: Rick Anderson <rick_a...@byu.edu>, "'OPENC...@LISTSERV.BYU.EDU'" <OPENC...@LISTSERV.BYU.EDU>
Cc: "'osi20...@googlegroups.com'" <osi20...@googlegroups.com>
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

I’m out of my depth, Rick, and will defer to others here (or not here yet) who are peer review experts---the esteemed Mark Ware comes to mind.

 

But to take a swipe at the answer anyway, I think peer review might best be described as part of a process that weeds out papers (for various reasons, good and bad); most of those papers are then put back into the submission pipeline, with or without correction, and will eventually get published elsewhere. At the same time, this process often provides constructive feedback that can improve papers, though it cannot guarantee they are factually correct or otherwise free from substantive defect.

 

The surveys cited by Ware in his 2008 paper (Ware M. 2008. “Peer Review: Benefits, Perceptions and Alternatives.” PRC Summary Papers 4:4-20) show an average rejection rate of about 50 percent---20% desk rejections and 30% rejections through peer review. Of the roughly 50% accepted, most (about 80 percent of that total) are accepted on the condition that they be revised. Again citing Ware (and similar stats show up elsewhere), most academics say they are satisfied with this system and believe it helps improve their work.

 

So---I think the questions we’re asking are: what is the true function of peer review, and what are the limits of this process? We imbue it with a great deal of authority and ability, but it may not deserve either. It is, more accurately, an editorial review system---not really a “gold standard” of anything. If we can grapple with this reality, we can then work on designing the other review processes we need for the wide variety of issues peer review cannot effectively address---everything from plagiarism to fake data to bad stats.

 

Does this answer your question? (And please, others with more expertise in this area please do jump in.)

 

Best regards,

 

Glenn

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)

 

 

 

From: Rick Anderson <rick_a...@byu.edu> 
Sent: Thursday, February 8, 2024 8:39 AM
To: Glenn Hampson <gham...@nationalscience.org>; 'OPENC...@LISTSERV.BYU.EDU' <OPENC...@LISTSERV.BYU.EDU>
Cc: 'osi20...@googlegroups.com' <osi20...@googlegroups.com>
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

Thanks, Glenn. To your knowledge, did any of these studies find a way to evaluate articles that are rejected through peer review?

 

In other words, one of the key functions of a journal is to reject. The effectiveness of rejection is hugely important, which means that studying the record of published articles necessarily means ignoring a core function of peer review. With apologies for not having the time to read all of these (but sincere appreciation to you for sharing them), can you tell us whether, and if so how, any of these studies might have accounted for that?

 

Rick

 

--- 

Rick Anderson

University Librarian

Brigham Young University

 

 

From: Glenn Hampson <gham...@nationalscience.org>
Date: Thursday, February 8, 2024 at 7:52 AM
To: Rick Anderson <rick_a...@byu.edu>, "'OPENC...@LISTSERV.BYU.EDU'" <OPENC...@LISTSERV.BYU.EDU>
Cc: "'osi20...@googlegroups.com'" <osi20...@googlegroups.com>
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

Gladly, Rick. Here are the citations from my 2020 BRISPE presentation (some studies, some articles). There are many others before and since---this is just a sample:

 

- Kelly, J., Sadeghieh, T., and Adeli, K. (2014) Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide. EJIFCC. 2014 Oct 24;25(3):227-43. PMID: 27683470; PMCID: PMC4975196.

- Willis, Michael. (2020) Peer review quality in the era of COVID. https://www.wiley.com/en-us/network/publishing/research-publishing/trending-stories/peer-review-quality-in-the-era-of-covid-19

- Smith, Richard. (2006) Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine 99(4): 178-82. doi: 10.1258/jrsm.99.4.178

- Horbach, S.P.J.M., and Halffman, W. (2019) The ability of different peer review procedures to flag problematic publications. Scientometrics 118, 339–373

- Tennant, J.P., and Ross-Hellauer, T. (2020) The limitations to our understanding of peer review. Res Integr Peer Rev 5, 6. doi: 10.1186/s41073-020-00092-1

- Open Scholarship Initiative. (2016) Report from the OSI2016 Peer Review Workgroup. doi: 10.13021/G8K88P

  “Peer review is the worst form of evaluation except all those other forms that have been tried from time to time.” (with apologies to Winston Churchill)

 

From: Rick Anderson <rick_a...@byu.edu> 
Sent: Thursday, February 8, 2024 7:29 AM
To: Glenn Hampson <gham...@nationalscience.org>; OPENC...@LISTSERV.BYU.EDU
Cc: osi20...@googlegroups.com
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

> but many studies have complained over the years that the evidence is unclear whether peer

> review actually improves research (beyond making articles more readable).

 

Glenn, could you share links to a few of these?

 

--- 

Rick Anderson

University Librarian

Brigham Young University

 

 

From: <osi20...@googlegroups.com> on behalf of Glenn Hampson <gham...@nationalscience.org>
Date: Thursday, February 8, 2024 at 7:27 AM
To: "OPENC...@LISTSERV.BYU.EDU" <OPENC...@LISTSERV.BYU.EDU>
Cc: "osi20...@googlegroups.com" <osi20...@googlegroups.com>
Subject: RE: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

Wow. Living on the West Coast of the US can be rough. By the time your day gets started, listserv conversations can be almost over! If I may, there are a couple of issues here that I see differently from my esteemed colleagues.

 

First, to this whole notion introduced by my friends Rick and Lisa that peer review is highly effective at weeding out garbage and allowing good scholarship to get published: this is certainly true for the editorial process in general (like the desk rejection process), but it isn’t true of peer review specifically. The peer review process is highly regarded by researchers, seen as a signal of quality (see https://bit.ly/3otwKRs), and highly valued by funders and institutions, but many studies over the years have noted that the evidence is unclear as to whether peer review actually improves research (beyond making articles more readable).

 

This process also varies by journal (see note below) and is highly subject to bias, as Daniel mentions---by idea, gender, nationality, etc.

 

Here’s a link to a presentation I gave a few years ago on this topic. There’s too much detail to bore you with in a listserv email but the presentation has references included if you want to dig deeper: BRISPE-presentation-final-Hampson.pdf (osiglobal.org). In particular, I suggest you read Melinda Baldwin’s great paper on the history of peer review (Baldwin, Melinda. 2018. Scientific Autonomy, Public Accountability, and the Rise of “Peer Review” in the Cold War United States. Isis, volume 109, number 3). The peer review system we use today is essentially a byproduct of US Congressional oversight in the mid-1970s; it took decades thereafter for this process to become widely used throughout the world.

 

So what do you tell your students in your one-shot, Melissa? I don’t know. Maybe that peer review is a quality-control process we invented to help “monitor” science, and that it has since become an institution unto itself, with a mythology larger than its actual value to science?

 

Regarding Pooja’s story about gatekeeping: I know this might not make your colleague feel better, but most papers are rejected at least once, for any number of reasons (as Jean-Claude explained, such as a bad fit with the journal’s focus). Across all kinds of journals, the average rejection rate is a whopping 60-65% (https://doi.org/10.3145/epi.2019.jul.07); individual rates vary widely by journal, ranging from 0% to 90% and higher. About 20% of papers are rejected before peer review for being out of scope, among other reasons (see https://bit.ly/2YnYoVv). All that said, most papers eventually get published somewhere: two-thirds of preprints posted before 2017 were later published in peer-reviewed journals within 12-18 months (see https://doi.org/10.7554/eLife.45133). Also, if your colleague is submitting to 65 different journals, they may be casting too wide (and too unfocused) a net.

 

And finally, to Toby and Danny’s big-picture thinking, here’s an infographic OSI created a few years ago to show how review and publishing fit into (and feed into) the full idea lifecycle: OSI-Infographic-1.0: The Idea Lifecycle (osiglobal.org). There’s a lot more to research than just publishing, obviously, but to Jean-Claude’s point, publishing still plays a critical role (and always has throughout the history of science). What form this takes in the future is where so much of the attention and effort in the OA reform space has been directed.

 

Best regards,

 

Glenn

 

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)

 

 

Note: Generally speaking, specialty and prestige journals provide high-quality peer review, and even some preprint servers are experimenting with new forms of peer review. Regional journals don’t always provide the kind of peer review required by specialty journals; peer review quality here varies widely.

 

 

 

From: OpenCafe-l <OPENC...@LISTSERV.BYU.EDU> On Behalf Of Daniel Kulp
Sent: Thursday, February 8, 2024 6:32 AM
To: OPENC...@LISTSERV.BYU.EDU
Subject: Re: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

At the end of the day, peer review is run by people (editors, reviewers, etc.), and people are susceptible to bias. Is peer review perfect? No, it’s not. But it is likely the best we have at the moment. I certainly support experiments in the publishing industry, but I have yet to see a process that is consistently better and able to be applied at scale. That is how I would frame peer review to students.

 

Daniel Kulp, PhD


Founder, PIE Consulting
Publicationintegrity.com

 

 

 

On Feb 8, 2024, at 5:21 AM, Jean-Claude Guédon <jean.clau...@UMONTREAL.CA> wrote:

 

My own take on peer review is that it is part of a larger process which I have sometimes described as the "Great Conversation" that stands behind knowledge production. Voltaire says somewhere - and I am paraphrasing - that it is difficult to live without certainty, but that believing certainty exists is ridiculous. Knowledge production works exactly at that level, and that is where it differs from belief or conviction. Peer review is part of the process one can use to allow the best forms of human thinking to percolate to the surface and become reference points for the further evolution of knowledge. Knowledge can only claim reliability, not certainty.

Rick is right when he says that, when executed competently and honestly, peer review is highly effective. The main problem is that parts of the process can remain quite opaque. For example, that "desk rejection" Rick mentions generally involves one person only. That person - the editor - may have two divergent objectives in his/her mind: on the one hand, his/her notion of quality, and, on the other, the effect of the article on the general standing of the journal, especially in a tightly controlled competition system such as the impact-factor driven mechanism.

Imagine yourself in the following situation: you have room (i.e. resources) to publish one article. You have two submissions. One article is on a hot topic but its quality is ho-hum. The other one appears stellar but on a topic that is more marginal in the present development of knowledge (perhaps it is not yet well understood, or whatever). Which principle will be used at the desk rejection level? The first article is bound to improve your impact factor; the second article may lower your impact factor. This is because the relationship of the impact factor to quality is both tenuous and ambiguous.

More generally, how is peer review affected by the fact that scientific articles and journals are two different kinds of objects, yet have been entangled with each other since the advent of print? And this leads to another question: in a digital world, do journals still matter, and, if so, how? How do journals relate to platforms? To communities? Etc.

Jean-Claude Guédon

On 2024-02-07 23:23, Rick Anderson wrote:

Here’s how I explain peer review to students:

 

When an author submits her paper to a peer-reviewed journal, the journal’s editor gives it a first look. If it doesn’t appear to be up to scratch (which can mean any number of things: obviously poor methodology, illegibility, irrelevance, etc.) then the editor rejects it -- we call this “desk rejection.” If it looks like it has promise, the editor sends it out to one or more (usually at least two) reviewers. They’re called “peer” reviewers because they work in the same field as the author, or a closely adjacent one, so they’re in a good position to evaluate the scholarship. The reviewers are asked to read the paper more closely and evaluate it on its scholarly merits: is its methodology sound; do the conclusions proceed from the data; is it well organized and cogently written; do the cited works actually support the arguments in support of which they’re cited; etc. The reviewers submit reviews with recommendations as to whether the article should be rejected, or returned for revision, or published as is. This process may involve two or three rounds before the paper is finally published or rejected.

 

It's by no means a fail-safe system, but when executed competently and honestly, it’s highly effective at weeding out garbage and allowing good scholarship to get published. Unfortunately, the competence and honesty of journals is highly variable. Sometimes they get into bed with corporations who want to see certain things published; sometimes editors of different journals collude with each other to require authors to cite each other’s publications; and in recent years, there’s been a growing industry of journals that dishonestly claim to carry out peer review when in fact they will publish anything submitted to them as long as the author pays a publication fee. So before submitting to a journal, it’s really important to do your due diligence.

 

--- 

Rick Anderson

University Librarian

Brigham Young University

 

 

From: OpenCafe-l <OPENC...@LISTSERV.BYU.EDU> on behalf of Danny Kingsley <da...@DANNYKINGSLEY.COM>
Reply-To: Danny Kingsley <da...@DANNYKINGSLEY.COM>
Date: Wednesday, February 7, 2024 at 8:03 PM
To: "OPENC...@LISTSERV.BYU.EDU" <OPENC...@LISTSERV.BYU.EDU>
Subject: [OPENCAFE-L] The 'one shot' scholarly communication talk

 

Hi everyone,

 

I’m picking up in a new thread something Melissa noted:

 

As a librarian, I need to be able to stand in front of a class of freshmen, as I am about to do tonight, to explain what peer-review is and why it's the gold standard for what they cite in their papers, and to be able to say it with a straight face without feeling like a liar. For those of you who know what a "one-shot" is, you know we do NOT have time to explain the intricacies of the scholarly publishing industry, its good and bad financial incentives, etc., even if we understand them fully ourselves. We don't even have time to explain all that to graduate students.

 

This is a really good point for discussion.

 

How do people approach this type of explanation? I am thinking there is a parallel with the difference between what is written in textbooks and what is happening in the scholarly literature. Textbooks tend to present information as ‘decided’; information published in the literature is the ongoing debate. Textbooks change perspective and ideas slowly, while a paper can get shot down in weeks or months.

 

So, do we provide the ‘textbook’ version to students: “This is how science works: a research team finds something out, writes it up, and sends it to a journal; it gets sent to experts in the field, they comment, amendments are made, and then it is published”.

 

Or do we bring in some of the broader picture: “Researchers don’t get paid to publish. Publication is the way researchers gain ‘prestige’ - the better their paper and (more commonly) the more prestigious the place they publish it, the more it ‘counts’ towards their academic standing. There are systems that count how many papers people have published, where they have published, and how many other people have subsequently cited their work. These numbers feed into most decision-making in research - whether someone gets a promotion, whether they get a grant, how an institution fares in national ‘research excellence’ exercises, and how universities get ranked.”

 

Or do we lay it down: “The very narrow focus on what constitutes ‘success’ in research has unfortunately resulted in some very poor behaviour…”

 

 

I am conscious that when this is new to people it can seem overwhelming. A comment at last year’s AIMOS conference (which consisted of multiple presentations about research on research, uncovering a swathe of issues) was that it was very depressing, and that it made it hard to believe anything that was published. To be honest, when you read articles like this one https://www.theguardian.com/science/2024/feb/03/the-situation-has-become-appalling-fake-scientific-papers-push-research-credibility-to-crisis-point (which refers to activity all over the world), you can get depressed.

 

My response is that it is good we are lifting the lid on this - these are the steps we make towards fixing the problems.

 

But we want our community to ‘be alert not alarmed’.

 

How do people approach this discussion in their own institutions?

 

Danny

 

 

 

Dr Danny Kingsley

Scholarly Communication Consultant
Visiting Fellow, Australian National Centre for the Public Awareness of Science, ANU

Adjunct Senior Lecturer, Charles Sturt University
Member, Board of Directors, FORCE11
Member, Australian Academy of Science National Committee for Data in Science
---------------------------------------
e: da...@dannykingsley.com
m: +61 (0)480 115 937
t:@dannykay68

b: @dannykay68.bsky.social
o: 0000-0002-3636-5939

 

 

 

 

 

 



 

Christine L. Borgman, Distinguished Research Professor, Information Studies

 

 

 

 

 


Access the OPENCAFE-L Home Page and Archives

To unsubscribe from OPENCAFE-L send an email to: OPENCAFE-L-si...@LISTSERV.BYU.EDU
