RE: [scisip] Journal Prestige and Reliability


Glenn Hampson

Feb 21, 2018, 12:05:06 AM
to Fiore, Steve, SCI...@listserv.nsf.gov, rsc...@googlegroups.com, osi20...@googlegroups.com

Hi Steve,

 

Do you agree with the author’s conclusion (“there is no evidence that articles published in higher ranking journals are methodologically stronger”)? To me anyway, it isn’t supportable for two main reasons:

 

  1. His study is based on impact factors, which are a notoriously inept measure of who knows what exactly; it’s not hard to find articles calling for the death of impact factors. This one is my favorite: https://www.peerageofscience.org/a-tale-of-two-distributions/. Stats nerds will appreciate this take. And,
  2. It generalizes from very specific examples in specific fields of study to broad conclusions about all journals in all fields of study. That is, the quality of computer models in papers about molecular structures is better in lower impact factor journals than in higher ones, statistical power in psych studies is better in lower-ranked journals, experimental design is better in lower-ranked animal studies, and so on (assuming, of course, that the sampled journals didn’t lie about their impact factors, which is a problem seen with predatory journals). Ergo “(1) experiments reported in high-ranking journals are no more methodologically sound than those published in other journals; and (2) experiments reported in high-ranking journals are often less methodologically sound than those published in other journals.” I don’t see how this generalization is warranted, but then maybe I’m reading this all wrong.

 

So I don’t know; maybe don’t go publishing in the “Journal Nobody Reads” just yet?

 

Best,

 

Glenn

 

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)


2320 N 137th Street | Seattle, WA 98133
(206) 417-3607 | gham...@nationalscience.org | nationalscience.org

 

 

 

From: Science of Science Policy Listserv [mailto:SCI...@LISTSERV.NSF.GOV] On Behalf Of Fiore, Steve
Sent: Tuesday, February 20, 2018 7:07 PM
To: SCI...@LISTSERV.NSF.GOV
Subject: [scisip] Journal Prestige and Reliability

 

Hi Everyone - There is a new article out summarizing various studies that examined journal prestige in relation to the quality of the research published in those journals.  The article is fully open access and can be found at this link (https://www.frontiersin.org/articles/10.3389/fnhum.2018.00037/full).  But I've also pasted the abstract and a table that summarizes the various disciplines, the criteria for research quality/reliability, and what was found in association with journal ranking.

 

Best,

Steve

 

--------

Stephen M. Fiore, Ph.D.

Professor, Cognitive Sciences, Department of Philosophy (philosophy.cah.ucf.edu/staff.php?id=134)

Director, Cognitive Sciences Laboratory, Institute for Simulation & Training (http://csl.ist.ucf.edu/)

University of Central Florida

sfi...@ist.ucf.edu

 

 

Prestigious Science Journals Struggle to Reach Even Average Reliability
Björn Brembs, Universität Regensburg, Regensburg, Germany

In which journal a scientist publishes is considered one of the most crucial factors determining their career. The underlying common assumption is that only the best scientists manage to publish in a highly selective tier of the most prestigious journals. However, data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with increasing rank of the journal. On the contrary, an accumulating body of evidence suggests the inverse: methodological quality and, consequently, reliability of published research works in several fields may be decreasing with increasing journal rank. The data supporting these conclusions circumvent confounding factors such as increased readership and scrutiny for these journals, focusing instead on quantifiable indicators of methodological soundness in the published literature, relying on, in part, semi-automated data extraction from often thousands of publications at a time. With the accumulating evidence over the last decade grew the realization that the very existence of scholarly journals, due to their inherent hierarchy, constitutes one of the major threats to publicly funded science: hiring, promoting and funding scientists who publish unreliable science eventually erodes public trust in science.

 

 

Table 1. Overview of the cited literature on journal rank and methodological soundness.

 

 

########################################################################

To send to the list, address your message to: SCI...@listserv.nsf.gov

To subscribe to the list: send the text “subscribe SCISIP” to list...@listserv.nsf.gov

To unsubscribe: send the text “unsubscribe SCISIP” to list...@listserv.nsf.gov


Glenn Hampson

Feb 21, 2018, 11:44:13 AM
to SCI...@listserv.nsf.gov, rsc...@googlegroups.com

Alas, no it doesn’t, David. Here are two OSI conference papers that describe in more detail the impact factor and what the scholarly community might want to do about it (for those of you who need extra reading 😊):

 

 

Best,

 

Glenn

 

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)


2320 N 137th Street | Seattle, WA 98133
(206) 417-3607 | gham...@nationalscience.org | nationalscience.org

 

 

 

From: Science of Science Policy Listserv [mailto:SCI...@LISTSERV.NSF.GOV] On Behalf Of David Wojick
Sent: Wednesday, February 21, 2018 5:43 AM
To: SCI...@LISTSERV.NSF.GOV
Subject: Re: [scisip] Journal Prestige and Reliability

 

On the other hand, these results may be quite plausible. The IF measures average importance based on near-term citations. Importance and methodological quality are different parameters, so it would not be surprising if they were not well correlated. Given that important discoveries often come early in the research cycle and are then refined, one might even expect this result.
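
For concreteness, the standard two-year calculation works out like this (a sketch in Python with invented counts, not figures for any real journal):

# Two-year impact factor for year Y: citations received in year Y to
# items the journal published in Y-1 and Y-2, divided by the number of
# citable items published in those two years. All counts are invented.
citations_2017_to_2015_2016_items = 1200
citable_items_2015_2016 = 400
impact_factor_2017 = citations_2017_to_2015_2016_items / citable_items_2015_2016
print(f"2017 IF: {impact_factor_2017:.1f}")  # 2017 IF: 3.0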

 

David

Inside Public Access



Glenn Hampson

Feb 21, 2018, 2:46:26 PM
to David Wojick, SCI...@listserv.nsf.gov, rsc...@googlegroups.com

Hi David,

 

Article citations are one of many indicators of “importance,” but the impact factor doesn’t measure individual articles; it’s a journal-level measure. So one highly cited article in an otherwise unremarkable journal can push up the journal’s impact factor.
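
A toy example in Python, with invented citation counts for a very small journal, makes the arithmetic concrete:

# Invented two-year citation counts for a small journal's 20 papers:
# nineteen barely-cited papers and one hit.
typical_papers = [0, 1, 1, 0, 2, 1, 0, 3, 1, 0, 2, 1, 1, 0, 2, 0, 1, 1, 0]
hit_paper = [300]

if_without_hit = sum(typical_papers) / len(typical_papers)
if_with_hit = sum(typical_papers + hit_paper) / (len(typical_papers) + len(hit_paper))

print(f"IF without the hit: {if_without_hit:.2f}")  # 0.89
print(f"IF with the hit:    {if_with_hit:.2f}")     # 15.85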

 

To your broader question, though, there are various altmetrics that take into account a broader universe of “important” stuff like downloads, tweets, etc. There are also those who note that “important” work can go relatively uncited because it’s ahead of its time, it’s very niche, it’s published in the wrong place, etc.

 

Happy to chat more about this off-list.

 

Best,

 

Glenn

 

 

Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)


2320 N 137th Street | Seattle, WA 98133
(206) 417-3607 | gham...@nationalscience.org | nationalscience.org

 

 

 

From: Science of Science Policy Listserv [mailto:SCI...@LISTSERV.NSF.GOV] On Behalf Of David Wojick
Sent: Wednesday, February 21, 2018 11:05 AM
To: SCI...@LISTSERV.NSF.GOV
Subject: Re: [scisip] Journal Prestige and Reliability

 

Glenn, are you saying that citation is not an indicator of importance? A lot of scientometric work seems to assume that it is.

David
http://insidepublicaccess.com/


Brooke Struck

Feb 21, 2018, 3:04:32 PM
to Glenn Hampson, David Wojick, SCI...@listserv.nsf.gov, rsc...@googlegroups.com

Hi Glenn and David,

 

Just a few quick notes.

 

  1. Given that impact factors (IF) are computed over the whole output of the journal, I’m not sure about the potential for a single highly cited article to drive the IF score so much. IF scores are much more normally distributed than citation scores for single papers (see the quick simulation sketch after this list).

  2. I’d be wary about too easily equating altmetrics with “broader impact,” as was discussed at STI 2017 in Paris last year: http://www.sciencemetrics.org/metrics-state-of-the-alt/

  3. There’s some evidence that interdisciplinary work—perhaps a type of boundary-pushing work you have in mind—does indeed take longer to start getting cited, consistent with the hypothesis that it may be ahead of its time: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127298
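
A quick simulation of point 1 (a sketch only; the lognormal citation model is an assumption for illustration, not Science-Metrix data):

import numpy as np

rng = np.random.default_rng(0)

# Assumed model: per-paper citation counts drawn from a heavily skewed
# (lognormal) distribution -- a common stylized fact, not real data.
n_journals, papers_per_journal = 1000, 400
citations = rng.lognormal(mean=1.0, sigma=1.2, size=(n_journals, papers_per_journal))

journal_ifs = citations.mean(axis=1)  # journal-level averages, IF-style

print(f"per-paper citations: median {np.median(citations):.1f}, "
      f"99th percentile {np.percentile(citations, 99):.1f}")
print(f"journal averages:    median {np.median(journal_ifs):.1f}, "
      f"99th percentile {np.percentile(journal_ifs, 99):.1f}")
# The per-paper distribution has a long right tail; the journal-level
# averages are far more concentrated and symmetric.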

 

Brooke

 

 

Brooke Struck, Ph.D.

Senior Policy Officer | Spécialiste des politiques

Science-Metrix

1335, Mont-Royal E

Montréal, QC  H2J 1Y6

Canada

 


T. 1.514.495.6505 x.117

T. 1.800.994.4761 x.117

F. 1.514.495.6523

brooke...@science-metrix.com

www.science-metrix.com

 


--
You received this message because you are subscribed to the Google Groups "The Research & Scholarly Communications Network" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rscomm+un...@googlegroups.com.
To post to this group, send email to rsc...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/rscomm/006b01d3ab4c%24a6661b10%24f3325130%24%40nationalscience.org.
For more options, visit https://groups.google.com/d/optout.
