Hi Scott,
Thanks for sharing this. What did you think of this article?
Personally, I'm flummoxed. This reads to me like yet another attempt at scholarship on predatory publishing by people who don't understand the topic (we ran into several of these papers last year, where researchers were using Beall's List as their starting point, or using OA samples from the wrong indices). In this paper, four authors from the University of Liege library in Belgium analyzed the accuracy of Cabell's list by benchmarking it against Walt Crawford's 2012-16 OA grey list (which was a manual count of OA journals that weren't in DOAJ). Um... why?
There are some good criticisms in here about how Cabell's might want to improve, but other criticisms that were simply unwarranted (e.g., that the Cabell's database was inaccurate because it didn't align with Crawford's work) or exhibited a lack of understanding (e.g., questioning, at length, why journals that don't actually publish articles should be considered predatory).
Simon, I'd be curious whether you found anything in this analysis helpful, or whether you plan to respond at all in your own article.
Best,
Glenn
Glenn Hampson
Executive Director
Science Communication Institute (SCI)
Program Director
Open Scholarship Initiative (OSI)
--
As a public and publicly-funded effort, the conversations on this list can be viewed by the public and are archived. To read this group's complete listserv policy (including disclaimer and reuse information), please visit http://osinitiative.org/osi-listservs.
Dear Glenn,
Thanks for the opportunity to respond. As many in OSI will know, I am Director of International Marketing & Development at Cabells, and there have been a number of comments on various channels regarding the recently published article 'How reliable and useful is Cabell's Blacklist? A data-driven analysis' (2020) by Dony et al. (https://www.liberquarterly.eu/articles/10.18352/lq.10339/).
My colleagues and I were alerted to the publication of the article via social media over the weekend, having been hitherto unaware that this research had been conducted on Cabells' Predatory Reports database. We have been shocked and angered by the tone and content of the article, and after close examination we have a number of serious questions, which we have directed to both LIBER Quarterly (LQ) and the article's authors, regarding their research conduct and publication processes.
At this stage, Cabells would prefer not to go into the specifics of these concerns, but for the record it would like to state that while it certainly accepts in good grace some of the findings and recommendations made by the authors in their article, it disputes many of the findings and conclusions shared in the paper. Our work on Predatory Reports has enabled many authors to avoid predatory publications, saved funders from wasting their resources, and empowered universities to make evidence-based decisions on recruitment and promotion.
We hope to share more details on this at a future date.
Many thanks,
Simon Linacre