Credits, impact factors and the biomedical publishing industry


jeffer...@gmail.com

Mar 18, 2009, 3:18:44 AM
to GPeerReview, luca.d...@pensiero.it
I am a medical epidemiologist with a long-standing interest in the
mechanisms of publishing, the quality of international biomedical
literature and editorial peer review. I will limit my comments to the
biomedical field.
One of the best ways of assessing the status of each of the three
interlinked areas is to base one's observations and predictions on the
totality of available evidence. In biomedicine this is summed up in
systematic reviews, which are sometimes also incorrectly known as meta-
analyses (more properly the name of a statistical technique for
pooling data, but that is of little relevance to my comments).
Biomedical publishing is a huge global business producing a plethora
of information of all kinds. In the last few years there has been a
growing body of evidence showing some fundamental problems with
contemporary literature. These problems are due to partial and biased
publication of data; fraud (which is probably rare); research
misconduct in its many forms (which is more common); redundant
publication (in which the same set of data is published more than
once without cross-referencing); poor study design; optimistic
conclusions favouring new and expensive treatments; media buzz for new
(and more expensive) treatments which years later are shown not to be
so effective; ghostwriting of studies; and (more recently) prestigious
journals preferentially publishing studies with rich backers,
regardless of their quality or size.
These problems would not really matter, were it not for the fact that
ultimately people’s health and health care delivery are affected.
To me as a physician, this situation is ethically intolerable.
However, the problems will be difficult to resolve without a
fundamental paradigm shift, which requires two linked interventions to
address the causes of this situation.
First, the publishing of biomedical research should not be a business.
Research conducted on human beings should be deposited in public
archives and (as your project aims to achieve) made freely available.
There is no scientifically tenable reason for the so-called
Ingelfinger rule (under which a journal will publish only what has not
been published before), nor for the very existence of journals. There
is little evidence that journal rituals such as peer review add
anything. Still less is there any need to transfer the copyright of a
research paper to one of the big publishing houses.

Second, the assessment of research quality and researchers’ careers
should not be based on the number of publications nor on their quality
as judged indirectly by bibliometric indicators such as the impact
factor (IF). The IF is a measure of journal citations and is not
linked to the methodological quality of what is published in the
journal. Academic pressure to publish in journals with a high impact
factor is distorting and corrupting biomedical research, as authors,
sponsors and journals alike are in fierce competition for ever higher
scores, and the perverse incentives of the current system are
beginning to be clearly visible.
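
For readers unfamiliar with how the IF is calculated, here is a
minimal sketch of the standard two-year definition (the figures are
hypothetical, purely illustrative); it makes plain that the IF is a
journal-level citation average and says nothing about the quality of
any individual paper:

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    # Two-year IF: citations received in year Y to items the journal
    # published in years Y-1 and Y-2, divided by the number of
    # "citable items" it published in those two years.
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical example: 12000 citations in 2008 to articles published
# in 2006-2007, which together comprised 400 citable items.
print(impact_factor(12000, 400))  # 30.0
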
Projects such as yours are desperately needed to provide incentives
for honest biomedical research conduct and reporting. However, beware
of simply replacing the IF with endorsement counts. There is no point
in getting rid of one totem only to provide another incentive for
point collection. Research should be judged on its capacity to improve
health, and that takes time and an open debate.


Tom Jefferson
Jeffer...@gmail.com

Matthews

Mar 18, 2009, 6:12:43 AM
to gpeer...@googlegroups.com
Dear Tom,

I agree with most of your points below, and appreciate the concerns,
but wonder whether any authors following this thread could comment on
positive experiences with peer review.

There may indeed be "little evidence that journal rituals such as peer
review add anything", but this could be because satisfied authors do
not make noises about the review process, while dissatisfied authors
do - and there are naturally many unhappy authors because papers can
rarely be published without revision.

In my own case, reviewers have always been helpful in one way or
another - even if it has meant not having a paper published. Not
getting published can also be a positive thing, in retrospect.

Which is not to say that all reviewers do good work, but I have yet to
be mortally wounded by the process.

Best regards, Peter

****

Qubyte

Mar 18, 2009, 6:29:24 AM
to GPeerReview
Hi Tom, I enjoyed reading your lucid and frightening account of
biomedical publishing. I agree that it's clearly unethical. Whilst not
life-threatening, I've seen similar trends in my specific field. I'm in
experimental quantum optics. Specifically a rather old offshoot that
deals with `micromasers'. There are only a few players in this field,
and quite a while back (we're talking decades here) my supervisor's
old supervisor had a feud with one of the other players in France. My
supervisor's old supervisor died recently and my supervisor actually
inherited the feud. It's really crazy. We can't seem to get a thing
published in this area now because the journals -always- get him to
referee (it's obvious from his writing style, and French inverted
commas). Now if I'm not careful I'll inherit it too, which means that
I've been moving sideways into quantum information. This all means
that good research is not only not getting published, but it's also
not going to get funded or even done because we're getting scared off.

I've heard other frightening accounts. One colleague of mine did a
little study a while back and noticed that for big hitters, the time
from submission to publication in -Nature- dropped dramatically after
the author had reached a certain prominence. He concluded that at this
point Nature didn't bother getting work peer reviewed. I can't say
whether or not this is true because I've not seen the data and it
could be anecdotal. Worse yet, I know of someone else who used
blackmail to get published in Nature (nothing bad, it actually turned
out he was righting a wrong but the point stands).

I agree that care is needed not to idolise the EO factor, but it's
better than what we have. The real gain from gpeerreview is total
transparency. I'm tired of an environment where people who should be
collaborating are stabbing each other in the back instead. I
don't doubt that peer review works most of the time, but it's clearly
open to corruption.

Tom Jefferson

Mar 18, 2009, 10:12:40 AM
to gpeer...@googlegroups.com
Guys, thank you for the frank responses. I greatly enjoyed reading them.

I do not think editorial peer review is all bad nor that reviewers are all crooks or feuders. Drummond Rennie, one of the father figures of research into the topic, has been credited with the aphorism "everyone has a bad story to tell about peer review". I am no exception, but like Peter I also have some very good stories to tell.

But this is not, I think, the point. You have to consider the publishing industry's use of peer review. It is a shield, an insurance policy, a risk spreader and at the same time a marketing tool. It is a shield because the editor can always say: "well, it was not spotted by so-and-so, a famous guy who peer reviewed it". It is a marketing tool because it implies that peer-reviewed journals are somehow superior (and peer review is a requirement - although there are exceptions - for indexing on Medline and then on the omnipotent Web of Science, home to the impact factor). The truth is that peer review has never been properly evaluated, partly because its objectives are unclear and partly because it is one of the cornerstones of the publishing business. I write on the basis of evidence from our systematic review and have references for all the statements I have made.

The problem is far greater than peer review. It starts with the patient's agent, the physician, who in good faith needs to keep abreast of developments in his/her field. To keep up to date he/she needs to read either original (primary) research or its digest (secondary research). The physician does this by turning to his/her favourite sources, which most often are famous high-impact journals, seen as the most reliable. He/she reads titles and abstracts, and perhaps not even those completely. He/she may change his/her behaviour on the basis of the content, or at least discuss it with colleagues. The trouble is that we are now beginning to understand that someone else has "made" this path. Reviews show systematic inconsistencies between titles, abstracts, methods, results and conclusions, especially in industry-funded studies. We have recently shown that this is true of all journals regardless of impact factor; high-impact-factor journals, however, are more likely to publish studies with cash-rich sponsors, not because those studies are bigger or better than the others, but simply because they have a rich sponsor.

So we publish in public archives, we do pre-publication and post-publication peer review (signed, not anonymous - would you accept a doctor's prescription with no signature on it?) and we debate openly. We do not look for truth in journals because it is hard or impossible to find there. We may read journals for debates, training articles, letters, opinions and editorials, but not for research, because journals are a business, and business and health do not mix. Remember Vioxx.
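
To make the "signed, not anonymous" idea concrete, a review record
could be as simple as the sketch below (purely illustrative: the field
names are invented and this is not GPeerReview's actual format). It
ties the reviewer's name to a hash of the exact manuscript that was
read, and the resulting text is then signed with the reviewer's own
key.

import datetime
import hashlib
import json

def review_record(manuscript_path, reviewer, verdict):
    # Hash the exact file that was reviewed, so the endorsement cannot
    # quietly be attached to a different version of the paper.
    with open(manuscript_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return json.dumps({
        "manuscript_sha256": digest,
        "reviewer": reviewer,   # signed, not anonymous
        "verdict": verdict,
        "date": datetime.date.today().isoformat(),
    }, indent=2)

# The record would then be signed with the reviewer's own key
# (e.g. `gpg --clearsign review.json`) and posted to a public archive
# alongside the paper.
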

All the best,

Tom.

--
Dr Tom Jefferson
Via Adige 28
00061 Anguillara Sabazia
(Roma)
tel 0039 3292025051
Have you subscribed to the Attenti alle Bufale newsletter? No? Go to www.attentiallebufale.it and enter your e-mail address at the top left, where it says "Vuoi ricevere in anteprima le migliori dritte di Sun Tzu?" ("Would you like an advance look at Sun Tzu's best tips?").

Mike Gashler

Mar 18, 2009, 11:49:22 AM
to gpeer...@googlegroups.com
Open source circles are fond of telling the story about how, when Microsoft obtained market dominance with web browsers, they declared Internet Explorer finished and disbanded the development team. Later, when Firefox started making rapid gains, they resurrected the Internet Explorer team and started making improvements again (tabbed browsing, more standards compliance, faster Javascript, etc.). I like to think that this analogy applies to our efforts. Even if EOs turn out to be as bad as journals for reasons we cannot now foresee, at least we'll create some heterogeneity that will cause both sides to try a little harder to provide a useful service. We'd all like to see a major revolution in the system, but if all we ever do is create a viable alternative that a minority of scholars use, I think we'll still have done more to clean up the mess than will ever be fully apparent.