The quality (and quantity) of modern (medical) science. (Ioannidis)


Erwin Moller

Oct 18, 2010, 9:51:20 AM
Hi group,

Two disclaimers, because I am a lamer:
1) Possibly old news for some of you. Ioannidis's paper "Why Most
Published Research Findings Are False" seems to be the most downloaded
technical paper on PLoS Medicine, so I expect some of you are familiar
with his work.
2) This is not 100% on topic here, but I think most of the regulars
should read it nonetheless, because it concerns the way science
works these days, which is shockingly poor, if this article is right.

The article is (mostly) based on the work done by John P. A. Ioannidis.
(Urls posted below)

I strongly suggest reading the whole article yourself, but I couldn't
resist posting two pieces:

From the November 2010 issue of The Atlantic:
Quote:
===========================================================
Though scientists and science journalists are constantly talking up the
value of the peer-review process, researchers admit among themselves
that biased, erroneous, and even blatantly fraudulent studies easily
slip through it. Nature, the grande dame of science journals, stated in
a 2006 editorial, "Scientists understand that peer review per se
provides only a minimal assurance of quality, and that the public
conception of peer review as a stamp of authentication is far from the
truth." What's more, the peer-review process often pressures researchers
to shy away from striking out in genuinely new directions, and instead
to build on the findings of their colleagues (that is, their potential
reviewers) in ways that only seem like breakthroughs, as with the
exciting-sounding gene linkages (autism genes identified!) and
nutritional findings (olive oil lowers blood pressure!) that are really
just dubious and conflicting variations on a theme.
===========================================================

Another quote:
===========================================================
Medical research is not especially plagued with wrongness. Other
meta-research experts have confirmed that similar issues distort
research in all fields of science, from physics to economics (where the
highly regarded economists J. Bradford DeLong and Kevin Lang once showed
how a remarkably consistent paucity of strong evidence in published
economics studies made it unlikely that any of them were right). And
needless to say, things only get worse when it comes to the pop
expertise that endlessly spews at us from diet, relationship,
investment, and parenting gurus and pundits. But we expect more of
scientists, and especially of medical scientists, given that we believe
we are staking our lives on their results. The public hardly recognizes
how bad a bet this is. The medical community itself might still be
largely oblivious to the scope of the problem, if Ioannidis hadn't
forced a confrontation when he published his studies in 2005.
===========================================================
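For anyone who doesn't want to wade through the PLoS paper itself, the core of Ioannidis's argument is a short Bayesian calculation. Here is my own rough sketch of it in Python; the symbols R, alpha, and power follow his paper, but this is my paraphrase, and his full model also includes a bias term that I've left out:

```python
# R      = prior odds that a tested relationship is actually real
# alpha  = significance threshold (false-positive rate)
# power  = 1 - beta, the chance a real effect is detected

def ppv(R, alpha=0.05, power=0.8):
    """Positive predictive value: P(relationship is real | study says 'significant')."""
    return power * R / (power * R + alpha)

# A well-powered test of a fairly plausible hypothesis:
print(round(ppv(R=0.5, power=0.8), 3))   # 0.889
# An underpowered study fishing among long-shot hypotheses:
print(round(ppv(R=0.05, power=0.2), 3))  # 0.167
```

Even a well-powered study of a plausible hypothesis tops out below 90% reliability, and once a field is underpowered and testing long shots, most "positive" findings are false, which is exactly his headline claim.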

Be sure to read the whole article here if interested:
[source]
original url (probably spans more lines):
http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-and-medical-science/8269

Or use this one:
http://bit.ly/bUUGgF
[/source]

For clarity's sake: I don't intend to feed any creationists or anything
like that. But if Ioannidis is right, it is about time to seriously
rethink peer review and/or grant systems, methinks.
Opinions?

Regards,
Erwin Moller

--
"There are two ways of constructing a software design: One way is to
make it so simple that there are obviously no deficiencies, and the
other way is to make it so complicated that there are no obvious
deficiencies. The first method is far more difficult."
-- C.A.R. Hoare

Burkhard

Oct 18, 2010, 10:45:30 AM
On 18 Oct, 14:51, Erwin Moller

<Since_humans_read_this_I_am_spammed_too_m...@spamyourself.com> wrote:
> Hi group,
[snip]
> For clarity's sake: I don't intend to feed any creationists or something
> like that. But if Ioannidis is right, it is about time to seriously
> rethink peer-reviewing and/or grant systems, methinks.
> Opinions?
I'm at the moment in a project with neuroscientists looking into peer
review as "gatekeeping" (and writing a much larger funding application
to extend this to other disciplines). There are a couple of interesting
developments being trialled by various publishers and learned societies,
for instance "constant peer review", where subscribers to a journal can
rate the articles they read, Amazon-style. Other things we are looking
into include the legal regulation of research, e.g. duties to publish
negative results. With a consortium of European publishers of online
journals, we are discussing some sort of "seal of quality" for peer
review processes that meet certain standards (you'd be surprised just
how many different processes are grouped under "peer review").

Peer review is arguably still the best we have, but there is certainly
room, and need, for improvement in the technicalities.

Friar Broccoli

Oct 18, 2010, 10:55:18 AM
On Oct 18, 9:51 am, Erwin Moller

<Since_humans_read_this_I_am_spammed_too_m...@spamyourself.com> wrote:
> Hi group,
[snip]
> For clarity's sake: I don't intend to feed any creationists or something
> like that. But if Ioannidis is right, it is about time to seriously
> rethink peer-reviewing and/or grant systems, methinks.
> Opinions?

If I am not mistaken (and I may be), as a direct result of research
like the above, many medical journals now refuse to publish the results
of clinical trials for drugs unless the trials were registered before
they began, here:

http://clinicaltrials.gov/

so that the existence of negative trials cannot be hidden.
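As a toy illustration (my own, not from any of the papers), here is roughly why registration matters: simulate a drug with no real effect at all, run many small trials, and publish only the ones that come out "positive and significant":

```python
import random
import statistics

random.seed(1)
N, TRIALS = 30, 2000
published = []
for _ in range(TRIALS):
    # each trial measures a drug whose true effect is exactly zero
    sample = [random.gauss(0.0, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    if mean / se > 1.96:   # only "positive and significant" gets written up
        published.append(mean)

print(len(published), "of", TRIALS, "trials published")
print("apparent effect size:", round(statistics.fmean(published), 2))
```

The published record then shows a healthy-looking positive effect for a drug that does nothing, which is exactly the distortion that pre-registration is meant to expose.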

Mitchell Coffey

Oct 18, 2010, 12:01:04 PM
On Oct 18, 9:51 am, Erwin Moller
<Since_humans_read_this_I_am_spammed_too_m...@spamyourself.com> wrote:
[snip]

> Another quote:
> ===========================================================
> Medical research is not especially plagued with wrongness. Other
> meta-research experts have confirmed that similar issues distort
> research in all fields of science, from physics to economics (where the
> highly regarded economists J. Bradford DeLong and Kevin Lang once showed
> how a remarkably consistent paucity of strong evidence in published
> economics studies made it unlikely that any of them were right).
[snip]

The DeLong/Lang paper, Are All Economic Hypotheses False?, may be
found here:
http://www.j-bradford-delong.net/pdf_files/False.pdf

Brad DeLong, who teaches at the greatest public university in the
world, is one of my favorite practicing economists. If you are
limited to following only two economists' blogs (and who isn't these
days?), one must be Krugman's, http://krugman.blogs.nytimes.com; the
other, DeLong's, http://delong.typepad.com/. DeLong is the more
entertaining, and I don't mean "entertaining" in the sense economists
use the word. His blog is incoherent, but not in the economics sense;
more in the Pynchon way, and I mean that as a good thing.

Mitchell Coffey

r norman

Oct 18, 2010, 12:05:59 PM
On Mon, 18 Oct 2010 15:51:20 +0200, Erwin Moller
<Since_humans_read_this...@spamyourself.com> wrote:

>Hi group,
[snip]
>For clarity's sake: I don't intend to feed any creationists or something
>like that. But if Ioannidis is right, it is about time to seriously
>rethink peer-reviewing and/or grant systems, methinks.
>Opinions?

My impression has always been that the quality of science is only
partially dependent on peer review, which is merely a first screening
step. If something is important enough to matter, then certainly
there will be subsequent studies that contradict bad publications.
And if an erroneous paper is on a topic that nobody cares enough
about to develop further, then what difference does it make?

The process may be somewhat inefficient, requiring substantial time
and perhaps a lot of wasted effort, but eventually science is
self-correcting.

Medicine is another story because lives are at stake during the
correction process.

carlip...@physics.ucdavis.edu

Oct 18, 2010, 3:38:50 PM
Erwin Moller <Since_humans_read_this...@spamyourself.com> wrote:

[...]


> From the November 2010 issue of The Atlantic:
> Quote:
> ===========================================================
> Though scientists and science journalists are constantly talking
> up the value of the peer-review process, researchers admit among
> themselves that biased, erroneous, and even blatantly fraudulent
> studies easily slip through it. Nature, the grande dame of science
> journals, stated in a 2006 editorial, "Scientists understand that
> peer review per se provides only a minimal assurance of quality,
> and that the public conception of peer review as a stamp of
> authentication is far from the truth."

[...]


> For clarity's sake: I don't intend to feed any creationists or
> something like that. But if Ioannidis is right, it is about time to
> seriously rethink peer-reviewing and/or grant systems, methinks.
> Opinions?

As someone who referees a lot of papers, and who has been on several
journal editorial boards, let me make a few comments:

First, peer review standards vary from field to field. In some branches
of mathematics, reviewers are expected to check every step of a proof,
and peer review can come fairly close to a confirmation of a claim. At
another extreme, in some branches of experimental physics there's
no way for a reviewer to check many things, short of spending a few
billion dollars to recheck an experiment. A detector at the Large
Hadron Collider, for instance, is complex enough that it's extremely
unlikely that anyone who isn't actually on the experiment can judge
some claims (e.g., how much statistical weight to ascribe to various
observations). In situations like that, it's often the experimental
collaboration itself that does the most rigorous review. They have
a strong incentive -- there is more than one detector, and it would
be very embarrassing to make a strong claim only to have it disproved
by your competition.

My field of theoretical physics is somewhere in between. Reviewers
are not expected to strongly confirm that a paper is correct. They
are basically supposed to look for

-- obvious errors ("The author claims that special relativity is
disproved by observation X, but in fact the theory predicts exactly
the observed outcome," or "The author claims that special relativity
is disproved by observation X, but is apparently unaware that this
effect has been tested in papers A, B, and C to a thousand times the
author's accuracy; if he wants to claim these other observations are
wrong, he ought to at least acknowledge their existence," or "The
model presented here is inconsistent -- it's easy to see that the only
solution of equation (11) is x=0, which contradicts equation (14),"
or "In section 4, the authors show that the effect they're looking
for is too small to measure; why, then, do they say in the conclusion
that they've found an important new test of their model?");

-- conspicuous gaps ("The author provides strong evidence for
hypothesis X, but nothing in the paper seems to support her much
stronger claim Y," or "Equation (7) is said to follow from equation
(6), and it might, but I, at least, don't see how, and since I'm nearly
as bright as most readers of this journal, I expect they won't, either;
a much more careful explanation is needed");

-- overlaps with existing results ("If the author bothered to read the
literature, he would see his claim is just a special case of the general
results of [my] paper A, discussed in detail in section 2 of that paper,"
or "Section 4 is an interesting new result, but section 3 reproduces
the material I'm currently teaching from textbook X");

-- missing references ("Section 3 of this paper is based on the results
of experiment A, but these were made obsolete by the much more
accurate experiment B last year; the author should check that her
model is consistent with the new data," or "This is a new result, but
much of it is an extension of paper C, which ought to be cited," or --
this one I once got -- "A general discussion of this issue appeared in
an obscure paper by Poincare in 1905; see if you can get someone to
translate it, and cite it where it's appropriate");

-- incoherent writing ("Paragraph 2 seems to make sense only if the
authors are using the word 'energy' to mean 'entropy' and the
word 'mass' to mean 'momentum'," or "I've worked on a very closely
related topic, but I find this paper incomprehensible; the authors
never define their symbols, and they seem to assume that any reader
will have already memorized the details of their earlier paper A");

-- level of interest ("This result has already been shown for the ten
most common types of black holes; it's true that no one seems to have
checked this rather obscure eleventh type, but is this really important
enough to publish?" or "I'm sure there's some journal out there --
maybe the Journal of Mediocre Results -- that would want to publish
this, but it doesn't seem to meet the standards of importance required
by Journal of High Prestige Physics");

-- appropriateness ("Why have the authors submitted a biophysics paper
to a journal of high energy particle physics?").

Clearly, even if referees are careful -- and sometimes they're not
-- errors will get through, and the system is certainly not designed
to catch deliberate fraud. Moreover, there is such a proliferation
of journals these days that a dedicated author can usually find
*somewhere* to publish almost anything. But for the decent journals,
at least, peer review does screen out most of the really bad papers.
Typical acceptance rates in my field range from around 30% to around
60%, and from the papers I've reviewed, I'd say with some confidence
that most of the rejected papers really deserved to be rejected.

(As one calibration point, I've served on a journal editorial board
for which I handled appeals from authors whose papers were rejected.
Of the fairly large number of appeals I received, I decided that the
referees were just wrong about 5% of the time -- for these cases I
recommended publication, sometimes after revisions -- and that
about 10% of the cases were ambiguous enough to be sent out for
further review. That's certainly not perfect, but it's a pretty good
record for a highly selective journal. Of course, you can believe or
not believe my judgment...)

At least as important, though, peer review leads to improved papers.
Many submissions are initially sent back to authors for revision, and
in my experience with my own papers, this has generally been a good
thing. It's led me, at least, to clearer writing, to fewer gaps and
fewer assumptions about what readers know or don't know, fewer
missed references, and in a few cases to major improvements in the
content. I've had a couple of bad experiences with referees who just
missed the point, but those have been fairly rare exceptions.

Steve Carlip


Nashton

Oct 18, 2010, 4:20:42 PM
On 18/10/10 10:51 AM, Erwin Moller wrote:
> Hi group,
>
> Two disclaimers, because I am a lamer:
[snip]

You are not a lamer, and thanks for sharing.
Many "lamers" happen to dwell in this ng, and reading their posts about
how "science" auto-corrects its errors in findings and conclusions,
even though true to a certain extent, is laughable.

David Hare-Scott

Oct 18, 2010, 6:18:40 PM

Please explain to us some significant cases where science did not
auto-correct, and tell us which system of investigation it was that
determined the science was wrong.


David

RAM

Oct 18, 2010, 6:30:23 PM

What is more laughable is that you are not able to produce one piece
of the scientific error you frequently imply permeates all of
science.

You clearly indicate above the overwhelming failure of science to
"auto-correct" itself from errors. Please provide a list of these
uncorrected errors so the "lamers" can be disabused of their self-
imposed ignorance. Or is it you and your religiously based evolution
denial that make you the self-imposed ignorant lamer? Discerning
scientists want to know.

Ron O

Oct 18, 2010, 7:10:43 PM
On Oct 18, 8:51 am, Erwin Moller

<Since_humans_read_this_I_am_spammed_too_m...@spamyourself.com> wrote:
> Hi group,
[snip]
> For clarity's sake: I don't intend to feed any creationists or something
> like that. But if Ioannidis is right, it is about time to seriously
> rethink peer-reviewing and/or grant systems, methinks.
> Opinions?

I once met a fellow at a science conference who claimed that his job
at one of the major medical research clinics was not to do any
science, but to take some furball of data generated by scientifically
incompetent medical researchers, data the "researchers" couldn't make
into anything publishable, and to generate publications from junk
research that should not have been done in the first place. He
claimed that he knew of other "researchers" doing the same at other
major medical research facilities. Beats me whether you can believe
something like that.

Ron Okimoto

Bob Casanova

Oct 18, 2010, 7:24:35 PM
On Mon, 18 Oct 2010 19:38:50 +0000 (UTC), the following
appeared in talk.origins, posted by
carlip...@physics.ucdavis.edu:

Nominated as the most readable and comprehensible (and
entertaining!) description of peer review I've seen, by an
actual reviewer. I've snipped the lead-in posts, but not
disparagingly; this is about Steve's post.

Perhaps it could be incorporated into the FAQ?

--

Bob C.

"Evidence confirming an observation is
evidence that the observation is wrong."
- McNameless

bobsyo...@yahoo.com

Oct 19, 2010, 3:56:44 AM

"Erwin Moller"
<Since_humans_read_this...@spamyourself.com> wrote in
message news:4cbc50d0$0$81483$e4fe...@news.xs4all.nl...

> Hi group,
[snip]
> For clarity's sake: I don't intend to feed any creationists or something
> like that. But if Ioannidis is right, it is about time to seriously
> rethink peer-reviewing and/or grant systems, methinks.
> Opinions?
>
> Regards,
> Erwin Moller

There are some "faults" with peer review, but there is nothing better.
The quality reviews are done by other scientists who have either done
the tests, or know of other scientific tests that contradict the claims.

Having an organized "Seal of Approval", as someone suggested, only embeds
a bureaucracy into the process.

Having a "5 star" popularity scale does not work unless the credentials
of each voter are verified.

And, finally, just because this organization makes negative claims about
the peer-review process does not mean its opinion is fact.

When working smoothly, the peer-review process is the best and strongest
way of ensuring accuracy in scientific investigation. I could easily
believe that any attempt to change it would make it weaker, less
reliable, and more prone to manipulation.

The early comments in this article sound exactly like things a
fundamentalist/creationist would say to smear and demean valid science.
I would not change anything just because these brainless apes make false
claims.

Burkhard

Oct 19, 2010, 4:51:42 AM
On 19 Oct, 08:56, "PepsiFr...@teranews.com" <bobsyoung...@yahoo.com>
wrote:
> "Erwin Moller"
> <Since_humans_read_this_I_am_spammed_too_m...@spamyourself.com> wrote in
> message news:4cbc50d0$0$81483$e4fe...@news.xs4all.nl...

>
>
>
> > Hi group,
>
> > Two disclaimers, because I am a lamer:
> > 1) Possibly old news for some of you. His (Ioannidis) paper "Why Most
> > Published Research Findings Are False" seems to be the most downloaded
> > technical paper on PLoS Medicine, so I expect some of you are familiar
> > with his work.
> > 2) This is not 100% on topic here, but I think most of the regulars should
> > to read this nonetheless, because it concerns the way science works these
> > days, which is shockingly poor, if this article is right.
>
> > The article is (mostly) based on the work done by John P. A. Ioannidis.
> > (Urls posted below)
>
> > I strongly suggest to read the whole article yourself, but I couldn't
> > resist to post 2 pieces:
>
> > From november 2010 issue of the Atlantic:
> > Quote:
> > ===========================================================
> > Though scientists and �science journalists are constantly talking up the

> > value of the peer-review process, researchers admit among themselves that
> > biased, erroneous, and even blatantly fraudulent studies easily slip
> > through it. Nature, the grande dame of science journals, stated in a 2006
> > editorial, �Scientists understand that peer review per se provides only a

> > minimal assurance of quality, and that the public conception of peer
> > review as a stamp of authentication is far from the truth.� What�s more,

> > the peer-review process often pressures researchers to shy away from
> > striking out in genuinely new directions, and instead to build on the
> > findings of their colleagues (that is, their potential reviewers) in ways
> > that only seem like breakthroughs, as with the exciting-sounding gene
> > linkages (autism genes identified!) and nutritional findings (olive oil
> > lowers blood pressure!) that are really just dubious and conflicting
> > variations on a theme.
> > ===========================================================
>
> > Another quote:
> > ===========================================================
> > Medical research is not especially plagued with wrongness. Other
> > meta-research experts have confirmed that similar issues distort research
> > in all fields of science, from physics to economics (where the highly
> > regarded economists J. Bradford DeLong and Kevin Lang once showed how a
> > remarkably consistent paucity of strong evidence in published economics
> > studies made it unlikely that any of them were right). And needless to
> > say, things only get worse when it comes to the pop expertise that
> > endlessly spews at us from diet, relationship, investment, and parenting
> > gurus and pundits. But we expect more of scientists, and especially of
> > medical scientists, given that we believe we are staking our lives on
> > their results. The public hardly recognizes how bad a bet this is. The
> > medical community itself might still be largely oblivious to the scope of
> > the problem, if Ioannidis hadn't forced a confrontation when he published
> > his studies in 2005.
> > ===========================================================
>
> > Be sure to read the whole article here if interested:
> > [source]
> > original url (probably spans more lines):
> >http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-an...

>
> > Or use this one:
> >http://bit.ly/bUUGgF
> > [/source]
>
> > For clarity's sake: I don't intend to feed any creationists or something
> > like that. But if Ioannidis is right, it is about time to seriously
> > rethink peer-reviewing and/or grant systems, methinks.
> > Opinions?
>
> > Regards,
> > Erwin Moller
>
> There are some "faults" with peer review - but there is nothing better.
> The quality reviews are done by other scientists who have either done the
> tests - or know of other, scientific tests, that contradict the claims.


You should read Steve Carlip's post on this. For most disciplines, it
is not practically possible for a reviewer to replicate the tests. It
is difficult enough to get funding for original research. Refereeing
is done on a "for free" shoestring basis that just does not allow
this.


> Having an organized "Seal of Approval", as someone suggested, ONLY imbeds a
> bureaucracy into the process.

How then do you know when you read a journal what type of peer review
they use? Blind, double blind, mixed? If double blind, are submissions
pre-edited to eliminate self-references and other identification
material? One, two or three (or more) reviewers? Possibility to
nominate reviewers by author? Only reviewers from editorial board
(permitting authors to find out who is likely to review the article)?
Confidence ratings of reviewers, and if so, self-assessment or
external assessment? These are just a few of the numerous aspects of
how a journal arranges its peer review system that have a bearing on
the credibility and reliability of the findings. You could go to the
"guidelines for authors" of each journal you read, and find out _some_
of these, but not all.

The suggested alternative is a simple ranking system which allows
journals to display prominently on the outside what system they use,
and how reliable it is deemed to be.

>
> Having a "5 Star" popularity scale does not work unless the credentials of
> each voter is verified.

Depends. There are "wisdom of the crowds" models that show that even
open rating systems are remarkably stable. What some editors and
learned societies propose though is a rating by other subscribers (if
you pay lots of money to read Proceedings B of the Royal Society, it
is very likely you are a peer), or possibly a pool with verified
credentials of people who are also readers, or a mixture. Equally, in
line with other social media, there could be a rating of the raters.

>
> and, finally, just because this organization makes negative claims about the
> "peer reviewed" process - does not mean their opinion is fact.
>
> When working smoothly, the peer review process is the best, and the
> strongest, way of insuring accuracy in scientific investigation.
> I could easily believe that any attempts to change that, would make it
> weaker, less reliable and more prone to manipulation.

That seems an immensely scientific way to evaluate a proposal
<sarcasm>


>
> The early comments, in this article, sound exactly like things a
> fundamentalist/creationist would say to smear and demean valid science.
> I would not change anything just because these brainless apes make false
> claims.

Which is the epitome of an unscientific approach. Scientists know that
all their methods are limited, and constantly try to improve them.
First, we have increasingly frequent examples of scientific misconduct
that evaded peer review: H. Zhong and T. Liu at Acta
Crystallographica, Jan Hendrik Schön's fraudulent papers on
nanotechnology, Malcolm Pearce and Geoffrey Chamberlain, Karen M.
Ruggiero, and Hwang Woo-suk (cloning) are all high-profile examples.
There is also a considerable amount of "epidemiological" data that
looks at things such as the correlation between external funding and
research outcome, or the republication of articles rejected elsewhere
(not necessarily proof something is amiss, but the apparent increase
is a warning sign nonetheless).

And btw, Ioannidis is a professor of Medicine at the Institute for
Clinical Research and Health Policy Studies at Tufts, the article that
the Atlantic mentions was published in a high-quality peer-reviewed
journal (PLoS Medicine), and the post-publication record (in terms of
approving citations etc.) is overwhelmingly positive.

Do we have reasons to believe the problem is getting worse? Yes, as
both the research data and an analysis of academic publishing
indicates. Higher and higher degrees of specialization result in more
and more journals, with a smaller and smaller group of peers. Academic
output increases, but resources for peer review remain low - five
years ago, I'd have refereed 5-10 articles/conference papers per year,
by now I'm asked almost weekly, and our survey indicates that this is
a general trend.

There are worrying trends that the loss of trust in the system can
lead to "juridification" of academic publishing, with courts being
asked to intervene more often (or alternatively, highly intrusive and
punitive policing by national funding councils or learned societies,
with Denmark and Sweden already going down this road), which should be
in nobody's interest.

So the reasons to improve the system are substantial - they should of
course be based on proper, scientific evaluation - and most certainly
not on "what everyone knows" or "I could easily believe".


Nashton

unread,
Oct 19, 2010, 4:58:15 AM10/19/10
to
On 10/18/10 8:10 PM, Ron O wrote:

> I once met a fellow at a science conference that claimed that his job
> was working at one of the major medical research clinics, not to do
> any science, but to take some furball of data generated by the
> scientifically incompetent medical researchers that the "researchers"
> couldn't make into anything publishable and generate publications from
> junk research that should not have been done in the first place. He
> claimed that he knew of other "researchers" doing the same at other
> major medical research facilities. Beats me if you can believe
> something like that.
>
> Ron Okimoto
>

Nice attempt at discrediting Ioannidis. Too bad this is exactly the
sort of thing you would say when confronted with research suggesting
that science is not infallible.

Burkhard

unread,
Oct 19, 2010, 5:32:25 AM10/19/10
to

Extremely odd comment, since Ron's source, if true, confirms rather
than contradicts Ioannidis.

Erwin Moller

unread,
Oct 19, 2010, 5:48:39 AM10/19/10
to

Nashton,

In your hurry to show Ron is wrong and you are right, I think you forgot
to read what he actually wrote.
He is describing a fellow who claims his job is to write papers based
on junk research.
That is not exactly the same as claiming "science is not infallible", is
it?
He described a guy that makes a living creating nonsense "scientific
research".

David Hare-Scott

unread,
Oct 19, 2010, 6:55:10 AM10/19/10
to

Another robotic reply. Sigh.

David

Ron O

unread,
Oct 19, 2010, 7:13:15 AM10/19/10
to

What an idiot, NashT. There was no discrediting of anything except
some of the practices of science. If anything, it was a partial
explanation for the claims. Beats me what you are talking about,
since it is a general negative statement on how science is sometimes
done. Why would anyone think that science is infallible? Haven't you
understood anything that you've read in this forum? If science is
infallible, explain the involvement of guys like Minnich and Behe with
the ID scam, and the bait and switch that they helped perpetrate on
creationist rubes like you. Scientists are human, NashT. They just
usually have more on the ball than you do.

Ron Okimoto

Nashton

unread,
Oct 19, 2010, 9:19:13 AM10/19/10
to
On 10/19/10 4:56 AM, Pepsi...@teranews.com wrote:

>
> The early comments, in this article, sound exactly like things a
> fundamentalist/creationist would say to smear and demean valid science.
> I would not change anything just because these brainless apes make false
> claims.
>

I thought this was your first post in this thread????

Ognjen

unread,
Oct 23, 2010, 9:14:54 AM10/23/10
to
On Oct 19, 1:24 am, Bob Casanova <nos...@buzz.off> wrote:
> On Mon, 18 Oct 2010 19:38:50 +0000 (UTC), the following
> appeared in talk.origins, posted by
> carlip-nos...@physics.ucdavis.edu:

Seconded.
