Self-Selected Vetting vs. Peer Review: Supplement or Substitute? (fwd)

Stevan Harnad, Nov 7, 2002, 2:18:05 PM

---------- Forwarded message ----------
Date: Mon, 4 Nov 2002 18:05:03 +0000 (GMT)
From: Stevan Harnad <har...@ecs.soton.ac.uk>
To: Andrew Odlyzko <odl...@dtc.umn.edu>
Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

On Mon, 4 Nov 2002, Andrew Odlyzko wrote:

> Fears about possible damage to the peer review system are slowing down the
> evolution of scholarly communication, and in particular the development
> of freely accessible article archives. I am convinced that these fears
> are unjustified. Although the peer review system will change substantially
> with the spread of such archives, it will change for the better.

I agree the fears are groundless, and that they are holding back
self-archiving, but I am also convinced that some of the fears concern
CHANGE to peer review, so hesitant self-archivers need to be reassured
about that too.

I am certain that online implementation will make (and already is
making) CLASSICAL peer review faster, cheaper, more efficient, and more
equitable. That can be confidently stated. But what (in my opinion) has
to be avoided at all costs is any linking whatsoever between
self-archiving (i.e., author/institution steps taken to maximize the
visibility, accessibility, usage, citation and impact of their
peer-reviewed research output) and any substantive changes in classical
peer review.

Classical peer review is merely the evaluation of the work of
specialists by their qualified fellow-specialists (peers) mediated by
and answerable to a designated qualified-specialist (the editor) who
picks the referees, adjudicates the reports, indicates what needs to be
done to revise for acceptance (if anything) and is answerable for the
results of this quality-control, error-corrective mechanism.

Untested "reforms" to this system, though possible, should not be
mentioned in the same breath as self-archiving at all, for any implied
coupling between self-archiving and hypothetical peer-review changes
will only work to the disadvantage of self-archiving and open access:

"A Note of Caution About 'Reforming the System'"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1169.html

"Peer Review Reform Hypothesis-Testing"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0479.html

> A good overview of the history and current state of the peer review
> system is provided by the book [1].

Does Fiona's book cover peer review in all disciplines, or just health
sciences? There is a quantitative empirical literature on this.

> This system is really a collection
> of many different systems, of varying effectiveness. They guarantee
> neither correctness nor novelty of the results, even among the most
> selective and prestigious journals.

No human (or nonhuman) judgement can guarantee that. The only relevant
question -- and it has not been asked or tested, but the default
assumption until it is tested MUST be for, not against, the causal role
of peer review in maintaining the current quality level of the research
literature -- is: How much better or worse is the literature's quality
with (1) classical peer review, (2) with hypothetical (not yet tested
and compared) alternatives, or (3) with no peer review at all (which,
by the way, is NOT tested already by existing pre-refereeing preprint
quality levels, for the invisible-hand reasons I've elaborated)?

Absent the comparative data, there is only speculation (speculation that
may well put the quality of the current refereed literature at risk if
it were implemented before successful pre-testing). This is the sort
of speculation from which I think it is so important to dissociate the
question of self-archiving completely. Any implied coupling will simply
lose us yet another generation of potential self-archivers.

> However, traditional peer review
> (with anonymous referees evaluating submissions to a journal) does
> perform a valuable screening function.

I haven't read Fiona's book, but traditional (classical) peer review
consists of a series of (trivial) variants; the standard practice is to
make referee-anonymity optional: referees may waive it if they wish.

But almost nowhere is peer-review merely red-light/green-light
screening: Papers are not just refereed for acceptance/rejection.
Referees propose corrections and elaborations, papers are revised and
re-refereed. Peer-review is not a passive, static filter but an active,
dynamic, interactive, corrective one.

> Still, it is just a part of
> the entire communication system, and evaluation of the value of an
> article is never truly complete, as sometimes historians will revisit
> this question centuries after publication.

Yes, the peer-reviewed, accepted final draft, certified as having met
the established quality standards of a given journal, is only a
stage in the embryology of research, a milestone along the "scholarly
skywriting" continuum

Harnad, S. (1990) Scholarly Skywriting and the Prepublication
Continuum of Scientific Inquiry. Psychological Science 1:
342-343 (reprinted in Current Contents 45: 9-13, November 11
1991). http://cogprints.ecs.soton.ac.uk/archive/00001581/index.html

But it is a critical milestone: the one that both generates and
certifies the (probable) quality level and reliability of the findings.

Without that dynamic, answerable pre-correction, and without the
tried-and-tested quality-label of an established journal to sign-post
the skyline, I am convinced that the literature would not only quickly
decline in quality, but it would become un-navigable -- till peer review
was simply reinvented!

Yet it is precisely this doomsday scenario that is holding would-be
self-archivers back today, and I'm afraid you may just be reinforcing
their fears here, Andrew!

I sense (I am reading this sequentially in real time) that we are about to
come to the "open peer commentary" alternative to "classical peer review":
http://cogprints.soton.ac.uk/documents/disk0/00/00/16/94/index.html

After 25 years of opportunity to compare the two professionally, I can
say with some conviction that open peer commentary is a supplement,
not a substitute, for peer review. No one should have to navigate the
raw, unfiltered manuscripts that make their way to editors' desks (and
even those are better than they would be without the "invisible hand"
effect), and no one should trust the self-appointed stalwarts who have
nothing better to do with their time than to try to do just that.
Commentary is valuable, but only after peer-review has ensured that
the paper meets the quality standards for publication.

> It is the presence of such
> self-correcting features in the entire scholarly communication system
> that makes the deficiencies of the current peer review system tolerable.
> However, it is natural to expect evolution to occur.

The self-correction in classical peer review is systematic, deliberate,
and answerable (on a journal by journal basis). The ad-lib
self-correctiveness of self-appointed sleuths tends more toward an
opinion poll than expert guidance.

> In the Gutenberg era of print journals, placing heavy reliance on
> traditional peer review was sensible. Printing and distributing journals
> was very expensive. Furthermore, providing additional feedback after
> publication was hard and slow. Therefore it was appropriate to devote
> considerable attention to minimizing the volume of published material,
> and making sure it was of high quality. With the development of more
> flexible communication systems, especially the Internet, we are moving
> towards a continuum of publication.

I of course agree about the continuum, but it makes no sense to call it
a "publication" continuum: At best it is a "publicizing" continuum
(though I prefer calling it "skywriting"). What is left of the classical
Gutenberg notion of "publication" in this is only the milestone of peer
review (and its accompanying quality-certification tag). Otherwise it
would simply be a non-sign-posted chaos of self-publicization,
patrolled by self-appointed vigilantes -- of unknown quality themselves
-- attesting to quality (and hence worthiness of the investment of time
to read and the risk of trying to use): The blind leading the blind.

Except for their names and prior reputations: We can of course trust
more the opinions of the qualified experts whose expertise has already
been established -- for those happy cases where it is they who happen to
be patrolling the literature for us (as they would have done in
classical peer review). But in classical peer review this matching of
expertise was systematic, and reliably, certifiably done for us in
advance. Here we would just have to hope that it happens, or will happen
(when?). And there, it was the journal's own established reputation and
concerted, answerable efforts that ensured that it would converge if the
milestone (certification of having met the journal's standards) was met.

In this new "system" we would be entrusting all of that to the four
winds!

> I have argued, starting with [2],
> that this requires a continuum of peer review, which will provide feedback
> to scholars about articles and other materials as they move along the
> continuum, and not just in the single journal decision process stage.
> We can already see elements of the evolving system of peer review in
> operation.

There always was a continuum, with informal, nonbinding feedback prior
to submission (and after publication). But formal peer review was
systematic, answerable, binding, and not self-administered (take it or
leave it); and it established a quality "tag" (the journal name) that
one could rely upon (within limits) a priori, for an article of a given
level of quality, rigor, and even importance and impact.

Do you really expect the reluctant self-archiver -- who wants only
to increase the visibility, accessibility and impact of his current,
peer-reviewed research output, such as it is, and to access the same
peer-reviewed output of others -- to set aside his worries about the
possible deleterious effects of self-archiving on peer review on the
strength of the hypothetical alternative you are evoking here?

"I wanted reassurance that if I self-archived, nothing would be lost,
nothing would change but the accessibility of my work to would-be users.
Instead, it looks as if EVERYTHING will change if I self-archive! (I'd
better just keep waiting...)"

Andrew, both of us are frustrated by the slowness with which the
research community is coming to the realization that open access is the
optimal and inevitable outcome for them, and that self-archiving is the
way to get there. But do you really believe that inasmuch as they are
being held back by fears about peer review this paper will embolden them,
rather than confirming their worst fears?

Yet it is all completely unnecessary! All that's needed for open access
is to self-archive, and leave classical peer review alone! Why imply
otherwise?

> Many scholars, including Stevan Harnad [3], one of the most prominent
> proponents of open access archives, argue for a continuing strong role
> for the traditional peer review system at the journal level. I have no
> doubt that this system will persist for quite a while, since sociological
> changes in the scholarly arena are very slow [4]. However, I do expect
> its relative importance to decline.

You may or may not be right. But before classical peer review can
decline in the open-access era, we have to bring on the open-access era,
by self-archiving. And if what is holding us back from self-archiving
is fears about the decline of peer-review, your predictions will not
hearten us, they will strengthen our reluctance to self-archive.

You are making predictions and conjectures, which is fine. But why link
them to open-access and especially the current unfortunate reluctance to
self-archive? Speculations will not relieve fears, especially not
speculations that tend to confirm them.

What would-be self-archivers need to be reassured of is the truth,
and that via facts, and the fact is that there is no causal connection
whatsoever between self-archiving and change (present or future) in peer
review to date. And for every speculation that open access may have THIS
eventual effect on peer review, there is a counter-speculation that
it may instead have THAT effect. The speculations are irrelevant, and
should be de-emphasized (in my opinion) -- at least if the objective
is to try to encourage and facilitate universal open access through
self-archiving (rather than merely to speculate about the possible future
of peer review).

> The reason is that there is a
> continuing growth of other types of feedback that scholars can rely on.
> This is part of the general trend (described in [5]) in which traditional
> journals are continuing as before, but the main action is in novel and
> often informal modes of communication that are growing much more rapidly.

There are indeed wonderful new forms of feedback in the online-era, and
there will be even more in the open-access era. But (until there is
substantive evidence to the contrary) these will be SUPPLEMENTS to peer
review, not SUBSTITUTES for it. Self-archivers need to be reassured that
classical peer review will continue intact: that it is not put at risk
in any way by self-archiving or open access. The rest is just a bonus!

> The growing flood of information does require screening. Some of this
> reviewing can be done by non-peers. Indeed, some of it has traditionally
> been done by non-peers, for example in legal scholarship, where U.S. law
> reviews are staffed by students.

The law-review case, about which I have written and puzzled before,
is an anomaly, and, as far as I know, there are many legal scholars
who are not satisfied with it (Hibbitts included). (Not only are
law-reviews student-run, but they are house organs, another anomaly in the
journal-quality hierarchy, where house-journals tend to rank low, a kind
of vanity-press.) I think it is highly inadvisable to try to generalize
this case in any way, when it is itself unique and poorly understood. In
any case, it certainly will not be reassuring to professors who are
contemplating whether or not to self-archive that doing so may mean
that, whereas they mark their students' essays on Tuesdays and
Thursdays, their students may be marking their papers on Wednesdays and
Fridays, instead of the qualified editor-mediated peers of times past.

> The growing role of interdisciplinary
> research might lead to a generally greater role for non-peers in reviewing
> publications.

I can't follow this at all. Interdisciplinary work requires review by
peers from more disciplines, not from non-peers. ("Peer" means qualified
expert.)

> However, in most cases only peers are truly qualified to
> review technical results. However, peer evaluations can be obtained,
> and increasingly are being obtained, much more flexibly than through the
> traditional anonymous journal refereeing process.

That is not my experience. It seems that qualified referees, an
overharvested resource, are becoming harder and harder to come by. They
are overloaded, and take a long time to deliver their reports. Is the
idea that they will be more available if approached some other way? Or
if they self-select? But what if they all want to review paper X, and
no one -- or only dilettantes -- reviews papers A through J?

> Some can come from
> use of automated tools to harvest references to papers, in a much more
> flexible and comprehensive way than the Science Citation Index provided
> in the old days.

Now here I agree, but this falls squarely in the category of using
online resources to implement CLASSICAL peer review more efficiently and
equitably: Here, it is to help find qualified referees and to distribute
the load more evenly. But that has nothing to do with peer review
reform, nor with any of the other speculative alternatives considered
here. It goes without saying that an open-access corpus will make it
much easier and more effective to find qualified referees.

> Other, more up-to-date evaluations, can be obtained
> from a variety of techniques, such as those described in [5].

Not sure which in particular are meant, but please distinguish between
the (very desirable) ways that open access could make classical peer
review faster and more efficient and the much more speculative variants
you also allude to. They really have nothing to do with one another.

> An example of how evolving forms of peer review function is provided by
> the recent proof that testing whether a natural number is prime (that
> is, divisible only by 1 and itself) can be done fast. (The technical
> term is in "polynomial time.") This had been an old and famous open
> problem of mathematics and computer science. On Sunday, August 4, 2002,
> Manindra Agrawal, Neeraj Kayal, and Nitin Saxena of the Indian Institute
> of Technology in Kanpur sent out a paper with their astounding proof of
> this result to several of the recognized experts on primality testing.
> (Their proof was astounding because of its unexpected simplicity.)
> Some of these experts responded almost right away, confirming the validity
> of the proof. On Tuesday, August 6, the authors then posted the paper
> on their Web site and sent out email announcements. This prompted many
> additional mathematicians and computer scientists to read the paper, and
> led to extensive discussions on online mailing lists. On Thursday, August
> 8, the New York Times carried a story announcing the result and quoting
> some of the experts who had verified the correctness of the result.

The same thing could and would have happened (and probably has)
occasionally in paper: A powerful new finding can spread, and be
confirmed, faster than the sluggish, systematic peer-review process.
So what? It happens sometimes in paper and will happen sometimes
on line, but it is hardly the paradigm or prototype for research. Most
research makes little impact, has few qualified experts, and needs to be
vetted before the few potential reader/users can decide whether they
want to spend their limited time reading it, let alone trying to use
and build upon it.
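
As a concrete aside (not from the correspondence itself): what made the Agrawal-Kayal-Saxena result striking is the gap between the obvious algorithm and the proved bound. A minimal sketch of the naive approach, trial division, runs in time roughly 10^(d/2) for a d-digit number, i.e. exponential in the input size, whereas AKS showed primality is decidable deterministically in time polynomial in d:

```python
# Naive primality test by trial division. For an n with d digits, the
# loop runs up to sqrt(n), about 10^(d/2), times: exponential in d.
# The 2002 Agrawal-Kayal-Saxena proof showed primality can instead be
# decided deterministically in time polynomial in d.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2  # only odd candidate divisors
    return True

print([p for p in range(2, 30) if is_prime(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Trial division is fine for small inputs, but the cryptographically relevant case is numbers with hundreds of digits, which is why a polynomial-time test was a famous open problem.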

There's another way to put all this: To a first approximation (and
forgetting about what I said about dynamic correction, revision etc.),
a journal's quality level is a function of its rejection rate: The
highest quality journals will only accept the highest quality work,
rejecting the rest. Second-tier journals will reject less, and so on,
right down to the near-vanity press at the bottom, which accepts just
about anything. This is the hierarchy of sign-posted milestones that
guides the prospective reader and user rationing his finite reading time
and his precious research resources. How is this quality triage to be done
on the model you just described (of the prime-number flurry)?

> Review by peers played a central role in this story. The authors first
> privately consulted known experts in the subject. Then, after getting
> assurance they had not overlooked anything substantial, they made their
> work available worldwide, where it attracted scrutiny by other experts.
> The New York Times coverage was based on the positive evaluations of
> correctness and significance by those experts. Eventually they did
> submit their paper to a conventional journal, where it will undoubtedly
> undergo conventional peer review, and be published. The journal version
> will probably be the main one cited in the future, but will likely have
> little influence on the development of the subject. Within weeks of the
> distribution of the Agrawal-Kayal-Saxena article, improvements on their
> results had been obtained by other researchers, and future work will be
> based mainly on those. Agrawal, Kayal, and Saxena will get proper credit
> for their breakthrough. However, although their paper will go through
> the conventional journal peer review and publication system, that will
> be almost irrelevant for the intellectual development of their area.

All I can do is repeat that this picture will not scale to all of
research. It works only for the rare, sexy special cases. And although
in general there is a tendency for the "growing edge" of science to
outpace the more plodding and inefficient formal peer-review machine
somewhat, this is nevertheless being sustained by its invisible hand;
eliminate that, and it will be hanging by its bootstraps -- with the
inevitable result.

> One can object that only potentially breakthrough results are likely
> to attract the level of attention that the Agrawal-Kayal-Saxena result
> attracted. But that is not a problem. It is only the most important
> results that require this level of attention and at this rapid a rate.
> There will be a need for some systematic scrutiny of all technical
> publications, to ensure that the literature does not get polluted by
> erroneous claims.

How much scrutiny? By whom? How will we know? And when? (Are we going to
invite referees to referee belatedly, after the fact? What shall we do
with the literature in the meanwhile? And would you find this
reassuring if you were hesitating about self-archiving because of
worries about peer review and the quality level and usability of the
literature?)

And until the erroneous-claim pollution is tested and filtered out,
how are tenure and promotion committees supposed to weight those
unrefereed self-publicizations for career advancement? By consulting
self-appointed commentators (if any) on the web, in place of the
established quality-standards and track-records of refereed journals?

> However, we should expect a much more heterogeneous
> system to evolve, in which many of the ideas mentioned in [2] will play
> a role. For example, the current strong prohibition of simultaneous
> publication in multiple journals is likely to be discarded as another
> relic of the Gutenberg era where print resources were scarce. Also,
> we are likely to see separate evaluations of significance and correctness.

It is hard to imagine how (or why!) when referees are already a scarce
and overused resource we would wish (or even be able) to ask them to
do double or even triple duty or more, refereeing yet again what has
already been refereed, by allowing or encouraging multiple publication of
the same work! Here again, one reliable milestone would have been fine
(with the rest supplemented by post-publication commentary) rather than
overgeneralizing the notion of "publication" while weakening the notion
of refereeing. (None of this will reassure reluctant self-archivers!)

The evaluations for correctness and significance are already separate
for most journals, and they establish their own levels for both. Usually
significance is the main vertical factor in the quality hierarchy.

> This note is a personal perspective on how peer review is likely to evolve
> in the future. It is based primarily on my experience in area such as
"areas"

> mathematics, physics, computing, and some social sciences.

Andrew, I'm curious: experience as what in those areas: reader? author?
referee? editor? empirical investigator of peer-review?

It seems to me that the first two are definitely not enough to come to
an objective position on this, maybe not even the first four...

> However,
> I believe there is nothing special about those areas. Although health
> sciences have moved towards electronic publishing more slowly than the
> fields I am familiar with, I do not see much that is special about
> their needs. In particular, I believe that the frequently voiced
> concerns about need for extra scrutiny of research results that might
> affect health practices are a red herring. Yes, decisions about medical
> procedures or even diet should be based on solidly established research.
> However, the extra levels of scrutiny are more likely to be obtained by
> more open communication and review systems than we have today.

And a little bit of self-poisoning by the users after the
self-publicizing by the authors, by way of self-correction?

Andrew, I'm afraid I disagree rather profoundly with the position you
are advocating here! I think it is far more anecdotal and speculative
than your other work on publishing and access. I think the conjectures
about peer review are wrong, but worse, I think they will be damaging
rather than helpful to self-archiving and open access. I think my own
article, which you cite, actually preemptively considered most of these
points already, more or less along the lines I have repeated in
commentary here. I have to say that I rather wish that you weren't
publishing this -- or at least that you would clearly dissociate it from
self-archiving, and simply portray it as the conjecture it is, from
someone who is not actually doing research on the peer review system,
but merely contemplating hypothetical possibilities.

Or failing that, I wish I could at least write a commentary by way of
rebuttal!

Stevan Harnad


Stevan Harnad, Nov 7, 2002, 2:17:39 PM

---------- Forwarded message ----------
Date: Mon, 4 Nov 2002 08:21:24 -0600
From: Andrew Odlyzko <odl...@dtc.umn.edu>
To: har...@ecs.soton.ac.uk
Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

Stevan, Enclosed below is a draft of a short note about peer review that
was solicited for the 2nd edition of "Peer Review in Health Sciences."
Any comments you might have would be greatly appreciated.
Best regards, Andrew

Peer and non-peer review

Andrew Odlyzko
Digital Technology Center
University of Minnesota
http://www.dtc.umn.edu/~odlyzko

Fears about possible damage to the peer review system are slowing down the
evolution of scholarly communication, and in particular the development
of freely accessible article archives. I am convinced that these fears

is unjustified. Although the peer review system will change substantially


with the spread of such archives, it will change for the better.

A good overview of the history and current state of the peer review
system is provided by the book [1]. This system is really a collection


of many different systems, of varying effectiveness. They guarantee
neither correctness nor novelty of the results, even among the most

selective and prestigious journals. However, traditional peer review


(with anonymous referees evaluating submissions to a journal) does

perform a valuable screening function. Still, it is just a part of


the entire communication system, and evaluation of the value of an
article is never truly complete, as sometimes historians will revisit

this question centuries after publication. It is the presence of such


self-correcting features in the entire scholarly communication system
that makes the deficiencies of the current peer review system tolerable.
However, it is natural to expect evolution to occur.

In the Gutenberg era of print journals, placing heavy reliance on


traditional peer review was sensible. Printing and distributing journals
was very expensive. Furthermore, providing additional feedback after
publication was hard and slow. Therefore it was appropriate to devote
considerable attention to minimizing the volume of published material,
and making sure it was of high quality. With the development of more
flexible communication systems, especially the Internet, we are moving

towards a continuum of publication. I have argued, starting with [2],


that this requires a continuum of peer review, which will provide feedback
to scholars about articles and other materials as they move along the
continuum, and not just in the single journal decision process stage.
We can already see elements of the evolving system of peer review in
operation.

Many scholars, including Stevan Harnad [3], one of the most prominent


proponents of open access archives, argue for a continuing strong role
for the traditional peer review system at the journal level. I have no
doubt that this system will persist for quite a while, since sociological
changes in the scholarly arena are very slow [4]. However, I do expect

its relative importance to decline. The reason is that there is a


continuing growth of other types of feedback that scholars can rely on.
This is part of the general trend (described in [5]) in which traditional
journals are continuing as before, but the main action is in novel and
often informal modes of communication that are growing much more rapidly.

The growing flood of information does require screening. Some of this


reviewing can be done by non-peers. Indeed, some of it has traditionally
been done by non-peers, for example in legal scholarship, where U.S. law

reviews are staffed by students. The growing role of interdisciplinary


research might lead to a generally greater role for non-peers in reviewing

publications. However, in most cases only peers are truly qualified to


review technical results. However, peer evaluations can be obtained,
and increasingly are being obtained, much more flexibly than through the

traditional anonymous journal refereeing process. Some can come from


use of automated tools to harvest references to papers, in a much more
flexible and comprehensive way than the Science Citation Index provided

in the old days. Other, more up-to-date evaluations, can be obtained


from a variety of techniques, such as those described in [5].

An example of how evolving forms of peer review function is provided by


the recent proof that testing whether a natural number is prime (that
is, divisible only by 1 and itself) can be done fast. (The technical
term is in "polynomial time.") This had been an old and famous open
problem of mathematics and computer science. On Sunday, August 4, 2002,
Maninda Agrawal, Neeraj Kayal, and Nitin Saxena of the Indian Institute
of Technology in Kanpur sent out a paper with their astounding proof of
this result to several of the recognized experts on primality testing.
(Their proof was astounding because of its unexpected simplicity.)
Some of these experts responded almost right away, confirming the validity
of the proof. On Tuesday, August 6, the authors then posted the paper
on their Web site and sent out email announcements. This prompted many
additional mathematicians and computer scientists to read the paper, and
led to extensive discussions on online mailing lists. On Thursday, August
8, the New York Times carried a story announcing the result and quoting
some of the experts who had verified the correctness of the result.

Review by peers played a central role in this story. The authors first


privately consulted known experts in the subject. Then, after getting
assurance they had not overlooked anything substantial, they made their
work available worldwide, where it attracted scrutiny by other experts.
The New York Times coverage was based on the positive evaluations of
correctness and significance by those experts. Eventually they did
submit their paper to a conventional journal, where it will undoubtedly
undergo conventional peer review, and be published. The journal version
will probably be the main one cited in the future, but will likely have
little influence on the development of the subject. Within weeks of the
distribution of the Agrawal-Kayal-Saxena article, improvements on their
results had been obtained by other researchers, and future work will be
based mainly on those. Agrawal, Kayal, and Saxena will get proper credit
for their breakthrough. However, although their paper will go through
the conventional journal peer review and publication system, that will
be almost irrelevant for the intellectual development of their area.

One can object that only potentially breakthrough results are likely
to attract the level of attention that the Agrawal-Kayal-Saxena result
attracted. But that is not a problem. It is only the most important
results that require this level of attention and at this rapid a rate.
There will be a need for some systematic scrutiny of all technical
publications, to ensure that the literature does not get polluted by
erroneous claims. However, we should expect a much more heterogeneous
system to evolve, in which many of the ideas mentioned in [2] will play
a role. For example, the current strong prohibition of simultaneous
publication in multiple journals is likely to be discarded as another
relic of the Gutenberg era, where print resources were scarce. Also,
we are likely to see separate evaluations of significance and correctness.

This note is a personal perspective on how peer review is likely to evolve
in the future. It is based primarily on my experience in areas such as
mathematics, physics, computing, and some social sciences. However,
I believe there is nothing special about those areas. Although the health
sciences have moved towards electronic publishing more slowly than the
fields I am familiar with, I do not see much that is special about
their needs. In particular, I believe that the frequently voiced
concerns about the need for extra scrutiny of research results that might
affect health practices are a red herring. Yes, decisions about medical
procedures or even diet should be based on solidly established research.
However, the extra levels of scrutiny are more likely to be obtained by
more open communication and review systems than we have today.

[1] F. Godlee and T. Jefferson, eds., "Peer Review in Health Sciences,"
BMJ Books, 1999.

[2] A. M. Odlyzko, Tragic loss or good riddance? The impending demise
of traditional scholarly journals, Intern. J. Human-Computer Studies
(formerly Intern. J. Man-Machine Studies) 42 (1995), pp. 71-122.
Available online at <http://www.dtc.umn.edu/~odlyzko/doc/eworld.html>.

[3] S. Harnad, The invisible hand of peer review, Exploit
Interactive, issue 5, April 2000. Available online at
<http://www.exploit-lib.org/issue5/peer-review/> and
<http://www.cogsci.soton.ac.uk/~harnad/nature2.html>.

[4] A. M. Odlyzko, The slow evolution of electronic publishing, in
"Electronic Publishing '97: New Models and Opportunities," A. J. Meadows
and F. Rowland, eds., ICCC Press, 1997, pp. 4-18. Available online at
<http://www.dtc.umn.edu/~odlyzko/doc/eworld.html>.

[5] A. M. Odlyzko, The rapid evolution of scholarly communication,
Learned Publishing, 15(1) (Jan. 2002), pp. 7-19. Available online
at <http://www.catchword.com/alpsp/09531513/v15n1/contp1-1.htm> and
<http://www.dtc.umn.edu/~odlyzko/doc/eworld.html>.


Stevan Harnad
Nov 7, 2002, 2:19:01 PM

---------- Forwarded message ----------
Date: Wed, 6 Nov 2002 22:25:30 -0600
From: Andrew Odlyzko <odl...@dtc.umn.edu>
To: har...@ecs.soton.ac.uk
Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

Please feel free to post our exchange in the American Scientist Forum.

> (R1) Your primary motivation is (R) to reform research communication,
> mainly through the posting of all papers online, unrefereed, and then
> relying on self-selected "vetters," in place of classical peer review,
> to evaluate, improve and thereby sign-post paper quality.

I would not phrase it this way. I would say that my primary motivation
is indeed to reform research communication, mainly through the posting
of all papers online, unrefereed, and then relying on whatever mixture
of classical peer review, or contributions of self-selected "vetters,"
the community decides to rely on. I do not hold dogmatic views of how
peer review will be handled, and predict only rough trends.

> (A1) My primary motivation is (A) to make the classically peer-reviewed
> research literature we have now openly accessible online, such as
> it is, to maximize its accessibility, usage and impact. Qualified
> journal editors continue to select the referees, referees and author
> remain answerable to the editor, and the journal-name and its quality
> track-record sign-post for potential users that an article has met
> that journal's quality-standards.

> (R2) You see a causal connection between A and R: A will lead to R.

Yes, provided that (A) involves open access to preprints. (The way you
have phrased (A), it could encompass a system in which the Ingelfinger
rule would be universal, it's just that after publication by a journal,
articles would be freely available. That form of (A) would not lead
to (R).)

> (A2) I see a causal connection between worries about R and not-A:
> Worries that R would compromise or destroy the quality and usability
> of the peer-reviewed literature hold researchers back from doing A.

Yes, agreed.

> (R3) You think peer review is flawed and should be replaced by something
> better.

by a better form of peer review, with occasional non-peer review having
some impact as well.

> (R4) You think that the research advances that occur before peer review
> through the online posting of pre-refereeing preprints today are evidence
> that peer review is unnecessary and can be replaced by spontaneous vetting
> (R) without loss (and perhaps even with a gain) in quality.

Not that "peer review is unnecessary," but that "classical peer review is
unnecessary." I would also quarrel with the phrasing of the last part
of this point, but will let it go for lack of time.

>ao> We do have substantially different visions of the future of peer
>ao> review, and they have not changed much since we first started
>ao> corresponding back in 1993.
>
> I agree. Nor has the evidence changed since 1993.

But there is much more of it now.

> There is not only this total empirical gap between the data you use and
> the conclusions you draw, but there are also logical gaps: You have not
> replied when I have asked how, in a system where classical peer review
> and journal-names with track-records are no longer there as the back-up
> and bottom line -- as they are universally and without exception now --
> how the annual 2,000,000 papers (which are today refereed and sign-posted
> by refereed journals) will find their proper vetting, and how this will
> be sign-posted for potential users? This question does not even come up
> in the case of pre-refereeing preprints, because those are a "parallel
> economy," backed up by the classical peer-review and then sign-posted by
> the names and track-records of the journals to which just about every
> one of those preprints has been submitted, and in which they will all
> appear eventually (though perhaps only after several rounds of revision
> and refereeing, and perhaps not always at the level of the journal to
> which they were submitted first.)

I did not reply because I did not have time to reply to all of your points.
Since you make this such a central point, let me respond now (although very
briefly and so inadequately). How will all those papers "find their proper
vetting"? Well, how do they find proper refereeing now, under your vaunted
classical peer review? We know that serious frauds like Jan Hendrik
Schoen's slip through. We also have plenty of evidence that lots of simply
not very solid science that is not fraudulent gets through. (I don't have
time to dig up references, but there was a paper quite a while ago that
looked at the statistical methodology used in a large sample of medical
papers. It found a horrendously high rate of misapplications of statistics.
There are lots more examples.) The point is that classical peer review does
not provide much of a signal, especially for journals in the lower quality
tiers. So how does science progress? Well, there are all sorts of checks
that are applied post-publication. (And none of them are infallible. Even
a few Nobel prizes are now regarded as having been given in error.) Basically
classical peer review is just one noisy and uncertain signal that the
scholarly community relies on.

> If self-archiving had (mirabile dictu) begun instead with refereed
> postprints, we might have spared ourselves these misconstruals, and we
> might have been further along the road to open access by now....

The incentives were not there to do this. The authors, who after all control
the information flow, could see the benefits to themselves of quick circulation
of preprints. Open access to published journal articles was of much less
value to them, since they typically had access to those journals in their
libraries.

Andrew Odlyzko

Stevan Harnad
Nov 7, 2002, 2:18:17 PM

---------- Forwarded message ----------
Date: Tue, 5 Nov 2002 21:56:50 +0000 (GMT)
From: Andrew Odlyzko <odl...@dtc.umn.edu>
To: har...@ecs.soton.ac.uk
Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

Stevan Harnad writes:

> I am certain that online implementation will make (and already is
> making) CLASSICAL peer review faster, cheaper, more efficient, and more
> equitable. That can be confidently stated. But what (in my opinion) has
> to be avoided at all costs is any linking whatsoever between
> self-archiving (i.e., author/institution steps taken to maximize the
> visibility, accessibility, usage, citation and impact of their
> peer-reviewed research output) and any substantive changes in classical
> peer review.

That seems to me to be an untenable view. Any time a major change
takes place in method of dissemination of scholarly information,
changes in peer review are basically unavoidable. They may only
be changes that make what you call classical peer review better, but
that is a very unlikely course. It is much more probable that the
changes will be deeper.

> Classical peer review is merely the evaluation of the work of
> specialists by their qualified fellow-specialists (peers) mediated by
> and answerable to a designated qualified-specialist (the editor) who
> picks the referees, adjudicates the reports, indicates what needs to be
> done to revise for acceptance (if anything) and is answerable for the
> results of this quality-control, error-corrective mechanism.
>
> Untested "reforms" to this system, though possible, should not be
> mentioned at all, in the same breath as self-archiving, for any implied
> coupling between self-archiving and hypothetical peer-review changes
> will only work to the disadvantage of self-archiving and open access:

In other words, self-archiving is the preeminent goal, and we should
keep quiet about any changes it might bring to peer review in order not
to frighten the uncommitted?

How does this differ from somebody a decade or two ago who might have
promised that electronic publishing would simply mean that journals
would now be available online, but there would be no disturbing
innovations such as scholars being confused by uncontrolled preprint
distribution?


>ao> This system is really a collection of many different systems,
>ao> of varying effectiveness. They guarantee neither correctness
>ao> nor novelty of the results, even among the most selective and
>ao> prestigious journals.


>
> No human (or nonhuman) judgement can guarantee that. The only relevant
> question -- and it has not been asked or tested, but the default
> assumption until it is tested MUST be for, not against, the causal role
> of peer review in maintaining the current quality level of the research
> literature -- is: How much better or worse is the literature's quality
> with (1) classical peer review, (2) with hypothetical (not yet tested
> and compared) alternatives, or (3) with no peer review at all (which,
> by the way, is NOT tested already by existing pre-refereeing preprint
> quality levels, for the invisible-hand reasons I've elaborated)?

And as electronic publishing became a possibility, would it not have
been natural to complain that the only way to maintain the quality of
scholarly publication was to insist on proven techniques (thus
ruling out self-archives, and extending the Ingelfinger rule to cover
all journals)?

> Absent the comparative data, there is only speculation (speculation that
> may well put the quality of the current refereed literature at risk if
> it were implemented before successful pre-testing). This is the sort
> of speculation from which I think it is so important to dissociate the
> question of self-archiving, completely. Any implied coupling will simply
> lose us yet another generation of potential self-archivers.

Again, by this line of reasoning, moving journals online should have been
carefully dissociated from irresponsible talk about self-archiving and its
"pollution" of the literature.

> almost nowhere is peer-review merely red-light/green-light
> screening: Papers are not just refereed for acceptance/rejection.
> Referees propose corrections and elaborations, papers are revised and
> re-refereed. Peer-review is not a passive, static filter but an active,
> dynamic, interactive, corrective one.

So are (even to a greater degree) the many other stages of the
"scholarly skywriting" continuum.

> Without that dynamic, answerable, pre-correction, and without the
> tried-and-tested quality-label of an established journal to sign-post
> the skyline, I am convinced that the literature would not only quickly
> decline in quality, but it would become un-navigable -- till peer review
> was simply reinvented!

It is my contention that peer review is being reinvented, or more
precisely, reshaped. I do not deny the importance of review by peers,
but do question whether classical peer review is all that important.
It just has too many warts!

> Yet it is precisely this doomsday scenario that is holding would-be
> self-archivers back today, and I'm afraid you may just be reinforcing
> their fears here, Andrew!

But what I am holding out is the promise of an improved system of
review by peers.

> I sense (I am reading this sequentially in real time) that we are about to
> come to the "open peer commentary" alternative to "classical peer review":
> http://cogprints.soton.ac.uk/documents/disk0/00/00/16/94/index.html

You sense incorrectly. In the extremely short space I had, I could
not discuss open peer commentary in detail. It is likely to be an
element of future review systems, but I do not venture to predict
how important it will be.

> The self-correction in classical peer review is systematic, deliberate,
> and answerable (on a journal by journal basis). The ad-lib
> self-correctiveness of self-appointed sleuths tends more toward an
> opinion poll than expert guidance.

The "self-correction in classical peer review" is sadly inadequate.
I wrote at length about this in "Tragic loss or good riddance ...,"
and there are plenty of more systematic sources of complaints (for
example, the recent "publish and be damned ..." by David Adams and
Jonathan Knight in Nature, vol. 419, 24 Oct. 2002, pp. 772-776).
The supposed gold standard of classical peer review is made of
badly corroded pewter! The recent Bell Labs scandal with Jan
Hendrik Schoen's fraudulent publications (many in Science and
Nature) is just the tip of the iceberg.

> In this new "system" we would be entrusting all of that to the four
> winds!

Hardly. We would be able to set our filters any way we wanted. We
could choose to look only at something that had been vetted by experts
of a top caliber (or, as an extreme example, only look at papers that
were at least 10 years old and had been mentioned favorably in half
a dozen survey articles in journals published by a given field's
main professional society), or we could accept all the recent postings
to arXiv and other archives.

> Andrew, both of us are frustrated by the slowness with which the
> research community is coming to the realization that open access is the
> optimal and inevitable outcome for them, and that self-archiving is the
> way to get there. But do you really believe that inasmuch as they are
> being held back by fears about peer review this paper will embolden them,
> rather than confirming their worst fears?

I believe it is imperative to be honest. A move to self-archiving
will, I am convinced, lead to major changes in peer review, of the
type I am describing. Not right away, since time scales are
different, but eventually it will.

> Yet it is all completely unnecessary! All that's needed for open access
> is to self-archive, and leave classical peer review alone! Why imply
> otherwise?

Yes, and we could have promised scholars that electronics would
only lead to journals moving online, and that nobody would be
allowed to take advantage of the new freedoms to self-archive
their articles. That surely would have allayed the concerns
of many (especially of publishers).

> You are making predictions and conjectures, which is fine. But why link
> them to open-access and especially the current unfortunate reluctance to
> self-archive? Speculations will not relieve fears, especially not
> speculations that tend to confirm them.

I will deemphasize the link in my next revision, but will leave some
residue of it there. Anything else, I feel, would not be responsible.

> The law-review case, about which I have written and puzzled before,
> is an anomaly, and, as far as I know, there are many legal scholars
> who are not satisfied with it (Hibbitts included). (Not only are
> law-reviews student-run, but they are house organs, another anomaly in the
> journal-quality hierarchy, where house-journals tend to rank low, a kind
> of vanity-press.) I think it is highly inadvisable to try to generalize
> this case in any way, when it is itself unique and poorly understood. In
> any case, it certainly will not be reassuring to professors who are
> contemplating whether or not they should self-archive, that doing so
> may mean that whereas they are marking their students' essays on
> Tuesdays and Thursdays, if they self-archive their own papers, their
> students may be marking them on Wednesdays and Fridays, instead of the
> qualified editor-mediated peers of times past.

The law review case may be "poorly understood," but so is the whole
classical peer review system. It does, however, serve as a counterexample
to many extreme claims about what kind of review is needed. That many
scholars are not satisfied with it is nothing special. The same can
be said of classical peer review.

>ao> The growing role of interdisciplinary
>ao> research might lead to a generally greater role for non-peers in reviewing
>ao> publications.

>
> I can't follow this at all. Interdisciplinary work requires review by
> peers from more disciplines, not from non-peers. ("Peer" means qualified
> expert.)

If I, as a mathematician, need to rely on some results from physics,
I may end up criticizing the presentation and methodology of a
physics paper even without understanding all the physics that is
involved.

It is a weak analogy I would not want to push too far, but note that
many music teachers and sports coaches are very successful, and
train top stars in their areas, without being able to perform at
their students' level.

>ao> However, in most cases only peers are truly qualified to review
>ao> technical results. However, peer evaluations can be obtained, and
>ao> increasingly are being obtained, much more flexibly than through
>ao> the traditional anonymous journal refereeing process.


>
> That is not my experience. It seems that qualified referees, an
> overharvested resource, are becoming harder and harder to come by. They
> are overloaded, and take a long time to deliver their reports. Is the
> idea that they will be more available if approached some other way? Or
> if they self-select? But what if they all want to review paper X, and no
> one -- or dilettantes -- review papers A-J?

You help make my case. Classical peer review typically is too slow,
and it is getting harder to run. Self-selection is a major antidote.
Yes, it is not ideal, as indeed, interests of potential referees
won't be uniformly distributed, but I will settle for that if I can't
get anything better. As the primality example later on shows, it
is the most important articles that are likely to get the fastest
and most thorough scrutiny, and that is as it should be.

>ao> Some can come from use of automated tools to harvest references
>ao> to papers, in a much more flexible and comprehensive way than the
>ao> Science Citation Index provided in the old days.


>
> Now here I agree, but this falls squarely in the category of using
> online resources to implement CLASSICAL peer review more efficiently and
> equitably: Here, it is to help find qualified referees and to distribute
> the load more evenly. But that has nothing to do with peer review
> reform, nor with any of the other speculative alternatives considered
> here. It goes without saying that an open-access corpus will make it
> much easier and more effective to find qualified referees.

Again, I have a different view. If I am looking for something in
psychology, an area I know very little about, and find a relatively
recent archived paper that has not been published, but is referenced
favorably by Stevan Harnad and several other famous figures, should
I not be willing to accept it as of good quality?

> There's another way to put all this: To a first approximation (and
> forgetting about what I said about dynamic correction, revision etc.),
> a journal's quality level is a function of its rejection rate: The
> highest quality journals will only accept the highest quality work,
> rejecting the rest. Second-tier journals will reject less, and so on,
> right down to the near-vanity press at the bottom, which accepts just
> about anything. This is the hierarchy of sign-posted milestones that
> guides the prospective reader and user rationing his finite reading time
> and his precious research resources. How is this quality triage to be done
> on the model you just described (of the prime-number flurry)?

I would dispute the claimed strong correlation between rejection
rates and quality. Having served on the editorial board of what
is usually regarded as one of the three most prestigious journals
in mathematics, I can say that its rejection rate was actually
lower than that of several lower-quality journals I have served on.
The reason was self-selection. Aside from a moderate fraction
of crank submissions (something like 10 to 20%), the overwhelming
majority were of very high quality. Authors knew of the journal's
standards, and did not bother to submit run-of-the-mill papers.
This is just one anecdotal piece of evidence, but from what
I have heard from other editors, it is not all that atypical.

> Andrew, I'm curious: experiences as what in those areas: reader? author?
> referee? editor? empirical investigator of peer-review?

All of the above.

>ao> I believe that the frequently voiced concerns about need for extra
>ao> scrutiny of research results that might affect health practices
>ao> are a red herring. Yes, decisions about medical procedures or even
>ao> diet should be based on solidly established research. However,
>ao> the extra levels of scrutiny are more likely to be obtained by more
>ao> open communication and review systems than we have today.


>
> And a little bit of self-poisoning by the users after the
> self-publicizing by the authors, by way of self-correction?

Well, it seems users are able to self-poison themselves quite well
as is. Didn't the President of South Africa discover on the Internet
that AIDS is not caused by a virus?

> Andrew, I'm afraid I disagree rather profoundly with the position you
> are advocating here! I think it is far more anecdotal and speculative
> than your other work on publishing and access. I think the conjectures
> about peer review are wrong, but worse, I think they will be damaging
> rather than helpful to self-archiving and open access. I think my own
> article, which you cite, actually preemptively considered most of these
> points already, more or less along the lines I have repeated in
> commentary here. I have to say that I rather wish that you weren't
> publishing this -- or at least that you would clearly dissociate it from
> self-archiving, and simply portray it as the conjectures they are, from
> someone who is not actually doing research on the peer review system,
> but merely contemplating hypothetical possibilities.

What I am saying here is pretty much what I have been saying ever
since "Tragic loss or good riddance ..." back in 1994. My position
has not changed. This is an opinion piece, as solicited by the
editors of this volume, so it certainly is largely personal
evaluation and speculation. However, everything I have seen in
the last 8 years confirms my initial impression.

> Or failing that, I wish I could at least write a commentary by way of
> rebuttal!

Why don't you propose it to the editors? (BTW, mine is just one of
several short contributions they have solicited. I have not seen
any of the others.)

Andrew Odlyzko

Stevan Harnad
Nov 7, 2002, 2:18:45 PM

---------- Forwarded message ----------
Date: Wed, 6 Nov 2002 09:14:17 +0000 (GMT)
From: Andrew Odlyzko <odl...@dtc.umn.edu>
To: har...@ecs.soton.ac.uk
Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

Just a couple of brief responses to your comments. Beyond this,
I do not intend to continue. We do have substantially different
visions of the future of peer review, and they have not changed
much since we first started corresponding back in 1993.

> ...if your prediction happens to be wrong, making the prediction
> anyway will have a negative, retardant effect on self-archiving and open
> access. (Of course, if you are right, then concerns about these changes
> will still have a negative, retardant effect on self-archiving.)

I am not sure it will be negative. It might encourage the transformation
by showing the path to a future in which not only information dissemination,
but also peer review, are improved.

>ao> How does this differ from somebody a decade or two ago that
>ao> might have promised that electronic publishing would simply mean
>ao> that journals would now be available online, but there would
>ao> be no disturbing innovations such as scholars being confused by
>ao> uncontrolled preprint distribution?
>
> I can't see the point (and I'm not sure what you mean by scholars being
> confused by uncontrolled preprint distribution!).

Some of the opposition to electronic publishing was based on concerns about
uncontrolled proliferation in available material.

> (Nor does it seem to me that you are making
> these predictions because you are recommending that people think twice
> about the transition, or first take some remedial measures.)

Certainly not. I hope to hasten the transition, by pointing out that
it is likely to improve the peer review system.

> Self-archiving was tried extensively, and demonstrated to work. Now we
> can confidently say it works and recommend it to everyone. Alternatives
> to peer review have not been tried or demonstrated to work. Nothing
> prevents people from trying to implement controlled experiments on
> alternatives to peer review. But until they are done, and the outcome
> known, there is no basis whatsoever for linking them to self-archiving
> and open-access.

Sorry, but alternatives to classical peer review have been tried, and
are constantly being tried (quite successfully, in my opinion). That is
largely what my article "The rapid evolution of scholarly communication"
was about.

> The fact, though, is that no one KNOWS that your prediction is true. So
> dilating on it now -- when its truth cannot even be known, and when,
> on the face of it, proclaiming it will merely reinforce people's fears and
> hesitations about self-archiving -- can hardly serve a useful purpose. (If
> I were you, and I could not in good conscience deny my belief in the
> causal connection between self-archiving, open-access, and the changes in
> peer review that you described, I simply would not express my belief at
> all, rather than risk voicing a fallible belief that is almost certainly
> going to have a negative effect on something I regard as very positive,
> but also certain.)

Well, yes, we don't know whether my prediction is true, but there is
evidence for it, for example in "The rapid evolution ...." Peer review
is undergoing change right now, in front of our eyes, even though
few are paying attention. To pretend that nothing will change seems
really short-sighted.

> ...the peer review generates a reliable,
> recognizable, quality-level-tag, a recognizable milestone with an
> established track record, along the continuum, on which the would-be user
> can depend. Without that, it is not at all clear where a particular paper
> stands, in quality and usability, along its own continuum...

But even with classical peer review "it is not at all clear where a particular
paper stands ..." We get a very weak quality and usability signal from
classical peer review, and my contention is that we can obtain many other
signals that collectively, if not individually, can be even more useful.

> Reinvented or reshaped where, and by whom? As we speak, whether a
> self-archiver or not, not a single author of the annual 2,000,000 papers
> that appear in any of the hierarchy of 20,000 peer-reviewed journals
> published across all disciplines and around the world has stopped
> submitting his papers to those journals. Your predictions are merely
> speculations. They have not been implemented and tested, and what the
> outcome would be if they were tested is not known.

I would have to go back and dig up some old messages, but I believe there
are several very reputable scholars who have stopped publishing in traditional
journals. Furthermore, some estimates have been made of arXiv submissions,
and a noticeable fraction of them do not get submitted to journals. (I can
testify to some really outstanding papers in mathematics that are available
only through arXiv, because their authors simply never bothered to submit
them to journals.)

Part of our difference is probably rooted in our varying professional
experiences. As just one example (there are others in "Tragic loss ...")
some of the most interesting developments in mathematics over the last
half century were in algebraic geometry (leading to several Fields Medals,
the mathematics equivalent of a Nobel Prize). Much of that work was
based on several thousand pages published in the un-refereed Springer
Lecture Notes in Mathematics (the famous SGA volumes). Why was the
field willing to rely on those? Well, it is a long story, but there
were enough checks around (such as many people, including graduate
student seminars, going over those volumes line by line) to convince
experts to rely on those papers. It was examples like these, definitely
involving review by peers, but not classical peer review, that helped
convince me that scholars could thrive under a variety of systems.
I spent an inordinate amount of time on this subject in "Tragic loss ..."

> Peer review is not a gold standard, but I'm sure you will agree that
> any alternative would have to ensure at least the same standard, if
> not better: Do you think there is this evidence for the promise you are
> holding out above?

Yes, I do. Examples such as that of algebraic geometry (mentioned in my
previous response above) showed me early on that there is nothing sacred
about classical peer review. The law review system is yet another example.

> I honestly can't see how you imagine this scaling to the annual
> 2,000,000 papers that currently appear in the classically peer reviewed
> journals! Absent the peer reviewed journal, how can I know that a paper
> has been "vetted by experts of a top caliber"? What tells me that (as
> the journal-name currently does) for those 2,000,000 annual papers? And
> what now gets the right-calibre experts vetting the right papers (as
> editors formerly did, when they invited them to referee?). Do experts
> voluntarily spend their precious time trawling the continuum of raw
> papers on the net on their own?

I do not have the time to respond to all these points, but yes, many experts
do "voluntarily spend their precious time trawling the continuum of raw papers
on the net on their own." I do know many people whose day starts with a scan
of the latest arXiv submissions. Moreover, some put a lot of effort into
making what they find more easily digestible for others. (John Baez and his
wonderful "This week's finds in mathematical physics" comes to mind.)

I am sorry, Stevan, but you are ignoring some of the most interesting
evolutionary developments in scholarly publishing in your blind faith
that classical peer review is the only thing that stands between us
and chaos.

> It is up to you, but I do not understand why your conscience tells you
> you need to share your speculations (especially when they risk alienating
> the majority who are still leery about self-archiving!).

Well, I was invited to contribute my thoughts on peer review to a book
devoted to the subject, so I did the best I could within the allotted
limits. Open archives are not a religion to me, just a step towards
a better scholarly communication system, which will also require changes
in peer review.

Andrew Odlyzko

Stevan Harnad

Nov 7, 2002, 2:18:35 PM

---------- Forwarded message ----------
Date: Tue, 5 Nov 2002 21:56:29 +0000 (GMT)
From: Stevan Harnad <har...@ecs.soton.ac.uk>
To: Andrew Odlyzko <odl...@dtc.umn.edu>

Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

Andrew Odlyzko wrote:

> Any time a major change
> takes place in method of dissemination of scholarly information,
> changes in peer review are basically unavoidable. They may only
> be changes that make what you call classical peer review better, but
> that is a very unlikely course. It is much more probable that the
> changes will be deeper.

You may or may not be right about the latter. But what I am suggesting
is that if your prediction happens to be wrong, making the prediction
anyway will have a negative, retardant effect on self-archiving and open
access. (Of course, if you are right, then concerns about these changes
will still have a negative, retardant effect on self-archiving.)

For these reasons, I would avoid making any predictions about possible
changes in peer review (other than improved efficiency) as a result of
open access and self-archiving. We are agreed that open access is the
optimal and inevitable endstate in any case. I think we both agree that
the sooner it comes, the better. If making predictions about changes in
peer-review is likely to slow rather than hasten the optimal and
inevitable, it would seem to be preferable not to venture predictions
at this time.

> In other words, self-archiving is the preeminent goal, and we should
> keep quiet about any changes it might bring to peer review in order not
> to frighten the uncommitted?

Exactly!

> How does this differ from somebody a decade or two ago that might have
> promised that electronic publishing would simply mean that journals
> would now be available online, but there would be no disturbing
> innovations such as scholars being confused by uncontrolled preprint
> distribution?

I can't see the point (and I'm not sure what you mean by scholars being
confused by uncontrolled preprint distribution!).

Yes, the journal transition from on-paper to on-line was also a case of
the optimal/inevitable, though a far less radical one than the transition
from toll-access to open-access. If someone, before the transition from
on-paper to on-line, had had some solid evidence or reasoning-based
predictions of untoward consequences that ought to make people think
twice about the transition, or first take some remedial measures, of
course he should have made those known (though I know of no such untoward
consequences, nor of any necessary remedial measures, in that case:
the advent of self-archiving is certainly not an untoward consequence!).

But that is not the case at all in what we are discussing here, namely,
the transition from toll-access to open-access through self-archiving
itself. The only untoward consequence I can see is that speculations that
it would induce radical changes in peer review, whether correct or not,
can only retard open-access. (Nor does it seem to me that you are making
these predictions because you are recommending that people think twice
about the transition, or first take some remedial measures.)

> ao> This system is really a collection
> ao> of many different systems, of varying effectiveness. They guarantee
> ao> neither correctness nor novelty of the results, even among the most
> ao> selective and prestigious journals.
>
>sh> No human (or nonhuman) judgement can guarantee that. The only relevant
>sh> question -- and it has not been asked or tested, but the default
>sh> assumption until it is tested MUST be for, not against, the causal role
>sh> of peer review in maintaining the current quality level of the research
>sh> literature -- is: How much better or worse is the literature's quality
>sh> with (1) classical peer review, (2) with hypothetical (not yet tested
>sh> and compared) alternatives, or (3) with no peer review at all (which,
>sh> by the way, is NOT tested already by existing pre-refereeing preprint
>sh> quality levels, for the invisible-hand reasons I've elaborated)?
>
> And as electronic publishing became a possibility, would it not have
> been natural to complain that the only way to maintain the quality of
> scholarly publication was to insist on proven techniques (thus
> ruling out self-archives, and extending the Ingelfinger rule to cover
> all journals)?

Self-archiving was tried extensively, and demonstrated to work. Now we
can confidently say it works and recommend it to everyone. Alternatives
to peer review have not been tried or demonstrated to work. Nothing
prevents people from trying to implement controlled experiments on
alternatives to peer review. But until they are done, and the outcome
known, there is no basis whatsoever for linking them to self-archiving
and open-access.

>sh> Absent the comparative data, there is only speculation (speculation that
>sh> may well put the quality of the current refereed literature at risk if
>sh> it were implemented before successful pre-testing). This is the sort
>sh> of speculation from which I think it is so important to dissociate the
>sh> question of self-archiving, completely. Any implied coupling will simply
>sh> lose us yet another generation of potential self-archivers.
>
> Again, by this line of reasoning, moving journals online should have been
> carefully dissociated from irresponsible talk about self-archiving and its
> "pollution" of the literature.

No, the analogy does not hold at all. The self-archivers tried an
experiment. It could have failed, in which case that would have been the
end of it, but as it happened, it was spectacularly successful. And they
did it well before the mass movement of journals online. There is, as
far as I can see, no contingency whatsoever between journals moving
online and authors posting their digital texts: It was the invention of
word processing and of the Internet that made the latter possible, not
journals going or not going online. I think this analogy is extremely
flawed.

A strained version of the analogy that might work would be this:
Publishers might have said: "Let's not go online, because look at
the piracy-damage the xerox-era has done to us: Making online versions
available would invite even worse piracy." I'm rather sure that thoughts
along those lines DID slow down the journal transition to online by a few
years (until proprietary firewalls were in place) but nothing follows from
that for the case we are considering here: The continuity and causal
connection between analog and digital piracy is quite transparent,
but the transition from classical peer review to the hypothetical
alternatives you described certainly is not -- and its causal connection
with self-archiving and open-access is even less clear.

But let's take even that at face value: Suppose that (counterfactually,
to my mind) your predicted outcome, and your suggestion about its causal
connection with open access, were completely correct, and even that the
outcome you hypothesize was the optimal one, and would indeed yield a
research literature of at least the same quality and navigability as the
current one. Even if I myself believed that (which I don't at all, but
suppose it was true and I believed it true too), I still would strongly
urge you not to make that prediction at this time, because it would
not be believed that the outcome would be optimal and it would instead
only serve to confirm the very fears about peer review (wrong-headed,
in the event) that are currently holding people back from self-archiving.

The fact, though, is that no one KNOWS that your prediction is true. So
dilating on it now -- when its truth cannot even be known, and when,
on the face of it, proclaiming it will merely reinforce people's fears and
hesitations about self-archiving -- can hardly serve a useful purpose. (If
I were you, and I could not in good conscience deny my belief in the
causal connection between self-archiving, open-access, and the changes in
peer review that you described, I simply would not express my belief at
all, rather than risk voicing a fallible belief that is almost certainly
going to have a negative effect on something I regard as very positive,
but also certain.)

> sh> Peer-review is not a passive, static filter but an active,
> sh> dynamic, interactive, corrective one.
>
> So are (even to a greater degree) the many other stages of the
> "scholarly skywriting" continuum.

Perhaps in some cases; but peer review generates a reliable,
recognizable quality-level tag, a recognizable milestone with an
established track record, along the continuum, on which the would-be user
can depend. Without that, it is not at all clear where a particular paper
stands, in quality and usability, along its own continuum...

>sh> Without that dynamic, answerable, pre-correction, and without the
>sh> tried-and-tested quality-label of an established journal to sign-post
>sh> the skyline, I am convinced that the literature would not only quickly
>sh> decline in quality, but it would become un-navigable -- till peer review
>sh> was simply reinvented!
>
> It is my contention that peer review is being reinvented, or more
> precisely, reshaped. I do not deny the importance of review by peers,
> but do question whether classical peer review is all that important.
> It just has too many warts!

Reinvented or reshaped where, and by whom? As we speak, whether a
self-archiver or not, not a single author of the annual 2,000,000 papers
that appear in any of the hierarchy of 20,000 peer-reviewed journals
published across all disciplines and around the world has stopped
submitting his papers to those journals. Your predictions are merely
speculations. They have not been implemented and tested, and what the
outcome would be if they were tested is not known.

I can only repeat that the occasional cases like the number-theoretic one
you gave -- in which there is a dramatic flurry of dynamic testing and
revision based on informal peer feedback well before the formal peer
review -- are far too rare to use as a model for the annual 2,000,000. Such
examples simply will not scale. They are by no means a systematic
test of paper quality levels in the absence of classical peer review (and
such examples occasionally arose in the paper era too).

Similarly, the fact that often (no one knows how often or how
much) research interactions and advances occur at the
pre-refereeing preprint-exchange stage, before peer review is completed
(8 months, on average, and accelerating, in the Physics Archive
http://www.dlib.org/dlib/october02/hitchcock/fig4-agereferences.gif )
is simply a reaffirmation of the fact that the "growth" region in
research partly predates the outcome of peer review. This is still an
effect occurring squarely inside a system that is quality-controlled by
and answerable to classical peer review. To predict that this anticipatory
effect -- and overall the quality/usability levels for the literature --
would still be there if classical peer review were not, the controlled
experiment must first actually be done (on a sufficiently large and
representative sample, and long enough to trust it will scale).

To my knowledge, the experiment has not been done. Nothing even remotely
like that has been tried.

>sh> Yet it is precisely this doomsday scenario that is holding would-be
>sh> self-archivers back today, and I'm afraid you may just be reinforcing
>sh> their fears here, Andrew!
>
> But what I am holding out is the promise of an improved system of
> review by peers.

You are predicting a radical change (which may not take place), and you are
predicting an alternative system (which has never been tried or tested)
that will work at least as well as the present one -- and you are
proposing these by way of assuaging people's worries about putting peer
review at risk by self-archiving.

These are indeed promises, and speculative promises. I doubt that
speculative promises -- even when they come from someone as informed
and authoritative on the economics and dynamics of online publication
as you -- will allay people's fears, if these are holding them back
from self-archiving. The only thing that could (or should) allay those
fears would be (substantial!) empirical evidence from controlled tests
of these hypothetical changes and hypothetical systems showing that the
resulting literature will be of at least the same level of quality and
usability as the current one.

No such tests have been done. No such evidence exists. (So it is best
not to speculate at all.)

>sh> I sense (I am reading this sequentially in real time) that we are about to
>sh> come to the "open peer commentary" alternative to "classical peer review":
>sh> http://cogprints.soton.ac.uk/documents/disk0/00/00/16/94/index.html
>
> You sense incorrectly. In the extremely short space I had, I could
> not discuss open peer commentary in detail. It is likely to be an
> element of future review systems, but I do not venture to predict
> how important it will be.

But what I mean by open peer commentary here is precisely the
self-corrective peer feedback that you are hypothesizing will take the
place of classical peer review! Not just public comments, but also
direct emails to the author, based on open access to the raw drafts.
Isn't that precisely the substitute for classical peer review that you
are contemplating? Or do you think that in the post-peer-review era it
will simply be a matter of using the unrefereed literature exactly the
way we used the refereed literature, and reporting any problems or progress
with it only in our own (likewise unrefereed) papers? That I would find
an even more far-fetched speculation than the "open peer review" variant
most peer-review reformers have in mind (and that you certainly also
invoked in your paper)!

>sh> The self-correction in classical peer review is systematic, deliberate,
>sh> and answerable (on a journal by journal basis). The ad-lib
>sh> self-correctiveness of self-appointed sleuths tends more toward an
>sh> opinion poll than expert guidance.
>
> The "self-correction in classical peer review" is sadly inadequate.
> I wrote at length about this in "Tragic loss or good riddance ...,"
> and there are plenty of more systematic sources of complaints (for
> example, the recent "publish and be damned ..." by David Adam and
> Jonathan Knight in Nature, vol. 419, 24 Oct. 2002, pp. 772-776).
> The supposedly gold standard of classical peer review is made of
> badly corroded pewter! The recent Bell Labs scandal with Jan
> Hendrik Schoen's fraudulent publications (many in Science and
> Nature) is just the tip of the iceberg.

If there is something seriously wrong with peer review, then
alternatives need to be tried and tested. This is the research area of
peer review testing and reform. No alternatives have been tested yet.
There are no empirical or logical or theoretical grounds for simply
ASSUMING that abandoning peer review and posting everything would remedy
the defects of peer review. Even less for assuming that self-archiving
would lead to that outcome. But there ARE prima facie grounds [defeasible
grounds, in my opinion] for worrying about it. And I am afraid that your
own empirically unsupported speculations will simply enhance those worries,
and hence retard the self-archiving.

Peer review is not a gold standard, but I'm sure you will agree that
any alternative would have to ensure at least the same standard, if
not better: Do you think there is this evidence for the promise you are
holding out above?

And while we're at it: What's the evidence that the few cases that
come to our attention are just the tip of the iceberg? Indeed, where
is the evidence that fraud is a serious problem at all? It seems to me
that the test of whether a research finding is important is whether it
leads to further research results or applications. One cannot build
further results or applications on fraud; it collapses. So by that
token, fraud -- at least important fraud -- will always come out. So
there's no iceberg there. Could the rest of the iceberg consist of
unimportant results that no one has bothered to try to build on or apply
yet? Perhaps. But is that, in turn, important -- important enough to be
called (switching metaphors) "badly corroded pewter"?

I am not saying that classical peer review cannot stand some improvement,
if improvement is possible (it is, after all, simply human expert
judgment, systematically and answerably applied by expert-appointed
experts, certified with a tag, backed up by a public track-record).
But then let us try and test improvements, not assume them a priori,
and link them causally to something else that is ostensibly quite
different, namely, the attempt to use the newfound potential of the online
medium to maximize access to the peer-reviewed literature, such as it is,
warts and all -- something that can provide huge potential increases in
research visibility, usability, citability, and impact, hence
productivity -- increases that were impossible in the toll-access era.

These potential benefits of open access are tried and true -- you
yourself have attested to them and documented them. The hypothetical
benefits of untested changes in the peer-review system that generated
the literature in question, on the other hand, are merely conjectures.

Why must we mix sure benefits with untested conjectures, especially when
the very voicing of those conjectures is likely to strengthen the worries
of those whom those worries have held back from partaking of the sure benefits?

>sh> In this new "system" we would be entrusting all of that to the four
>sh> winds!
>
> Hardly. We would be able to set our filters any way we wanted. We
> could choose to look only at something that had been vetted by experts
> of a top caliber (or, as an extreme example, only look at papers that
> were at least 10 years old and had been mentioned favorably in half
> a dozen survey articles in journals published by a given field's
> main professional society), or we could accept all the recent postings
> to arXiv and other archives.

I honestly can't see how you imagine this scaling to the annual
2,000,000 papers that currently appear in the classically peer reviewed
journals! Absent the peer reviewed journal, how can I know that a paper
has been "vetted by experts of a top caliber"? What tells me that (as
the journal-name currently does) for those 2,000,000 annual papers? And
what now gets the right-calibre experts vetting the right papers (as
editors formerly did, when they invited them to referee?). Do experts
voluntarily spend their precious time trawling the continuum of raw
papers on the net on their own?

As to the wait-ten-years solution: Even that (unrealistic as it is, for
a new medium that was meant to accelerate rather than retard research
communication) is hardly a sure thing -- until you have told me how it
is eventually assured -- without the intervention of journals and editors
whose job it is to do just that -- that each paper will get the vetting
it needs? I don't see that at all. I see 2,000,000 annual
papers in the sky, god knows where along their own respective
embryological continua, signposted only by links, hits, ad lib
commentaries, citations, and author-name value (which would no doubt
quickly decline in this raw flux). Is it really a sure thing here, that
if I pick a paper posted 10 years ago, it is just as reliable and usable
as it would be in a classically peer-reviewed journal? WHICH JOURNAL?

>sh> Andrew, both of us are frustrated by the slowness with which the
>sh> research community is coming to the realization that open access is the
>sh> optimal and inevitable outcome for them, and that self-archiving is the
>sh> way to get there. But do you really believe that inasmuch as they are
>sh> being held back by fears about peer review this paper will embolden them,
>sh> rather than confirming their worst fears?
>
> I believe it is imperative to be honest. A move to self-archiving
> will, I am convinced, lead to major changes in peer review, of the
> type I am describing. Not right away, since time scales are
> different, but eventually it will.

It is imperative to be honest with our facts and evidence. It is not at
all clear to me that it is imperative to be honest with our fallible
speculations...

>sh> Yet it is all completely unnecessary! All that's needed for open access
>sh> is to self-archive, and leave classical peer review alone! Why imply
>sh> otherwise?
>
> Yes, and we could have promised scholars that electronics would
> only lead to journals moving online, and that nobody would be
> allowed to take advantage of the new freedoms to self-archive
> their articles. That surely would have allayed the concerns
> of many (especially of publishers).

I think we dealt with that analogy once above. I don't think there is a
tertium comparationis between (1) the on-paper to on-line to self-archiving
transition and (2) the toll-access to open access to (nontrivial)
peer-review reform transition. Self-archiving was feasible, and
trivially predictable, on an individual basis. The peer review sea-changes
you mention here are far more hypothetical (and, in my opinion, just plain
wrong -- but in any case, untested, undemonstrated).

>sh> You are making predictions and conjectures, which is fine. But why link
>sh> them to open-access and especially the current unfortunate reluctance to
>sh> self-archive? Speculations will not relieve fears, especially not
>sh> speculations that tend to confirm them.
>
> I will deemphasize the link in my next revision, but will leave some
> residue of it there. Anything else I feel would not be responsible.

It is up to you, but I do not understand why your conscience tells you
you need to share your speculations (especially when they risk alienating
the majority who are still leery about self-archiving!). Is there not a
saying about honesty in business that it would be a lie to deny it if you
know that someone across the street is selling your product for half your
price, but if the customer doesn't ask, honesty does not require you
to tell him! Well, that's still not it: Surely, if you don't know that
someone across the street is selling it for less, but you merely guess
that it is possible that he MIGHT be selling it for less, then surely
"honesty" is not quite the right descriptor for the policy of sending
every would-be customer across the street, just in case your guess
is right!

And in this case, even that isn't quite it, because I am at least as
convinced that your conjecture is false as you are that it is true,
and I think I have here given a few rather strong reasons (especially
that it is completely untested) for you to send your customers to my
side of the street instead (until you find the empirical evidence)!

>sh> The law-review case, about which I have written and puzzled before,
>sh> is an anomaly, and, as far as I know, there are many legal scholars
>sh> who are not satisfied with it (Hibbitts included). (Not only are
>sh> law-reviews student-run, but they are house organs, another anomaly in the
>sh> journal-quality hierarchy, where house-journals tend to rank low, a kind
>sh> of vanity-press.) I think it is highly inadvisable to try to generalize
>sh> this case in any way, when it is itself unique and poorly understood. In
>sh> any case, it certainly will not be reassuring to professors who are
>sh> contemplating whether or not they should self-archive, that doing so
>sh> may mean that whereas they are marking their students essays on
>sh> tuesdays and thursdays, if they self-archive their own papers, their
>sh> students may be marking them on wednesdays and fridays, instead of the
>sh> qualified editor-mediated peers of times past.


>
> The law review case may be "poorly understood," but so is the whole
> classical peer review system. It does, however, serve as a counterexample
> to many extreme claims about what kind of review is needed. That many
> scholars are not satisfied with it is nothing special. The same can
> be said of classical peer review.

The classical peer-review system has 20,000 journals and 2,000,000
articles annually attesting to the (hierarchical) quality-levels it
delivers. Alternatives have to have evidence that they can deliver at
least the same quality levels. The few hundred college law reviews are a
special case. They are not peer reviewed (hence not counted in the 20K
above); they are house-journals rather than independent ones; and there
are very specific grumbles about their quality -- in explicit comparison
with peer-reviewed journals of legal and related scholarship with which
they do not compare favorably (except perhaps for the most elite law
schools, but that only because, being the house organs, they get the
top house scholars).

So, no, I would say it does not serve as a counterexample at all. What
would serve as a counterexample would be taking, say, a top, middle and
low level journal in your field (and in mine) and passing over its "peer
review" to students, and seeing whether that would maintain the same
quality level across the years -- and then to phase out the students
altogether, and let raw submissions all appear on the web, for
self-selected vetters to patrol, and see what that does to quality,
and navigability, and usability, and impact...

>ao> The growing role of interdisciplinary
>ao> research might lead to a generally greater role for non-peers in reviewing
>ao> publications.
>
>sh> I can't follow this at all. Interdisciplinary work requires review by
>sh> peers from more disciplines, not from non-peers. ("Peer" means qualified
>sh> expert.)
>
> If I, as a mathematician, need to rely on some results from physics,
> I may end up criticizing the presentation and methodology of a
> physics paper even without understanding all the physics that is
> involved.

So who IS qualified to judge the soundness of an interdisciplinary
math/physics paper then, a chemist? A sports coach?

> It is a weak analogy I would not want to push too far, but note that
> many music teachers and sports coaches are very successful, and
> train top stars in their areas, without being able to perform at
> their students' level.

This all seems rather weak and unrepresentative when what it must scale
up to is all 20,000 journals in all fields, whether interdisciplinary or
not. As editor I have consulted referees who do not publish much but are
known masters of their field. It is not the referee's publication count or
impact factor that matters but their expertise.

> ao> However, in most cases only peers are truly qualified to
> ao> review technical results. However, peer evaluations can be obtained,
> ao> and increasingly are being obtained, much more flexibly than through the
> ao> traditional anonymous journal refereeing process.
>
>sh> That is not my experience. It seems that qualified referees, an
>sh> overharvested resource, are becoming harder and harder to come by. They
>sh> are overloaded, and take a long time to deliver their reports. Is the
>sh> idea that they will be more available if approached some other way? Or
>sh> if they self-select? But what if they all want to review paper X, and no
>sh> one -- or dilettantes -- review papers A-J?
>
> You help make my case. Classical peer review typically is too slow,
> and it is getting harder to run. Self-selection is a major antidote.

It doesn't scale! There are a few papers many people would be happy to
referee, and there are many papers few people would be willing to
referee -- unless specifically asked by a trusted editor, for an
established journal, and with a high presumption that the author will be
made answerable and the effort not wasted. Do you really imagine
self-selection for the annual 2,000,000? Are you not too focussed on a
small and anecdotal sample? What is the "force" that will ensure that
the 2,000,000 get their due via self-selection? And when? And how will
we know it?

> Yes, it is not ideal, as indeed, interests of potential referees
> won't be uniformly distributed, but I will settle for that if I can't
> get anything better.

You can get something better now, with classical peer review. The burden
on you is to show that this alternative would be at least as good. Would
it? How?

> As the primality example later on shows, it
> is the most important articles that are likely to get the fastest
> and most thorough scrutiny, and that is as it should be.

Indeed. But alas it does not scale to the annual 2,000,000, by
definition. (And it is the cases where the name-value of the author, or
even of the title, is not a sufficient "cue" that it belongs in the circle
of the "most important": those are the real test cases. How does the
anarchic self-selection system pick that up? Otherwise, you are simply
generalizing from the highly unrepresentative sample of the known elite:
If they were the only ones we had to worry about, maybe we wouldn't need
peer review at all. But what percentage of the 2,000,000 do you think
that covers?)

> If I am looking for something in
> psychology, an area I know very little about, and find a relatively
> recent archives paper that has not been published, but is referenced
> favorably by Stevan Harnad and several other famous figures, should
> I not be willing to accept it as of good quality?

What I want to know is how Stevan Harnad found that paper, among the
2,000,000 (and to avoid infinite regress, we must not assume that he
was guided by a still more famous figure!) and decided it was worth his
time to read, and the risk to use and cite it?

> I would dispute the claimed strong correlation between rejection
> rates and quality. Having served on the editorial board of what
> is usually regarded as one of the three most prestigious journals
> in mathematics, I can say that its rejection rate was actually
> lower than that of several lower-quality journals I have served on.
> The reason was self-selection.

I am aware of that. But look at what constrains that self-selection and
makes it possible: That journal has an established track record of
publishing all and only the highest quality research. It WOULD have
rejected the papers that appear in 2nd tier journals if they had been
foolishly submitted to the journal in question.

Now, I ask you, in a system where the only "self-selection" is to put
all 2,000,000 raw drafts willy-nilly up in the sky, how is quality
supposed to sort itself out? How do authors find referees at the right
level? And how will that level be sign-posted?

Yet THAT is the real test that you do not actually even consider as a
thought-experiment in these speculations based on tiny, elite subsets,
and positive evidence only! Not only will none of that scale, but even
for THAT effect, the invisible hand of peer review had to be there.
Maybe I am conceding too much if I agree that the elite don't really
need the constraint of being answerable to classical peer review. Maybe
they do (or did earlier in their careers). But in any case, surely the
rest of us do!

> Aside from a moderate fraction
> of crank submissions (something like 10 to 20%), the overwhelming
> majority were of very high quality. Authors knew of the journal
> standards, and did not bother to submit run-of-the-mill papers.
> This is just one anecdotal piece of evidence, but from what
> I have heard from other editors, it is not all that atypical.

Not atypical? How can a self-selection at the very top of the hierarchy be
typical of the whole hierarchy? Do people have an equally unerring
sense that their destiny is tier 2 rather than tier 3? And if, mirabile
dictu, they do, do their prospective self-selecting vetters also have this
unerring matching sense (especially bearing in mind that the hierarchy
gets fatter as you go down)?

>sh> Or failing that, I wish I could at least write a commentary by way of
>sh> rebuttal!

> Why don't you propose it to the editors? (BTW, mine is just one of
> several short contributions they have solicited. I have not seen
> any of the others.)

I'll be happy to propose it if you allow me. May I send the editor a copy
of these two exchanges, by way of a sketch of where we differ? (Which
journal is it, by the way, and what is the email of the editor?)

Stevan Harnad

---------- Forwarded message ----------
Date: Wed, 6 Nov 2002 13:50:28 +0000 (GMT)
From: Stevan Harnad <har...@ecs.soton.ac.uk>
To: Andrew Odlyzko <odl...@dtc.umn.edu>
Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

Dear Andrew, I think our exchange has been useful, if only to highlight
the points on which we will have to agree to disagree, and why. I will
try to summarize our four main points of disagreement, and then make
some comments on your recent replies. I will send our exchange to Fiona,
offering to cobble mine into a commentary (and suggesting that if she is
interested, she then ask you to do the same with your replies). I look
forward to breakfasting with you (and maybe Jean-Claude) on November 22.

One last question: As our differences are, I think, rather important,
because we are both very involved in the open-access movement and we
have both written a good deal about it, would you give me permission to
also post our full exchange in the American Scientist Forum? The BOAI
list on which they have appeared is not a public list, and seen only by
the dozen or so original signatories of the BOAI.

Summary of our differences:

(R1) Your primary motivation is (R) to reform research communication,
mainly through the posting of all papers online, unrefereed, and then
relying on self-selected "vetters," in place of classical peer review,
to evaluate, improve and thereby sign-post paper quality.

(A1) My primary motivation is (A) to make the classically peer-reviewed
research literature we have now openly accessible online, such as
it is, to maximize its accessibility, usage and impact. Qualified
journal editors continue to select the referees, referees and author
remain answerable to the editor, and the journal-name and its quality
track-record sign-post for potential users that an article has met
that journal's quality-standards.

(R2) You see a causal connection between A and R: A will lead to R.

(A2) I see a causal connection between worries about R and not-A:
Worries that R would compromise or destroy the quality and usability
of the peer-reviewed literature hold researchers back from doing A.

(R3) You think peer review is flawed and should be replaced by something
better.

(A3) I think peer review is no more nor less flawed than any other
area of human judgment; it is merely a systematic, answerable method
for soliciting qualified judgment, and sign-posting the outcome. If
there is a better method, then it should definitely replace classical
peer review; but no one has yet tested or demonstrated a better
method.

(R4) You think that the research advances that occur before peer review
through the online posting of pre-refereeing preprints today are evidence
that peer review is unnecessary and can be replaced by spontaneous vetting
(R) without loss (and perhaps even with a gain) in quality.

(A4) I think those research advances are only evidence that online
pre-refereeing preprint posting is a very valuable supplement to --
but not a substitute for -- classical peer review, which is still
there, unchanged, alongside all preprint posting and exchange today
(the "invisible hand" of answerability to classical peer review).
No test of what would happen without classical peer review has
been done; all your evidence is parasitic on intact peer review.

On Tue, 5 Nov 2002, Andrew Odlyzko wrote:

> We do have substantially different visions of the future of peer review,
> and they have not changed much since we first started corresponding back
> in 1993.

I agree. Nor has the evidence changed since 1993.

>sh> What I am suggesting is that if your prediction [that A will lead to R,
>sh> and that R is viable] happens to be wrong, making the prediction
>sh> anyway will have a negative, retardant effect on self-archiving and open
>sh> access. (Of course, if you are right, then concerns about these changes
>sh> will still have a negative, retardant effect on self-archiving.)

> I am not sure it will be negative. It might encourage the transformation
> by showing the path to a future in which not only information dissemination,
> but also peer review, are improved.

If you are right (and if reluctant self-archivers, worried about peer
review, believe you). But if you are not right, or not believed, then
the effect will be negative.

> I hope to hasten the transition [to open access] by pointing out that
> it is likely to improve the peer review system.

And I hope that voicing your conjecture will not have the
opposite effect.

> Sorry, but alternatives to classical peer review have been tried, and
> are constantly being tried (quite successfully, in my opinion). That
> is largely what my article "The fast evolution of scholarly communication"
> was about.

Testing your hypothesis means performing systematic, controlled
experiments on representative and sufficiently large samples of the
literature (on a sufficient time-scale) to test whether a system
consisting exclusively of self-selected vetting of completely unrefereed
papers, with no subsequent or parallel classical peer review, yields a
literature of quality comparable to its current quality. Nothing faintly
like this has been attempted yet. Online preprint posting and exchange
occur within a "parallel economy," with classical peer review still 100%
in place and active, before, during and afterward.

There is not only this total empirical gap between the data you use and
the conclusions you draw, but there are also logical gaps: You have not
replied when I have asked how, in a system where classical peer review
and journal-names with track-records are no longer there as the back-up
and bottom line -- as they are universally and without exception now --
how the annual 2,000,000 papers (which are today refereed and sign-posted
by refereed journals) will find their proper vetting, and how this will
be sign-posted for potential users? This question does not even come up
in the case of pre-refereeing preprints, because those are a "parallel
economy," backed up by the classical peer-review and then sign-posted by
the names and track-records of the journals to which just about every
one of those preprints has been submitted, and in which they will all
appear eventually (though perhaps only after several rounds of revision
and refereeing, and perhaps not always at the level of the journal to
which they were submitted first.)

How will the 2,000,000 annual papers (all? some? most? enough? which?)
find their qualified vetters? And how will potential vetters know what
to vet (and why will they want to do that faithful duty for us all under
those ad lib conditions, busy as they all are)? And what will ensure that
the papers' authors (all? some? most? enough? which?) hear and heed what
needs to be heard and heeded? And how will potential users know whether
(and when) the vetting has been heeded? You, Andrew, are imagining that,
persisting miraculously under these anarchic conditions, there will still
be something much like what we have already, with classical peer review,
whereas I am imagining the un-navigable chaos it would quickly decline
into, once no longer propped up by the invisible hand of peer review
and the sign-posting of the outcome by the journals that implement it.

One can, of course, dream up a system of answerability and classification
that would systematize and bring order into all this. But call it what
you like, it will prove to be a re-invention of something very much like
classical peer review.

> Well, yes, we don't know whether my prediction is true, but there is
> evidence for it, for example in "The fast evolution ...." Peer review
> is undergoing change right now, in front of our eyes, even though
> few are paying attention. To pretend that nothing will change seems
> really short-sighted.

I submit that nothing like the test I described above is happening now.
It is all just small local supplements to the universal system currently
backing it up.

> We get a very weak quality and usability signal from
> classical peer review, and my contention is that we can obtain many other
> signals that collectively, if not individually, can be even more useful.

Classical peer review is, as I have agreed repeatedly, fallible, being
based on human judgment. Perhaps it can be improved; it can certainly be
supplemented. But whether an alternative is truly a substitute -- and
whether it can yield a literature of at least equal quality and usability
-- must first be tested. None of the data from which you draw your
conclusions constitute such a test; they are all parasitic on classical
peer review.

> I would have to go back and dig up some old messages, but I believe there
> are several very reputable scholars who have stopped publishing in traditional
> journals.

And what follows from that? That this can be safely extrapolated to all
the reputable and not-so-reputable authors of the annual 2,000,000? Of
course there are always exceptions. If Newton were alive today, he would
have no peers, and we could all safely read every raw manuscript he
wrote. But nothing whatsoever follows from that for the 2,000,000. Such
cases (usually elite ones) simply do not scale!

> Furthermore, some estimates have been made of arXiv submissions,
> and a noticeable fraction of them do not get submitted to journals. (I can
> testify to some really outstanding papers in mathematics that are available
> only through arXiv, because their authors simply never bothered to submit
> them to journals.)

This is partly the Newton effect again (for the elite at the peak of the
distribution). As to the noticeable fraction never submitted to
journals, we need to know a bit more:

(1) How big a fraction?

(2) Apart from the Newton-subset (how big a subfraction is that?),
how does this fraction compare in quality to the rest of the papers
(i.e., those that did go on to appear in journals)?

(3) Were some (many) of these rejected by journals? revised and
resubmitted under another name? incorporated instead in later work that
was eventually accepted by a journal?

Such questions are very interesting, and we too have done and are doing
such analyses -- http://opcit.eprints.org/tdb198/opcit/ --
http://opcit.eprints.org/ijh198/ --
but these data are certainly no substitute for (or even predictive of
the outcome of) testing what would happen in the complete absence of
classical peer review (as sketched above).

> Part of our difference is probably rooted in our varying professional
> experiences. As just one example (there are others in "Tragic loss ...")
> some of the most interesting developments in mathematics over the last
> half century were in algebraic geometry (leading to several Fields Medals,
> the mathematics equivalent of a Nobel Prize). Much of that work was
> based on several thousand pages published in the un-refereed Springer
> Lecture Notes in Mathematics (the famous SGA volumes). Why was the
> field willing to rely on those? Well, it is a long story, but there
> were enough checks around (such as many people, including graduate
> student seminars going over those volumes line by line) to convince
> experts to rely on those papers. It was examples like these, definitely
> involving review by peers, but not classical peer review, that helped
> convince me that scholars could thrive under a variety of systems.
> I spend an inordinate amount of time on this subject in "Tragic loss ..."

I read it, and found it very interesting, but it does not test the
hypothesis. It merely confirms that classical peer review can be
supplemented in various ways (especially with elite researchers,
highly specialized topics with relatively few practitioners and known
to one another, and relatively fast-moving developments). It certainly
does not follow that these special-case supplements can serve as
substitutes for classical peer review, nor that they scale to research
as a whole.

Here is a logical reductio (since so much of this is a question of
scale): If research just consisted of algebraic geometry, and there were
only 6 algebraic geometers doing all the breakthrough work, we would
not need journals and peer review: We could revert to the exchange of
learned letters of Newton's day. Unfortunately (perhaps), this does not
scale to the full gaussian distribution of fields and quality represented
by the 2,000,000 annual papers published in our day.

Classical peer review evolved specifically to cope with the increase in
scale of research. Advocating posting it all to the skies and assuming
that vetting will somehow take care of itself at this scale strikes me as
unrealistic in the extreme.

In some ways, it is a pity that self-archiving and open-access began
with unrefereed preprints. It gave two wrong impressions (diametrically
opposite ones, in the event): Some people concluded, wrongly, that
self-archiving and open-access are only suitable for unrefereed
preprints, whereas refereed postprints should be toll-access. (I think
we all agree that that is not only wrong, but nonsense.) Others
concluded (equally wrongly), that unrefereed self-archived, open-access
preprints are all we need: We can dispense with the peer review and the
postprints.

How much better it would have been (but alas it is too late to redo it)
if the first ones who had "twigged" on the fact that open-access is
optimal for all research had self-archived instead their refereed
postprints, rather than their unrefereed preprints. The reason this
did not happen is that (1) most researchers then (and now) wrongly
believed that copyright law prevents them from self-archiving their
refereed research and (2) the physicists, though not silly enough to be
deterred by such copyright worries, were focussed mainly on the "growing
edge" of their work, which precedes the refereed postprint by 8-12
months. So they took to self-archiving their pre-refereeing preprints
first (though from the outset, many swapped or added the refereed
postprint 8-12 months later, once it was available).

Today it is still copyright worries that are holding back self-archiving
(except among the much more sensible physicists and mathematicians). But
a further worry has been added to retard self-archiving: that it might
destroy peer review, and hence the quality and navigability of the
research literature. And this worry has been (needlessly) encouraged by
the (incorrect) interpretations the self-archiving physicists and
mathematicians have made of what they have actually been doing, and what
follows from it. They THINK they have shown that peer review is not
necessary; in reality what they have shown is that open access is
optimal.

If self-archiving had (mirabile dictu) begun instead with refereed
postprints, we might have spared ourselves these misconstruals, and we
might have been further along the road to open access by now....

>sh> Peer review is not a gold standard, but I'm sure you will agree
>sh> that any alternative would have to ensure at least the same
>sh> standard, if not better: Do you think there is this evidence
>sh> for the promise you are holding out above?

> Yes, I do. Examples such as that of algebraic geometry (mentioned in my
> previous response above) showed me early on that there is nothing sacred
> about classical peer review. The law review system is yet another example.

I'm afraid we will have to agree to disagree about that. I have tried to
explain why those cases do not test the hypothesis, and what tests are
needed.

>sh> I honestly can't see how you imagine this scaling to the annual
>sh> 2,000,000 papers that currently appear in the classically
>sh> peer-reviewed journals! Absent the peer-reviewed journal, how
>sh> can I know that a paper has been "vetted by experts of a top
>sh> caliber"? What tells me that (as the journal-name currently
>sh> does) for those 2,000,000 annual papers? And what now gets
>sh> the right-calibre experts vetting the right papers (as editors
>sh> formerly did, when they invited them to referee?). Do experts
>sh> voluntarily spend their precious time trawling the continuum of
>sh> raw papers on the net on their own?

> I do not have the time to respond to all these points, but yes, many experts
> do "voluntarily spend their precious time trawling the continuum of raw papers
> on the net on their own." I do know many people whose day starts with a scan
> of the latest arXiv submissions. Moreover, some put a lot of effort into
> making what they find more easily digestible for others. (John Baez and his
> wonderful "This week's finds in mathematical physics" comes to mind.)

I can only repeat: This is all parasitic on a classical peer review
system that is still intact behind all of this. It does not and cannot
tell us what would happen without it.

> I am sorry, Stevan, but you are ignoring some of the most interesting
> evolutionary developments in scholarly publishing in your blind faith
> that classical peer review is the only thing that stands between us
> and chaos.

Perhaps. But it is a historical and evolutionary fact that classical
peer review is still 100% intact and in place behind all these
developments, which therefore makes them supplements to peer review,
not substitutes, until such a time as evolution or experimentation
actually tests them as substitutes.

> Open archives are not a religion to me, just a step towards
> a better scholarly communication system, which will also require changes
> in peer review.

I don't think open-access is a religion for me either (though it has
become a bit of an obsession). For me too, it is merely a means to
an end. The end, though, is the enhanced impact and interactivity, hence
productivity, of research. I don't think it's the flaws of classical
peer review that are holding that back (especially now that
pre-refereeing preprints can be made openly accessible too), but the
access-barriers of the toll-access system. Open access to it all would
solve that problem -- and it would also open the door to any
evolutionary developments along the line you are hypothesizing, if they
turn out to be adaptive -- but only after needless worries about
copyright and peer review are overcome. Our point of disagreement is only
about the advisability of needlessly exacerbating worries about peer
review while self-archiving and open access still have not prevailed.

Stevan Harnad


---------- Forwarded message ----------
Date: Thu, 7 Nov 2002 17:01:57 +0000 (GMT)
From: Stevan Harnad <har...@ecs.soton.ac.uk>
To: Andrew Odlyzko <odl...@dtc.umn.edu>
Subject: Re: Self-Selected Vetting vs. Peer Review: Supplement or Substitute?

On Wed, 6 Nov 2002, Andrew Odlyzko wrote:

>sh> (R1) Your primary motivation is (R) to reform research
>sh> communication, mainly through the posting of all papers online,
>sh> unrefereed, and then relying on self-selected "vetters," in
>sh> place of classical peer review, to evaluate, improve and
>sh> thereby sign-post paper quality.

>ao> I would not phrase it this way. I would say that my primary motivation
>ao> is indeed to reform research communication, mainly through the posting
>ao> of all papers online, unrefereed, and then relying on whatever mixture
>ao> of classical peer review, or contributions of self-selected "vetters,"
>ao> the community decides to rely on. I do not hold dogmatic views of how
>ao> peer review will be handled, and predict only rough trends.

It is gratifying to know that you are only predicting rough trends.

It is important to make it quite explicit here just how close our
positions are, so as to pinpoint exactly what it is that we disagree on
(i.e., the trends you predict):

(1) My motivation is freeing access to all research, both pre- and
post-peer review (i.e., not just unrefereed preprints), through
self-archiving.

(2) If we could (for some arbitrary reason!) free only one of the two, I
would choose the refereed final draft rather than the unrevised preprint,
but there is no reason not to free both (with a few special exceptions:
see below).

(3) Given success in convincing the world research community to free
access to their research by self-archiving both their preprints and
their postprints, we would eo ipso also have all the benefits of
"self-selected vetting" that you (and I!) both value. But this would be
a *supplement* not a *substitute* for classical peer review. (Nothing
whatever would be lost, and everything would be gained!)

(4) If I were you, and I believed self-selected vetting will eventually
replace classical peer review, I would not feel any need to add anything
to (1) - (3). Self-archiving, and the open access it brings, would be
all I would need to evangelize for. The rest (if my belief that vetting
could and would replace peer review was correct) would then take care
of itself.

(5) But self-archiving is (we both agree) taking place far too slowly.

(6) Among the reasons self-archiving is taking place so slowly are
worries that researchers have that hold them back from self-archiving.
The two main ones are:
(i) the worry that self-archiving would compromise or destroy peer
review http://www.eprints.org/self-faq/#7.Peer
and
(ii) the worry that self-archiving would violate copyright
http://www.eprints.org/self-faq/#10.Copyright

(7) As we both regard open access as optimal, inevitable, and long
overdue, we should both be concerned with relieving worries (i) and
(ii).

(8) Your belief that self-vetting will eventually replace classical
peer review is one that would *reinforce* rather than relieve
researchers' worry on that score. Hence, unless you could persuade them
not only that it will happen, but that it too is optimal, voicing your
belief (and it is only a belief, a hypothesis), even as the prediction
of a "rough trend," seems more likely to slow rather than speed the
transition to what we both agree is the optimal and the inevitable outcome
(open access).

(9) I believe strongly that your belief (that self-vetting will replace
classical peer review) is wrong, and that there is no trend, rough
or smooth, in that direction, nor will there be. (I believe that, not
dogmatically, but for the very concrete reasons I have repeatedly adduced
in this series of exchanges.) But if someone like me -- who believes fully
in self-archiving and the transition to open-access, and disbelieves in
any causal connection between that and any risk to classical peer
review -- is not persuaded by your contrary belief, nor the arguments
you adduce in its support, then how likely is it that someone who does
not yet believe in self-archiving *and* worries that it would be a risk
to peer review will be emboldened (to self-archive) by your hypothesis
(even formulated as a "rough trend")?

(10) The optimality of open access to the research literature is a
certainty, not a hypothesis. We both agree about that, and about the ample
evidence that it maximizes research visibility, accessibility, uptake,
usage, citation, and impact, as well as scope, speed, and interactivity,
in short, that it greatly benefits research and researcher productivity.

(11) The transition from classical peer review to self-selected vetting,
in contrast, is merely a hypothesis (and one with -- in my view -- much
a priori evidence and many reasons to conclude that it is incorrect, but
never mind) -- a hypothesis that, on the face of it, looks as if it
would retard the transition to open access through self-archiving by
reinforcing the worries of those who do not self-archive precisely
because they are afraid it would destroy classical peer review!

(11b) So why even voice the hypothesis? For if it is true, then it
will be a causal consequence of self-archiving anyway; hence, since
self-archiving is the necessary and sufficient condition for it,
everything should be done to hasten self-archiving. Yet voicing this
(fallible) hypothesis is very likely to slow, rather than hasten
self-archiving.

(12) As to my (arbitrary, desert-island) forced choice of open access
for the peer-reviewed final draft over the unrefereed preprint, not only
is it not a choice anyone has to make (since there is in most cases no
reason not to have both), but I even have an explicit commitment to the
self-archiving of unrefereed preprints, for two reasons:

(13) The first benefit of self-archiving the unrefereed preprint is one
on which we both agree -- that it hastens and strengthens the progress,
interactivity and self-correctiveness of research.

(14) The second benefit of self-archiving the unrefereed preprint is
that it is a means of legally self-archiving even in the face of the
most restrictive copyright transfer agreement: The self-archiving of the
preprint pre-dates any agreement or even any submission to a journal;
and after submission, refereeing, revision, and acceptance, if the
copyright agreement forbids self-archiving the final draft, one need only
self-archive the corrections that would have to be made by the user on
the already public preprint to make it conform to the final draft.
http://www.eprints.org/self-faq/#publisher-forbids

(15) Hence, besides being highly beneficial in its own right, the
self-archiving of the unrefereed preprint is an ally in ensuring open
access to the contents (if not the form) of the refereed final draft too.

(16) So we agree on the importance and sure benefits of self-archiving
preprints and we agree on the importance and sure benefits of open
access. We disagree only on your hypothesis that self-archiving will
eventually lead to the replacement of classical peer review by
self-selected vetting (or that there is any "rough trend" in that
direction).

(17) Yet we agree that even if the hypothesis is correct, all it needs
is self-archiving -- and we both believe fully in the importance and
benefits of self-archiving.

(18) Perhaps what self-archiving needs now is facts rather than
hypotheses. The benefits of self-archiving are demonstrated facts,
whereas your hypothesis remains completely untested (and the likelihood
of success for any eventual experiment would seem to be contradicted by
a number of prima facie logical and practical considerations that I have
repeatedly listed in these exchanges).

(19) There is only one remaining problem with emphasizing (as you do)
only the self-archiving of pre-refereeing preprints, rather than also
the refereed postprints:

(20) There are special cases -- I couldn't say how many, or how their
distribution varies by field, but certainly a non-zero number, as the
"preprint culture" of physics is by no means universal yet -- in which
the author would prefer not to make his research public before it is
(classically) peer-reviewed. (There are also cases, particularly in
the medical literature, where the public posting of unrefereed findings
might represent a danger to public health.)
http://www.nih.gov/about/director/ebiomed/com0509.htm#harn45

(21) This is a further reason for emphasizing the self-archiving of the
refereed postprint. Otherwise, such authors/papers are also lost to
self-archiving altogether.

(22) If such authors go on to sign over-restrictive copyright transfer
agreements after refereeing then they may still be lost to self-archiving
for the time-being, because for them the preprint-plus-corrigenda strategy
above (14) will not work (there being no already-archived preprint);
but there seems no reason to lose open access to the research of all
preprint non-archivers by restricting self-archiving to preprints alone,
rather than preprints and postprints.

(23) Finally, many sceptics about the benefits of open access will only
be won over once the refereed, published postprints are openly
accessible, and not merely the unrefereed preprints.

(24) Having said all this, I have sufficient confidence in the
self-correctiveness of open online communication (such as this exchange)
not to worry too much about the possible untoward effects of the airing
of your hypothesis on those who are reluctant to self-archive because
of worries about peer review. As long as both sides are aired, let us
trust the outcome to (self-corrective) human judgment. (I hope Fiona
Godlee will agree to co-publish my reply along with your hypothesis in
the collection in which it will appear. Thanks for agreeing to post this
exchange to the American Scientist Forum.)

>sh> (A1) My primary motivation is (A) to make the classically peer-reviewed
>sh> research literature we have now openly accessible online, such as
>sh> it is, to maximize its accessibility, usage and impact. Qualified
>sh> journal editors continue to select the referees, referees and author
>sh> remain answerable to the editor, and the journal-name and its quality
>sh> track-record sign-post for potential users that an article has met
>sh> that journal's quality-standards.
>sh>
>sh> (R2) You see a causal connection between A and R: A will lead to R.
>
>ao> Yes, provided that (A) involves open access to preprints. (The way you
>ao> have phrased (A), it could encompass a system in which the Ingelfinger
>ao> rule would be universal, it's just that after publication by a journal,
>ao> articles would be freely available. That form of (A) would not yield
>ao> to (R).)

I agree, and I hope that what I wrote above now shows that this is not
at all what I meant. I am both strongly in favor of the self-archiving of
preprints and strongly opposed to the Ingelfinger rule and have published
critiques of it. However, I am at least equally in favor of the
self-archiving of the refereed postprints too:

Harnad, S. (2000) Ingelfinger Over-Ruled: The Role
of the Web in the Future of Refereed Medical Journal
Publishing. Lancet Perspectives 256 (December Supplement): s16.
http://cogprints.soton.ac.uk/documents/disk0/00/00/17/03/

Harnad, S. (2000) E-Knowledge: Freeing the Refereed
Journal Corpus Online. Computer Law & Security Report
16(2) 78-87. [Rebuttal to Bloom Editorial in Science
and Relman Editorial in New England Journal of Medicine]
http://cogprints.soton.ac.uk/documents/disk0/00/00/17/01/

>sh> (R4) You think that the research advances that occur before peer review
>sh> through the online posting of pre-refereeing preprints today are evidence
>sh> that peer review is unnecessary and can be replaced by spontaneous vetting
>sh> (R) without loss (and perhaps even with a gain) in quality.
>
>ao> Not that "peer review is unnecessary," but that "classical peer review is
>ao> unnecessary." I would also quarrel with the phrasing of the last part
>ao> of this point, but will let it go for lack of time.

It would be useful to know precisely what you mean -- especially since
this exchange is really meant to clarify things for those who may be
worried that self-archiving poses a risk to classical peer review. I am
convinced that there is no causal connection whatsoever between
self-archiving and changes in classical peer review (except that online
pre-refereeing feedback is a useful supplement to peer review, and that
the online medium will make it possible to implement classical peer
review more quickly, cheaply, efficiently and equitably).

Hence there is no reason to refrain from self-archiving because of
worries about peer review: http://www.eprints.org/self-faq/#7.Peer

On the other hand, a system consisting exclusively of self-selected
online vetting is no form of peer review, classical or otherwise (except
in those cases where it happens to coincide with it by chance!).

>sh> There is not only this total empirical gap between the data you
>sh> use and the conclusions you draw, but there are also logical
>sh> gaps: You have not replied when I have asked how, in a system
>sh> where classical peer review and journal-names with track-records
>sh> are no longer there as the back-up and bottom line -- as they are
>sh> universally and without exception now -- how the annual 2,000,000
>sh> papers (which are today refereed and sign-posted by refereed
>sh> journals) will find their proper vetting, and how this will be
>sh> sign-posted for potential users? This question does not even come
>sh> up in the case of pre-refereeing preprints, because those are a
>sh> "parallel economy," backed up by the classical peer-review and
>sh> then sign-posted by the names and track-records of the journals to
>sh> which just about every one of those preprints has been submitted,
>sh> and in which they will all appear eventually (though perhaps only
>sh> after several rounds of revision and refereeing, and perhaps not
>sh> always at the level of the journal to which they were submitted
>sh> first.)
>
>ao> I did not reply because I did not have time to reply to all of your
>ao> points. Since you make this such a central point, let me respond
>ao> now (although very briefly and so inadequately). How will all
>ao> those papers "find their proper vetting"? Well, how do they find
>ao> proper refereeing now, under your vaunted classical peer review?

By being submitted to a journal, whose editor (presumably a recognized
expert in the field) is responsible for (1) selecting qualified referees
(busy, reluctant, but willing-if-asked by the right journal/editor),
(2) deciding which of the referees' recommendations are valid and need
to be satisfied in order to meet the journal's established quality
standards, and (3) making sure the accepted, final draft has satisfied
them. The result is then recognizably (4) tagged (sign-posted) as
having been thus quality-controlled by the journal-name (and associated
track-record). That's all there is to classical peer review: Human expert
judgment, systematized, answerable, and reliably labelled accordingly.

How does anarchic self-selected vetting ensure an equivalent outcome,
and how is it to be sign-posted?

>ao> We know that serious frauds like the Jan Hendrik Schoen slip through.

I have replied about fraud in earlier postings. In brief, fraud will
be found out anyway, because one cannot build on it, and research
progress is about building cumulatively on findings, not merely reporting
them. But, in any case, with open-access preprints and postprints as
a supplement to peer review, all the benefits of the extra lines of
defence are there, over and above peer review, without any need to change
classical peer review in any way (other than to make it faster and
more efficient in finding and reaching referees and in distributing the
refereeing load more broadly and evenly).

Re: A Note of Caution About "Reforming the System"
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2322.html

>ao> We also have plenty of evidence that lots of simply
>ao> not very solid science that is not fraudulent gets through.

Yes, human judgment, even expert peer judgment, is fallible. And
supplementing its systematic, answerable, labelled application with
open feedback will be very useful: But why construe this supplement
as a substitute? Why would it not simply co-exist with the classical
line of defence?

>ao> (I don't have time to dig up references, but there was a paper
>ao> quite a while ago that looked at the statistical methodology used
>ao> in a large sample of medical papers. It found a horrendously high
>ao> rate of misapplications of statistics.

Horrendous compared to what? Compared to how many would have been found
if the raw preprints had all simply been publicly posted, hoping that
the right vetters would find and test them all for us all spontaneously?
What evidence is there that they could or would, without the systematic
mediation of someone who was qualified and responsible for the outcome
(such as a journal editor)?

Or would attention have to be drawn by someone having poisoned
himself on the basis of an unrefereed remedy first?
http://www.nih.gov/about/director/ebiomed/com0509.htm#harn45

On the other hand, to repeat, open feedback on an open access classically
refereed literature, pre-, during, and post- refereeing, would certainly
be a welcome second line of defence, and would no doubt help improve
the quality of the research literature.

>ao> There are lots more examples.) The point is that classical peer
>ao> review does not provide much of a signal, especially for journals
>ao> in the lower quality tiers.

And you think self-selected feedback would provide at least as much of a
"signal," especially for journals in the lower quality tiers?

(Sometimes I think what you are saying is that the elite work does not
really need peer review and for the rest it doesn't really matter...!)

>ao> So how does science progress? Well, there are all sorts of checks
>ao> that are applied post-publication. (And none of them are infallible.
>ao> Even a few Nobel prizes are now regarded as having been given in error.)
>ao> Basically classical peer review is just one noisy and uncertain signal
>ao> that the scholarly community relies on.

Classical peer review is not a signal; it is a dynamic, interactive
quality-control and tagging system -- and the only one that is systematic
and answerable. I cannot see any way that anarchic self-selected feedback
can replace this (other than by re-inventing classical peer review under
another name) -- though I can see how it can (and does) complement it.

>sh> If self-archiving had (mirabile dictu) begun instead with refereed
>sh> postprints [rather than unrefereed preprints], we might have
>sh> spared ourselves these misconstruals [about peer review] and
>sh> we might have been further along the road to open access by now...
>
>ao> The incentives were not there to do this. The authors, who after all
>ao> control the information flow, could see the benefits to themselves
>ao> of quick circulation of preprints. Open access to published journal
>ao> articles was of much less value to them, since they typically had access
>ao> to those journals in their libraries.

I agree that those were the initial conditions that made it a
historical fact that self-archiving en masse began with physicists,
who posted their pre-refereeing preprints first (though soon their
published postprints too). They were already a (paper) "preprint
culture" and (the elite among them) did not lack access to the
toll-based journal literature.

That's why I said it would have been "mirabile dictu" if it had begun
instead with refereed postprints. But that is all history now, and our
eyes are opened, and the data are in, and it should now be clear that open
access to the entire research literature, before and after refereeing,
is what will be optimal for all. And that what peer-reviewed research
needs now is to be freed from access-blocking tolls, not from
quality-controlling peer review.

Stevan Harnad

