GPeerReview vs open peer commentary systems

mikeg...@gmail.com

Feb 11, 2009, 7:45:31 AM2/11/09
to GPeerReview
A lot of people have been asking how this system differs from websites
that let people comment on pre-publication releases hosted on arXiv.org
and similar sites. I'd like to clear this up.

The purpose of peer commentary sites is to sample how "the community"
feels about a publication. In a sense, they are like a democracy in
which only experts are allowed to vote. GPeerReview, by contrast,
encourages authors to present their ideas with as much support as they
can muster. This is a substantial difference: GPeerReview lets
authors pick and choose the reviews they like. In a
peer commentary system, you can tell if the whole community likes or
dislikes something because it will have a high or low average score.
With GPeerReview, every paper will have a high average score!

So how is that better? Think about some of the truly great ideas of
the past. How many of them would have survived a democratic rating
process? The only thing you get when you measure the mean reaction to
anything is a measure of how mainstream it is. The really bad ideas
tend to be non-traditional. The really good ideas do too. Promoting
whatever is already popular and well-accepted is neither useful nor
productive in science. On the other hand, the GPeerReview method
doesn't care about the mean reaction. It cares how much credible
support the author can find for his idea. This is a far more natural
way for innovation to operate, and it will enable ideas with merit to
prosper.

Suppose an idea has the support of twenty brilliant experts, but the
community at large thinks it's a bad idea and a waste of time. Should
this idea be judged to have no merit and terminated? Certainly not!
Let the experts promote the idea and see if it succeeds. Science
should be a meritocracy, not a democracy.

When you evaluate a paper with GPeerReview, you cannot just average
the review scores. You must follow the links to see *WHO* signs off on
the ideas. This is why the links must be followable automatically,
and why they must be hyperlinks to actual reviews, not just citations
naming a journal that will try to sell you more information. Bad
ideas will have lots of support from
people who have no path to a community's center, or who have mostly
circular references. Good ideas will receive the backing of people who
have a history of having good ideas, or of recognizing good ideas.
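
To make this concrete, here is a rough sketch of what automated
link-following could look like (Python; the field names "reviews",
"reviewer", and "score" are just illustrative assumptions, not
GPeerReview's actual format). The point is that an evaluator gathers
the set of people who signed off on a paper instead of computing an
average score.

import json
import urllib.request

def fetch(url):
    # Assumes each paper and review is published as a small JSON document.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def endorsers(paper_url, max_depth=2):
    # Walk the review links reachable from a paper and collect WHO signed
    # off. The evaluator then judges those reviewers' standing and
    # independence (e.g. no purely circular endorsements), not a mean score.
    seen, found = set(), {}
    frontier = [(paper_url, 0)]
    while frontier:
        url, depth = frontier.pop()
        if url in seen or depth > max_depth:
            continue
        seen.add(url)
        doc = fetch(url)
        if "reviewer" in doc:          # this document is itself a review
            found[doc["reviewer"]] = doc.get("score")
        for link in doc.get("reviews", []):
            frontier.append((link, depth + 1))
    return found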

Peer review has survived this far in academia because we haven't asked
for a mean consensus from the community for every idea. We've had a
small number of referees look at the paper and decide if it is good.
But this is just hit-and-miss. GPeerReview lets authors seek support
from experts, the way it should be done.

Another significant difference from peer commentary systems is the
motivation for performing reviews. In a peer commentary system,
getting people to write reviews is like pulling teeth. Such sites
often have to force people to review randomly assigned papers in
order to obtain a sufficient number of reviews for any paper. This is
a terrible thing
to do to the concept of peer review. The result is that only people
with lots of time on their hands will decide to participate in the
open peer review sites. These are exactly the people whose reviews you
don't really want.

With GPeerReview, the reviews you make will be out there in the open
with your name on them to show that you are an active player in
helping to advance the science. The motivation for writing reviews is
the same as the motivation for writing papers--it's something the
community needs. This is the right motivation, and it will encourage
better, more thought-out reviews. Contrast this with the reviews that
peer commentary systems produce.

The motivation for reviews must be the same as for writing papers.
This is the only way we'll get people's best work in both the papers
and the reviews. When that starts happening, the most meritorious
ideas will begin to be properly recognized, and science will advance
as fast as researchers work to make it advance. We've got to stop
hindering our progress with these
terrible voting mechanisms and external review requirements that only
promote mainstream ideas and mediocre innovations.

(Sorry for writing such a long one. I couldn't sleep until I had said
all this. I'll try to keep my future posts shorter.)

Niko

Feb 11, 2009, 11:54:23 AM2/11/09
to GPeerReview
thanks for this post, i was wondering about this point.

i agree with a lot of your views, but i think limiting the system to
author-initiated recommendations or endorsements is not helpful. you
have a point about a democratic or score-averaging system potentially
failing to promote truly original ideas. but there is a counterpoint
that has equal merit: there needs to be a mechanism for criticism and
debunking of false claims.

we need a system for open post-publication peer review. as in the
current peer review system, there would be editor-elicited reviews to
start the process and to somewhat balance the biases of the first few
reviewers. the editors could be asked to serve this function
by the authors -- thus editing will imply a tentatively positive
attitude from the editor (which should partially address your
concerns).

in addition, the system can accommodate author-initiated reviews,
which are likely to be more open to the ideas presented in the paper
and less critical.

every reviewer rates the paper on multiple dimensions. something like
a minimum of five dimensions (e.g. justification of claims based on
empirical evidence, justification of claims based on theoretical
argument, originality, potential positive impact, overall quality).
the list of dimensions itself must be freely extensible.

crucially, i agree with you that the overall score of the paper must
not be an average. instead every user can define his or her own paper
evaluation function. for example, the user might be like you and feel
that score averaging fails to highlight truly original developments.
in that case he could restrict his analysis to author-initiated
reviews. there will be an open competition for the best evaluation
formulae. formulae will differ in how they weight the different scales
and how they weight different reviewers. reviewer rating distributions
could be used to normalize ratings and to ensure that every individual
scientist influences the process about equally (thus writing more
reviews will not increase a reviewer's influence, but only spread it
more thinly). publication success could also be a useful metastatistic for
weighting reviews -- although there might be arguments against this.
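
a toy sketch of what such a user-defined evaluation function might
look like (python; the field names, the five dimensions, and the
z-score normalization are just illustrative assumptions, not any
agreed format):

from statistics import mean, stdev

# illustrative weights over the five dimensions mentioned above
DIM_WEIGHTS = {"empirical": 1.0, "theoretical": 1.0,
               "originality": 2.0, "impact": 1.0, "overall": 1.0}

def normalize(score, reviewer_history):
    # z-score a rating against the reviewer's own rating distribution, so
    # generous and harsh reviewers influence the outcome about equally
    if len(reviewer_history) < 2:
        return 0.0
    mu, sigma = mean(reviewer_history), stdev(reviewer_history)
    return 0.0 if sigma == 0 else (score - mu) / sigma

def evaluate(reviews, reviewer_histories, only_author_initiated=False):
    # one possible formula: a weighted sum of normalized per-dimension
    # scores; a user worried about averaging drowning out original work
    # could set only_author_initiated=True
    total, weight_sum = 0.0, 0.0
    for r in reviews:
        if only_author_initiated and r.get("solicited_by") != "author":
            continue
        history = reviewer_histories.get(r["reviewer"], [])
        scores = r.get("scores", {})
        for dim, w in DIM_WEIGHTS.items():
            if dim in scores:
                total += w * normalize(scores[dim], history)
                weight_sum += w
    return total / weight_sum if weight_sum else None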

so in sum, as we move toward an open system for scientific publishing,
let's not throw out the baby with the bathwater: peer review is the
best evaluation mechanism there is, editor-selected reviewers will
often do a good job, and we must enable criticism and debunking along
with praise and support. however, we need to move from the current
secretive, opaquely journal-controlled pre-publication reviewing to
open post-publication reviewing, and give reviewers credit for their
work (as you suggest) -- unless they prefer to remain anonymous.

--nikolaus kriegeskorte

ps: sorry that my previous post appeared as a separate discussion
thread -- i meant to reply within the project merit discussion (feel
free to copy my post there if you can).

Mike Gashler

Feb 11, 2009, 2:45:46 PM2/11/09
to gpeer...@googlegroups.com
That's a good point that there needs to be a mechanism to debunk false claims. Would it solve that problem if reviewers were simply allowed to revise or recant their endorsement of an idea? (That might open a can of worms wrt having no centralized certificate authority, though. I'd have to look deeper at how GnuPG handles invalidating keys.)
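
For illustration only (not a settled design): without a central
certificate authority, a recantation could itself be a signed
publication that references the hash of the original review and is
signed with the reviewer's existing GnuPG key -- similar in spirit to
GnuPG's revocation certificates, but scoped to one review rather than
the whole key. A rough Python sketch, assuming gpg is installed and
the reviewer's key is already in their keyring (file names and key id
are hypothetical):

import hashlib
import subprocess

def make_retraction(original_review_path, reason, out_path="retraction.txt"):
    # The retraction names the original review by its SHA-256 digest.
    digest = hashlib.sha256(open(original_review_path, "rb").read()).hexdigest()
    with open(out_path, "w") as f:
        f.write("RETRACTS: sha256:%s\nREASON: %s\n" % (digest, reason))
    return out_path

def sign_detached(path, signer):
    # Produces path + ".asc" -- a detached ASCII-armored signature that
    # anyone can check with "gpg --verify".
    subprocess.run(["gpg", "--armor", "--detach-sign",
                    "--local-user", signer, path], check=True)

# Example usage:
# retraction = make_retraction("review_1234.txt", "claim later debunked")
# sign_detached(retraction, "reviewer@example.org")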

Niko

Feb 11, 2009, 7:35:22 PM2/11/09
to GPeerReview
mike,

the same technology that can allow author-solicited reviews can also
allow editor-solicited and unsolicited reviews. why limit the
generality of the system at the outset? the review-analysis step can
then select which reviews feed into paper evaluation. i've elaborated
on this here:
http://futureofscipub.wordpress.com/.

from a strategic point of view, it does not seem wise to ask people to
accept two independent new ideas at once. your tool could also power
general open post-publication peer review, right? so the other
question can be decided later on.

by the way, i agree with your ultimate goal statement -- very well
put. the beauty of the ideas we're exploring is that they don't
require a sudden change to a new system. rather they build on the
existing system and add alternatives, which will open the way toward
free scientific publishing...

--niko

Ultimate Goal
We intend for the peer-review web to do for scientific publishing
what the world wide web has done for media publishing. As it becomes
increasingly practical to evaluate researchers based on the reviews of
their peers, the need for centralized big-name journals begins to
diminish. The power is returned to those most qualified to give
meaningful reviews: the peers. As long as big journals provide a
useful service, this tool will only enhance their effectiveness. But
the more they take months to review our publications, and the more
they give unqualified reviews, and the more they force us to clear
irrelevant hurdles prior to publication, and the more they lock up our
works behind fees and copyright transfers, the more this tool will
provide an alternative to their services.


Niko

Feb 11, 2009, 7:42:45 PM2/11/09
to GPeerReview
regarding revising reviews (and papers): there definitely needs to be
a mechanism for this.

my ideas on revisions are as follows:
- reviews and papers can largely be handled the same way: both are
(typically signed) "scientific publications".
- a publication can be revised: the author submits the revision with a
reference to the original.
- the complete history of all revisions (including the original
version) is always accessible. but the latest revision takes
precedence: it is the only thing the typical user will see and the
only thing the typical paper evaluation function will analyze.
- publications can be withdrawn by submitting an empty document as a
revision.
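
a minimal sketch of this revision model (python; the class and field
names are illustrative only, not part of any agreed format):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Publication:
    doc_id: str
    body: str                      # empty body => the publication is withdrawn
    revises: Optional[str] = None  # doc_id of the version this one replaces

class Archive:
    def __init__(self):
        self.docs = {}        # doc_id -> Publication (full history is kept)
        self.successor = {}   # doc_id -> doc_id of the revision that replaces it

    def submit(self, pub):
        self.docs[pub.doc_id] = pub
        if pub.revises:
            self.successor[pub.revises] = pub.doc_id

    def latest(self, doc_id):
        # the latest revision takes precedence: this is what a typical reader
        # or paper evaluation function would look at
        while doc_id in self.successor:
            doc_id = self.successor[doc_id]
        return self.docs[doc_id]

    def history(self, doc_id):
        # the complete chain of revisions, oldest first, remains accessible
        chain = [self.docs[doc_id]]
        while chain[-1].doc_id in self.successor:
            chain.append(self.docs[self.successor[chain[-1].doc_id]])
        return chain

    def is_withdrawn(self, doc_id):
        return self.latest(doc_id).body == ""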

--niko



Mike Gashler

Feb 11, 2009, 9:48:33 PM2/11/09
to gpeer...@googlegroups.com
I think I had read your post too quickly and missed your point. I'm sorry about that. I definitely agree that there is no reason to impose artificial limitations on the use of this tool. Further, you are exactly right that it's to our advantage to mesh with existing systems as much as possible. I sometimes get too excited about my idealistic vision for the future and forget that practical people need to hear how this can be useful in the short term, too.