thanks for this post, i was wondering about this point.
i agree with a lot of your views, but i think limiting the system to
author-initiated recommendations or endorsements is not helpful. you
have a point about a democratic or score-averaging system potentially
failing to promote truly original ideas. but there is a counterpoint
that has equal merit: there needs to be a mechanism for criticism and
debunking of false claims.
we need a system for open post-publication peer review. as in the
current peer review system, editor-elicited reviews will start the
process and help balance the biases of the first few reviewers
somewhat. authors could ask editors to serve this function -- thus
editing would imply a tentatively positive attitude on the editor's
part (which should partially address your concerns).
in addition, the system can accommodate author-initiated reviews,
which are likely to be more open to the ideas presented in the paper
and less critical.
every reviewer rates the paper on multiple dimensions -- a minimum of
five, say (e.g. justification of claims based on empirical evidence,
justification of claims based on theoretical argument, originality,
potential positive impact, overall quality). the list of dimensions
itself must be freely extensible.
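to make this concrete, here is a minimal sketch of what such an extensible review record could look like -- all field names, dimension names, and values are hypothetical illustrations, not a specification:

```python
# hypothetical sketch: a review as a plain dict whose "scores" map
# holds one entry per dimension, so the dimension list stays open
review = {
    "reviewer": "reviewer_a",       # assumed reviewer identifier
    "initiated_by": "editor",       # "editor" or "author"
    "scores": {
        "empirical_justification": 7,
        "theoretical_justification": 6,
        "originality": 9,
        "potential_positive_impact": 8,
        "overall_quality": 7,
    },
}

# extensibility: a later reviewer can introduce a new dimension
# without any change to the schema
review["scores"]["reproducibility"] = 5
```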
crucially, i agree with you that the overall score of the paper must
not be an average. instead, every user can define his or her own paper
evaluation function. for example, a user might share your view that
score averaging fails to highlight truly original developments; in
that case, he or she could restrict the analysis to author-initiated
reviews. there will be an open competition for the best evaluation
formulae. formulae will differ in how they weight the different scales
and how they weight different reviewers. reviewer rating distributions
could be used to normalize ratings and to ensure that every individual
scientist influences the process about equally (thus writing more
reviews will not increase a reviewer's influence, but only spread it
more thinly). publication success could also be a useful
meta-statistic for weighting reviews -- although there might be
arguments against this.
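here is a minimal sketch of one such user-defined evaluation formula -- all review data, dimension names, and weights are illustrative assumptions, not part of any existing system. it z-scores each rating against the reviewer's own rating distribution (so harsh and lenient reviewers count equally), then averages a weighted sum over dimensions, optionally restricted to author-initiated reviews:

```python
from collections import defaultdict
from statistics import mean, pstdev

# illustrative reviews of one paper (reviewer names, initiation
# labels, and scores are all made up for this sketch)
reviews = [
    {"reviewer": "a", "initiated_by": "editor",
     "scores": {"originality": 9, "overall_quality": 6}},
    {"reviewer": "b", "initiated_by": "editor",
     "scores": {"originality": 6, "overall_quality": 8}},
    {"reviewer": "c", "initiated_by": "author",
     "scores": {"originality": 8, "overall_quality": 8}},
]

def normalized(reviews):
    """z-score each rating against the reviewer's own rating
    distribution, so each reviewer influences the result equally."""
    pools = defaultdict(list)
    for r in reviews:
        pools[r["reviewer"]].extend(r["scores"].values())
    out = []
    for r in reviews:
        pool = pools[r["reviewer"]]
        mu, sd = mean(pool), pstdev(pool) or 1.0  # guard constant pools
        out.append({**r, "scores": {d: (v - mu) / sd
                                    for d, v in r["scores"].items()}})
    return out

def evaluate(reviews, weights, author_initiated_only=False):
    """one possible evaluation formula: a weighted sum of normalized
    dimension scores, averaged over the selected reviews."""
    selected = [r for r in normalized(reviews)
                if not author_initiated_only
                or r["initiated_by"] == "author"]
    if not selected:
        return None
    return mean(sum(weights.get(d, 0.0) * v
                    for d, v in r["scores"].items())
                for r in selected)
```

a user who shares the concern about averaging could, for instance, call `evaluate(reviews, {"originality": 2.0, "overall_quality": 1.0}, author_initiated_only=True)` to weight originality heavily and count only author-initiated reviews.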
so in sum, as we move toward an open system for scientific publishing,
let's not throw out the baby with the bathwater: peer review is the
best evaluation mechanism there is, editor-selected reviewers will
often do a good job, and we must enable criticism and debunking along
with praise and support. however, we need to move from the current
secretive, opaquely journal-controlled pre-publication reviewing to
open post-publication reviewing, and give reviewers credit for their
work (as you suggest) -- unless they prefer to remain anonymous.
--nikolaus kriegeskorte
ps: sorry that my previous post appeared as a separate discussion
thread -- i meant to reply within the project merit discussion (feel
free to copy my post there if you can).