Indeed, if performing reviews was widely recognized as a useful
contribution to the science, then researchers would have the same
incentive to write thorough reviews as they currently have to publish
papers. It would serve as evidence that a researcher has breadth of
knowledge, and would be something one could point out on a c.v.
One reason I think researchers are hesitant to adopt a scoring mechanism
is that poorly implemented ones expose them to a lot of risk. To
illustrate, consider the example of when Joseph Fourier first proposed
the Fourier Transform. His reviewers rejected the paper as
incomprehensible. This was not a case of incompetent referees: they were
mathematicians of no less stature than Lagrange and Gauss! Further, I do not think
that this example represents an outlier case in science. Very good ideas
are often rejected. Good ideas are difficult to recognize because they
are non-mainstream. Further, the best reviewers are often too busy to
spend a lot of time thoroughly understanding every idea. Fortunately,
the journal system enables dedicated researchers to keep submitting
their ideas until they are recognized, and without paying a
credibility-penalty for every rejection. I think it is important to
ensure that any scoring mechanism also have this property.
There is a simple solution: Allow authors to discard any review that
they do not like. (This is effectively what happens under the current
system to reviews that result in rejection: authors only mention
acceptances on their resumes.) There is no need to formally identify the bad papers.
Every paper is naturally assumed to be poor until it obtains good
reviews from credible reviewers, or a lot of citations. A mechanism
allowing authors to discard unfavorable reviews would make researchers
less afraid to use a new system because they would know that it could
not hurt them. The last thing a researcher wants to do is expose himself
to the possibility of having incompetent, negligent, or busy reviewers
destroy his career.
Unfortunately, such a mechanism also makes it difficult to assign a
single score to each paper. This is why GPeerReview does not use a
particular metric. Instead, it allows the
reader/employer/grant-proposal-referee to design their own algorithm
that will analyze the graph of papers/reviews/reviewers according to
their own priorities. Further, this makes the system much more difficult
to game, and it means we aren't trying to solve so many problems in one
shot.
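To make the idea concrete, here is a minimal sketch of what one such reader-defined analysis might look like. Everything here is a hypothetical illustration, not part of GPeerReview itself: the data, the reviewer-credibility numbers, and the weighting rule are all assumptions standing in for whatever priorities a reader, employer, or grant referee actually has.

```python
# Hypothetical sketch of a reader-defined scoring algorithm over an
# endorsement graph. GPeerReview does not prescribe any metric; the
# structures and weights below are illustrative assumptions only.

from collections import defaultdict

# Assumed data: each reviewer maps to the papers they have endorsed.
# Discarded (unfavorable) reviews simply never enter this graph, so
# they cannot lower a paper's score.
endorsements = {
    "alice": ["paper1", "paper2"],
    "bob":   ["paper1"],
    "carol": ["paper2", "paper3"],
}

# One possible priority: trust each endorsement in proportion to the
# reviewer's credibility. Here these are fixed numbers; a fancier
# analysis might derive them recursively from the same graph.
credibility = {"alice": 0.9, "bob": 0.5, "carol": 0.7}

def score_papers(endorsements, credibility):
    """Sum credibility-weighted endorsements for each paper."""
    scores = defaultdict(float)
    for reviewer, papers in endorsements.items():
        for paper in papers:
            scores[paper] += credibility.get(reviewer, 0.0)
    return dict(scores)

print(score_papers(endorsements, credibility))
```

A different reader could swap in an entirely different `score_papers` (counting citations, discounting self-endorsements, and so on) without any change to the underlying graph, which is the point: the mechanism records endorsements, and the interpretation stays with whoever is reading them.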
> mikeg...@gmail.com wrote:
>
> Interesting. I've not yet seen any document-ranking systems that
> were also decentralized. I'm having some trouble understanding how
> that could work. (Also, I use Linux, and OpenOffice does a very
> poor job of rendering your PowerPoint presentation, so I probably
> missed some important details.) Do you also use digital signatures
> to prevent people from hacking the protocol to boost their
> rankings? How do you handle people that create multiple accounts?
>
> Are you at all interested in debating our differences and then
> trying to work together? From what I can understand, here's how
> our projects seem to overlap:
>
> Problems we both appear to attack:
>
> * Articles/papers/works should receive multiple endorsements,
> not just one journal to back them.
> * Peers, not just journals, should be empowered to review
> papers and give endorsements.
>
> Problems you appear to attack that we do not:
>
> * Distributed document storage
> * Document ranking
> * Reviewer/annotator credibility ranking
> * A convenient web interface
>
> Problems we attack that I didn't see mentioned in your system:
>
> * Integration with existing journal system