Have a look at ScoreFinder

Aaron Harwood

Jul 22, 2009, 10:42:40 PM
to gpeer...@googlegroups.com

Hi,

We have been working on a similar system:

http://project.guiguan.net/ScoreFinder/

--aaron

mikeg...@gmail.com

Jul 23, 2009, 2:23:10 PM
to gpeer...@googlegroups.com
Interesting. I've not yet seen any document-ranking systems that were also decentralized. I'm having some trouble understanding how that could work. (Also, I use Linux, and OpenOffice does a very poor job of rendering your PowerPoint presentation, so I probably missed some important details.) Do you also use digital signatures to prevent people from hacking the protocol to boost their rankings? How do you handle people that create multiple accounts?
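
(For context: GPeerReview uses digital signatures on endorsements to keep them from being forged. A rough sketch of the general idea, not our actual format, assuming Ed25519 keys from Python's 'cryptography' package:)

    # Sketch only -- illustrative, not GPeerReview's actual endorsement format.
    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_endorsement(paper_bytes, comment, private_key):
        # Bind the endorsement to the exact bytes of the paper being endorsed.
        endorsement = {
            "paper_sha256": hashlib.sha256(paper_bytes).hexdigest(),
            "comment": comment,
        }
        payload = json.dumps(endorsement, sort_keys=True).encode()
        return endorsement, private_key.sign(payload)

    # Anyone with the reviewer's public key can check that this reviewer really
    # endorsed this exact document, so rankings built on endorsements cannot be
    # inflated without the reviewer's private key.
    key = Ed25519PrivateKey.generate()
    endorsement, sig = sign_endorsement(b"...paper bytes...", "Sound results.", key)
    key.public_key().verify(sig, json.dumps(endorsement, sort_keys=True).encode())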

Are you at all interested in debating our differences and then trying to work together? From what I can understand, here's how our projects seem to overlap:

Problems we both appear to attack:
  • Articles/papers/works should receive multiple endorsements, not just the backing of a single journal.
  • Peers, not just journals, should be empowered to review papers and give endorsements.
Problems you appear to attack that we do not:
  • Distributed document storage
  • Document ranking
  • Reviewer/annotator credibility ranking
  • A convenient web interface
Problems we attack that I didn't see mentioned in your system:
  • Integration with existing journal system
-Mike

Matthews

Jul 23, 2009, 8:57:26 PM
to gpeer...@googlegroups.com
Dear Mike et al.,

re: Reviewer/annotator credibility ranking

Assessing reviewer credibility could be achieved by treating reviews as further documents that can be assessed (by authors, readers, and other reviewers) for academic merit, quality of expression, helpfulness to the author, and helpfulness to readers. However, to prevent an endless succession of reviews of reviews of reviews, assessment of a primary reviewer could be restricted to a simple point or star grading system, which itself is not subject to further review.
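
To make the idea concrete, here is one possible data model, sketched roughly in Python (the names are purely illustrative):

    # Sketch only: a review is itself a document open to full assessment,
    # but the assessment of a review is a flat 1-5 star grade that cannot
    # itself be reviewed, so the chain stops after one level.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Review:
        reviewer: str
        paper_id: str
        text: str                       # the review itself, a mini-paper in its own right
        star_grades: List[int] = field(default_factory=list)   # grades from authors, readers, other reviewers

        def credibility_score(self) -> Optional[float]:
            # Average star grade; no reviews-of-reviews are ever created.
            if not self.star_grades:
                return None
            return sum(self.star_grades) / len(self.star_grades)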

Reviews at their best can be mini-research papers that address many aspects of a paper in detail. A major problem is recruiting enough reviewers to actually look at new and old papers so closely. Seeking out suitable reviewers has always been one of the main roles of journal editors, and it can be a very difficult one.

It would be great if the review system could be somehow integrated with (or promoted to) social/research networks where potential reviewers are likely to be found. The review system could also be integrated into its own social network of researchers (and others) who like to do reviews, as one of their research-related roles.

It might also be nice to encourage authors to review their own papers retrospectively, after a certain number of years. I wonder if a 'Literature Flashback' mode could be instituted, to pull up old papers and reviews from the distant (or not so distant) past, and to encourage historical explorations of research literature and new reviews of old papers.

Best regards, Peter

****
--
Dr Peter J. Matthews
Department of Social Research
& Field Sciences Laboratory
National Museum of Ethnology, Senri Expo Park, Suita City, Osaka 565-8511, Japan.

Tel. +81-6-6878-8344 (office).
Tel. +81-6-6876-2151 (exchange)
Fax. +81-6-6878-7503 (office)

The Research Cooperative
http://cooperative.ning.com
Please visit and join!

Mike Gashler

Jul 25, 2009, 12:26:34 PM
to gpeer...@googlegroups.com
Indeed, if performing reviews were widely recognized as a useful
contribution to science, then researchers would have the same incentive
to write thorough reviews as they currently have to publish papers. It
would serve as evidence that a researcher has breadth of knowledge, and
would be something one could point out on a c.v.

One reason I think researchers are hesitant to adopt a scoring mechanism
is that poorly implemented ones expose them to a lot of risk. To
illustrate, consider what happened when Joseph Fourier first proposed
the Fourier transform. His reviewers rejected the paper as
incomprehensible. This was not a case of incompetent referees. They were
mathematicians no less eminent than Lagrange and Gauss! Nor do I think
this example is an outlier in science. Very good ideas are often
rejected, and good ideas are difficult to recognize precisely because
they are non-mainstream. Further, the best reviewers are often too busy
to spend a lot of time thoroughly understanding every idea. Fortunately,
the journal system enables dedicated researchers to keep submitting
their ideas until they are recognized, without paying a credibility
penalty for every rejection. I think it is important to ensure that any
scoring mechanism also has this property.

There is a simple solution: allow authors to discard any review that
they do not like. (This is effectively what happens to reviews that
result in a rejection under the current system; authors only mention
acceptances on their resumes.) There is no need to formally identify the
bad papers. Every paper is naturally assumed to be poor until it obtains
good reviews from credible reviewers, or a lot of citations. A mechanism
allowing authors to discard unfavorable reviews would make researchers
less afraid to use a new system, because they would know that it could
not hurt them. The last thing a researcher wants is to expose themselves
to the possibility of having incompetent, negligent, or busy researchers
destroy their career.
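
In practice this is easy to support: the author, not the reviewer, decides which endorsements travel with the paper. A toy sketch (the names are only illustrative, not anything in our code):

    # Toy sketch: the author keeps only the endorsements they choose to attach,
    # so an unfavorable review never becomes part of the public record and
    # carries no penalty.
    class AuthorCopy:
        def __init__(self, paper_id):
            self.paper_id = paper_id
            self.attached_endorsements = []

        def consider(self, endorsement):
            # Attach endorsements the author is happy with; silently drop the rest.
            if endorsement.get("verdict") == "endorse":
                self.attached_endorsements.append(endorsement)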

Unfortunately, such a mechanism also makes it difficult to assign a
single score to each paper. This is why GPeerReview does not use a
particular metric. Instead, it allows the
reader/employer/grant-proposal-referee to design their own algorithm
that will analyze the graph of papers/reviews/reviewers according to
their own priorities. Further, this makes the system much more difficult
to game, and it means we aren't trying to solve so many problems in one
shot.
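
For example, a reader who trusts credible endorsements might score papers one way, while a hiring committee that cares mostly about citations might score them another. A toy sketch (the graph interface and weights are made up for illustration, not anything GPeerReview prescribes):

    # Toy sketch of a reader-defined scoring pass over the endorsement graph.
    # 'graph' is assumed to expose author-attached endorsements, reviewer
    # credibility, and citations; this interface is illustrative only.
    def my_score(paper, graph, endorsement_weight=1.0, citation_weight=0.2):
        score = 0.0
        # Count only the endorsements the author chose to attach, weighted by
        # how credible this particular reader considers each endorser to be.
        for endorsement in graph.endorsements_of(paper):
            score += endorsement_weight * graph.credibility(endorsement.reviewer)
        # Citations contribute too, with whatever weight this reader prefers.
        score += citation_weight * len(graph.citations_of(paper))
        return score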