Project merit discussion


mikeg...@gmail.com

Feb 5, 2009, 10:07:06 AM
to GPeerReview
If you have something to say about why this project is doomed to
failure or is superseded by some other concept, or is destined to
change the world, this is the place to discuss it.

mikeg...@gmail.com

Feb 5, 2009, 11:19:33 AM
to GPeerReview
I think the strongest argument raised in the Slashdot discussion
(http://science.slashdot.org/article.pl?sid=09/02/04/2153208) against
this project was that analysis of citation graphs already fulfills
this need.

I think that this argument is weak for several reasons:
1- Most citations come from the sections that review prior work in a
field. In these sections, people generally try to cite the most
prominent work and ideas related to the topic, which has the effect of
reinforcing already-popular ideas. The whole point of peer review is to
bring good new ideas to the forefront; citations cannot fulfill this
need, and emphasizing already-popular ideas is not an important goal.
2- Anyone can cite a paper. It's not easy to weight citations, and
it's especially not easy to design automated tools that can figure
this out. Our tool is designed with automated analysis in mind.
3- Another common reason to cite a paper is to say "Look, our results
are better than this one". Thus, ideas that are early and easy to beat
are frequently cited. What about the ideas that are tough to beat?

UBfusion

Feb 5, 2009, 11:45:22 AM
to GPeerReview
From the front page: "GPeerReview is a command-line tool .."

Please notify us if and when a GUI tool appears. The GUI will change
the world.

UBfusion

Feb 5, 2009, 12:26:12 PM
to GPeerReview
I have not even read about the implementation principles, but please
allow me to think aloud with you (reflex reactions are very important
to me).

Regarding criticisms of GPeerReview based on existing tools (citation
indexes): existing tools have several problems, the main one being that
worldwide publication databases are neither complete nor integrated
yet. Authors who have been active for three decades or more can attest
that the indexes miss several of their early (and, less often, even
recent) works.

Another problem is that existing indexes are often not aware of
publications in proceedings volumes or other collective works
(especially if the latter are published in English, but not in an
English-speaking country).

The total number of publications is rising almost exponentially. If you
take into account the recent surge in Asian (mostly Chinese)
publications, I am not sure whether the existing indexing systems are
scalable enough. I predict that cases of misindexing or non-indexing
will increase rapidly in the near future.

I am sure there are many other important parameters, but if designed
well, the GPeerReview system might help fill in many gaps.

Finally, some thoughts regarding GPeerReview as a new Universal Index:

Personally, when I look for a paper, I never use index facilities, not
even Google Scholar. A plain Google search with the authors' names and
some keywords works much better for me. If GPeerReview started
collecting and indexing paper information, it could (and perhaps
should) become the Universal Citation Index. That could be achieved if
some elements of the reviews (e.g. headers containing publication
information) were collected and archived by Google. I see two main
problems, though: a) whether and how reviewers' anonymity should be
preserved, and b) whether the text of the reviews should remain private
or not.

We should remember that journal reviewers are usually anonymous to the
authors and that the content of the review is usually known only to the
journal publisher. GPeerReview should consider whether it will follow
this paradigm (a commercial convention that has come to be treated as
scientific!) or whether, for example, there should be a public part of
the review and a private one, with the authors of the paper deciding
what is public and what remains private. Private elements could include
non-constructive criticism, or a newly contributed idea that could be
the seed for a new paper. Public elements could include constructive
criticism and, for example, relations of the work under review to older
literature or non-English work.

Related to the above ideas, I think the GPeerReview project should take
seriously, and hopefully align with, the Budapest Open Access
Initiative (http://www.soros.org/openaccess/). Many national research
centres are already establishing Open Access repositories, and I
personally believe there will be swift changes in the whole scientific
publication circuit (not to mention the potential or eventual
bankruptcy of minor publishing houses). I think the project's first
steps definitely need input from authors from many countries,
reviewers, publishing houses, existing open journals and the Open
Access Initiative.

All of the above seem to me important design factors that should be
extensively discussed in this Google Group and should influence the
design of GPeerReview right from its inception. The project is
interesting and valuable, and has the potential to become indispensable
to the scientific enterprise in the 21st century.

mikeg...@gmail.com

Feb 5, 2009, 1:21:12 PM
to GPeerReview
A lot of people have mentioned the notion that reviewers will prefer
to remain anonymous. Unfortunately, total anonymity renders a review
meaningless, since there are so many people with opinions about things
they don't understand. Journals solve this problem by using the
journal's name to give the reviews some authority. I think the same
solution applies here. If people want anonymous or double-blind
reviews, then organizations will need to form that select high-quality
reviewers and give them the authority to sign their reviews with the
organization's private key. I think the shortest path to making this
happen is to persuade existing journals and conferences to start
signing the papers they accept.

So in other words, I think anonymity is a separate problem that is not
addressed by this tool, and doesn't need to be. I do think it is an
important and serious issue, but I don't have a solution that is any
better than the existing solution. I'm afraid it might weaken this
tool if we claim to solve problems like anonymous reviews, but don't
have a solid solution.
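
For concreteness, here is a minimal sketch of what "an organization signs the reviews with its private key" could look like in practice. It assumes GnuPG is installed and that the organization's key is already in the local keyring; the file name, key id, and the idea of a detached .asc signature file are placeholders for illustration, not GPeerReview's actual format.

    # Minimal sketch: an organization signs an endorsement file with its GPG key.
    # Assumptions: GnuPG is installed, and a key for "reviews@examplejournal.org"
    # (an invented id) exists in the local keyring.  Only standard gpg flags are used.
    import subprocess

    def sign_endorsement(endorsement_path, key_id):
        """Write an ASCII-armored detached signature next to the endorsement."""
        subprocess.run(
            ["gpg", "--armor", "--detach-sign",
             "--local-user", key_id,
             "--output", endorsement_path + ".asc",
             endorsement_path],
            check=True,
        )

    def verify_endorsement(endorsement_path):
        """Return True if the detached signature verifies against the keyring."""
        result = subprocess.run(
            ["gpg", "--verify", endorsement_path + ".asc", endorsement_path],
            capture_output=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        sign_endorsement("endorsement.txt", "reviews@examplejournal.org")
        print(verify_endorsement("endorsement.txt"))

Anyone holding the organization's public key can then check who stands behind the endorsement, which is the whole point of the organization-as-authority idea above.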

Peter Capek

Feb 5, 2009, 2:32:33 PM
to gpeer...@googlegroups.com
I'm not particularly advocating it, but I can't resist pointing out
that the implication of the comments made today about reviewer
credibility and reviewer anonymity, and how they interact, is to build
a system analogous to Google's PageRank. However, in such a system
reviewers (when they act as authors) would have their work assessed by
their peers and, when they act as reviewers, would have their reviews
"moderated" by their status or assessment. Of course, this eliminates
the opportunity for individuals who wish to evaluate the credibility of
a group of reviewers to do so on their own basis (unless there were a
feasible way to have the PageRank-like system provide individual
tuning).
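
To make the PageRank analogy concrete, here is a toy power-iteration sketch over an endorsement graph, where an edge from A to B means "A endorsed a paper by B". The graph, names and damping factor are invented for illustration; this is not part of any GPeerReview design.

    # Toy PageRank-style ranking over an endorsement graph (illustrative only).
    def endorsement_rank(edges, damping=0.85, iterations=50):
        nodes = set(edges) | {t for targets in edges.values() for t in targets}
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for src, targets in edges.items():
                if not targets:
                    continue
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new_rank[t] += share
            # People who endorse nothing would leak rank; spread theirs uniformly.
            dangling = damping * sum(rank[n] for n in nodes if not edges.get(n))
            for n in nodes:
                new_rank[n] += dangling / len(nodes)
            rank = new_rank
        return rank

    if __name__ == "__main__":
        graph = {"alice": ["bob", "carol"], "bob": ["carol"], "carol": ["alice"]}
        for name, score in sorted(endorsement_rank(graph).items(), key=lambda x: -x[1]):
            print(name, round(score, 3))

The "individual tuning" Peter mentions would amount to letting each reader re-run something like this with their own choice of starting weights, rather than accepting one global ranking.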

Ruadhan

Feb 6, 2009, 6:20:12 PM
to GPeerReview
Some other theoretical neuroscientists and I are starting a journal
called (unsurprisingly) the Journal of Theoretical Neuroscience, and
the idea behind its reviewing system is very similar to GPeerReview.

We've thought a bit about some of the problems raised here, like
anonymity and so on, and have produced a system to deal with them.
There's a stub of a website at theoreticalneuroscience.org explaining
some of the details.

The system works as follows:

Each user must prove that they are who they claim to be. After this,
they can submit reviews anonymously if they wish. Because the subject
area is theoretical, the articles which are submitted will be arguments
or proofs, and the reviewers will be asked to testify that they
understand the argument or proof to be correct, objective and relevant.
If a large majority of reviewers agree that these criteria have been
satisfied, and this condition persists for a set period of time (say a
month), then the article will be automatically "published" in the
Journal and will become citable.
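
As a rough illustration of that publication rule, here is a sketch of the check that could be run against the testimonies on an article. The 80% threshold, the 30-day window and the data shapes are placeholders of mine, not the journal's actual parameters.

    # Placeholder check: "publish when at least 80% of testimonies say the argument
    # is correct, and that condition has held continuously for 30 days".
    # Threshold, window and data layout are invented for illustration.
    from datetime import datetime, timedelta

    def ready_to_publish(testimonies, now, threshold=0.8, window_days=30):
        """testimonies: list of (timestamp, is_positive) in chronological order."""
        window_start = now - timedelta(days=window_days)
        positive = total = 0
        condition_since = None      # when the threshold condition last became true
        for ts, is_positive in testimonies:
            total += 1
            positive += 1 if is_positive else 0
            if positive / total >= threshold:
                if condition_since is None:
                    condition_since = ts
            else:
                condition_since = None
        return condition_since is not None and condition_since <= window_start

    if __name__ == "__main__":
        votes = [(datetime(2009, 1, 1), True), (datetime(2009, 1, 2), True),
                 (datetime(2009, 1, 10), True), (datetime(2009, 1, 15), True)]
        print(ready_to_publish(votes, now=datetime(2009, 2, 6)))  # True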

We'll implement a system of community policing, where users of the
site can flag reviews as frivolous and indicate their suspicions of
sock puppets. When a threshold number of users have done this, the
reviews will lose their voting power, or the users accused of sock
puppetry will have their identities revealed to a trusted third party,
or to a website administrator if no trusted third parties can be
identified.

The other aspect of the system is the way people are motivated to
review articles. Basically, in order for a person to have his article
published, others will have to review it, and the way for this to
happen is for him to review others' articles. Author A will click on a
button that says "Give me a random article to review", and if he
reviews that article to the community's satisfaction, his article will
be given to Author B when Author B wants to review a random article.

I think this addresses a problem of misplaced incentives and
motivation that has existed with traditional journals. Typically,
journal editors must ask reviewers to do them a favour by reviewing
articles, while the author, who is the one who wants this reviewing
service provided, doesn't have to do anything.
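
A toy model of that reciprocity rule, just to pin the idea down: reviewing a random article to the community's satisfaction is what puts your own article into the pool handed to later reviewers. The data structures and method names here are my own illustration, not the journal's implementation.

    # Toy model of "review a random article so that yours gets reviewed".
    # Purely illustrative; not the journal's actual code.
    import random

    class ReviewQueue:
        def __init__(self):
            self.pool = []     # articles eligible to be handed out for review
            self.pending = {}  # author -> article waiting for its author to review something

        def submit(self, author, article):
            self.pending[author] = article

        def random_article(self, reviewer):
            """Hand the reviewer someone else's article from the pool, if any."""
            candidates = [entry for entry in self.pool if entry["author"] != reviewer]
            return random.choice(candidates) if candidates else None

        def review_accepted(self, reviewer):
            """The community accepted this person's review, so their article enters the pool."""
            if reviewer in self.pending:
                self.pool.append({"author": reviewer, "article": self.pending.pop(reviewer)})

    if __name__ == "__main__":
        q = ReviewQueue()
        q.submit("author_a", "paper A")
        q.submit("author_b", "paper B")
        q.review_accepted("author_a")        # A reviewed something to the community's satisfaction
        print(q.random_article("author_b"))  # B may now be handed paper A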

Our system has a type of strict equality between users, or at least
between users who are able to produce reviews that the rest of the
community don't flag as inadequate. I think this is necessary in order
to persuade participants that they will be treated fairly (a guarantee
that nobody has with traditional journals). Publication in the journal
will just indicate that, as far as the system we have in place can
determine, the article is thought to be correct by a large majority of
the community.

Of course, reviewers (and authors) can choose to reveal their names
alongside their articles and reviews if they want, and other users can
use whatever metrics they like to assign credibility to authors and
reviewers.

About DOIs: there's a CrossRef fee of $1 for each article deposited,
but we don't expect every single argument and proof to be significant
enough to warrant submitting them all, so our tentative plan is to
allow authors to pay the $1 fee and have their article incorporated
into CrossRef if they want to. In order to comply with CrossRef's
terms, we might have to have two journals, one of which has every
article linked into their system (we might call it JTN Prime).

From talking to colleagues at conferences, I've found that most of the
junior professors, postdocs and graduate students are enthusiastic
about the idea, while the more senior professors tend to find it
difficult to see what's wrong with the current peer-review system.

I don't see any reason why the system can't be completely compatible
with GPeerReview. Once we have verified a user's identity and have a
copy of his public key, he should be able to use GPeerReview to sign
the articles that we give him to review. We could easily produce a
GPeerReview template for the journal.

Best of luck,
Ruadhan.


Mike Gashler

Feb 7, 2009, 12:03:26 PM
to ruadhan.o...@gmail.com, gpeer...@googlegroups.com
Ruadhan,

That's very interesting. I would be particularly interested in making our systems compatible and perhaps finding ways to promote each others' systems if possible. I think the more places we try this, the more likely one of them is to take root and begin to grow because a lot of people are calling for some reform. It's silly that "publishing" has become somewhat synonymous with massive hurdles to jump over, restricted viewing, copyright complications, and lots of time spent revising issues that are irrelevant to the science. Eventually one of these more modern systems has got to find success.

(I study artificial neural networks myself, so I also have a bit of interest in theoretical neuroscience, as some of the principles seem to bleed across the two fields.)

For the sake of refining the designs of our systems, let me point out a few issues that I think are significant. (Please do the same for mine.) Actually, I only have one really big concern, which is that I am averse to centralized systems. Here are my reasons:
1- I don't see any reason why the review and evaluation system needs to be tied to the same organization that provides the document storage and delivery system. Wouldn't it require less investment on your part if you used arXiv or any number of other content-delivery systems to actually hold the documents? If authors can publish anywhere, even on their own web servers if they so choose, then the system is much more author-friendly.
2- Centralized systems always seem to fall back on some sort of voting/moderation mechanism. I've studied game theory too much to believe in that. A good place to start is Arrow's impossibility theorem. Essentially, no matter how you tally the votes, I can game your system. And if I can game it, there are plenty of other smart people who can game it too. All formulas are susceptible to gaming. The only solution is to stop playing that game. You cannot losslessly reduce multi-dimensional data into one dimension, and you cannot find a compromise of weights that makes everyone happy. (Someone won't be happy with making compromises.) A small illustration of how any tallying rule breaks down is sketched just after this list.
3- Conflicts of interest just seem to have a way of abiogenetically spawning in centralized systems. I can trust a centralized journal that operates in a well-established way (if I have to), but it's very difficult for me to trust a centralized system that operates in some new way, because I fear that when it runs low on money, the people in charge will change the rules in an attempt to save their system and justify that it's a necessary evil, or that it's for the greater good.
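
Here is the small illustration promised in point 2: a Condorcet cycle, the standard example behind Arrow's result. Three reviewers rank three papers, every pairwise majority is 2-to-1, and yet the majorities form a cycle, so any rule that flattens the rankings into a single winner has to break the tie arbitrarily, and can be gamed by whoever knows how the tie gets broken. The papers and ballots are made up.

    # Condorcet cycle: three voters, three papers, pairwise majority is cyclic.
    from itertools import permutations

    ballots = [("A", "B", "C"),   # voter 1 prefers A > B > C
               ("B", "C", "A"),   # voter 2 prefers B > C > A
               ("C", "A", "B")]   # voter 3 prefers C > A > B

    def prefers(ballot, x, y):
        return ballot.index(x) < ballot.index(y)

    for x, y in permutations("ABC", 2):
        wins = sum(prefers(b, x, y) for b in ballots)
        if wins > len(ballots) / 2:
            print(x, "beats", y, f"({wins} of {len(ballots)} voters)")
    # Prints: A beats B, B beats C, C beats A -- no consistent overall ranking exists.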

I think that Open Peer Commentary (http://en.wikipedia.org/wiki/Open_Peer_Commentary) is a good example of what not to do. It tends to de-emphasize work that is not in vogue. The last thing the world needs is an academic pop-culture with a handful of super-stars. There is also a strong tendency to emphasize the opinions of people who have nothing better to do than review papers, who tend to be the people whose reviews are not very good.

I think all these issues can be solved, but I'm not yet convinced that all the necessary solutions can be rolled up and packaged in a single journal. I also hope that my opinions are helpful rather than discouraging. I am very much in support of people trying new things like this. The world will be much better off when one of them begins to take root.

-Mike

Ruadhan

Feb 9, 2009, 8:18:39 PM
to GPeerReview
On Feb 7, 9:03 am, Mike Gashler <mikegash...@gmail.com> wrote:
> Ruadhan,
>
> That's very interesting. I would be particularly interested in making our
> systems compatible and perhaps finding ways to promote each others' systems
> if possible. I think the more places we try this, the more likely one of
> them is to take root and begin to grow because a lot of people are calling
> for some reform. It's silly that "publishing" has become somewhat synonymous
> with massive hurdles to jump over, restricted viewing, copyright
> complications, and lots of time spent revising issues that are irrelevant to
> the science. Eventually one of these more modern systems has got to find
> success.

I agree. The traditional peer-review system is not going to be able to
survive in the coming century. When future generations ask whether
proper procedure was followed in reaching some conclusion, they are not
going to rely on the brand name of a journal when an alternative system
of checking digital signatures is available.

My personal view of self-promotion is that it should be kept to the
minimum necessary to inform the relevant people of the relevant facts.
I think it's a bad idea to define success as getting a lot of
attention. I would say the project is a success if it leaves a trail of
digital signatures and public keys which will be able to convince
future generations that certain standards have been met, and to which
they can add. Once the system is in place, others will be able to find
it and add to it, and it will grow monotonically: a huge sprawling
network of digital proofs that various people did indeed express
specific opinions about particular things on particular dates. The
proofs will be just as valid and comprehensible in a hundred years'
time.

I also don't think it's true that there are specific important people
whose approval is needed before the project can be called a success.
The people who don't want to participate will eventually just become
irrelevant. The aspect of this that motivates me is the fact that it's
inevitable: we've already won, because verifying a chain of digital
signatures is objectively better than relying on the reputation of a
journal. One can be checked; the other can't. In one case, trust is
required; in the other, it isn't.

But yes, it would make a lot of sense for us to bring one another to
the attention of our users. It will help them to realise that this is
widespread.

> (I study artificial neural networks myself, so I also have a bit of interest
> in theoretical neuroscience, as some of the principles seem to bleed across
> the two fields.)

Definitely. Work on artificial neural networks is most relevant and welcome.

> For the sake of refining the designs of our systems, let me point out a few
> issues that I think are significant. (Please do the same for mine.)
> Actually, I only have one really big concern, which is that I am averse to
> centralized systems. Here are my reasons:
> 1- I don't see any reason why review and evaluation system needs to be tied
> to the same organization that provides the document storage and delivery
> system. Wouldn't it require less investment on your part if you used arXiv
> or any number of other content-delivery systems to actually hold the
> documents? If authors can publish anywhere, even on their own web server if
> they so choose, then the system is much more author-friendly.

I agree; it shouldn't make a difference where the documents are
stored. The reason that we're doing it is that we want to be able to
provide authors with the ability to say that their article has been
published in a peer-reviewed journal.

This is partly because of backward compatibility. Right now, if a
young theorist starts waving his work at a senior theorist and asking
for attention, or a job, the senior theorist will reflexively ask,
"Was it published in a peer-reviewed journal?" Also, prospective
employers expect to see a list of articles published in peer-reviewed
journals on one's CV. If the job applicant starts to explain a new
peer-review system to the prospective employer, the prospective
employer may be skeptical and might entertain the thought that this
applicant didn't publish in a peer-reviewed journal because he is
incompetent.

For those senior people who don't have the time or the patience to
learn about the latest advances in peer-review technology, the sight
of a plain old "Published in the Journal of Theoretical Neuroscience,
Vol. 4, pp. 56-59" will be comfortable and familiar.

> 2- Centralized systems always seem to fall back to some sort of
> voting/moderation mechanism. I've studied game theory too much to believe in
> that. A good place to start would be Arrow's impossibility theorem.
> Essentially, no matter how you tally the votes, I can game your system. And
> if I can game it, there are plenty of other smart people who can game it
> too. All formulas are susceptible to gaming. The only solution is to stop
> playing that game. You cannot losslessly reduce multi-dimensional data into
> one dimension, and you cannot find a compromise of weights that makes
> everyone happy. (Someone won't be happy with making compromises.)

Right, but this isn't exactly one of those cases. The people in
question aren't exactly voting. Each one is testifying that he
understands the article and that it is correct, or that he understands
it and there is an error. The reviewers can confer with each other, try
to persuade one another, accuse one another of being sock puppets and
so on. An article will only be published when there is widespread
agreement among the reviewers that it is correct. Also, the reviewers
are mostly randomly selected, so they are less likely to have a
conflict of interest. Even if you bring several friends to the site and
ask them to lie in their testimony about your article, their combined
vote will count for at most 20% of the total vote if they choose your
article to review rather than choosing a random article. If they decide
to start choosing random articles to review, to affect the remaining
80% of the vote, then your article is no more likely to be given to
them to review than it is to someone else.

The other aspect of the journal is that since we are distinguishing
between "publication" and "certification", there will be a formal
mechanism for correcting mistakes. If an argument which was once
thought to be correct is found to have an error, the article can't be
unpublished, but the certification can be revoked. We'll keep a
certificate revocation list so that anybody can check whether the
majority of the community currently believes an argument to be
correct.
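
The revocation-list idea is easy to sketch. The format below (a plain-text file of SHA-256 hashes of articles whose certification has been revoked) is just an assumption of mine for illustration; the journal hasn't specified one.

    # Toy certification check against a revocation list.
    # Assumed format: one SHA-256 hex digest per line, one per revoked article.
    import hashlib

    def article_hash(article_path):
        with open(article_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def load_revocation_list(list_path):
        with open(list_path) as f:
            return {line.strip() for line in f if line.strip()}

    def is_still_certified(article_path, list_path):
        return article_hash(article_path) not in load_revocation_list(list_path)

Because the list only ever needs to name the articles whose certification was withdrawn, it stays small and can be mirrored or archived like any other file.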

> 3- Conflicts of interest just seem to have a way of abiogenetically spawning
> in centralized systems. I can trust a centralized journal that operates in a
> well-established way (if I have to), but it's very difficult for me to trust
> a centralized system that operates in some new way because I fear that when
> they run low on money, the people in charge will often change the rules in
> attempt to save their system and justify that it's a necessary evil, or that
> it's for the greater good.

You're certainly right about that, and it has been a worry for us. The
non-profit organization that we have founded to support the journal is
bound by its articles of incorporation and bylaws to delegate to the
community the authority to decide whether something satisfies the
publication criteria, with no individual having greater influence than
any other over the publication process, and to publish the journal for
free on the internet. Should future directors of the society turn evil
and try to institute a dictatorship in which they usurp the authority
to decide what to publish in the journal, legal action can be taken
against them by the California attorney general to make them comply
with the law.

We're also trying to keep expenses to a minimum. There are several web
hosts that will provide free hosting for non-profits. The directors of
the society are unpaid volunteers. If money really becomes tight, we
can abandon the interface with CrossRef ($375 a year). There really
isn't any reason we shouldn't be able to operate for free, and we don't
intend to allow such reasons to arise in the future.

What we're aiming for is the ability to say, "You don't have to trust
us." It's comparable to the way that we have confidence in GPG: we
don't know for sure that there aren't vulnerabilities, but it's open to
public scrutiny. If somebody suspects there's a vulnerability, they can
look at the source code and find it. Similarly, if somebody doesn't
trust that the journal is working as stated, they can look in the logs
and check the digital signatures.

> I think that Open Peer Commentary
> (http://en.wikipedia.org/wiki/Open_Peer_Commentary) is a good
> example of what not to do. It tends to de-emphasize work that is
> not in vogue. The last thing the world needs is an academic pop-culture with
> a handful of super-stars. There is also a strong tendency to emphasize the
> opinions of people who have nothing better to do than review papers, which
> tend to be the people whose reviews are not very good.

I definitely agree. The super-star and brand-name culture is not
conducive to professional research. My earlier discipline, theoretical
physics, is in a disgraceful mess nowadays because of the influence of
the marketplace, fame and entertainment.

> I think all these issues can be solved, but I'm not yet convinced that all
> the necessary solutions can be rolled up and packaged in a single journal. I
> also hope that my opinions are helpful rather than discouraging. I am very
> much in support of people trying new things like this. The world will be
> much better off when one of them begins to take root.

Your opinions are very helpful, and it was great to hear about
GPeerReview. It should make it easier for us to explain what we're
doing. We could explain that GPeerReview is a peer-to-peer peer-review
system (if you don't mind having it characterized that way), with the
journal as a system which aggregates, counts, signs, certifies and
generally keeps track of the reviews, signatures and public keys.

I don't really have any criticisms of GPeerReview in that sense. The
only things that it seems to lack are the things that the journal is
supposed to provide: an incentive for reviewers, compatibility with the
existing system, a controlled level of anonymity and so on. I think of
the process of taking root as what we're doing by linking up the two
projects and maintaining compatibility. Our plan is to make the code
that runs the journal open source, so that people in other fields can
take an off-the-shelf software package and have a journal that operates
with the same review procedure without much effort.

Best,
Ruadhan.

mikeg...@gmail.com

Feb 10, 2009, 2:49:27 PM
to GPeerReview
(I forked this into a thread about the Journal of Theoretical
Neuroscience via email, and will continue to discuss GPeerReview
here.)

Ruadhan makes an excellent point that GPeerReview does not provide
many of the things that researchers expect from a journal.
Specifically, it does not store your paper. It does not have a
reputation. It does not provide an elegant-sounding name that you can
put on your resume, and it doesn't even work with standard citations.
It doesn't help you find a reviewer. It doesn't provide the service of
organizing double-blind reviews. Simply put, GPeerReview doesn't do
what journals do.

This is a feature, not a bug!

Those services can all continue to be provided by journals or other
providers. I think it is important for us to recognize that we are
*not* trying to be a replacement for publishing in journals. Perhaps
some of these services could *eventually* become unnecessary when
there exists a big graph/network connecting papers and reviews, but
even then, some of these services will still be relevant. As Ruadhan
points out, journals don't need to go away for us to succeed. All
we're trying to do is create hyperlinks between reviews and papers. If
that happens, we have accomplished our goals. (It so happens that the
existence of such a network/graph would solve many of the problems
with publication and review in scientific communities. That's why
we're working so hard to create one. But let's be clear that we are
not trying to overthrow or even replace journals. We might make them
compete a little though.)

In the near term, we would really like journals to start signing the
papers that they accept. This way, it will be natural for other
reviewers to join in via the same mechanism. We're not going to make
that happen by making enemies of the journals. Actually, they provide
all the services we don't want to provide. Storing papers and setting
up double-blind reviews, for example, will be very important for our
cause. So let's be clear that our goal is to embrace and extend, not to
replace journals.

Andrew Taylor

Feb 15, 2009, 7:19:29 PM
to GPeerReview
I agree about the GUI. I can tell you straight away that nobody in my
research unit apart from the Token Physicists is going to use any tool
that doesn't work with big friendly buttons. I can't even convince
them to use Thunderbird. They call PDFs "Word documents". They write
down numbers by pressing print-screen, pasting the screenshot into
Word, and saving 12 digits into a 6 MB .doc file. These people can't be
reasoned with -- it needs to be made as easy as possible to adopt.
Ideally, I'd want a little thing that you could run from QuickLaunch or
similar, paste in a citation, write a few notes, hit 'send' and forget.
If you're spending 20 minutes reading a paper, spending an extra one
reviewing it too is only polite. Maybe it would even be worth allowing
simple 'empty' reviews with just the paper hash and your name, to make
it as easy as pressing one button to almost 'digg' a paper.
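
Andrew's one-button idea is easy to prototype. A sketch, assuming the paper is identified by a SHA-256 hash of the PDF; the field names below are invented for illustration and are not GPeerReview's actual endorsement format.

    # One-click "empty" endorsement: just a hash of the paper, the endorser's name
    # and a date.  Field names are invented; the real format may differ.
    import hashlib
    import json
    import sys
    from datetime import date

    def empty_endorsement(pdf_path, endorser_name):
        with open(pdf_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return json.dumps({"paper_sha256": digest,
                           "endorser": endorser_name,
                           "date": date.today().isoformat(),
                           "comment": ""}, indent=2)

    if __name__ == "__main__":
        # e.g.  python empty_endorsement.py paper.pdf "A. Taylor"
        print(empty_endorsement(sys.argv[1], sys.argv[2]))

The record would still need to be signed (with GPG or similar) before it proves anything, but the point is that the reviewer's part can be a single click.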

Of course, perhaps it would be better if it did require reviewers to
take a little time: there would be fewer reviews, but their average
usefulness would be higher.

Mike Gashler

Feb 15, 2009, 7:48:16 PM
to gpeer...@googlegroups.com
We're currently in the design phase of a web-based version. The idea is
that you could email a request for endorsement to someone, and all that
person would have to do is click on the link you send them, fill out a
form, and press submit. I'm not really concerned about making it "too
easy". If people endorse a lot of bad ideas, I think it will be
sufficiently clear that an endorsement from such a person means little or
nothing. (If that's not abundantly clear, then the whole premise upon
which this tool is based is flawed, and I don't yet think that is the case.)

Research Cooperative

Feb 26, 2009, 6:57:49 PM
to GPeerReview, Peter Matthews (gmail)
Dear Mike,

Whether or not a review is meaningless does not depend on whether or
not the reviewer is anonymous. Both kinds of review have value, and I
prefer reviewing for journals that allow reviewers to choose whether or
not to be anonymous. From the author's point of view, the traditional
system is working well if the journal organisers succeed in finding
reviewers who can make serious, informed comments that help the author
write a better paper, whether or not it is accepted by the journal that
started the review process. Too many authors submit papers that are not
ready for publication and that require serious editing, and too many
journals accept papers that impose too much work on reviewers, because
the journals are struggling to find enough contributors and wish the
reviewers to act as editors at the same time. The roles of editor and
reviewer naturally overlap, but when the overlap is too great, a
journal will start losing the support of its circle of volunteer
reviewers.

Having said all that, I think that any tool that makes it easier for
authors to have their work reviewed, before and/or after publication,
is going to be useful, because the exponential increase in online
publication is simply not being matched by an increase in the number of
willing editors and reviewers. I fully support your effort, but whether
or not the system you create becomes accepted as a useful measure of
merit in a paper will depend on (a) how you design it, (b) how people
use it, and (c) how frequently it is used in a particular field or
publication, and its resulting familiarity to readers.

It would be good if you could have an add-on that allows reviewers
using your tool to make personal contact with journal editors, and vice
versa, so that the tool also serves as a recruiting device for journals
that wish to organise in-depth reviews for their authors (and so that
reviewers can move into such work if they have enough interest, time,
and experience).

I have been operating a social network site called The Research
Cooperative (cooperative.ning.com) that aims to help editors,
reviewers, and publishers make contact with each other more
effectively, but it is slow work to build this kind of network, and
your efforts will surely help raise awareness of the need for
reviewers, in one way or another.

Finally, a journal only ever acquires real name authority if it has a
long enough history of publication, and a multi-generational record of
satisfied authors and readers. In bad times, when there are not enough
contributors or reviewers and so on, the quality may fall, and the
name of the journal may help it survive (for example, by attracting
new sources of support). But a journal cannot rely long on its name
only. Paying subscribers (and advertisers) will stop paying if they
see the quality fall for too long a period. The revolution of Online
Publishing and Open Access will only succeed if the publishers
involved continue to make efforts to build human networks around their
publications. The reduced cost of publication and distribution is a
great advance, but there will always be a cost involved in building
and maintaining human support networks, through person-to-person
contact, negotiation, trust building, and so on.

Best regards, Peter

On Feb 6, 3:21 am, mikegash...@gmail.com wrote:
> ... Unfortunately, total anonymity renders a review
> meaningless since there are so many people with opinions about things
> they don't understand. Journals solve this problem by using the
> journal's name to give the reviews some authority....

Mike Gashler

Feb 27, 2009, 11:48:21 AM
to gpeer...@googlegroups.com
You're absolutely right. When a journal puts its name behind a review, that review carries a lot of meaning, whether or not the name of the reviewer is kept hidden. When I said anonymous reviews are meaningless, I was referring to completely anonymous reviews (i.e. someone creates an account with a peer-commentary site using a throw-away email address and a pseudonym and starts reviewing papers). (BTW, I've been searching all week for an opportunity to use the King Kong defence (http://en.wikipedia.org/wiki/King_kong_defense) in honor of current events =). You've made my day!) I suppose this (complete anonymity) isn't what people usually mean when they say "anonymous review", so I think I need to be more careful with my terminology.

The notion of providing tools to help journals coordinate reviews sounds like a good idea, but I'd need some specifics about exactly what it would do so we could debate it properly. So far I've been having a little trouble getting others to actually write down their designs in our design spec. People here are too quick to defer to my plans (or perhaps I just argue too loudly). If you have some time to help with this effort, we could really benefit from your ideas. I very much think that we need to make this tool interoperate with existing systems, and this might be part of the way to make that happen.

-Mike

Kevin Seale

Mar 11, 2009, 5:46:52 PM
to gpeer...@googlegroups.com

Mike,

I have a student who is interested in working on GPeerReview for the summer. Can you and I speak on the phone about this project and how he can get started?

 


Mike Gashler

Mar 12, 2009, 11:12:56 AM
to gpeer...@googlegroups.com
Sure. I'll send my phone number via email.

Matthews

Mar 14, 2009, 5:56:02 AM
to gpeer...@googlegroups.com
Dear Mike,

Perhaps to define the functions of a reviewing and rating system, the elements of a traditional review need to be identified. We also need to consider who is responsible for a published work, and what is actually being reviewed or rated.

For reviews that take place before publication, all authors have equal academic responsibility, though they may actually contribute to a paper in very different ways (a professor may plan the research; a creative technician may do most of the physical work involved and have those creative efforts acknowledged with coauthorship; a post-doc and a PhD student may carry out most of the detailed planning and interpretation of experiments; and the three academic authors may be involved in the survey of related research and interpretation of results).

For reviews that take place after publication, the efforts of pre-publication reviewers, editors, copy-editors, and the journal management are also being judged, even if this is not usually explicit.

To what extent should the third author of a four-author paper be tarred or decorated by the views of a cruel or generous reviewer? Without reference to author guidelines, how can a review and rating system assess the role of a first author versus a last author or senior author? I have never seen an explicit explanation of the author sequence in a particular journal: this seems to be left in the hands of the authors, and there is no place in the paper itself to indicate who is senior or not. Perhaps all authors should be ignored in a review except for the corresponding author(s), on the assumption that those who do not have responsibility for correspondence do not have full or equal responsibility for the content of the paper.

Since there are elements of responsibility for papers within a publication itself, the sum of these elements (the reviewers, editors, etc.) could perhaps be rated by a meta-rating based on how papers in a particular publication are rated across a particular issue, volume, or period of time.

As for the review itself, this needs to examine matters such as:

1. Originality or novelty value.
2. Follow-up value for work that expands, refutes, or confirms previous research.
3. Theoretical presentation, appropriateness, reliability and advancement.
4. Methodological presentation, appropriateness, reliability and advancement.
5. Relevance to the expected readership of the journal, in light of the journal's stated aims.
6. General significance, regardless of the aims of the particular journal (occasionally, a great paper might be published in the wrong place).
7. Visual presentation
8. Readability (logic, clarity, avoidance of unnecessary jargon, etc.)

Different reviewers might like to self-rate their own competence to comment on different aspects of a paper. The best judge of readability might be someone who is interested in the subject, but not an expert in the theory and methods involved.

The final (or current) weighting of a review might then consider whether the reviewers (to date) have all been able to comment on all aspects, or have rated themselves in different ways.

If you make it possible for reviewers to explicitly comment only on what they are happy or best able to comment on, then more people may be willing to conduct reviews. Reviewers might also like to be able to reveal whether their review is based on full reading and detailed consideration of all elements of a paper, or only on a partial reading of the elements of most concern or interest to the reviewer. The default option might be the latter, to be honest about how most reviews are conducted.
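
As a sketch of how such self-rated, partial reviews might be combined, here is one possible aggregation: each reviewer scores only the aspects they addressed and attaches a self-rated competence to each score, and the per-aspect result is a competence-weighted average. The aspect names, scales and weights are placeholders, not a proposal for the actual formula.

    # Sketch: competence-weighted, aspect-by-aspect aggregation of partial reviews.
    # Each review maps aspect -> (score on a 1-10 scale, self-rated competence 0-1).
    # Scales and field names are placeholders.
    def aggregate(reviews):
        totals, weights = {}, {}
        for review in reviews:
            for aspect, (score, competence) in review.items():
                totals[aspect] = totals.get(aspect, 0.0) + score * competence
                weights[aspect] = weights.get(aspect, 0.0) + competence
        return {aspect: totals[aspect] / weights[aspect]
                for aspect in totals if weights[aspect] > 0}

    if __name__ == "__main__":
        reviews = [
            {"originality": (8, 0.9), "methodology": (6, 0.7)},
            {"readability": (9, 1.0), "originality": (7, 0.4)},
        ]
        print(aggregate(reviews))  # originality is pulled toward the more confident reviewer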

Sorry, I have not looked closely at your design spec recently; where should I go to see this? Perhaps you should add the link or address as a routine part of your signature, so that people following your messages and replies can always jump to the core of your project.

Hmm, I am running out of steam and should cook a meal for my family. I want my son to give me a good review.

Cheers, Peter

*****
--
http://cooperative.ning.com

An international, online meeting place for research writers, editors, translators, and publishers

Matthews

Mar 14, 2009, 5:59:46 AM
to gpeer...@googlegroups.com
Sorry, in the following sentence I wrote review when I meant paper:

The final (or current) weighting of a [PAPER] might then consider whether the reviewers (to date) have all been able to comment on all aspects, or have rated themselves in different ways.

P.

Qubyte

Mar 14, 2009, 8:50:44 AM
to GPeerReview
We need to be a little careful here. The ordering of the authors
changes from subject to subject, and even from country to country. The
way we tend to do it is that the PI (principal investigator) goes last,
the student who did the real work goes first, and everyone else goes in
between. I've seen papers actually place a footnote if there are two
first authors who deserve equal credit.

This seems to be getting too close to a rating system. I was under the
impression that a reviewer should write a review, like a little essay,
which is far more useful than just ticking 1-10 boxes. Guidelines are
of course essential. The authors can also reject reviews they don't
like. The weighting given to the review should be related to the
reviewer's relative distance from the PI or something similar, as well
as how connected the reviewer is. Essentially the algorithm should be
automatic, based on graph theory.
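
To show what "automatic, based on graph theory" could mean in the simplest case, here is a breadth-first-search sketch that measures the reviewer's distance from the PI in an undirected co-authorship graph. How distance should then map to a weight is left open (Qubyte's suggestion could be read either way), and the graph and names are invented.

    # Illustrative only: shortest-path distance between reviewer and PI in a
    # co-authorship graph, via breadth-first search.  Mapping distance to a
    # review weight is deliberately left out.
    from collections import deque

    def distance(graph, start, goal):
        """graph: dict name -> iterable of co-authors; returns hops, or None if unreachable."""
        if start == goal:
            return 0
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, hops = queue.popleft()
            for neighbour in graph.get(node, ()):
                if neighbour == goal:
                    return hops + 1
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, hops + 1))
        return None

    if __name__ == "__main__":
        coauthors = {"pi": ["student", "postdoc"],
                     "student": ["pi"],
                     "postdoc": ["pi", "reviewer"],
                     "reviewer": ["postdoc"]}
        print(distance(coauthors, "reviewer", "pi"))  # 2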


Mike Gashler

Mar 14, 2009, 9:20:19 AM
to gpeer...@googlegroups.com
In the existing system, when a paper is rejected, the matter is handled privately. (No one mentions on his c.v. how many journals rejected a paper before it was accepted.) This is something the existing system has done right, and we would only hurt ourselves by trying to "fix" it. There is so much skepticism in scientific communities that all ideas are implicitly "tarred" unless they are well "decorated". Lately, I've been trying to use the term "endorsement" rather than "review". We should formalize this in the spec. A positive review will come with a digitally-signed endorsement, while a negative review will just come in the form of a private email filled with suggestions that the author can delete when he's done with it. Fortunately, this is how the system works naturally, so we don't have to actually do anything to make it happen this way. We simply don't publish endorsements. The burden of publishing endorsements (correctly) rests with the authors.

Of course, someone could set up a peer-commentary site where authors cannot control the reviews that are published. I think you've identified a fundamental flaw with peer-commentary sites. Currently, such sites don't seem to be finding a lot of success with researchers. I think this is one reason. Another reason is a built-in conflict of interest regarding the motivation for performing reviews, but I'm drifting off-topic.

Regarding the design spec, it's an OpenOffice doc currently located in our Subversion repository. It is rather incomplete at the moment. I think that Subversion may be too much for non-computer-scientists to handle. Do you think I should move it to the wiki? Perhaps it will get more attention there.

Qubyte

Mar 14, 2009, 10:30:59 AM
to GPeerReview
The wiki is a good idea for that, I think. I have an excellent example
of a broken peer-commentary site:

http://quantalk.org/view.php?id1=116&thread=1

This is also an excellent indication that people are ready for the
GPeerReview approach! The pure endorsement is the best idea. It keeps
it simple. A simple thumbs up, and the rest is between the authors and
the reviewer in terms of what gets made public.

Qubyte

Mar 18, 2009, 6:58:29 AM
to GPeerReview
Wow, take a look at this!

http://michaelnielsen.org/blog/?page_id=181

This guy wrote the book on quantum information (my field). There are
at least three copies in our office alone. It turns out he's taking a
year off to study issues remarkably similar to those that GPeerReview
is trying to address. He's also writing a book on them. I'm getting a
weird feeling of convergence here.