On Feb 7, 9:03 am, Mike Gashler <mikegash...@gmail.com> wrote:
> Ruadhan,
>
> That's very interesting. I would be particularly interested in making our
> systems compatible and perhaps finding ways to promote each other's systems
> if possible. I think the more places we try this, the more likely one of
> them is to take root and begin to grow because a lot of people are calling
> for some reform. It's silly that "publishing" has become somewhat synonymous
> with massive hurdles to jump over, restricted viewing, copyright
> complications, and lots of time spent revising issues that are irrelevant to
> the science. Eventually one of these more modern systems has got to find
> success.
I agree. The traditional peer-review system is not going to be able to
survive in the coming century. When future generations ask whether
proper procedure was followed in reaching some conclusion, they are not
going to rely on the brand name of a journal when an alternative system
of checking digital signatures is available.
My personal view of self-promotion is that it should be kept to the
minimum necessary to inform the relevant people of the relevant facts.
I think it's a bad idea to define success as getting a lot of
attention. I would say the project is a success if it leaves a trail of
digital signatures and public keys which will be able to convince
future generations that certain standards have been met, and to which
they can add. Once the system is in place, others will be able to find
it and add to it, and it will grow monotonically: a huge sprawling
network of digital proofs that various people did indeed express
specific opinions about particular things on particular dates. The
proofs will be just as valid and comprehensible in a hundred years'
time.
I also don't think it's true that there are specific important people
whose approval is needed before the project can be called a success.
The people who don't want to participate will eventually just become
irrelevant.
The aspect of this that motivates me is the fact that it's inevitable:
we've already won, because verifying a chain of digital signatures is
objectively better than relying on the reputation of a journal. One can
be checked; the other can't. In one case, trust is required; in the
other, it isn't.
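The mechanical-checkability claim can be sketched in miniature. Real verification would use a public-key tool such as GPG; the toy below is only an illustration of the chain idea, with invented record fields and a hash standing in for a real signature: each dated review record commits to the whole history before it, so tampering with any record breaks every later link.

```python
import hashlib
import json

def record_digest(record: dict, prev_digest: str) -> str:
    """Hash a review record together with the previous link's digest,
    so each record commits to the entire history before it."""
    payload = json.dumps(record, sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Return a list of (record, digest) links, each chained to the last."""
    chain, prev = [], ""
    for rec in records:
        d = record_digest(rec, prev)
        chain.append((rec, d))
        prev = d
    return chain

def verify_chain(chain) -> bool:
    """Recompute every digest; a single altered record breaks the chain."""
    prev = ""
    for rec, claimed in chain:
        if record_digest(rec, prev) != claimed:
            return False
        prev = claimed
    return True

# Hypothetical review testimony records (names and fields are made up).
reviews = [
    {"reviewer": "A", "date": "2009-02-01", "verdict": "correct"},
    {"reviewer": "B", "date": "2009-02-03", "verdict": "correct"},
]
chain = build_chain(reviews)
print(verify_chain(chain))            # True
chain[0][0]["verdict"] = "error"      # tamper with history
print(verify_chain(chain))            # False
```

The point is that anyone, at any later date, can rerun the check with no trust in the publisher, which is exactly what a journal's reputation cannot offer.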
But yes, it would make a lot of sense for us to bring one another to
the attention of our users. It will help them to realise that this is
widespread.
> (I study artificial neural networks myself, so I also have a bit of interest
> in theoretical neuroscience, as some of the principles seem to bleed across
> the two fields.)
Definitely. Work on artificial neural networks is most relevant and
welcome.
> For the sake of refining the designs of our systems, let me point out a few
> issues that I think are significant. (Please do the same for mine.)
> Actually, I only have one really big concern, which is that I am averse to
> centralized systems. Here are my reasons:
> 1- I don't see any reason why the review and evaluation system needs to be tied
> to the same organization that provides the document storage and delivery
> system. Wouldn't it require less investment on your part if you used arXiv
> or any number of other content-delivery systems to actually hold the
> documents? If authors can publish anywhere, even on their own web server if
> they so choose, then the system is much more author-friendly.
I agree; it shouldn't make a difference where the documents are stored.
The reason we're hosting them ourselves is that we want to be able to
provide authors with the ability to say that their article has been
published in a peer-reviewed journal.
This is partly because of backward compatibility. Right now, if a young
theorist starts waving his work at a senior theorist and asking for
attention, or a job, the senior theorist will reflexively ask, "Was it
published in a peer-reviewed journal?" Also, prospective employers
expect to see a list of articles published in peer-reviewed journals on
one's CV. If the job applicant starts to explain a new peer-review
system to the prospective employer, the employer may be skeptical and
might entertain the thought that this applicant didn't publish in a
peer-reviewed journal because he is incompetent.
For those senior people who don't have the time or the patience to
learn about the latest advances in peer-review technology, the sight of
a plain old "Published in the Journal of Theoretical Neuroscience,
Vol. 4, pp. 56-59" will be comfortable and familiar.
> 2- Centralized systems always seem to fall back to some sort of
> voting/moderation mechanism. I've studied game theory too much to believe in
> that. A good place to start would be Arrow's impossibility theorem.
> Essentially, no matter how you tally the votes, I can game your system. And
> if I can game it, there are plenty of other smart people who can game it
> too. All formulas are susceptible to gaming. The only solution is to stop
> playing that game. You cannot losslessly reduce multi-dimensional data into
> one dimension, and you cannot find a compromise of weights that makes
> everyone happy. (Someone won't be happy with making compromises.)
Right, but this isn't exactly one of those cases. The people in
question aren't exactly voting. Each one is testifying that he
understands the article and that it is correct, or that he understands
it and there is an error. The reviewers can confer with each other, try
to persuade one another, accuse one another of being sock puppets, and
so on. An article will only be published when there is widespread
agreement among the reviewers that it is correct. Also, the reviewers
are mostly randomly selected, so they are less likely to have a
conflict of interest. Even if you bring several friends to the site and
ask them to lie in their testimony about your article, their combined
vote will count for at most 20% of the total if they choose your
article to review rather than choosing a random article. And if they
decide to start choosing random articles to review, to affect the
remaining 80% of the vote, then your article is no more likely to be
given to them to review than it is to someone else.
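The effect of random selection on collusion can be checked with a small simulation. The numbers below (a pool of 200 reviewers, 5 per article, 3 colluders) are illustrative, not the journal's actual parameters: when colluders take random assignments instead of targeting one article, their expected presence on any given article falls to their share of the whole pool.

```python
import random

def expected_colluder_share(pool=200, per_article=5, colluders=3,
                            trials=20_000, seed=0):
    """Estimate the average fraction of a target article's review
    slots that colluders occupy when reviewers are drawn at random.
    Reviewers 0..colluders-1 are the colluders."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        panel = rng.sample(range(pool), per_article)
        total += sum(1 for r in panel if r < colluders)
    return total / (trials * per_article)

share = expected_colluder_share()
print(round(share, 3))  # close to 3/200 = 0.015
```

With random assignment the colluders' expected weight on the target article is 3/200 = 1.5%, far below the 20% cap they could reach only by openly selecting the article themselves.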
The other aspect of the journal is that, since we are distinguishing
between "publication" and "certification", there will be a formal
mechanism for correcting mistakes. If an argument which was once
thought to be correct is found to have an error, the article can't be
unpublished, but the certification can be revoked. We'll keep a
certificate revocation list so that anybody can check to see if the
majority of the community currently believes an argument to be correct.
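At its simplest, this separation of publication from certification is an append-only record plus a revocation set. The sketch below is a guess at a minimal shape (the field names and `cert-001` identifier are invented for illustration, not the journal's actual schema): publication is permanent, and a certification lookup consults the revocation list first.

```python
from dataclasses import dataclass, field

@dataclass
class Journal:
    """Published articles are never removed; only their certification
    status can change, via an append-only revocation list."""
    published: dict = field(default_factory=dict)  # cert_id -> article title
    revoked: set = field(default_factory=set)      # withdrawn cert_ids

    def publish(self, cert_id: str, title: str):
        self.published[cert_id] = title

    def revoke(self, cert_id: str):
        if cert_id in self.published:
            self.revoked.add(cert_id)

    def is_certified(self, cert_id: str) -> bool:
        return cert_id in self.published and cert_id not in self.revoked

j = Journal()
j.publish("cert-001", "On synaptic weight normalisation")
print(j.is_certified("cert-001"))  # True
j.revoke("cert-001")               # an error is found later
print(j.is_certified("cert-001"))  # False
print("cert-001" in j.published)   # True: publication is permanent
```

The design choice mirrors the text: the historical record of what was published and reviewed never changes, while the community's current judgment is a separate, revisable layer.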
> 3- Conflicts of interest just seem to have a way of abiogenetically spawning
> in centralized systems. I can trust a centralized journal that operates in a
> well-established way (if I have to), but it's very difficult for me to trust
> a centralized system that operates in some new way because I fear that when
> they run low on money, the people in charge will often change the rules in
> attempt to save their system and justify that it's a necessary evil, or that
> it's for the greater good.
You're certainly right about that, and it has been a worry for us. The
non-profit organization that we have founded to support the journal is
bound by its articles of incorporation and bylaws to delegate to the
community the authority to decide whether something satisfies the
publication criteria, with no individual having greater influence than
any other over the publication process, and to publish the journal for
free on the internet. Should future directors of the society turn evil
and try to institute a dictatorship in which they usurp the authority
to decide what to publish in the journal, the California attorney
general can take legal action against them to make them comply with the
law.
We're also trying to keep expenses to a minimum. There are several web
hosts that will provide free hosting for non-profits. The directors of
the society are unpaid volunteers. If money really becomes tight, we
can abandon the interface with CrossRef ($375 a year). There really
isn't any reason we shouldn't be able to operate for free, and we don't
intend to allow such reasons to arise in the future.
What we're aiming for is the ability to say, "You don't have to trust
us." It's comparable to the way that we have confidence in GPG: we
don't know for sure that there aren't vulnerabilities, but it's open to
public scrutiny. If somebody suspects there's a vulnerability, they can
look at the source code and find it. Similarly, if somebody doesn't
trust that the journal is working as stated, they can look in the logs
and check the digital signatures.
> I think that Open Peer Commentary
> <http://en.wikipedia.org/wiki/Open_Peer_Commentary> is a good
> example of what not to do. It tends to de-emphasize work that is
> not in vogue. The last thing the world needs is an academic
> pop-culture with a handful of super-stars. There is also a strong
> tendency to emphasize the opinions of people who have nothing better
> to do than review papers, which tend to be the people whose reviews
> are not very good.
I definitely agree. The super-star and brand-name culture is not
conducive to professional research. My earlier discipline, theoretical
physics, is in a disgraceful mess nowadays because of the influence of
the marketplace, fame and entertainment.
> I think all these issues can be solved, but I'm not yet convinced that all
> the necessary solutions can be rolled up and packaged in a single journal. I
> also hope that my opinions are helpful rather than discouraging. I am very
> much in support of people trying new things like this. The world will be
> much better off when one of them begins to take root.
Your opinions are very helpful, and it was great to hear about
GPeerReview. It should make it easier for us to explain what we're
doing. We could explain that GPeerReview is a peer-to-peer peer-review
system (if you don't mind having it characterized that way), with the
journal as a system which aggregates, counts, signs, certifies and
generally keeps track of the reviews, signatures and public keys.
I don't really have any criticisms of GPeerReview in that sense. The
only things that it seems to lack are the things that the journal is
supposed to provide: an incentive for reviewers, compatibility with the
existing system, a controlled level of anonymity, and so on. I think of
the process of taking root as what we're doing by linking up the two
projects and maintaining compatibility. Our plan is to make the code
that runs the journal open source, so that people in other fields can
take an off-the-shelf software package and have a journal that operates
with the same review procedure without much effort.
Best,
Ruadhan.