The Five Stars of Online Journal Articles – an article evaluation framework


David Shotton

Oct 17, 2011, 1:33:05 PM
to FoRC, beyond-...@googlegroups.com
Dear folks,

I hope you'll be interested to read my new paper "The Five Stars of
Online Journal Articles – an article evaluation framework".

A summary blog post is available at
http://opencitations.wordpress.com/2011/10/17/the-five-stars-of-online-journal-articles-3/.

This points to the longer article, submitted for publication, with a
preprint in Nature Precedings (http://precedings.nature.com/documents/6542/).

Kind regards,

David

Juliana Freire

Oct 17, 2011, 3:04:35 PM
to David Shotton, beyond-...@googlegroups.com, Juliana Freire
Hi David,

This is interesting. But maybe you need another star to represent "result reproducibility".

Best,
Juliana

Phillip Lord

Oct 18, 2011, 5:38:18 AM
to beyond-...@googlegroups.com

Well, this is not something a paper or publication process can ensure,
although it can help to enable it (with metadata and accessible data,
which are on David's list).

Nor are all academic studies reproducible, nor should they be: a case
study on a patient, a longitudinal study of a cohort, ethnographic
studies, historical studies. Some people are very keen on reproducible
experiments, but reproducibility is not always possible, and it is not
always the best way to achieve things.

Enjoyed the post, David. The only change I would make is instead of "peer
reviewed", I would say "peer reviewable". The distinction is that, to my
mind, peer review can happen after the publication process as well as
before. Indeed, this may be the main purpose OF the publication, which
is the case with RFC (request for comment) documents.

Phil

--
Phillip Lord, Phone: +44 (0) 191 222 7827
Lecturer in Bioinformatics, Email: philli...@newcastle.ac.uk
School of Computing Science, http://homepages.cs.ncl.ac.uk/phillip.lord
Room 914 Claremont Tower, skype: russet_apples
Newcastle University, msn: m...@russet.org.uk
NE1 7RU twitter: phillord

cameron...@stfc.ac.uk

Oct 18, 2011, 6:19:14 AM
to beyond-...@googlegroups.com
I think I agree with Phil here. Reproducible is a value judgement that can only really be determined in the very long term. In terms of a communication, you can define and test whether something adheres to best practice for enabling reproducibility, but that's different from being reproducible. David's description of data availability and usability seems to be working towards that best-practice statement.

I imagine both Juliana and I would like to see more mention of process and code alongside the references to data, but I think that can be folded in there.
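To make that distinction concrete, here is a rough, hypothetical sketch in Python (the field and function names are my own invention, not David's framework or anyone else's proposal) of checking adherence to that kind of enabling best practice, as opposed to checking reproducibility itself:

from dataclasses import dataclass

@dataclass
class ReproducibilitySupport:
    data_available: bool = False     # underlying data deposited and accessible
    data_usable: bool = False        # open format, documented, licensed for reuse
    code_available: bool = False     # analysis code or workflow published
    methods_described: bool = False  # enough metadata to rerun or re-derive the results

def enables_reproducibility(r: ReproducibilitySupport) -> bool:
    """True if the communication follows enabling best practice; this says
    nothing about whether anyone has actually reproduced the result."""
    return all([r.data_available, r.data_usable,
                r.code_available, r.methods_described])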

> Enjoyed the post, David. The only change I would make is instead of "peer
> reviewed", I would say "peer reviewable". The distinction is that, to my
> mind, peer review can happen after the publication process as well as
> before. Indeed, this may be the main purpose OF the publication, which
> is the case with RFC (request for comment) documents.

I too had issues with this and have been thinking a little about it. It may be more the way it reads than David's intention, but this feels as though it is built around traditional review processes. I would suggest the "levels" be more like the following (a rough sketch in code follows the list):

1. Peer reviewed. Has the article been critiqued and reviewed by two or more appropriate experts? (This can occur either before or after publication.)
2. Responsive review. Has the author been able to respond, and made substantive responses to these comments (e.g. through a reply or through changes to the manuscript)?
3. Continuous review. Is the review process continuing? (i.e. are additional relevant findings or refutations linked from the article, and are the authors responsive to comments?)
4. Open peer review. Is the whole review process entirely transparent, with the record of changes and comments made available for examination by any interested party?
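Here is a minimal, hypothetical sketch in Python of those four levels, treating them as cumulative (which is how I read them); the names are mine, not part of David's paper:

from dataclasses import dataclass

@dataclass
class ReviewRecord:
    expert_reviews: int = 0         # appropriate expert critiques received
    author_responded: bool = False  # substantive replies or manuscript changes
    review_ongoing: bool = False    # later findings/refutations linked, authors responsive
    record_public: bool = False     # full review history openly examinable

def review_level(r: ReviewRecord) -> int:
    """Return the highest level (0-4) that the record satisfies,
    treating each level as presupposing the one before it."""
    level = 0
    if r.expert_reviews >= 2:
        level = 1  # 1. peer reviewed
        if r.author_responded:
            level = 2  # 2. responsive review
            if r.review_ongoing:
                level = 3  # 3. continuous review
                if r.record_public:
                    level = 4  # 4. open peer review
    return level

# e.g. review_level(ReviewRecord(expert_reviews=2, author_responded=True)) == 2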

I'd take issue with there being any significant difference between the _review_ process for PLoS ONE and other journals. The criteria for publication may be different, but that's quite a separate issue. In general terms I see no difference between "light" peer review as practiced in journals and "full review". There might be another, lighter layer called "Sanity Check", which is what is notionally done e.g. at arXiv and at Nature Precedings. So there is a difference between a random blog post and something on arXiv, for instance.

But it's definitely a good concept, and I like the way it's laid out and thought through.

Cheers

Cameron



Ivan Herman

Oct 18, 2011, 6:22:04 AM
to for...@googlegroups.com, beyond-...@googlegroups.com
As Carol put it: I love it. I will try to give some publicity to this in my own circles... Thanks.

Ivan

As a side issue: I have usually heard the term "eating your own dogfood". I agree that "drinking my own champagne" is certainly more pleasing :-)


----
Ivan Herman, W3C Semantic Web Activity Lead
Home: http://www.w3.org/People/Ivan/
mobile: +31-641044153
PGP Key: http://www.ivan-herman.net/pgpkey.html
FOAF: http://www.ivan-herman.net/foaf.rdf

Phillip Lord

Oct 18, 2011, 6:40:35 AM
to beyond-...@googlegroups.com

I would separate the notion of the review process from the quality of
the review. This is the way that, for example, Gene Ontology evidence
codes work: they describe the type of experimental evidence, but
explicitly state that this should not be interpreted as the quality of
the evidence (even if everybody does interpret it that way).
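As a purely illustrative sketch in Python (the type names below are invented for this example; they are not drawn from the Gene Ontology or from David's paper), the separation amounts to recording the process descriptively and keeping any quality judgement as an independent, optional field:

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReviewProcess(Enum):
    NONE = "none"
    SANITY_CHECK = "editorial sanity check"            # e.g. preprint-server screening
    CLOSED_PRE_PUBLICATION = "closed pre-publication"
    OPEN_POST_PUBLICATION = "open post-publication"
    RFC_STYLE = "public request for comments"

@dataclass
class ArticleReviewInfo:
    process: ReviewProcess                # descriptive: which review process was used
    quality_score: Optional[int] = None   # evaluative: assessed separately, if at all

# The process alone implies nothing about quality:
# ArticleReviewInfo(ReviewProcess.OPEN_POST_PUBLICATION) is a complete record
# even with no quality_score.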

For instance, many scientists would see open, post-publication peer
review as lightweight. Engineers used to the RFC process see the same
thing as anything but: a full release of technical specifications is
what you need to get something that will work.

The peer-review process is useful to know, but it does not necessarily
tell you anything at all about the quality of either the peer-review or
the resultant paper.

Phil

Peter Murray-Rust

Oct 18, 2011, 10:32:25 AM
to beyond-...@googlegroups.com, FoRC
On Mon, Oct 17, 2011 at 6:33 PM, David Shotton <david....@zoo.ox.ac.uk> wrote:
> Dear folks,
>
> I hope you'll be interested to read my new paper "The Five Stars of
> Online Journal Articles – an article evaluation framework".
>
> A summary blog post is available at
> http://opencitations.wordpress.com/2011/10/17/the-five-stars-of-online-journal-articles-3/.

I think there are some useful ideas here. I would also agree with some of the posters that publication is very variable across disciplines, and it will be difficult to make the stars consistent.

The term "open access" is too vague - personally I think that anything less than CC-BY is of little value in many disciplines. Unless you can formally re-use it you cannot re-use it. Having a PDF on a website, copyright the publisher with all-rights-reserved by default is "OA" for Stevan Harnad - it is unacceptable for scientists.


--
Peter Murray-Rust
Reader in Molecular Informatics
Unilever Centre, Dept. of Chemistry
University of Cambridge
CB2 1EW, UK
+44-1223-763069

Chris Maloney

Oct 18, 2011, 11:21:47 AM
to beyond-...@googlegroups.com
Hi, I can't help but note (and I'm sure it hasn't escaped the notice of others here) that this email thread is a bit of a self-referential test of the review aspect of this paper.

Does this thread meet the criteria of 1-4 in Cameron's list?

cameron...@stfc.ac.uk

Oct 20, 2011, 3:30:29 AM
to beyond-...@googlegroups.com

On 18 Oct 2011, at 16:21, Chris Maloney wrote:

> Hi, I can't help but note (and I'm sure it hasn't escaped the notice of others here) that this email thread is a bit of a self-referential test of this review aspect of this paper.

That's a good point.

> Does this thread meet the criteria of 1-4 in Cameron's list?

I would say it does, but not optimally. We've had review, and I'm sure David will respond or take the points into consideration in any revisions. The process could, at least in principle, continue beyond publication (and indeed has, if you accept the preprint as a formal publication). And this list is discoverable and readable online. The only thing not provided is a clear link from the final version back to this discussion - something I've argued for a while that publishers should provide, and an opportunity to add real continuing value: the connection into ongoing discussion around the published work.

Cheers

Cameron

