[Article] Open access: All human knowledge is there—so why can’t everybody access it?


Timothy Vollmer

Jun 10, 2016, 3:35:49 PM
to CC Staff, CC Affiliates, Open Policy Network
FYI--Glyn Moody's excellent, comprehensive history of open access. 

Timothy Vollmer

Jun 15, 2016, 5:47:06 PM
to Patrick Peiffer, CC Staff, CC Affiliates, Open Policy Network
Hi Patrick:

I agree that a complete, accurate, and maintained list would be useful for the reasons you mention. But I wonder what the best way to approach it would be. I know of a few initiatives working around this. For example: 

1. Directory of Open Access Journals (https://doaj.org/), which indexes nearly 9000 OA journals by license. But as you can see, not all of those indexed journals capture license information at the article level. I assume DOAJ is getting some sort of structured data from those journals in order to display that licensing info. 

2. NISO Access License and Indicators (http://www.niso.org/apps/group_public/download.php/14226/rp-22-2015_ALI.pdf), which has been adopted as a best practice for including metadata with scholarly articles that notes access level, license (where applicable), and even embargo period. I'm unclear about the extent to which this recommendation is being considered or implemented in article metadata.
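For a concrete sense of what this looks like once it reaches aggregators, here is a minimal sketch of reading NISO ALI-style license metadata out of a work record. The record below is hand-made (the DOI is hypothetical); its shape follows the Crossref REST API (api.crossref.org/works/{DOI}), where license and embargo information surfaces as a "license" array.

```python
# Hand-made example record; the shape mirrors the Crossref REST API,
# where NISO ALI-style license data appears as a "license" array with
# a URL, a content version, and an embargo length in days.
sample_work = {
    "message": {
        "DOI": "10.1234/example.doi",  # hypothetical DOI for illustration
        "license": [
            {
                "URL": "https://creativecommons.org/licenses/by/4.0/",
                "content-version": "vor",  # version of record
                "delay-in-days": 0,        # no embargo
            },
            {
                "URL": "https://www.example-publisher.org/tdm-license",
                "content-version": "tdm",  # text-and-data-mining terms
                "delay-in-days": 365,      # one-year embargo
            },
        ],
    }
}

def summarize_licenses(work):
    """Return (content-version, license URL, embargo days) for each license entry."""
    out = []
    for lic in work["message"].get("license", []):
        out.append((lic.get("content-version"),
                    lic.get("URL"),
                    lic.get("delay-in-days", 0)))
    return out

def is_immediately_cc_licensed(work):
    """True if the version of record carries a CC license with no embargo."""
    for version, url, delay in summarize_licenses(work):
        if version == "vor" and "creativecommons.org" in (url or "") and delay == 0:
            return True
    return False
```

A check like `is_immediately_cc_licensed` is exactly the data point Patrick is asking for per article; it only works, of course, where the publisher actually deposits this metadata.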

Does anyone know of similar projects to get a sense of how this is being handled?

thanks, 
timothy  
 


On Mon, Jun 13, 2016 at 5:32 PM, Patrick Peiffer <peiffer...@gmail.com> wrote:
Hi,
As a librarian, I believe there is a CC angle here: there is no list of articles with their licence terms, as in a table of contents showing the rights statement, embargo period, and (where applicable) CC licence for each article.

This (intentional) obscurity makes double-dipping easy: as a library subscriber I have no way to know how many articles are or become open access during the subscription period. Without this data point it's impossible to negotiate a correct price, especially for bundles of hundreds of titles.

Would such a project fit CC's strategy? Make the licensors (scientists) happy by contributing to the greater good (proper licensing -> better statistics -> saved library budgets).

Patrick
cc luxembourg




J. Andrés Delgado

Jun 16, 2016, 12:40:39 AM
to Timothy Vollmer, Patrick Peiffer, CC Staff, CC Affiliates, Open Policy Network


On 2016-06-15 at 2:46 PM, Timothy Vollmer wrote:
On Mon, Jun 13, 2016 at 5:32 PM, Patrick Peiffer <peiffer...@gmail.com> wrote:
Hi,
As a librarian, I believe there is a CC angle here: there is no list of articles with their licence terms, as in a table of contents showing the rights statement, embargo period, and (where applicable) CC licence for each article.

This (intentional) obscurity makes double-dipping easy: as a library subscriber I have no way to know how many articles are or become open access during the subscription period. Without this data point it's impossible to negotiate a correct price, especially for bundles of hundreds of titles.

Would such a project fit CC's strategy? Make the licensors (scientists) happy by contributing to the greater good (proper licensing -> better statistics -> saved library budgets).

Patrick
cc luxembourg
Hi,

I used to work for the National Secretariat of Higher Education in Ecuador, where I tried to promote open access in universities. What I found is that most universities hired the same consultant to build their repositories, so he controlled which fields were required. This metadata, as you all know, is later collected and aggregated. The problem is that they were planning to require an indication of whether an article is open access or not, but had not thought about recording the license. You can imagine how different everything would be if those who manage repositories required this simple thing, so I worked to make it happen.

As far as I can tell, this could be a strategy implemented in every country: requiring repositories, by law or regulation, to indicate the license of each article. It is up to every activist, policy-maker, and analyst in the field to find the best way to achieve this. Additionally, I would recommend talking with those who offer technical services to universities, as this could be much simpler in that case.
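The requirement Andrés describes can be expressed as a simple validation rule on harvested records. Below is a sketch, assuming oai_dc (Dublin Core over OAI-PMH) metadata, where dc:rights is the conventional place for both access flags and license URLs; the record itself is a hand-made fragment for illustration.

```python
import xml.etree.ElementTree as ET

# Dublin Core elements namespace, as used in oai_dc records.
DC = "{http://purl.org/dc/elements/1.1/}"

# Hand-made example record: one dc:rights value is only an access flag,
# the other is an actual license URL. A repository rule should require
# the latter, not just the former.
sample_record = """
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>An example article</dc:title>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/</dc:rights>
</metadata>
"""

def has_explicit_license(record_xml):
    """True if any dc:rights value is a license URL, not just an access flag."""
    root = ET.fromstring(record_xml)
    for rights in root.iter(DC + "rights"):
        text = (rights.text or "").strip()
        if text.startswith("http"):
            return True
    return False
```

A repository (or a national aggregator) could reject deposits where this check fails, which is precisely the "require the license, not just the open-access flag" policy Andrés worked toward.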

Hope this helps.

Best,
Andrés Delgado-Ron

Cameron Neylon

Jun 16, 2016, 2:55:56 AM
to J. Andrés Delgado, Timothy Vollmer, Patrick Peiffer, CC Staff, CC Affiliates, Open Policy Network
There are basically two ways to achieve this goal:

1. Gain better adherence to global metadata standards implemented at the publisher level
2. Do "annotation" over the corpus (i.e. maintain the list ourselves)

I've actually been involved in efforts on both fronts. NISO ALI and its implementation by Crossref provide a mechanism for publishers to supply article-level metadata on license information (and embargoes). And when I was at PLOS we worked with Cottage Labs to build http://howopenisit.org/

The NISO ALI recommendations are gradually being implemented by publishers, and the quality of this information is improving, if slowly. Essentially this is the same information that DOAJ aggregates, with DOAJ being somewhat more complete for small journals that use only a single license.

As you can see, the HOII website is currently down. It was a service that scraped article pages to identify license text and collected that information. It never really got the community support it needed to build the scraper sets or move to a larger scale, though the underlying design is being used in new projects by Cottage Labs. Scraping is much harder to do at scale than it seemed and is ultimately very fragile. Human annotation won't scale for this system either, in my opinion.
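To make the fragility concrete, here is a toy sketch of the scraping approach: scan an article's HTML for a Creative Commons license URL. A real service like HOII needed per-publisher scraper rules; this single regex is only an illustration, and it breaks as soon as a publisher states the license without linking it.

```python
import re

# Toy license detector: look for a Creative Commons license URL in page HTML.
# Matches licenses such as by, by-sa, by-nc, by-nc-nd, plus a version number.
CC_LICENSE_RE = re.compile(
    r"https?://creativecommons\.org/licenses/"
    r"(by(?:-nc)?(?:-nd|-sa)?)/(\d\.\d)/?",
    re.IGNORECASE,
)

def detect_cc_license(html):
    """Return (license code, version) for the first CC license URL found, else None."""
    m = CC_LICENSE_RE.search(html)
    if not m:
        return None
    return (m.group(1).lower(), m.group(2))

# Hand-made example page fragment.
sample_page = '<a rel="license" href="https://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC 4.0</a>'
```

The weakness is visible immediately: a page that says "licensed under CC BY 4.0" in plain text, with no URL, yields nothing, and every publisher's markup quirks demand their own rules, which is why community maintenance of the scraper sets mattered so much.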

Overall the quality of license data in the ecosystem is improving, but it's a slow process. Of course, where there is actual obfuscation, both methods fail. The most effective way forward, in my opinion, is to help Crossref and funders apply pressure on publishers to move faster on making good metadata available, and to ensure that open-source publishing platforms bake good information practice into their systems from the beginning.

Cheers

Cameron

--
Professor Cameron Neylon
Centre for Culture and Technology
Curtin University, Western Australia