Wired article on preprints in Biology


Plutchak, T Scott

Jul 21, 2017, 4:27:22 PM
to osi20...@googlegroups.com

I found this to be a very well done, balanced piece.  And highly relevant to our discussions here.

Molteni, M. (2017, July 8). Biology’s roiling debate over publishing research early. Wired.

https://www.wired.com/story/biologys-roiling-debate-over-publishing-preprint-research-early/

 

 

Scott

 

T Scott Plutchak | Director of Digital Data Curation Strategies

UAB | The University of Alabama at Birmingham

AB 420M

O: 205-996-4716 | M: 205-283-5538

http://orcid.org/0000-0003-4712-5233

 

uab.edu

Knowledge that will change your world

 

Schultz, Jack C.

Jul 22, 2017, 9:21:25 AM
to Plutchak, T Scott, osi20...@googlegroups.com
This article does not appear to address one of the biggest concerns about preprints: the possibility that readers can scoop the authors using their own data. Here’s the scenario (a real one, btw):

A refereed paper that depends on a data set assembled by the authors (e.g., an RNAseq data set) requires providing access to that data set. In a refereed publication that would happen when the article was published, and access to the data would be limited until then. Often a series of papers depending on that same data set will be produced over the course of a year or more. Large, well-staffed and well-equipped labs can use access to that data set to publish work that the original authors intended to publish. A good lab can produce an ‘in silico’ study in a few days using other people’s data.

Right now, what happens is that authors frequently renege on the agreement to provide access to data, even when providing a URL or other access point. (In fact, I had to repeat an entire RNAseq experiment recently because the authors of an existing one for the same system refused to answer emails requesting access.)  This violates some NSF and NIH rules, but there’s no enforcement. 

So… delaying access to data is the route anyone who fears being scooped by others usually takes at present. Any preprint system that provides access to data that can be mined is, frankly, scary.

Perhaps you all have figured this out. I’d be interested in workable solutions, for sure!


  JACK 


--
As a public and publicly-funded effort, the conversations on this list can be viewed by the public and are archived. To read this group's complete listserv policy (including disclaimer and reuse information), please visit http://osinitiative.org/osi-listservs.
---
You received this message because you are subscribed to the Google Groups "The Open Scholarship Initiative" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osi2016-25+...@googlegroups.com.
To post to this group, send email to osi20...@googlegroups.com.
Visit this group at https://groups.google.com/group/osi2016-25.
For more options, visit https://groups.google.com/d/optout.

Rick Anderson

Jul 22, 2017, 11:28:38 AM
to Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com
Interestingly, just two days ago I got a phone call from a member of our faculty in a biomedical field. She was very upset because one of her research assistants had uploaded a paper they were working on to bioRxiv without her (the faculty member’s) knowledge, and as a result some researchers at another institution had been able to scoop them and formally publish first. When she tried to get her paper deleted from bioRxiv, she couldn’t. She must have found my contact info because I’m on bioRxiv’s advisory board — she called me, quite distressed, and asked if there was anything I could do to help. But the agreement you click on when you upload a paper makes it very clear that once it’s uploaded, you can’t delete it from the server. She was angry about this, and called the system “very misleading.”

Two things struck me about this interaction: first, of course she was dead wrong — there’s nothing misleading about bioRxiv’s practices. What happened was that her assistant didn’t read the agreement before hitting the upload button.

Second, whether she’s right or wrong doesn’t really matter when it comes to bioRxiv’s relations with its user base. What matters is that she came away angry, feeling that her rights had been abused. Her experience with bioRxiv left her less likely, rather than more likely, to want to make her work openly available in the future. I doubt that she’ll contribute to bioRxiv again, absent a strong mandate to do so. Again, none of this is bioRxiv’s fault — but what matters to bioRxiv’s future isn’t being right, but being loved.

---
Rick Anderson
Assoc. Dean for Collections & Scholarly Communication
Marriott Library, University of Utah

Dr D.A. Kingsley

Jul 22, 2017, 11:54:59 AM
to Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com
Hi Jack,

The problem is the way we reward researchers. Currently the only thing that counts is publication of novel results in high-impact journals. This in itself has led to a very serious problem with the veracity of the academic record, and it is why we face a reproducibility crisis.

I have written and presented about this a fair bit:

The solution is (spoiler alert) Open Research: publication of the different aspects of the research process, and rewarding that publication. If people publish data, and that data is citable and valued, then there is an incentive to share and work openly.

But under the current arrangement it is not surprising Rick’s colleague was so upset. Their work has effectively been ‘wasted’. It is a scenario that also points to the cut-throat nature of some disciplines. No wonder our research community is stressed.

Danny

Dr Danny Kingsley

Head, Office of Scholarly Communication

Cambridge University

e: da...@cam.ac.uk

p: 01223 747 437

m: 07711 500 564

Schultz, Jack C.

Jul 22, 2017, 11:58:29 AM
to Dr D.A. Kingsley, Plutchak, T Scott, osi20...@googlegroups.com
Of course Open Research would be a solution, but ‘natural’ selection acts on the individual and precludes that kind of group agreement. It’s only when everyone – the entire enterprise – is threatened with extinction that you can get that kind of cooperation.




  JACK 

Dr D.A. Kingsley

Jul 22, 2017, 12:02:26 PM
to Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com
Like a reproducibility crisis?

Schultz, Jack C.

Jul 22, 2017, 12:06:20 PM
to Dr D.A. Kingsley, Plutchak, T Scott, osi20...@googlegroups.com
Could be, I suppose, but you still have to get everybody’s work exposed to criticism and I don’t see how to get there.
Blogs are playing that role somewhat now, pointing out flaws in published work. 
But as you say, we’re rewarded for the first big result, and I don’t see that changing soon.

The replication problem is not new – it’s just finally getting attention – and it is very, very complex. It’s likely to have a negative impact on the credibility of science long before it gets addressed in any serious way.

Barrett, Kim

Jul 22, 2017, 12:42:47 PM
to Rick Anderson, Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com
Personally, I would not upload any of my work to bioRxiv for similar reasons.  I think it will require a huge culture change before this is well accepted in the biomedical field. It may never happen.  And those colleagues in fields more disposed to this route (computational neuroscience, for example) tell me they don't typically deposit until they have the paper ready to submit, which kind of defeats the purpose in the first place.

Kim E. Barrett, Ph.D.
Distinguished Professor of Medicine
UC San Diego
Editor-in-Chief, The Journal of Physiology 

Sent from my iPad

Glenn Hampson

Jul 22, 2017, 1:02:25 PM
to Schultz, Jack C., Dr D.A. Kingsley, Plutchak, T Scott, osi20...@googlegroups.com

What’s being proposed here is something fundamental. Hard science has never in its 400-year history given boldface credit to people who run the right experiment but fail to draw the right conclusion (or any conclusion). Or to people who draw the right conclusion but don’t publish it. Or even to people who publish the right conclusion in a journal that no one reads. So moving to this kind of kinder, gentler scientific environment is truly a sea change---much more than just changing the culture of communication in academia, it means changing the nature of research itself.

 

There are some “co-ops” that might fit the bill for what Jack is describing---research networks that pool and share information (the HIV Vaccine Trials Network, for instance---HVTN---or Sage Bionetworks---both headquartered at the Hutch here in Seattle). Even in these networks, though, there is real reluctance to open up full throttle. And it’s not just concern about being scooped. In medical research, there’s also concern about misinterpreting findings, about whether data can actually be used (because it isn’t standardized), about getting proper credit and timing for data use, etc., which is one reason why you see data intermediaries for some databases who dole out permissions for accessing data to qualified applicants only.

 

Jack’s natural selection v. group agreement analogy is very apt, and it describes a tension that even Karl Marx couldn’t solve. Remove the individual incentives from the system and progress stagnates. Allow too much accumulation of wealth and you have something like today---haves and have-nots along with unnatural brakes and dams in the system. Finding the right balance between robust info generation and uptake is probably our sweet spot. Did the culture of communication paper touch on this at all? The patent paper? These will be coming out soon, and we should talk blue sky about what kinds of approaches might solve this problem---if not bioRxiv then what (or what else needs to be in place)?

Dr D.A. Kingsley

Jul 22, 2017, 1:11:53 PM
to Glenn Hampson, Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com
But there are examples already, Glenn. Some journals review the research protocol before there are any results and then, on the basis of it being a good study design, will publish the findings, be they positive or otherwise. We should be publishing null results – it would save everyone a considerable amount of time and energy.

Danny

Dr Danny Kingsley

Head, Office of Scholarly Communication

Cambridge University

e: da...@cam.ac.uk

p: 01223 747 437

m: 07711 500 564

David Wojick

Jul 22, 2017, 1:26:20 PM
to osi2016-25-googlegroups.com
The threat of being scooped is one of several reasons why the US Interagency Working Group on Digital Data opted to recommend just requiring Data Management Plans (DMPs) for grant awards. (I did staff work for the IWGDD.) This threat may be a legitimate reason not to publish one's data, although the reasonableness of the DMP is supposed to be part of the award evaluation, so this threat claim might get peer reviewed with the proposal.

Jack, you mention NIH and NSF rules that apparently require making one's data accessible. I am a bit surprised since NSF pioneered the DMP, following the IWGDD recommendation. In fact most agencies have now adopted the DMP requirement for the data part of their Public Access plans. Can you point to these rules?

David
http://insidepublicaccess.com/


Glenn Hampson

Jul 22, 2017, 1:39:51 PM
to Dr D.A. Kingsley, Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com

Hi Danny,

 

Interesting model, but I’m not clear how that solves the whole question---publishing null findings would be great, of course. Also, without knowing more about the specific journals you’re referring to here, I’d be pleasantly surprised if protocols of large, well-funded bio studies are being published (I’ve seen lists of 1-2 page social sciences protocols). The clinical trial protocols I’ve reviewed are mostly 100-page tomes so complex they make journal articles look like comic books by comparison; these studies themselves take years to pitch, fund, design and get rolling---sometimes whole careers center around one study. How does all this investment get unlocked (and what is the science case here---is there a need and demand for this amongst researchers?) without removing the incentives, rewards, and protections for investing?

 

Cheers,

 

Glenn

Schultz, Jack C.

Jul 22, 2017, 1:43:08 PM
to David Wojick, osi2016-25-googlegroups.com
For Engineering, look here:  https://www.nsf.gov/bfa/dias/policy/dmp.jsp
The Bio requirements are not stated as directly or strongly as engineering’s. But generally grantees are supposed to understand that data gathered with public funds must be made available in a timely way – whatever that is.

The NSF DMP requirements include reporting when and how data gathered during the project will be “disseminated and made available” in considerable detail, including "identifier or accession numbers for data sets”. Since this needs to be reported annually, the implication is that data sharing should be happening before the end of the grant period.

In reality, DMPs are routinely ignored by PIs, reviewers, panels and (as far as I can tell) NSF officials. In forty years of NSF support I have never heard or seen a whisper of concern about data management. It certainly plays no role in the panel evaluations I participate in. I’ve seen funded proposals with ridiculous DMPs. 

 

Another important detail (alluded to in my previous message) is that NIH (and maybe NSF) requires making data available once a paper is published. Does a preprint count for that?  And the same data can be mined by the authors – or, if released, by others – for years after that first publication. A good data set is valuable for a long time.


  JACK 


Dr D.A. Kingsley

Jul 22, 2017, 1:43:34 PM
to Glenn Hampson, Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com
Hi Glenn,

From:

Rewarding study inception

In his presentation about HARKing (Hypothesising After the Results are Known) at FORCE2016, Eric Turner, Associate Professor at OHSU, suggested that what matters is the scientific question and methodological rigour. We should be emphasising not study completion but study inception, before we can be biased by the results.  It is already a requirement to post results of industry-sponsored research in ClinicalTrials.gov – a registry and results database of publicly and privately supported clinical studies of human participants conducted around the world. Turner argues we should be using it to see the existence of studies.  He suggested reviews of protocols should happen without the results (but not include the methods section, because this is written after the results are known).

There are some attempts to do this already. In 2013 Registered Reports was launched: “The philosophy of this approach is as old as the scientific method itself: If our aim is to advance knowledge then editorial decisions must be based on the rigour of the experimental design and likely replicability of the findings – and never on how the results looked in the end.” The proposal and process are described here. The guidelines for reviewers and authors are here, including the requirement to “upload their raw data and laboratory log to a free and publicly accessible file-sharing service.”

This approach has been met with praise by a group of scientists with positions on more than 100 journal editorial boards, who are “calling for all empirical journals in the life sciences – including those journals that we serve – to offer pre-registered articles at the earliest opportunity”. The signatories noted “The aim here isn’t to punish the academic community for playing the game that we created; rather, we seek to change the rules of the game itself.” And that really is the crux of the argument. We need to move away from the one point of reward.

Dr D.A. Kingsley

Jul 22, 2017, 1:46:10 PM
to Schultz, Jack C., David Wojick, osi2016-25-googlegroups.com
Again from:


Getting data out there

There is definite movement towards opening research. In the UK there is now a requirement from most funders that the data underpinning research publications are made available. Down under, the Research Data Australia project is a register of data from over 100 institutions, providing a single point to search, find and reuse data. The European Union has an Open Data Portal.

Resistance to sharing data amongst the research community is often due to the idea that if data is released with the first publication then there is a risk that the researcher will be ‘scooped’ before they can get those all-important journal articles out. In response to this query during a discussion with the EPSRC it was pointed out that the RCUK Common Principles state that those who undertake Research Council funded work may be entitled to a limited period of privileged use of the data they have collected to enable them to publish the results of their research. However, the length of this period varies by research discipline.

If the publication of data itself were rewarded as a ‘research output’ (which of course is what it is), then the issue of being scooped becomes moot. There have been small steps towards this goal, such as a standard method of citing data.

A new publication option is Sciencematters, which allows researchers to submit observations which are subjected to triple-blind peer review, so that the data is evaluated solely on its merits, rather than on the researcher’s name or organisation. As they indicate “Standard data, orphan data, negative data, confirmatory data and contradictory data are all published. What emerges is an honest view of the science that is done, rather than just the science that sells a story”.

Despite the benefits of having data available there are some vocal objectors to the idea of sharing data. In January this year a scathing editorial in the New England Journal of Medicine suggested that researchers who used other people’s data were ‘research parasites’. Unsurprisingly this position raised a small storm of protest (an example is here). This was so sustained that four days later a clarification was issued, which did not include the word ‘parasites’.

Evaluating & rewarding data

Ironically, one benefit of sharing data could be an improvement to the quality of the data itself. A 2011 study into why some researchers were reluctant to share their data found this to be associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Professor Marcus Munafo, in his presentation at the Research Libraries UK conference held earlier this year, suggested that we need to introduce quality control methods implicitly into our daily practice. Open data is a very good step in that direction. There is evidence that researchers who know their data is going to be made open are more thorough in their checking of it. Maybe it is time for an update in the way we do science – we have statistical software that can run hundreds of analyses, and we can do text and data mining of lots of papers. We need to build in new processes and systems that refine science and think about new ways of rewarding science.

So should researchers be rewarded simply for making their data available? Probably not; some kind of evaluation is necessary. In a public discussion about data sharing held at Cambridge University last year, there was the suggestion that rather than having formal peer review of data, it would be better to have an evaluation structure based on the re-use of data – for example, valuing data which was downloadable, well-labelled and re-usable.

Need to publish null results

Generally, this series looking at the case for Open Research has argued that the big problem is the only thing that ‘counts’ is publication in high impact journals. So what happens to all the results that don’t ‘find’ anything?

Most null results are never published: a 2014 study found that of 221 sociological studies conducted between 2002 and 2012, only 48% of the completed studies had been published. This is a problem because not only is the scientific record inaccurate, it means the publication bias “may cause others to waste time repeating the work, or conceal failed attempts to replicate published research”.

But it is not just the academic reward system that is preventing the widespread publication of null results – the interference of commercial interests on the publication record is another factor. A recent study looked into the issue of publication agreements – and whether a research group had signed one prior to conducting randomised clinical trials for a commercial entity. The research found that  70% of protocols mentioned an agreement on publication rights between industry and academic investigators; in 86% of those agreements, industry retained the right to disapprove or at least review manuscripts before publication. Even more concerning was  that journal articles seldom report on publication agreements, and, if they do, statements can be discrepant with the trial protocol.

There are serious issues with the research record due to selected results and selected publication which would be ameliorated by the requirement to publish all results – including null results.

There are some attempts to address this issue. Since June 2002 the Journal of Articles in Support of the Null Hypothesis has been published bi-annually. The World Health Organisation has a Statement on the Public Disclosure of Clinical Trial Results, saying: “Negative and inconclusive as well as positive results must be published or otherwise made publicly available”. A project launched in February last year by PLOS ONE is a collection focusing on negative, null and inconclusive results. The Missing Pieces collection had 20 articles in it as of today.

In January this year there were reports that a group of ten editors of management, organisational behaviour and work psychology research had pledged they would publish the results of well-conceived, designed, and conducted research even if the result was null.  The way this will work is that the paper is first presented without results or discussion, and is assessed on theory, methodology, measurement information, and analysis plan.


Dr Danny Kingsley

Head, Office of Scholarly Communication

Cambridge University

e: da...@cam.ac.uk

p: 01223 747 437

m: 07711 500 564


Schultz, Jack C.

Jul 22, 2017, 1:49:11 PM
to Dr D.A. Kingsley, Glenn Hampson, Plutchak, T Scott, osi20...@googlegroups.com
From the journal Nature:

"Data availability

Please provide a Data Availability statement in the Methods section under “Data Availability”; detailed guidance can be found in our data availability and data citations policy. Certain data types must be deposited in an appropriate public structured data depository (details are available here.), and the accession number(s) provided in the manuscript. Full access is required at publication. Should full access to data be required for peer review, authors must provide it.”

"Extended data       To improve its readability and navigability online, all data integral to the work being described should be included in up to ten multi-panel Extended Data display items similar to regular printed figures and tables. These will not appear in print but are included in the online versions of the published article. If the main finding includes a complex process we encourage the inclusion of a schematic to aid readers unfamiliar with the topic. For initial submission you may include Extended Data items as regular display items in the body of the manuscript or as Supplementary Information. But if accepted for publication, all Extended Data will need to be properly formatted.”

I believe Nature has added an additional review aimed specifically at analytical, mainly statistical, methods, in response to replicability issues.

David Wojick

Jul 22, 2017, 2:02:58 PM
to osi2016-25-googlegroups.com
They all seem pretty flexible, which is as it should be.

David
http://insidepublicaccess.com/

Anthony Watkinson

Jul 22, 2017, 2:08:02 PM
to Barrett, Kim, Rick Anderson, Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com

Several of the early career researchers I am interviewing do tell me that they deposit on bioRxiv, and some groups have made it compulsory, but yes, Kim, what I had not noticed before is that deposit happens at the same time as submission.

 

I have also seen the ATGU publication policy (see https://www.atgu.mgh.harvard.edu/PubPolicy) and I quote:

 

We commit to depositing all manuscripts from our labs on an open access preprint server (such as bioRxiv.org) no later than the time of submission, in order to ensure rapid evaluation and use of the findings by the broadest possible community.  We further commit to submitting manuscripts only to journals that permit, either directly or through submission to free repositories (such as PubMed Central), free and unrestricted access to the final published work.

 

The bold is mine

 

Anthony

Jo De

Jul 22, 2017, 2:53:56 PM
to Anthony Watkinson, Barrett, Kim, Rick Anderson, Schultz, Jack C., Plutchak, T Scott, The Open Scholarship Initiative
OK, something published in bioRxiv that became "scooped" would still serve as prior art and effectively block patent rights for the "scooper", unless the scooper had filed a patent application prior to the date of the bioRxiv publication. But a full-blown patent application at hand would seem unlikely in a situation where the pre-publication data is actually functioning as an extra control for the benefit of the scooper, in order to advance the scooper's own formal publication. This situation rests on a distorted perspective fueled by a culture of aggressive competition, bad citation etiquette, and people getting away with it. If one of our intentions for open is to advance the pace of discovery, system abuses should be noted and rectified.
Joann

Glenn Hampson

Jul 22, 2017, 4:15:08 PM
to Dr D.A. Kingsley, Schultz, Jack C., Plutchak, T Scott, osi20...@googlegroups.com

Thanks Danny---for this link and the one on data. I note that the supporters of the protocol registration idea seem to be concentrated in psych and addiction studies---maybe it would work really well for these fields (especially psych, which as we recall has been slapped with power and replicability criticisms of late). I can’t really see their review pipeline working for clinical trials in medicine, but maybe the right approach (as was suggested by John Dove in his OSI2017 fast pitch) is to support each discipline in figuring out what works best for its particular needs.

David Wojick

Jul 22, 2017, 5:41:16 PM
to osi20...@googlegroups.com
I have been very puzzled by the concept of replication in psychology. If you run the second study on the same people, they may well change the result simply because they were in the first study. They are in that sense different people after the first study than they were before it, so results might be expected to vary.

If you run it on a second group, this is a second very small sample of a huge population, so sampling theory says the results should be different. People vary greatly in many properties, especially psychological ones.

Thus replication in psych seems like a confused concept. There may be no simple laws of human behavior of the kind that would make psych studies replicable. If so, then the failure of replication is not a problem, just a normal feature.
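A minimal sketch of the sampling point above; the sample size, effect size, and rough significance cutoff are made-up assumptions, not anyone's real numbers. It just draws two small samples from the same population and shows how far apart the estimates can land:

```python
# Illustrative sketch only: two small samples from the same population.
# Sample size, effect size, and the rough t cutoff are assumptions.
import random
import statistics

random.seed(1)

def run_study(n=30, true_effect=0.3):
    """Draw one small sample from a 'huge population' with a modest true effect."""
    scores = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(scores)
    se = statistics.stdev(scores) / n ** 0.5
    return mean, mean / se  # observed effect and a rough t-like statistic

original = run_study()
replication = run_study()
print("original   : mean=%.2f, t=%.2f" % original)
print("replication: mean=%.2f, t=%.2f" % replication)
# With n=30 per study the two estimates routinely differ, and one run can clear
# a conventional significance threshold while the other does not, even though
# the underlying population and effect are identical in both runs.
```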

David

Glenn Hampson

Jul 22, 2017, 6:02:01 PM
to osi20...@googlegroups.com

To its great credit, psych has worked hard over the years to become more rigorous (some would argue that they’ve just layered stats on top of still-flimsy study designs, but there has also been real growth in study design). However, some areas of psych may be more susceptible to being built on a house of cards because this rigor hasn’t always existed. But at the risk of dismissing the entire field (as natural scientists can do with the social sciences---remember, we’ve been down this road before with the pecking order in science, with mathematicians being the purest of all), there are solid foundations in this field (cognitive, behavioral, etc.) that are increasingly being tied in with bio fields (neurophys, etc.). My cognitive psych grad prof from back in the 1980s---Elizabeth Loftus---is an NAS fellow and AAAS board member---not the kinds of distinctions that get passed out lightly in science 😊

Schultz, Jack C.

Jul 22, 2017, 6:15:32 PM
to David Wojick, osi20...@googlegroups.com
Here are several issues to consider.

Each experiment is itself only a sample. Although scientists THINK they control conditions so well that each experiment need not be repeated, that’s not really true, as you point out below. There’s variation within the study, as well as among repeated studies (if they were done). So for thorough statistical confidence the experiment should be repeated often enough to establish statistical significance across runs. A single publication may report doing the same experiment twice, but two runs don’t establish that. Many claim to have done more but only show “a representative result”.


You could never (or rarely) get a confirming repeat result published anywhere.  Even then, if you think about it, a sample size of 2 (repeating the original study once) probably isn't all that useful statistically either. 
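A rough sketch of why an experiment plus a single repeat is weak evidence either way; the sample size, effect size, and cutoff below are made-up assumptions, not numbers from anyone's study:

```python
# Illustrative sketch only: how often a small experiment, and a single repeat of
# it, clears a conventional significance cutoff when a real effect exists.
import random

random.seed(2)

def experiment_significant(n=12, true_effect=0.5, t_cutoff=2.0):
    """One run of a small experiment; 'significant' if a rough t statistic
    for the sample mean exceeds the cutoff."""
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean / (var / n) ** 0.5 > t_cutoff

trials = 10_000
single = sum(experiment_significant() for _ in range(trials)) / trials
both = sum(experiment_significant() and experiment_significant()
           for _ in range(trials)) / trials
print(f"one small experiment 'works' : {single:.0%} of the time")
print(f"both of two repeats 'work'   : {both:.0%} of the time")
# Even with a genuine effect, a single repeat often fails to 'confirm' the first
# result, so an experiment-plus-one-replicate is far from decisive.
```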


Most of the reports of unreplicability I’ve seen may arise from failure to reproduce all the conditions, treatments, methods, etc. etc. precisely as originally done. The sin here is that people don’t publish their methods in enough detail to allow replication, sometimes on purpose. This is complicated. The same precise experiment with mice, for example, could get dramatically different results in different rooms (something that has been shown recently). Adequate controls are rarely done.


Cost limits most of these activities. If it costs $150,000 to get a ‘good’ result (often much more), no one can really afford to spend that much repeating someone else’s work, unless you think you can refute it. And funding agencies won’t fund that kind of repeated work. They are satisfied with “internal” replication. 


Over the years, my lab group has found many, many refereed papers during our weekly journal clubs that we decided shouldn’t have been accepted. I’m guessing that half of what we think we know in the biological sciences is incorrect or unsubstantiated.


  JACK 

Schultz, Jack C.

Jul 22, 2017, 6:17:49 PM
to Glenn Hampson, osi20...@googlegroups.com
Three comments.
1. This is a real problem.
2. It’s not just psych’s problem, although the enormous variation in human behavior makes it conspicuous there.
3. I would never use membership in the NAS as an indication of quality. 




  JACK 

Anthony Watkinson

Jul 22, 2017, 6:22:46 PM
to Jo De, Barrett, Kim, Rick Anderson, Schultz, Jack C., Plutchak, T Scott, The Open Scholarship Initiative

As I understand it, traditional preprint culture enables feedback before submission, at least in physics; it is the next step after conference presentations. I agree that it seems odd to leave posting a preprint until submission, other than to establish prior art. And in the life sciences, are most researchers worried about patents?

Anthony

 


Schultz, Jack C.

Jul 22, 2017, 6:26:54 PM
to Anthony Watkinson, Jo De, Barrett, Kim, Rick Anderson, Plutchak, T Scott, The Open Scholarship Initiative
There’s plenty of patenting in the life sciences.  I’m pretty sure IP that is headed for a patent can’t be displayed anyplace.

But the issue for researchers is the same irrespective of patent activity.  

IMHO the preprint system would establish first discovery status and invite comments. But it comes with the risk of having others use your data to scoop you.




  JACK 

Fiore, Steve

Jul 22, 2017, 6:35:14 PM
to osi20...@googlegroups.com

There has been quite a bit written about replication/reproducibility in the last few years.  People often make the mistake of thinking it is a social science problem, but that is simply because psychology has taken it head on.  If you want some background on part of the movement, some of the leaders recently published the article "A manifesto for reproducible science" -- https://www.nature.com/articles/s41562-016-0021.  But here is one of the original articles to wake people up to this problem ("Why most published research findings are false" -- https://www.ncbi.nlm.nih.gov/pubmed/16060722).


More recently, DARPA has gotten involved (see, for example, "Accelerating Discovery with New Tools and Methods for Next Generation Social Science" -- https://www.darpa.mil/news-events/2016-03-04).  For those interested in more on this, below I've cut-and-pasted a thread from one of my science policy lists where I have links to various distinct initiatives dealing with this, set in the context of what DARPA is trying to accomplish.  This is a movement gaining much momentum, and it is related to many of the open scholarship issues associated with data sharing, etc. (this just came up at a conference I attended, where a scientist noted that, with his grant, he "had" to upload the data).


Best,

Steve


--------

Stephen M. Fiore, Ph.D.

Professor, Cognitive Sciences, Department of Philosophy (philosophy.cah.ucf.edu/staff.php?id=134)

Director, Cognitive Sciences Laboratory, Institute for Simulation & Training (http://csl.ist.ucf.edu/)

University of Central Florida

sfi...@ist.ucf.edu




From: Fiore, Steve
Sent: Thursday, September 8, 2016 3:19 PM
To: SCI...@LISTSERV.NSF.GOV; SCIT...@list.nih.gov
Subject: DARPA RFI "Forensic Social Science Supercolliders (FS3)"
 

Hi Everyone - there is a new Request for Information out from DARPA.  It's titled "Forensic Social Science Supercolliders (FS3)" and can be found here (http://bit.ly/2csENve).  I'm sharing it with the list for a couple of reasons.


First, it is generally an interesting topic, in that it is about the development of new approaches for 'doing' social science research.  Historically, it draws from the idea of "strong inference", but fits within the current context of the push for open science and improving the way we do and verify science. This includes creating an infrastructure for open science, reproducibility, and registered replications, as is being promoted by the Center for Open Science (see https://cos.io/).  Relatedly, it fits with what is being done conceptually with the idea of "metascience" (see "Metascience could rescue the ‘replication crisis’" -- http://go.nature.com/2cJbWGr), and methodologically with Stanford's "Meta-Research Innovation Center" (http://metrics.stanford.edu/). I've cut-and-pasted some of the relevant text from the RFI below, but, if you're interested in this topic, I encourage you to read it all.


Second, I thought folks might also be interested from a science of science policy perspective, in that this appears to be an evolution of an RFI out last year.  That one was titled "New Capabilities for Experimental Falsifiability in Social, Behavioral and Economic Sciences" (see http://bit.ly/2bVY9Gg).  The PM had a very engaging workshop on that in the Fall of 2015 and the ideas around that topic were further developed and refined.  I find this new framing of "forensic social science", along with the idea of developing simulation capabilities as an analog to supercolliders, to be quite intriguing.  So you can compare the two RFIs and see how it has evolved into this newer refinement/request.


Enjoy,

Steve Fiore



Request for Information (RFI) DARPA-SN-16-70 

Forensic Social Science Supercolliders (FS3)

The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is requesting information on new ideas, approaches, and capabilities for developing interactive simulations that can be used to calibrate the validity of different social science research methods and tools in drawing “strong inference” about causal mechanisms that can lead to emergent complex behaviors in human social systems.


-------------------------------------

Today, social scientists are increasingly incorporating simulation as a research method, particularly agent-based models (ABMs) and system-level simulations (which may include large-scale distributed online games). Where simulation is used currently, however, it generally appears as part of a larger collection of social science research methods, usually as an early exploratory mechanism to help identify or refine hypotheses that can then guide further data collection and observation in the “real world.” Hence simulations currently also suffer from the above limitations due to a lack of ground truth. Rather than simply being another research method, simulations – if advanced to a sufficient level of sophistication – might provide new capabilities for calibrating the inferential validity of social science research methods in the first place. Indeed, DARPA hypothesizes that ABMs, system simulations, and games – rather than being complementary research tools – might provide initial capabilities as a kind of “social supercollider” in which other methods and tools can themselves be tested. If successful, such simulations could provide testbeds in which social science research methods can be forensically evaluated for their capabilities and their limitations to correctly identify and characterize different causal mechanisms and dynamics that give rise to observed complex behaviors and systems. Simulations might also engender new opportunities to test, calibrate, and explore a wide range of combinations of existing methods and tools, and potentially enable the discovery of novel hybrid social science research methods with unique capabilities for correctly inferring causality.

DARPA is referring to this potential capability as “forensic” for two reasons. First, if successful, the “supercollider” could support simulations that, while artificial, allow for complex social behavior to emerge from relatively simple first principles, where these first principles are known because they were coded into the simulation from the beginning. Provided these simulations are sufficiently sophisticated, they will enable the testing and evaluation of inferences derived from different social science research methods against ground truth with precision and certainty almost never available in the “real world”. Second, this kind of social science supercollider could allow simulations of sudden, disruptive changes in key parameters or system behaviors (phase transitions, tipping points, etc.) in order to further calibrate the accuracy of different methods to correctly infer “what really happened” to cause the observed simulated behaviors.
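A minimal sketch of the "known ground truth" idea in the RFI, using a deliberately toy mechanism; the variable names, numbers, and the naive regression being "forensically" tested are all assumptions for illustration, not anything specified by DARPA:

```python
# Illustrative sketch only: evaluating an inference method against a simulation
# whose causal ground truth is known because we coded it in. The toy mechanism
# and numbers are assumptions, not anything from the RFI.
import random

random.seed(0)
N = 5000
TRUE_EFFECT = 0.0  # ground truth: X has no causal effect on Y

# The coded-in "first principle": a hidden common cause Z drives both X and Y.
Z = [random.gauss(0, 1) for _ in range(N)]
X = [z + random.gauss(0, 1) for z in Z]
Y = [TRUE_EFFECT * x + 2.0 * z + random.gauss(0, 1) for x, z in zip(X, Z)]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sum((x - mx) ** 2 for x in xs)

# Method under test: a naive bivariate regression of Y on X.
naive = ols_slope(X, Y)
print(f"true causal effect of X on Y : {TRUE_EFFECT}")
print(f"naive regression estimate    : {naive:.2f}  (confounded by the hidden Z)")
# Because the generating mechanism is known, the testbed can score the method
# exactly: here it wrongly 'finds' a strong effect where none exists.
```

Scaled up to richer agent-based simulations, the same move would let different inference methods be scored on whether they recover "what really happened."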




From: Fiore, Steve
Sent: Thursday, September 3, 2015 8:44 PM
To: SCI...@LISTSERV.NSF.GOV
Subject: RFI out from DARPA on… "New Capabilities for Experimental Falsifiability in Social, Behavioral and Economic Sciences"
 

Hi Everyone - I just got notice about an interesting development.  It looks like DARPA is considering funding empirical work with a Popperian philosophy of science twist.  So I thought some folks on this list might be interested in contributing to this RFI as it could really benefit from expert input.


Best,

Stephen M. Fiore, Ph.D.

President, INGRoup

 

INGRoup 11th Annual Conference

July 14-16, 2016 - Helsinki, Finland



-----------------------------------------------------


RFI: New Capabilities for Experimental Falsifiability in Social, Behavioral and Economic Sciences

"The Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) is requesting information on and suggestions for research to develop novel methods, including new tools, platforms, techniques, and/or approaches, that could contribute to the development of unprecedented capabilities for testing the experimental falsifiability of (i.e., disconfirming) models, theories, and hypotheses in social, behavioral and economic (SBE) sciences."

Here is a link to the RFI:  https://www.fbo.gov/index?s=opportunity&mode=form&id=c5f4210eb10ece263d3510f99472badc&tab=core&_cview=0




--------

Stephen M. Fiore, Ph.D.

Professor, Cognitive Sciences, Department of Philosophy (philosophy.cah.ucf.edu/staff.php?id=134)

Director, Cognitive Sciences Laboratory, Institute for Simulation & Training (http://csl.ist.ucf.edu/)

University of Central Florida

sfi...@ist.ucf.edu





Anthony Watkinson

Jul 23, 2017, 6:29:36 AM
to Schultz, Jack C., Jo De, Barrett, Kim, Rick Anderson, Plutchak, T Scott, The Open Scholarship Initiative

Thanks, Jack, for the correction. I agree with your other sentiments. I shall share early career researcher attitudes (2016 and 2017, US and UK) with this group when I have done the work. They are encouraging, because the majority are keen on sharing, BUT the ones in positions where they have to take responsibility, while perhaps still keen on sharing, also accept reasons for disclosing less than they would wish to, all being equal. I think we all reckon that it is OUR job to find and encourage ways which will enable researchers to do what they want to do without penalties.

Anthony

David Wojick

Jul 23, 2017, 7:12:38 AM
to osi20...@googlegroups.com
I am not suggesting that there is anything wrong with psych, quite the contrary. (I am a cognitive scientist, which is close by.) My point is that a blanket concept of replication that is based on something like bench physics is wrong. That is, a concept of replication may be based on a concept of science that is wrong or inapplicable. A lot of what I have seen on replication falls into this category. 

In many cases, in many sciences, replication is intrinsically impossible, but this is not considered, much less recognized. In my view the replication craze is taking a useful feature of science to unrealistically extreme lengths. (I am reminded of Popper's falsification theory, which did the same thing and which many people have mistakenly embraced.)

David

Schultz, Jack C.

Jul 23, 2017, 7:31:29 AM
to Anthony Watkinson, Jo De, Barrett, Kim, Rick Anderson, Plutchak, T Scott, The Open Scholarship Initiative
No CORRECTION intended, really; more of an expansion.
Thanks for the kind words. I don’t know what you’ve found, but I’m currently between ‘opportunities’, having just retired for the second time, this time as Director of an interdisciplinary life science center, where I learned about ev

Schultz, Jack C.

Jul 23, 2017, 7:39:50 AM
to Anthony Watkinson, Jo De, Barrett, Kim, Rick Anderson, Plutchak, T Scott, The Open Scholarship Initiative
Sorry all- leaned on a ‘send’ key.

Anyway, I’ve had the opportunity to learn about these issues with NIH and other funding sources from colleagues I ‘led’ over the past 10 years to add to my own 40 or so years with NSF funding and service on their panels. In September I’ll be taking on another research leadership job and all of these issues will again be in my corner. 

My contributions here are as an experienced practitioner.  I think it’s valuable to hear from as many people actually confronting these issues as we can.  I look forward to what others can bring forward from colleagues as well. 

Fascinating conversation. 


  JACK 


Glenn Hampson

unread,
Jul 23, 2017, 1:47:43 PM7/23/17
to The Open Scholarship Initiative

Thanks for jumping in Jack (jumping Jack 😊). It’s good to hear your voice again---I, for one, have missed hearing your insights. You were such a big contributor to these conversations in 2016. OSI has worked hard to attract more scientists and researchers to this effort (quite a few folks on this list have PhDs, of course, but aren’t working in labs at the moment). When you and other active scientists are able to contribute---you, Kim Barrett, Joann Delenick, Lorena Barba, Jeff Tsao and others---it really adds the missing ingredient we need to keep this effort fully informed and focused on figuring out real solutions.

 

Best,

 

Glenn

Jo De

unread,
Jul 23, 2017, 9:32:53 PM7/23/17
to Glenn Hampson, The Open Scholarship Initiative

Hello all:

I would like to take this opportunity to further illustrate the issues of priority and publication discussed here as they relate to a class of biological discoveries that may also be eligible for a patent (which provides for protected commercial implementation of a scientific discovery, and is of course highly beneficial to society at large). My point is that there are a number of ways priority can be assessed, and also to question to what extent “scooping” is actually just a power play rather than a legitimate indicator of meaningful scientific priority.

The example I am providing is from a set of published documents I have handled in the course of my work as a biological sequences curator.  (I do not know any of the labs or authors involved.) This work starts out as a publication in the journal Cell.

https://doi.org/10.1016/j.cell.2005.01.007

and then results in two follow-up documents: a US patent application (US20060241185A1) filed on February 8, 2006, and a granted US patent, US7855228, entitled Antibiotics targeting MreB. Both patent documents can be obtained from the USPTO web-based patent and application searches and from Google Patents, among other resources online.

If you focus on Figure 2 of the Cell paper, you will quickly note that the identical figure appears as Figure 3 in both the patent application and the granted patent.

According to the USPTO, the priority date of the patentable discovery (invention) is February 10, 2005, which is not the print publication date of that issue of Cell (shown as February 11, 2005) but the online availability date of the issue, shown elsewhere on the page. This material also illustrates that (online) published material remains eligible for patenting for up to one year after its public dissemination. That is clearly documented here by the patent application filing date of February 8, 2006 (cutting it a little close, but nonetheless completely eligible for patent protection as a novel invention timely filed).

So publication (online) does not negate patentability for the inventor, so long as the inventor files an application within a year. Academic publication may also facilitate an open patentability assessment, as it clearly establishes prior art that is then judged for novelty by experienced professional patent examiners rather than by colleagues or competitors who may serve on academic journal editorial boards.
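[Editor's note: to make the one-year window concrete, here is a minimal sketch in Python of the date arithmetic behind this example. The helper name is invented, the dates come from the example above, and the legal definition of the grace period is of course a matter for patent counsel, not a script.]

from datetime import date, timedelta

def within_grace_period(disclosed: date, filed: date, grace_days: int = 365) -> bool:
    """Rough check: was the application filed within roughly one year of the
    first public disclosure? (Illustrative only; real grace-period rules are
    a legal question.)"""
    return disclosed <= filed <= disclosed + timedelta(days=grace_days)

# Dates from the example above: online availability of the Cell paper
# versus the filing date of application US20060241185A1.
online_disclosure = date(2005, 2, 10)
application_filed = date(2006, 2, 8)

print(within_grace_period(online_disclosure, application_filed))    # True
print(online_disclosure + timedelta(days=365) - application_filed)  # 2 days, 0:00:00 -- the margin left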

Joann


Schultz, Jack C.

unread,
Jul 23, 2017, 9:40:58 PM7/23/17
to Jo De, Glenn Hampson, The Open Scholarship Initiative
Great! Very helpful! Thanks!




  JACK 

From: <osi20...@googlegroups.com> on behalf of Jo De <dnn...@gmail.com>
Date: Sunday, July 23, 2017 at 9:32 PM
To: Glenn Hampson <gham...@nationalscience.org>

Glenn Hampson

unread,
Jul 23, 2017, 10:55:56 PM7/23/17
to Schultz, Jack C., Jo De, The Open Scholarship Initiative

If I’m understanding you correctly Joann, you’re saying that open data doesn’t necessarily mean you’ll get scooped---that you have a year to file a patent based on this data (even if someone else uses it first). Is this correct? Sorry---out of my depth on this. What if there isn’t a patent involved? What if you’re just getting scooped by someone else who’s writing a paper, and they get credit for a non-patentable discovery? History has generally handed the victory to the person who assembles the puzzle, not the person who creates the puzzle pieces. And going back to the patent issue, are there any mechanisms in place to warn inventors that their data has been used in a filing, so they can respond before the one-year deadline passes?

 

Thank you,

 

Glenn


Jo De

unread,
Jul 24, 2017, 1:00:35 AM7/24/17
to Glenn Hampson, Schultz, Jack C., The Open Scholarship Initiative
Hi Brad, first-to-file rather than first-to-invent, a relatively recent change in US law, has no relevance to my example. In what way do you believe it does? Glenn, the inventor's name has to be on the patent application, so even if the "scooper" repeats the "scoopee's" experiment in their own lab for their patent application, the scoopee's work is prior art and will negate that patent. What does it mean "to get credit"? To me it doesn't mean a bunch of buddies influencing an article's acceptance, whether that be a formal journal article or a pre-peer-reviewed report. I'd much rather look to somebody whose paid job it is to call prior art.  Joann

Fenwick, Brad (ELS)

unread,
Jul 24, 2017, 3:10:14 AM7/24/17
to Jo De, Glenn Hampson, Schultz, Jack C., The Open Scholarship Initiative
Yes, I know all about patent law.  I agree...

Sent from my iPhone

On Jul 24, 2017, at 12:00 AM, Jo De <dnn...@gmail.com> wrote:


Jo De

unread,
Jul 24, 2017, 6:36:13 AM7/24/17
to Fenwick, Brad (ELS), Glenn Hampson, Schultz, Jack C., The Open Scholarship Initiative
Brad, just like the sciences it may be based upon, patent law is evolving, if only because it has to deal regularly and fairly with the highest titres of novelty.
Joann

Alexander Garcia Castro

unread,
Jul 24, 2017, 9:24:48 AM7/24/17
to David Wojick, osi20...@googlegroups.com
David, what do you think, in terms of replication and reproducibility, about the Milgram experiment? It has been widely replicated and reproduced by many researchers, some changing experimental variables and some keeping the design identical to the original experiment. 

I agree that not everything can be reproduced or replicated, but that should not prevent us from aiming to preserve the digital continuum of the experimental record. A link has erroneously been established between openness and data availability on the one hand and reproducibility and replicability on the other. Having data available, even data with very good metadata, is not directly related to being able to reproduce or replicate an experiment. We should, IMHO, focus more on preserving the digital continuum of the experimental record.

susan

unread,
Jul 24, 2017, 10:20:32 AM7/24/17
to The Open Scholarship Initiative
I agree with Danny that we are trying to alter one aspect of the system (communication and knowledge sharing) without altering other aspects of the system (rewards and recognition). I know I have beaten this "academic social norms" drum too often in the past, but it is mysterious to me why we think we can get around it. It is a particularly acute issue in biomedical research. Perhaps registered reports could help with this---if what matters so much is being "first"---or some other mechanism for assigning priority such that a scoop becomes much harder. There should also be a mechanism for discouraging rather than rewarding scoopers. It is also important to get beyond the "my data" idea: a researcher may have a claim on the immediate publication but cannot claim primacy over all possible future publications.

David Wojick

unread,
Jul 24, 2017, 5:05:24 PM7/24/17
to osi20...@googlegroups.com
Alexander, 

Wikipedia says "The experiments have been repeated many times in the following years with consistent results within differing societies, although not with the same percentages around the globe.[5]"
https://en.m.wikipedia.org/wiki/Milgram_experiment

So it is not replicated; there are just "consistent results" with different percentages, whatever that means. It sounds like the statistical sampling case that I described.
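[Editor's note: as a toy illustration of the statistical sampling point---repeating the same design over a fixed underlying rate and watching the observed percentage move from sample to sample---here is a minimal sketch. The rate and sample size are invented for illustration and are not Milgram's actual numbers.]

import numpy as np

# Repeat the same study design many times with a fixed underlying rate and
# see how much the observed percentage varies purely from sampling.
rng = np.random.default_rng(0)
true_rate = 0.65   # hypothetical underlying rate (invented, not Milgram's data)
sample_size = 40   # subjects per replication
replications = 20

observed = rng.binomial(sample_size, true_rate, size=replications) / sample_size
print([f"{p:.0%}" for p in observed])
print(f"range: {observed.min():.0%} to {observed.max():.0%}")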

I certainly agree that openness and replication are different goals. However, the latter is facilitated by the former.

David

Glenn Hampson

unread,
Jul 24, 2017, 5:10:45 PM7/24/17
to The Open Scholarship Initiative

I was confused by the earlier points Joann was making---those of you with a patent background understood all this, but I was totally lost. So after a few rounds of stupid questions, Joann was kind enough to write up her explanation of how bioRxiv data deposits relate to getting scooped. Good stuff---thanks for your patience and expertise Joann:

 

Here’s a more detailed description of the bioRxiv data question from my perspective---I hope this helps. Posting work to bioRxiv establishes a publication date for the material that can be used to establish legal priority over other work, even work appearing in formal journal articles.  Once material has been published for a year or more, the author can no longer seek patent protection for it, since clear-cut novelty can no longer be established. Within that timeframe, a scooper also cannot successfully pursue a separate patent based on any scooped material.

 

Regarding the citation of scooped materials, in academic work people often have a range of articles they can choose from to cite (or not) in order to discuss prior work.  At USPTO they have to find and cite THE prior work, period. If the bioRxiv work is published earlier and not cited, the granted patent can be challenged in court---and nobody is happy.

 

With regard to the broader questions of data sharing, I think the danger here is a belief that "being scooped" abolishes the inherent priority of the originally posted material, which it does not. I think an author can manipulate bioRxiv to obtain priority in a very competitive field, especially where patenting priority consequences mean very big bucks. But I also think there are authors who submit to bioRxiv in order to make the work, data, etc. more readily usable in a timely manner by other researchers with the understanding that the work is scrupulously and properly cited.

 

One approach to alleviating these concerns and anxieties might be to create posting requirements for bioRxiv wherein the original data poster---the scoopee---has an opportunity to publish their own formal journal article citing their bioRxiv data along with whatever relevant scooped material they choose to include (a right of first refusal, if you will). All this suggests that it’s important that legitimate scoopees receive academic review and credit for their work (their data deposits), even if it doesn’t result in a paper or if somebody else publishes work based on the data before the depositor can publish.

 

 

From: Jo De [mailto:dnn...@gmail.com]
Sent: Monday, July 24, 2017 8:37 AM
To: Glenn Hampson <gham...@nationalscience.org>

Subject: Re: Wired article on preprints in Biology

 

Hi Glenn,

Posting work to bioRxiv establishes a publication date for the material that can be used to establish legal priority over other work, even work appearing in formal journal articles.  Doing this does subject the priority author to possible abuses such as plagiarism, and scooping in the sense that a subsequently written and published formal article by another author or group either ignores the posted findings/uses them without attribution/or other unsavory activities that might be associated with "scooping" to be defined by the research community. I think the danger is a belief that "being scooped" abolishes the inherent priority of the original material, which it does not. I think an author can manipulate bioRxiv to obtain priority in a very competitive field, especially where patenting priority consequences mean very big bucks. But I also think there are authors who submit to bioRxiv in order to make the work, data, etc. more readily usable in a timely manner by other researchers with the understanding that the work is scrupulously and properly cited. Other than this explanation, I am sorry I do not understand your questions.

Joann

 

On Mon, Jul 24, 2017 at 10:57 AM, Glenn Hampson <gham...@nationalscience.org> wrote:

Hi Joann,

 

I simply don’t understand this issue you’ve raised or your explanation---hence my question (and I suspect others on this list feel the same way). So this was an invitation for you to clarify in plainer English what the heck you’re talking about 😊 (plus how it relates to open data, protections from scooping, etc.).

 

Thanks,

 

Glenn

 

Susan Fitzpatrick

unread,
Jul 24, 2017, 5:18:45 PM7/24/17
to David Wojick, osi20...@googlegroups.com

I agree that we need to be careful about the distinctions among sloppy science, incomplete reporting of experimental details, and a lack of reproducible findings that reflects true variability in the system and the important role of context – certainly in human behavior. To me this is a reason to have data widely available – we may learn something interesting. Care must be taken to truly randomize subjects (not: for convenience I will run the 22-year-olds on Tuesday and the 66-year-olds next Thursday in my study of the effects of aging on performance of task X). I have also been worried about the “reward and incentive issue” in some experimental fields – reward reproducibility and you will get something close to reproducibility, achieved through artificial, over-constrained experiments. A good book on this issue is Chance, Development, and Aging -- https://www.amazon.com/Chance-Development-Aging-Caleb-Finch/dp/0195133617 - random events, heterogeneity, and context matter in biology and in dynamic complex systems.
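[Editor's note: a minimal sketch of the randomization point above. The participant names and session labels are invented; the idea is simply to pool the participants and shuffle them across sessions rather than letting age group track the test day.]

import random

# Pool both age groups and shuffle, then deal participants out across the
# available sessions so that age is not confounded with test day.
random.seed(1)
participants = [f"young_{i}" for i in range(8)] + [f"older_{i}" for i in range(8)]
sessions = ["Tuesday", "Thursday"]

random.shuffle(participants)
assignment = {s: participants[i::len(sessions)] for i, s in enumerate(sessions)}
for session, group in assignment.items():
    print(session, group)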

 

Susan M. Fitzpatrick, Ph.D.

President, James S. McDonnell Foundation

Visit JSMF forum on academic issues: www.jsmf.org/clothing-the-emperor

SMF blog  www.scientificphilanthropy.com  

 

 

 

From: osi20...@googlegroups.com [mailto:osi20...@googlegroups.com] On Behalf Of David Wojick
Sent: Monday, July 24, 2017 4:08 PM
To: osi20...@googlegroups.com
Subject: Re: Wired article on preprints in Biology

 


Glenn Hampson

unread,
Jul 24, 2017, 6:42:53 PM7/24/17
to The Open Scholarship Initiative

…and also the variability between experimental and observational research, right Susan? A lot of psych work (and social science, anthropology, economics, astronomy, and more) doesn’t involve setting up a study, just observing (I suppose the same is true of history research in a broad sense). Figuring out what sample size will be statistically significant for observational work is different than for a designed experiment---maybe even irrelevant (a sample size of one in history may be enough)---and different observations may lead to different conclusions (affecting reproducibility). And then there are the hybrid cases---hypothesizing about an existing condition in nature and then setting up an experiment to observe it, for instance (like testing a prediction of Einstein’s general theory of relativity with the LIGO project). It seems like there may be lots of different flavors here.
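[Editor's note: to make the experimental side of that contrast concrete, here is a minimal sketch of a standard normal-approximation sample-size calculation for comparing two proportions in a planned two-group experiment. The function name and the effect sizes are invented for illustration.]

from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size to detect a difference between two
    proportions with a two-sided test (standard normal-approximation formula)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2) + 1

print(n_per_group(0.50, 0.65))  # a 15-point difference
print(n_per_group(0.50, 0.55))  # a 5-point difference needs far more subjects per group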

cbaur

unread,
Jul 26, 2017, 9:57:15 PM7/26/17
to The Open Scholarship Initiative
This is one of the most important topics for determining how to optimize research dissemination and open access.  There are two issues here: 
  1. Culture 
  2. Infrastructure.  
The fear that research will be scooped stems from the relationship researchers have with the word "publication," a matter of semantics inherited from the publication process that began with the first journal publication in 1665. "Publication" protected researchers' ideas by publicly linking authors to their content.  But in the digital age, print publishing is no longer necessary.  Research disseminated online can be linked to an author via a DOI that definitively associates a researcher with their pre-print and allows them to claim rights to their work.  Hence, pre-prints with a DOI help protect work from being scooped.  While publication will never be abandoned (and it should not be!), removing the fear that research will be scooped requires a cultural change in which the term "publication" is replaced with "dissemination."  Instead of publishing the manuscript, researchers can use pre-print repositories or post-publication review systems to share their work.  Altering any cultural pattern is not easy, especially when publication has been fundamental to the scientific process for the last 352 years! The growth in use of pre-print repositories and of social media to share work and ideas is an encouraging sign that this cultural change can occur.  
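[Editor's note: a minimal sketch of the "DOI as a public record" idea, not a claim about how any particular repository works. The public Crossref REST API exposes the dates attached to a registered DOI, which is one way a third party could check when a piece of work entered the public record. The DOI below is the Cell paper cited earlier in this thread; metadata fields vary by record type, so the lookups are deliberately defensive.]

import requests

def doi_dates(doi: str) -> dict:
    """Fetch the date metadata Crossref holds for a DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "type": msg.get("type"),
        "posted": msg.get("posted", {}).get("date-parts"),   # preprints ("posted-content") usually carry this
        "created": msg.get("created", {}).get("date-time"),  # when the DOI record was created
    }

# The Cell paper discussed earlier in the thread; a preprint DOI would work the same way.
print(doi_dates("10.1016/j.cell.2005.01.007"))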

Why is it important that we change the relationship from publication to dissemination?  Because although the function of the science publication system is to disseminate research, it paradoxically restricts researchers' ability to publish research. Out of 807 clinical trials, 52% were unpublished. Out of 594 clinical trials, 50% were unpublished. Less than half of NIH-funded trials are published within 30 months of a study’s completion. After a median of 51 months, one-third of trials remain unpublished. Out of 13,000 clinical trials, only 38.3% reported results. These are examples of the quantity of research that is not disseminated. Lack of publication biases the overall literature base on a topic and impairs our ability to obtain true estimates of efficacy and effectiveness, harming the ability to make accurate and appropriate healthcare recommendations.  See my blog post here for citations: http://bit.ly/2tyCcYB      

Pre-prints are a vehicle that gives researchers control over the dissemination of their research.  Infrastructure is then needed to allow pre-prints to be disseminated, as bioRxiv and others provide, and as we at Empeerial are trying to develop in order to give researchers maximum control over the dissemination process.  By giving researchers control to share their work without fear of being scooped, the barrier to disseminating research is removed so that all evidence can be seen and evaluated pre- and post-publication, which is especially important for studies that are not "sexy"---those with null or negative results that publishers are biased against publishing.  

With researchers given the flexibility to share and claim rights to their work, the priority then shifts from publishing the work to having the science community evaluate the methodological quality of disseminated research. 

Lots of great ideas in this thread, thank you for stimulating thoughts! 
Chris Baur      

On Friday, July 21, 2017 at 3:27:22 PM UTC-5, tscott wrote:

Barrett, Kim

unread,
Jul 26, 2017, 10:05:03 PM7/26/17
to cbaur, The Open Scholarship Initiative

Publication as defined also implies validation by peers.  That is why it has such currency as a proxy for academic productivity.  I can disseminate all I want, but for tenure people want to see that someone in the audience cares.  And your IP/idea can still be scooped whether your preprint has a DOI or not.

 

Kim E. Barrett, Ph.D.

Distinguished Professor of Medicine, UC San Diego

Editor-in-Chief, The Journal of Physiology

 

From: osi20...@googlegroups.com [mailto:osi20...@googlegroups.com] On Behalf Of cbaur
Sent: Wednesday, July 26, 2017 6:57 PM
To: The Open Scholarship Initiative <osi20...@googlegroups.com>
Subject: Re: Wired article on preprints in Biology

 


Laurie Goodman

unread,
Jul 27, 2017, 3:31:48 AM7/27/17
to Barrett, Kim, cbaur, The Open Scholarship Initiative
I'm curious: how quickly after a paper was posted on a preprint server did someone "scoop" the authors?

For at least half of the papers submitted to us, the authors say they have competition---so I really would like to know whether these proclaimed "scoops" are merely competition rather than someone taking an idea and data from a preprint server.
The process of peer review, even if you theoretically "stole" someone's information, still takes a long, long time. 
So I'd really like to get to the bottom of the 'scoop' claim, since it will scare people out of releasing information early. Is it true, and how often does it really happen?

I'm not a patent expert, but putting something on a preprint server does establish a date of public availability, so that should limit (insofar as the courts are good...) someone stealing your patent. I don't think patent decisions in the sciences are based on whether the work was peer reviewed, but rather on whether the information released is sufficiently complete to make it a solid record of what is to be patented. But, again, not a patent lawyer...


We're doing something a bit fun right now with my journal: real-time peer review, where papers are submitted to bioRxiv, shunted through the journal portal to our journal, and then peer reviewed through Academic Karma, where the reviewers post their (named) reviews as soon as they are complete.

Yes, the authors and reviewers know that these reviews will be made public before a decision is made: and that the papers can be rejected.

One of the papers already has two reviews in, and the others have reviews starting to come in. It will be interesting to see what reviewers and authors think about the process. (Obviously- it isn't for everyone...)

-L

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Laurie Goodman, PhD

ORCID ID: 0000-0001-9724-5976
GigaScience, Editor-in-Chief 

Main Email: edit...@gigasciencejournal.com

Website: http://www.gigasciencejournal.com

Follow us on Twitter @GigaScience; Find us at FaceBook; Read GigaBlog; Sign up for Article Alerts

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



Jo De

unread,
Jul 27, 2017, 5:15:33 AM7/27/17
to Laurie Goodman, Barrett, Kim, cbaur, The Open Scholarship Initiative
Hi Laurie,
Certainly in the laboratory research culture it has been acceptable for the lab head (Herr Professor?) to pit underling lab members against each other on competing projects (student against student, post-doc against post-doc, post-docs against students?) in order to up the lab's production, and naturally that results in competition for publication on similar topics.  I don't know whether granting agencies do the same within their grantmaking portfolios, but it is certainly not out of the question. This practice may, consciously or unconsciously, have evolved to affect the larger scientific community, and open publishing ends up a victim of a culture of artificial battles for priority rather than a beneficiary of a collective force against the unknown.
Joann

Glenn Hampson

unread,
Jul 27, 2017, 11:48:31 AM7/27/17
to The Open Scholarship Initiative

Hi Laurie,

Here’s a fun article by Chris Chambers---a professor of cognitive neuroscience at Cardiff---that speaks to these points you raise: http://blogs.lse.ac.uk/impactofsocialsciences/2016/04/19/so-youve-been-scooped/. And here are the top seven points Chris makes with regard to being scooped (there’s a full paragraph discussion for each of these points---these are just the first few sentences):

  1. It means you are doing something other people care about. Getting scooped is a sign that your research is important and that you are probably asking the right questions.
  2. Being first isn’t necessarily a sign of being a good scientist. Why? Because many initial discoveries are wrong or overclaimed.
  3. Many PIs – me included – are sceptical of researchers who claim to be the first to show something. For one thing it is almost never the case; the vast majority of science is a process of derivative, incremental advance, despite whatever spin the authors cake their abstracts in.
  4. In the vast majority of cases you don’t show you are a brilliant scientist or intellectual force by being the first to claim something. You prove your mettle by shaping the theoretical landscape in which everyone works.
  5. There are a few cases where being the first might matter and can have career benefits. If you’re the first to describe an amazing new technique, or the first to make a Nobel-level discovery then scooping might count. But how many of us fall into that category? 0.0001%?
  6. Remember that what matters in science is the discovery, not the discoverer. That’s why the public pays your salary or stipend.
  7. Finally, if you really feel you have an idea for a study that is unique and you want to declaw the Scoop Monster, consider submitting it as a Registered Report.

This essay raises some interesting questions, like:

  1. As Laurie asks, how often does “scooping” as a form of IP theft actually occur and what other horror stories are out there (learning how much of this is real and how much is urban legend can help us figure out what kinds of safeguards need to be in place).
  2. Is Chris right about his novelty claims? Does rigor trump novelty? I’m not a cognitive neuroscientist, but from what I’ve observed in cancer research, it definitely takes both to reach superstar PI status---the ability to break new ground AND conduct rigorous research.
  3. Does the Nobel Prize matter? I would argue that it does. Humans have a very poor grasp of infinitesimal probabilities, which is why we buy lotto tickets and have a difficult time understanding concepts like evolution and the Big Bang. So even if the odds of winning a Nobel Prize are whatever---0.001%---I’ll bet you that every early career researcher still dreams of making an important contribution and does not set out to just be another cog in the wheel (that realization may set in mid-career, but it’s not there as a post-doc). Maybe more importantly, there are those who think these prizes are totally inconsistent with the modern realities in science---no one discovers anything in a vacuum, and these prizes just perpetuate the myth of the lone researcher and stoke the fires of competition.
  4. Chris’s point #6 is interesting. Science used to be funded by private entities, not the state. Is this how the competition in science started (and if we understand this, does this make it easier to understand how to transition to a more public-spirited ethos)?
  5. And finally, what is the “registered report” that Chris mentions?

Best,

Glenn

Glenn Hampson
Executive Director
National Science Communication Institute (nSCI)
Program Director
Open Scholarship Initiative (OSI)


2320 N 137th Street | Seattle, WA 98133
(206) 417-3607 | gham...@nationalscience.org | nationalscience.org

 

 

From: osi20...@googlegroups.com [mailto:osi20...@googlegroups.com] On Behalf Of Laurie Goodman
Sent: Thursday, July 27, 2017 12:32 AM
To: Barrett, Kim <kbar...@ucsd.edu>
Cc: cbaur <cb...@empeerial.com>; The Open Scholarship Initiative <osi20...@googlegroups.com>
Subject: Re: Wired article on preprints in Biology

 


Susan Fitzpatrick

unread,
Jul 27, 2017, 11:53:15 AM7/27/17
to Glenn Hampson, The Open Scholarship Initiative

See https://osf.io/8mpji/wiki/home/ for an explanation of registered reports. BTW – JSMF has just approved some funding for the Center for Open Science to do some research on journal articles from registered reports.

 

Susan M. Fitzpatrick, Ph.D.

President, James S. McDonnell Foundation

Visit JSMF forum on academic issues: www.jsmf.org/clothing-the-emperor

SMF blog  www.scientificphilanthropy.com  

 

 

 


Holland, Claudia

unread,
Jul 27, 2017, 11:59:01 AM7/27/17
to Glenn Hampson, The Open Scholarship Initiative

From the standpoint of a PhD student (granted, not the central figure of this discussion), being first to “publish” original research in the form of a dissertation is mandatory, is it not? You’re pretty much screwed if you’re scooped.

 

Claudia

 


Rick Anderson

unread,
Jul 27, 2017, 12:04:21 PM7/27/17
to Holland, Claudia, Glenn Hampson, The Open Scholarship Initiative
And going back to the experience of the biomed researcher who called me the other day in the wake of her bad experience with bioRxiv: one of the things she said to me on the phone was something along the lines of “this isn’t just about me and my reputation; it’s about getting grants, and being able to pay people’s salaries.” She clearly believes that being scooped threatens her funding, and therefore her colleagues’ livelihoods. I guess it’s possible she was being overdramatic, but I don’t know enough about the world she’s working in to say whether that’s the case, and I’m inclined to take her word for it since it’s her world and not mine.

This all points, again, to the importance of ensuring that we include the voices of working scholars and scientists in our ongoing discussion of how to reform the world of scholcomm. They understand stuff that we don’t, and vice versa.

---
Rick Anderson
Assoc. Dean for Collections & Scholarly Communication
Marriott Library, University of Utah


Schultz, Jack C.

unread,
Jul 27, 2017, 12:40:57 PM7/27/17
to Rick Anderson, Holland, Claudia, Glenn Hampson, The Open Scholarship Initiative
Academic researchers essentially run small businesses. Their ‘income’ comes from grants and their ‘products’ are refereed publications.  Grad students and staff are hired with those funds, and they and their families depend on that granting success. Of course large labs can have over a dozen grad students and postdocs and several staff members. 

A reasonable guess about the annual cost of maintaining a small lab in a ‘cheap’ science is maybe $250,000. (With the new federal requirements that postdocs be paid a minimum of $47,500 per year, that cost has gone up.  One postdoc costs just under $100K with fringes and indirect costs.)  Since grant size at NSF is about $750,000 for 3 years, that means that the researcher must win another 3-yr grant every 2.5 years to keep employees and students paid. What many do is try to win multiple grants that overlap to provide financial security and research continuity. I have colleagues who spend the majority of their time writing grant proposals and submit 5 or more per year. 
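To make the arithmetic above concrete, here is a rough back-of-envelope sketch in Python. The dollar figures are the ones assumed in the paragraph above ($250,000 per year to run a small lab, about $750,000 per 3-year NSF award); the half-year proposal lead time is an added illustrative assumption, chosen so the result lines up with the "new grant every 2.5 years" estimate.

# Back-of-envelope sketch of the funding treadmill described above.
# The dollar figures come from the discussion; the proposal lead time
# is an added assumption, not something stated in the thread.

ANNUAL_LAB_COST = 250_000        # assumed yearly cost of a small lab (USD)
AWARD_SIZE = 750_000             # assumed size of one 3-year NSF award (USD)
PROPOSAL_LEAD_TIME_YEARS = 0.5   # assumed gap between submission and funding

# How long one award keeps the lab running at this burn rate.
years_covered = AWARD_SIZE / ANNUAL_LAB_COST          # 3.0 years

# To avoid a funding gap, the next award has to be in hand before the
# current one runs out, so a new win is needed roughly every:
new_award_needed_every = years_covered - PROPOSAL_LEAD_TIME_YEARS  # ~2.5 years

print(f"One award covers about {years_covered:.1f} years of operation.")
print(f"A new award is needed roughly every {new_award_needed_every:.1f} years.")

Under these assumptions a single award only just keeps pace with the lab's burn rate, which is why overlapping awards, as noted above, are how many labs try to smooth over that cycle.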

As access to funding has tightened, many feel the need to produce high-impact, highly visible results to remain in the funding stream. At NSF, for example, the magic word on panels is “transformative”. If a proposal can make a case for being “transformative”, it usually moves up the scale, often to the top.

This has had the unfortunate consequence of sometimes tempting researchers to overstate the significance of their work. On the other hand, it has also helped stimulate more contact with the public, since many feel that visibility can improve funding chances. 


A factor germane to this conversation is that subsequent granting success also depends on productivity under previous grants. So, someone with a data set that can be mined for more papers naturally wants to get as many publications from that data set as possible to ensure continued funding. 


So is that biomed researcher hyperventilating? Well, it’s stressful running a business that can fail every few years unless you win another ‘prize’. Some researchers can cope with that stress, while others can’t and many drop out of academia for that reason. 


  JACK 


Susan Fitzpatrick

unread,
Jul 27, 2017, 12:45:28 PM7/27/17
to Schultz, Jack C., Rick Anderson, Holland, Claudia, Glenn Hampson, The Open Scholarship Initiative

 

Susan M. Fitzpatrick, Ph.D.

President, James S. McDonnell Foundation

Visit JSMF forum on academic issues: www.jsmf.org/clothing-the-emperor

SMF blog  www.scientificphilanthropy.com  

 

 

 


Barrett, Kim

unread,
Jul 27, 2017, 2:06:21 PM7/27/17
to Rick Anderson, Holland, Claudia, Glenn Hampson, The Open Scholarship Initiative

Her point is absolutely true.

 

Kim E. Barrett, Ph.D.

Distinguished Professor of Medicine, UC San Diego

Editor-in-Chief, The Journal of Physiology

Ph: 858 534 2796

 
