Office of Strategic Coordination (Common Fund)

Purpose: The Office of Strategic Coordination (OSC) seeks to enhance the validity of the impactful lines of biomedical research it supports. The purpose of this Notice of Special Interest (NOSI) is to provide administrative supplements to active Common Fund-supported awards to conduct preparatory activities that enable high-fidelity replication studies using independent contract research resources, which will be supported by NIH.
This initiative is funded through the NIH Common Fund, which supports cross-cutting programs that are expected to have exceptionally high impact. Many Common Fund initiatives invite investigators to develop bold and innovative approaches to address problems that may seem intractable or to seize new opportunities that offer the potential for rapid progress.
Reproducibility of biomedical research is a critical feature in the advancement of knowledge. The use of rigorous research design principles, coupled with transparency in reporting methods and outcomes, enables important lines of biomedical research to be replicated by other researchers. Replication studies are a core part of the scientific process and are critical for assessing the validity of novel research outcomes, particularly those that form the basis of evidence-based practices to improve public health. Preclinical research is particularly ripe for independent replication because it is thought to be the area most susceptible to reproducibility issues (Collins and Tabak, 2014).
The objective of this Notice is to support the preparatory activities necessary to enable replication of impactful Common Fund-supported biomedical studies by an independent contract research organization (CRO). Applications must identify a preclinical study with high potential impact on public health that is amenable to replication by an independent CRO; the CRO's capabilities are listed on the Common Fund website ( -initiative/faq). The study to be replicated may be an experimental study (e.g., replicating an experiment) or a validation study (e.g., replicating a demonstration of the capabilities of a novel tool, technology, or method) and must be completed within the one-year budget period. Applicants must be prepared to engage with the CRO immediately after the start of an award, to provide the required research methods, protocols, and unique experimental materials as soon as possible, and to be available for consultation with the CRO throughout the award period to ensure that the replicated study has high fidelity to the original. The methods and results of replication studies conducted through this NOSI will not be made public without the consent of the principal investigator(s); the CRO will provide NIH with anonymized, aggregated data on the replicability of the selected studies.
A modest budget (described below) may be requested to support the activities and materials required to enable a replication study by the CRO. The Common Fund will support only replication or validation of preclinical research studies and will prioritize studies with potential for translation to clinical use; human subjects studies will not be supported. There is no limit on the number of applications an awardee may submit in response to this NOSI, but each application must propose a unique study for replication. To help NIH understand the utility of this approach to replication research, the Common Fund may request feedback from awardees on the impact of this activity on their research programs.
The Common Fund will conduct two batches of administrative review of applications (see table above) and will support meritorious applications based upon the availability of funds. Review of applications will consider the following factors:
A technical assistance webinar will be held with NIH staff to discuss this supplement opportunity and to answer questions about the available research capabilities and the supplement review process. Questions may be submitted ahead of time to CF-Repl...@od.nih.gov; additional questions may be taken during the webinar as time allows. Details about the webinar, including registration information and frequently asked questions and answers, are posted and will be periodically updated at -initiative/faq. Webinar materials will be posted to the same page after the event. Applicants are strongly encouraged to monitor the Replication Initiative website for updates while preparing applications and before final submission.
Replication is central to both the rhetoric and the practice of science, yet relatively little has been written about its methodology: in particular, about how to define replication in mathematical and statistical terms, how to analyze ensembles of studies to determine whether their results replicate, or how to plan ensembles of studies to assess replication.
This project addressed these needs by showing that the analysis of replication is linked to the statistical methodology of meta-analysis. It showed that there were alternative definitions of replication that were all reasonable but subtly different, offered statistical analysis strategies for use with each of these definitions, and analyzed their decision-theoretic properties.
The project showed that the naïve idea that replicability could best be assessed by conducting a single replication study had a serious flaw: the analysis of whether the results of the two studies agree is inevitably weaker (has less statistical power) than either of the original studies, because the standard error of the difference between two independent estimates exceeds the standard error of either estimate alone. The project also examined recent empirical studies of replication and showed that the analyses used were at best suboptimal and sometimes did not measure replicability at all.
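A minimal numerical sketch of that power loss, assuming two equal-sized two-group studies, a true standardized mean difference of 0.5, and the usual large-sample approximation SE ≈ sqrt(2/n); all numbers here are illustrative choices of mine, not figures from the project report:

```python
from math import sqrt
from scipy.stats import norm

# Illustrative values (assumptions, not taken from the report):
delta = 0.5                  # true standardized mean difference in each study
n = 64                       # per-group sample size, identical in both studies
se_one = sqrt(2 / n)         # approximate SE of a standardized mean difference
se_diff = sqrt(2) * se_one   # SE of the difference between two independent estimates

z_crit = norm.ppf(0.975)     # critical value for a two-sided test at alpha = 0.05

def power(effect, se):
    """Power of a two-sided z-test of H0: effect == 0."""
    z = effect / se
    return norm.sf(z_crit - z) + norm.cdf(-z_crit - z)

print(f"power of each original study:         {power(delta, se_one):.2f}")
# Testing whether the two results 'agree' means testing their difference,
# whose SE is sqrt(2) times larger -- so even a discrepancy as large as the
# original effect is detected with markedly lower power.
print(f"power to detect a discrepancy of 0.5: {power(delta, se_diff):.2f}")
```

With these numbers each original study has roughly 0.81 power, while the agreement test has only about 0.52 power to detect a discrepancy as large as the original effect itself, which is the asymmetry the report describes.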
On the other hand, when the main experiment from Thibodeau and Boroditsky8,9 was replicated by Steen, Reijnierse and Burgers,10 their findings did not confirm the original result. Thereupon, Thibodeau and Boroditsky11 adjusted the norms for answer coding, reanalysed the data provided by Steen et al.10 and thus replicated the effect after all. In short, despite the convincing initial evidence, no clear-cut conclusions could be drawn. For this reason, a replication of the central experiment was conducted in German. The objective was to replicate the metaphor-framing effect empirically confirmed by Thibodeau and Boroditsky8,9,11. We focused on the first and central experiment, just as Steen et al.10 had done. A translation of the original material was used, so our procedure matched the original with respect to topic (crime), metaphors (virus, beast) and response format (open-ended).
Altogether, 122 individuals took part in the study. The data from 7 participants had to be excluded from the analysis (see section Exclusion of cases), so the final sample consisted of 115 individuals (83 female, 32 male). Their ages ranged from 18 to 51 years, with a mean of 22.4 and a standard deviation of 3.9. Participants were semi-randomly assigned to one of four experimental groups of roughly equal size (28 for the beast simile and 29 each for the virus metaphor, the virus simile and the beast metaphor). All participants were native German speakers and university students or graduates. 42.6 percent of the participants were majoring in psychology, 19.2 percent in politics and 30.4 percent in other subjects; 9.8 percent did not indicate their subject. Participation was voluntary; participants could choose between taking part to fulfil a course requirement or receiving a small reimbursement.
After the texts had been translated and adapted, there were four versions of the study material, one for each text version. Versions were assigned to participants in a semi-randomised manner: the sets of study material were sorted in an alternating order and handed out to participants in that order, as sketched below.
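A minimal sketch of that rotating hand-out scheme; the condition labels and the `assign` helper are mine, not names from the study, and the "semi-random" element comes from the arbitrary order in which participants arrive:

```python
from itertools import cycle

# Hypothetical labels for the four material versions described above.
conditions = ["virus metaphor", "virus simile", "beast metaphor", "beast simile"]

def assign(n_participants):
    """Hand out the four material versions in a fixed rotating order,
    keeping group sizes balanced as participants arrive."""
    rotation = cycle(conditions)
    return [next(rotation) for _ in range(n_participants)]

groups = assign(115)
print({c: groups.count(c) for c in conditions})
# -> group sizes 29/29/29/28, matching the roughly equal groups reported above
```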
The first part of the PDF file contains the complete study materials in the original German version. The second part of the PDF file is a table listing, for each participant: participant number, experimental condition, transcribed answer in its original form, and demographics (age, gender, education). Participation was voluntary; participants were assured that their data would be treated anonymously and that their individual identities could not be inferred. The coding table used for the answers is provided in the third part of the PDF file, also in German. The Excel file contains the coded responses of the participants.
Since there was no overall difference between the two types of figure (metaphor vs. simile), there was no reason to treat the data separately with regard to this factor, so the data from the two levels were analysed conjointly, as illustrated below.
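A sketch of what analysing the two levels conjointly amounts to: pooling metaphor and simile responses within each frame (beast, virus) and then testing frame against response category. The counts below are placeholders consistent with the reported group sizes, not the study's data, and the two response categories are hypothetical labels:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder counts (NOT the study's data):
# columns: hypothetical response categories, e.g. enforcement vs. reform
beast_metaphor = np.array([18, 11])
beast_simile   = np.array([17, 11])
virus_metaphor = np.array([12, 17])
virus_simile   = np.array([13, 16])

# Collapse over figure type, since metaphor vs. simile showed no difference.
pooled = np.vstack([
    beast_metaphor + beast_simile,   # beast frame, both figure types
    virus_metaphor + virus_simile,   # virus frame, both figure types
])

chi2, p, dof, expected = chi2_contingency(pooled)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```

Collapsing the non-significant factor doubles the cell counts per frame, which is the practical benefit of treating the two levels conjointly.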