Unlike some other models explored in this series, DBIR is not a detailed method with defined steps. Rather, it is a family of activities that employ collaborative and systematic inquiry, design tools and processes to improve teaching and learning, and balance researcher and practitioner expertise.
The model does not provide a specific protocol for testing and refining solutions; instead, the work of DBIR partnerships adheres to the four guiding principles described earlier. When measuring the effects of interventions, DBIR partnerships often consider multiple types of evidence. These may include controlled experiments, although such experiments are not generally viewed as necessary. The STEMGenetics team, for example, used classroom observation to check for gains in student understanding, analyzed student assessments, and solicited formal feedback from the teachers implementing the curricula, all in order to iteratively refine the units of study.
When DBIR partnerships see variation in the data they collect, their focus turns to the third principle: developing theory and knowledge of both classroom learning and implementation through systematic inquiry. (The very name of the model indicates how important it is to get implementation right.) When something works in one setting but not in another, DBIR partners examine how specific elements of the different classrooms, schools, or districts affect implementation. They study implementation systematically and revise components based on what they learn. The DBIR approach sees the learning context as having a significant influence on whether an intervention succeeds; understanding that context (the school culture, the capabilities of system actors, overarching policies, etc.) is therefore key.
One element of capacity-building entails improving social capital, which is achieved by ensuring that all stakeholders have access to the same resources and experts. Additionally, to implement changes in a coordinated manner, everyone involved needs to know certain information about a school or district, such as its process for professional development to improve teaching and learning. The STEMGenetics team built capacity and social capital by drawing on mentor teachers who could provide immediate, on-the-ground support to their peers during implementation.
Recently, as DBIR has become more widely used, LearnDBIR and the Research-Practice Collaboratory have been creating shared tools and practices. Additionally, a network is developing around the DBIR model with an annual workshop for people interested in engaging in research using this approach and sharing their work across a wider variety of contexts.
Social relationships are key to the potential of networked improvement communities to accelerate and sharpen education change using the improvement science approach. Veterans of the process explain how they keep strengthening those connections while expanding their networks.
Recent reviews of the use and application of implementation frameworks in implementation efforts highlight the limited use of frameworks, despite the value of applying them. This article therefore aims to provide recommendations to enhance the application of implementation frameworks for implementation researchers, intermediaries, and practitioners.
Ideally, one or more implementation frameworks should be used prior to and throughout an implementation effort, both in implementation science research studies and in real-world implementation projects. To guide this application, we outline ten recommendations for using implementation frameworks across the implementation process. The recommendations are presented in the rough chronological order of an implementation effort, although this order may vary depending on the project or context: (1) select a suitable framework(s); (2) establish and maintain community stakeholder engagement and partnerships; (3) define the issue and develop research or evaluation questions and hypotheses; (4) develop an implementation mechanistic process model or logic model; (5) select research and evaluation methods; (6) determine implementation factors/determinants; (7) select and tailor, or develop, implementation strategy(s); (8) specify implementation outcomes and evaluate implementation; (9) use a framework(s) at the micro level to conduct and tailor implementation; and (10) write the proposal and report. Ideally, a framework(s) would be applied to each of the recommendations. In this article, we begin by discussing each recommendation in the context of frameworks broadly, followed by specific examples using the Exploration, Preparation, Implementation, Sustainment (EPIS) framework.
The use of conceptual and theoretical frameworks provides a foundation from which generalizable implementation knowledge can be advanced. Conversely, superficial use of frameworks hinders the ability to use them, learn from them, and work sequentially to progress the field. We hope the ten recommendations provided here will assist researchers, intermediaries, and practitioners in improving their use of implementation science frameworks.
Increase implementation intermediaries' and practitioners' ability to use implementation frameworks as a shared language to familiarize stakeholders with implementation and as practical tools for planning, executing, and evaluating real-world implementation efforts
There is great value in effectively using implementation frameworks, models, and theories [1, 2]. When used in research, they can guide the design and conduct of studies, inform the theoretical and empirical thinking of research teams, and aid interpretation of findings. For intermediaries and practitioners, they can provide shared language to familiarize stakeholders with implementation and function as practical tools for planning, executing, and evaluating real-world implementation efforts. Implementation frameworks, models, and theories have proliferated, and there are concerns that they are not used optimally to substantiate or advance implementation science and practice.
Theories are generally specific and predictive, with directional relationships between concepts that make them suitable for hypothesis testing, as they may guide what may or may not work [3]. Models are also specific in scope but are more often prescriptive, for example, delineating a series of steps. Frameworks, on the other hand, tend to organize, explain, or describe information and the range of and relationships between concepts, including some that delineate processes, and are therefore useful for communication. While we acknowledge the need for greater use of implementation frameworks, models, and perhaps even more so theories, we use the term frameworks to encompass the broadest organizing structure.
Suboptimal use of frameworks can impact the viability and success of implementation efforts [4]. It can result in wasted resources, erroneous conclusions, specification errors in implementation methods and data analyses, and unfavorable reviews of funding applications [5]. There may be a lack of theory or poorly articulated assumptions (i.e., a program theory/logic model) guiding which constructs or processes are involved, operationalized, measured, and analyzed. While guidance for effective grant applications [4] and standards for evaluating implementation science proposals [6] exist, poor use of frameworks goes beyond proposals and projects and can slow or misguide the progress of implementation science as a field. Consistent terms and constructs aid communication and the synthesis of findings and are therefore key to replication and to building the evidence base. In real-world practice, suboptimal use of implementation frameworks can lead stakeholders to misjudge their implementation context or develop inappropriate implementation strategies. Just as important, poor use of frameworks can slow the translation of research evidence into practice and thereby limit public health impact.
Frameworks are graphical or narrative representations of the factors, concepts, or variables of a phenomenon [3]. In the case of implementation science, the phenomenon of interest is implementation. Implementation frameworks can provide a structure for the following: (1) describing and/or guiding the process of translating effective interventions and research evidence into practice (process frameworks), (2) analyzing what influences implementation outcomes (determinant frameworks), and (3) evaluating implementation efforts (outcome frameworks) [2]. Concepts within implementation frameworks may therefore include the following: the implementation process, often delineated into a series of phases; factors influencing the implementation process, frequently referred to as determinants or barriers and facilitators/enablers; implementation strategies to guide the implementation process; and implementation outcomes. The breadth and depth to which the concepts are described within frameworks vary [7].
Recent analyses of implementation science studies show suboptimal use of implementation frameworks [1, 8]. Suboptimal use occurs when a framework is applied conceptually but not operationalized or incorporated throughout the phases of an implementation effort, for example, when it is used only in a limited way to guide research methods [1, 9]. While some published guidance exists on the use of specific frameworks, such as the Theoretical Domains Framework (TDF) [10], RE-AIM [11], the Consolidated Framework for Implementation Research (CFIR) [12], the Exploration, Preparation, Implementation, Sustainment (EPIS) framework [1], and combined frameworks [13], there is a need for explicit guidance on the use of frameworks in general. As such, this article provides recommendations and concrete approaches to enhance the use of implementation science frameworks by researchers, intermediaries, and practitioners.