Evaluating the Future. Podcast and paper prepared with and for the EU Evaluation Support Services Unit, 2020.
Representing theories of change: technical challenges with evaluation consequences. CEDIL Inception Paper, 2018.
Evaluating the impact of flexible development interventions using a ‘loose’ theory of change: Reflections on the Australia-Mekong NGO Engagement Platform. ODI Methods Lab Working Paper, March 2016.
Hi Rick,

I think a lot of what you included seems like useful quality criteria which I’d tend to agree with. However, Steve is probably right that there might be a more positive angle too. A lot of this is the kind of discussion you find in the book What Counts as Credible Evidence in Applied Research and Evaluation Practice? I don’t agree with all of it, but I thought Scriven’s and Schwandt’s chapters are probably most appropriate here (e.g. credibility, relevance, and probative value for Schwandt).

Most of what I had proposed fits within a fairly narrow view of rigour (albeit quite consistent with Schwandt), and perhaps for MSC you might want something a bit more expansive. Hallie Preskill and Jewlya Lynn, for instance, argue that we should redefine rigour for evaluation in complex adaptive settings. These were their four proposed criteria:
- Quality of the Thinking: The extent to which the evaluation’s design and implementation engages in deep analysis that focuses on patterns, themes, and values (drawing on systems thinking); seeks alternative explanations and interpretations; is grounded in the research literature; and looks for outliers that offer different perspectives.
- Credibility and Legitimacy of the Claims: The extent to which the data is trustworthy, including the confidence in the findings; the transferability of findings to other contexts; the consistency and repeatability of the findings; and the extent to which the findings are shaped by respondents, rather than evaluator bias, motivation, or interests.
- Cultural Responsiveness and Context: The extent to which the evaluation questions, methods, and analysis respect and reflect the stakeholders’ values and context, their definitions of success, their experiences and perceptions, and their insights about what is happening.
- Quality and Value of the Learning Process: The extent to which the learning process engages the people who most need the information, in a way that allows for reflection, dialogue, testing assumptions, and asking new questions, directly contributing to making decisions that help improve the process and outcomes.
Some of those might take you in a slightly different direction, leaving more space for self-defined criteria, even if it might be quite reasonable to have some independently defined criteria as well.

Here’s the blog: https://www.fsg.org/blog/redefining-rigor-describing-quality-evaluation-complex-adaptive-settings

Hope that helps, and hope others find the rubrics useful (within limits).

Best,
Tom