Asking Claude AI what recent MSC papers I should read first


rick davies

Feb 22, 2026, 6:28:44 AM (2 days ago) Feb 22
to MostSignificantChange (MSC) email list
After getting Claude to do a hierarchical cluster analysis of 14 2025 papers on the use of MSC, I asked this question, and got this response:

Given what you know of my interests and capacities, which of the 14 2025 papers do you think I should read first, in order to expand my understanding of the strengths and limitations of MSC and its future potentials?

Based on what I know of your position as one of MSC's originators, your interest in methodological questions over applied demonstrations, and the analytical work you've just completed on this corpus, here is my reasoning.

Read P11 (Wells et al., Australia) first.
[RD added this ref: Wells, Y., Rayner, J.-A., Fetherstonhaugh, D., & McAuliffe, L. (2025). The Most Significant Change: An Evaluation of Predictive Performance Measures in Nursing Homes. Journal of Evaluation in Clinical Practice, 31(1), e70017. https://doi.org/10.1111/jep.70017]

This is the paper that most directly confronts what MSC is epistemologically for, rather than simply using it. Wells and colleagues deploy MSC alongside predictive performance measures in aged-care nursing homes — a setting where algorithmic, indicator-driven quality assessment is the dominant paradigm. The pairing is not incidental: the paper explicitly tests what MSC can surface that predictive models cannot. For someone thinking about MSC's future potential, the nursing home context is a productive provocation because it situates MSC against the kind of quantitative, forward-looking tool that funders and managers increasingly prefer. It is also the most cross-cutting paper in your matrix — simultaneously methodology-advancing, applied in a health/welfare institutional context, and embedded in a multi-method design — which means it generates the richest set of tensions to think with.

Second: P4 (Hoens et al., Belgium). 

[RD added this ref: Hoens, S., D’Hooghe, S., Kint, O., Quintiens, B., Brosens, D., De Donder, L., & Smetcoren, A.-S. (2025). Capturing the most significant change after a work-training program for community health workers. Evaluation and Program Planning, 112, 102667. https://doi.org/10.1016/j.evalprogplan.2025.102667]

This paper foregrounds the selection process specifically, which is the component of MSC that has received the least critical scrutiny in the literature relative to story collection. The Brussels CHW work-training context is relatively straightforward, but the methodological focus on who selects, under what conditions, and what that reveals about participant agency is directly relevant to your interest in MSC's participatory claims and their limits.

Third: P9 (Romero-Torres et al., Canada). The COVID-era project management paper is worth reading in third position because it involves a significant adaptation — a single-round researcher panel replacing the iterative stakeholder selection process. This is a case where the MSC label is retained while a core procedural commitment is quietly abandoned. It raises a question that matters for future MSC development: what is the minimum fidelity threshold below which it is no longer MSC in any meaningful sense, and who decides?
[RD added this ref: Romero-Torres, A., Leroux, M.-P., Primeau, M.-D., Delisle, J., & Coulon, T. (2025). Toward Sustainable Project Management Practices: Lessons from the COVID-19 Pandemic Using the Most Significant Change Method. Sustainability, 17(13), Article 13. https://doi.org/10.3390/su17135999]

Papers P8 (Ngare, Kenya, SGBV survivors) and P1 (Ankomah, Nigeria, outcome harvesting) are also worth reading eventually, but they are more about MSC's reach into difficult empirical territory than about its internal logic or future direction.


A question worth sitting with before you read P11: do you think MSC's value in a predictive-performance environment is primarily that it catches what the algorithm misses, or that it reframes what "performance" should mean in the first place? The answer shapes what you would want a future version of MSC to offer in institutional settings.



rick davies

Feb 22, 2026, 9:47:38 AM (2 days ago) Feb 22
to MostSignificantChange (MSC) email list
And my subsequent response and Claude's reply:

The key sentence in the P11 paper is "The interviews explored the staff experiences of implementing the PPMs." So this use of MSC was a form of meta-monitoring/evaluation, examining the implementation of the PPMs. It was not providing competing data to the PPMs. I think this is a point you may have missed, yes or no, or maybe?

11:31
Acknowledged distinction between implementation evaluation and outcome measurement

Yes, that's a point I missed — or more precisely, I framed it carelessly. I described the paper as positioning MSC against predictive performance measures to test what each could capture about the same domain. Your reading is more precise: the objects of inquiry are different. The PPMs measure resident care outcomes; the MSC captures staff experience of implementing those PPMs. They are not competing data streams on the same phenomenon — MSC is operating one level up, as an evaluation of a change management process, with the PPM system itself as the intervention being evaluated.

That reframes the paper's significance considerably. Rather than illustrating a tension between qualitative and quantitative approaches to the same question, it illustrates MSC's utility as a process-monitoring tool in institutional settings — which is a different and arguably more defensible use case, since it sidesteps the comparison problem entirely.

It does, however, raise a follow-on question worth considering: does that meta-level positioning limit MSC's contribution to the paper's core argument? The title promises an evaluation of the PPMs' predictive performance, but if MSC is only capturing implementation experience, it cannot directly address whether the PPMs predicted well. That gap between the paper's framing and what MSC was actually positioned to do may itself be analytically interesting.

Does that distinction hold up in the paper's conclusions, or does the authors' interpretation blur the two levels?

Yes, the paper's title does potentially mislead. And the first sentence of the conclusion also does so: "These findings demonstrate the value of employing the MSC qualitative method to evaluate complex interventions." Another sentence there does suggest an overlap of purpose: "While not measured, many MSCs implied better care outcomes for residents which were inherent in the themes."

11:36
Identified tension between overstated and honest epistemic claims

That sentence is doing something quite specific and worth naming precisely. "While not measured" is an honest concession that MSC did not generate outcome data on resident care. But "implied" is doing a lot of work — it suggests that staff accounts of their own changed practices and attitudes can be read as a proxy signal for care quality, without that inference being tested or defended methodologically.

This is a recurring vulnerability in MSC applications: the slide from "participants reported significant changes in their practice" to "therefore outcomes for beneficiaries improved." The logical gap between the two is real, and in a clinical setting it is particularly consequential. Improved staff engagement and better care outcomes are plausibly correlated, but the paper is simultaneously acknowledging it cannot demonstrate this and trading on the implication that it probably holds.

What makes this interesting rather than simply a limitation is that it points to something structural about MSC's epistemic position in institutional contexts. MSC is well suited to capturing the texture of change as experienced by those closest to an intervention. It is poorly suited — by design, not by failure — to attributing those experiential changes to downstream outcomes for third parties, particularly where the causal chain is long or the outcome domain is independently measurable. The authors seem aware of this at the sentence level but not at the level of the paper's overall framing.

This arguably strengthens your earlier point about the title misleading: the paper is not really an evaluation of the PPMs at all. It is an evaluation of a PPM implementation process, with an undefended implication about care outcomes added in the conclusion.

Does this suggest to you a more general principle about how MSC's findings should — and should not — be represented when the technique is used at the meta-monitoring level?
