Claude AI summary analysis of Kolhe et al. (2025) — MSC evaluation of an intergenerational project in rural central India


rick davies

Apr 1, 2026, 8:58:33 AM
to MostSignificantChange (MSC) email list

Analysis: Kolhe et al. (2025) — MSC evaluation of an intergenerational project in rural central India

Journal of Family Medicine and Primary Care, 14(8), 3474–3480


1. Why MSC was used

The project under evaluation — 'Sahjeevan: Chaitanya Natyanche' ('Living Together: the Consciousness of Relationships') — aimed to build age-integrated communities across 18 villages in Wardha district, Maharashtra, by fostering meaningful interaction between older adults and younger generations. This is precisely the kind of complex, community-based social intervention for which MSC is well suited: the outcomes are social, attitudinal, and relational; they cannot be fully anticipated in advance; and they are not easily captured by quantitative indicators.

The authors make this rationale explicit in the Discussion section, contrasting MSC with conventional M&E tools on the grounds that traditional approaches rely on predetermined, objective-driven metrics and can overlook nuanced social changes — shifts in attitudes, behaviour, or community dynamics that are not immediately quantifiable. This closely mirrors the rationale given in the 2005 MSC Guide, which describes MSC as appropriate where programme impacts are "complex and wide-ranging" and where outcomes cannot be specified in advance with precision. The authors also note the dual-beneficiary logic: practitioners gain insight to improve interventions, while participants gain validation, empowerment, and a sense of purpose through storytelling. Both of these are consistent with the Guide's description of MSC as generating learning value for multiple stakeholder levels simultaneously.


2. How it was used

The authors follow a six-step process loosely derived from the Dart and Davies framework, which maps reasonably well onto the ten-step structure in the 2005 Guide, though compressed. The steps as implemented were:

Getting started (Step 1). At each of the 18 villages, the project team held group discussions with intervention stakeholders — panchayat members, master trainers (older adults aged 50+ involved in message dissemination), peer trainers (village-level volunteer older adults), and Anganwadi workers. Stakeholders were asked to share observations of change and were introduced to the purpose of MSC. The Guide recommends this kind of preparatory engagement and emphasises identifying "champions" — people enthusiastic enough about the technique to sustain it. The paper's framing of this step as identifying key people who could "act as catalysts" reflects that intent, though no systematic follow-up on champion sustainability is reported.

Collecting stories (Step 2). Purposive interviewing was conducted by sector-level social workers, using a semi-structured interview guide. Interviewers read notes back to storytellers to verify accuracy, and stories were documented in storytellers' own words. This is consistent with the Guide's emphasis on story authenticity and the importance of capturing the respondent's own voice.

Story writing (Step 3). Social workers produced one-paragraph summaries structured around four elements: situation before change, motivation for change, sequence of events, and why the change was significant. This maps directly onto the summary format recommended in the Guide.

Categorising into domains (Step 4). Two project members read the summaries and categorised them into three domains: individual-level change, family-level change, and community-level change. Critically, this categorisation was done after the stories had been collected, not before — a point to which the authors return as a self-identified limitation (see Section 3 below).

Selection rubric (Step 5). A selection committee was constituted under the principal investigator and included project team members, partner organisation representatives, and village-level stakeholder representatives. Field social workers responsible for explaining their area's stories were excluded to avoid bias. Stories were evaluated on three criteria: the project's role in the story, uniqueness of the change, and value added to older adults' quality of life. Each criterion was scored 0–5, producing a total score of 0–15. This rubric-based approach is an addition not prescribed by the Guide, but is consistent with the Guide's requirement that selection reasons be made explicit and transparent.

Final story selection (Step 6). Stories were read aloud to the panel, discussed in depth, and scored individually by each panellist. A median score was calculated per story per domain. In the event of a tie, further discussion was initiated. The story with the highest median score in each domain was designated the MSC story — yielding five final stories in total (two under Domain 1, two under Domain 2, one under Domain 3).

The median scoring approach is a methodological choice not specified in the Guide, which recommends discussion and consensus-seeking rather than a scoring algorithm. In principle, the rubric and median approach increase transparency and reduce the influence of individual panellists' preferences, but they also risk displacing the richer deliberative process that the Guide regards as one of MSC's most distinctive learning functions. In this implementation it appears the two were combined: discussion preceded scoring, which is a reasonable balance.
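For readers who want to see the mechanics, the rubric-plus-median selection procedure described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from the paper: the story names, panellist scores, and tie-handling shown here are assumptions, and the actual panel resolved ties through further discussion rather than programmatically.

```python
from statistics import median

# The three rubric criteria reported in the paper, each scored 0-5,
# giving a total score of 0-15 per panellist per story.
CRITERIA = ("project_role", "uniqueness", "value_added")

def total_score(scores: dict) -> int:
    """Sum one panellist's three 0-5 criterion scores (range 0-15)."""
    assert all(0 <= scores[c] <= 5 for c in CRITERIA)
    return sum(scores[c] for c in CRITERIA)

def select_msc_story(panel_scores: dict) -> tuple:
    """panel_scores maps story id -> list of per-panellist criterion dicts.

    Returns (story_id, median_total) for the story with the highest
    median total in the domain. A tie raises, standing in for the
    paper's rule that ties go back to panel discussion.
    """
    medians = {
        story: median(total_score(s) for s in scores)
        for story, scores in panel_scores.items()
    }
    best_median = max(medians.values())
    top = [s for s, m in medians.items() if m == best_median]
    if len(top) > 1:
        raise ValueError(f"Tie between {top}: resume panel discussion")
    return top[0], best_median

# Hypothetical example: two stories in one domain, three panellists each.
domain_scores = {
    "story_A": [
        {"project_role": 4, "uniqueness": 3, "value_added": 5},
        {"project_role": 5, "uniqueness": 4, "value_added": 4},
        {"project_role": 3, "uniqueness": 3, "value_added": 4},
    ],
    "story_B": [
        {"project_role": 2, "uniqueness": 4, "value_added": 3},
        {"project_role": 3, "uniqueness": 3, "value_added": 3},
        {"project_role": 2, "uniqueness": 2, "value_added": 4},
    ],
}
print(select_msc_story(domain_scores))  # -> ('story_A', 12)
```

One design point worth noting: using the median of totals (rather than the mean) limits the leverage of a single outlying panellist, which is presumably part of the appeal of the scoring approach, though as argued above it cannot substitute for the deliberative discussion itself.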


3. Challenges encountered and whether they were overcome

a) Domains defined post-hoc

The authors identify this as a limitation: "As Rick Devis [sic] describes in the MSC guide, domains of change must be predefined; we identified these domains after gathering stories and categorising them." This is, however, a mischaracterisation of what the 2005 Guide actually says. The Guide is explicit: "Domains are not essential." It describes post-hoc categorisation as a legitimate practice, citing VSO as an example where field staff collect stories without domain guidance and categorisation happens only when stories reach country office level. The Guide further states: "At the field level... it may be useful to start without specifying domains. Instead, see what sort of stories are generated and valued by the beneficiaries, and then divide these into appropriate domains or have the beneficiaries do so."

The authors are therefore being harder on themselves than the methodology warrants. The post-hoc domains in this paper (individual, family, community) are also a reasonable and widely used typology, comparable to frameworks used in other MSC implementations.

One substantive concern does remain: the authors acknowledge that pre-specification "would have provided us with a diverse set of stories with vivid content while avoiding duplication." This point has some validity — predefined domains can act as a collection guide that broadens the range of stories sought. But the framing as a rule violation rather than a design trade-off is inaccurate.

b) No feedback to stakeholders

The authors attribute the absence of feedback to time constraints. This is a more genuine limitation. The Guide identifies feedback (its Step 6) as one of only three steps it considers fundamental to MSC — alongside story collection (Step 4) and story selection (Step 5). Feedback closes the loop between story contributors and selectors, informs subsequent collection rounds, signals which changes are valued, and demonstrates that stories have been read and taken seriously rather than filed away. The absence of feedback means that a core mechanism of MSC's learning function — the dialogue between organisational levels about what counts as significant — was not activated in this study.

The challenge was not overcome, though the authors do flag it and recommend it as a priority for future iterations.

c) Single point in time

Stories were gathered at one moment, which means the study captured a snapshot rather than a developmental trajectory. The Guide does not prescribe longitudinal collection as obligatory, but the iterative, periodic nature of MSC — with collection feeding selection feeding feedback feeding the next round of collection — is central to its value as a monitoring (not just evaluation) tool. A one-off implementation, while common in the literature, sacrifices this cumulative learning function.

d) Exclusion of field social workers from the selection panel

Social workers who had conducted and explained the interviews were excluded from the selection committee. The rationale — avoiding undue influence from those who had a stake in their own stories being selected — is reasonable and reflects good practice. The Guide does not specifically address this, but the principle is consistent with transparency objectives. This was a deliberate design choice rather than a challenge.


4. Benefits delivered

The paper reports five selected stories spread across three domains, providing concrete illustrations of project impact at individual, family, and community levels. Specific documented changes include: a 70-year-old visually impaired woman finding renewed purpose in life; an older man beginning to teach grandchildren bhajans and kirtans, activating intergenerational skill transfer; a young woman reconciling with her in-laws after recognising the value of older adults in her children's development; an older man beginning to promote yoga and exercise through religious storytelling; and peer-level dissemination of project messages through a community leader.

Beyond the content of the stories themselves, the paper reports several process benefits consistent with what the Guide anticipates:

The MSC process generated narratives that traditional M&E tools would not have captured — particularly attitudinal and relational shifts. It surfaced unexpected changes (the visually impaired woman's story, for example, appears to have been an emergent, unplanned outcome). The selection and discussion process facilitated community ownership of the evaluation and stimulated reflective dialogue among stakeholders about what constitutes meaningful change. The stories also served a communication function, giving the implementation team insight for adaptive programme management.


5. Lessons for future users

On domains: The reflexive concern about post-hoc domain definition is not warranted by the Guide. Future users of MSC should read the Guide's domain guidance closely before treating predefined domains as obligatory. The meaningful design choice is not when to define domains but what function domains are intended to serve — whether to align with organisational objectives, to guide collection, or to emerge from beneficiary-generated stories. In smaller programmes with participant-led purposes, emergent domains are often preferable.

On feedback as a structural requirement: Feedback to story contributors is not optional in a full MSC implementation. Programmes using MSC should build feedback explicitly into project timelines and budgets from the outset. Where time genuinely does not permit a formal feedback round, even minimal feedback (communicating which stories were selected and why) should be treated as a non-negotiable minimum.

On the scoring/rubric approach: The explicit selection rubric with 0–5 criteria scoring is a useful transparency mechanism, particularly in contexts where selection committee members come from multiple organisations and may hold different implicit criteria. It is worth documenting the rubric development process more fully, so other practitioners can adapt or critique it.

On longitudinal design: Single-point-in-time MSC implementations sacrifice the monitoring value of the technique, reducing it to a one-off evaluation exercise. Where project design permits, even two collection points — mid-project and end-project — would substantially improve the method's contribution.

On the meta-monitoring distinction: This paper is a direct evaluation of the intergenerational programme's impact on older adults. It does not slide into the meta-monitoring pattern (evaluating the experience of implementers rather than outcomes for beneficiaries) that is identifiable in some other MSC papers. The stories collected are genuine accounts of change in beneficiaries' lives. That methodological clarity is worth noting as a positive exemplar.

On attribution language: Several passages in the paper move from reported stories to assumed project causation without adequate qualification (e.g., "the project may result in a number of incredible positive changes in society"). This is a general risk in MSC reporting. The Guide's formulation — asking "what is the most significant change that has occurred in your life/community, and why do you think this change occurred?" — preserves participants' causal accounts without conflating them with evaluator attribution. Reporting should maintain that distinction.



Theo Nabben

Apr 2, 2026, 5:48:43 AM
to mostsignificantchang...@googlegroups.com
These AI summaries are great and help in understanding application, use, and good practice. Well done Rick. Have a happy Easter. All the best, Theo

