MSC formed one component of a six-element M&E toolkit developed to support Continuous Quality Improvement (CQI) of Community Score Cards (CSCs) across ten government primary health clinics in the Dominican Republic (2018–2023). The other five components — attendance tracking, facilitation quality, score card indicator scoring, action plans, and an empowerment survey — were primarily quantitative or structured in nature. MSC was explicitly included to fill the gap these tools leave: capturing how change was experienced from the inside, and by whom.
The authors describe MSC as "a qualitative data collection and analysis technique designed to elicit specific stories from the personal perspective of participants that epitomize how an intervention affected change" (citing Davies and Dart, 2005, the same guide used here). Its placement in the toolkit is specifically timed to capture changes that "may take more time to percolate" — implemented every third cycle, approximately every 18 months, rather than at each 6-monthly cycle. This is a deliberate design choice to use MSC as a periodic depth probe rather than a routine monitoring instrument.
An important contextual point, however, is that MSC here is being used as meta-monitoring — that is, monitoring participants' experience of the CSC process itself — rather than as a direct evaluation of health system outcomes. The paper is explicit about this: "The article includes results of how improvements in the health system were advanced but does not aim to provide direct evidence for health outcome improvements due to CSC." The MSC stories accordingly capture changes in behaviour, relationships, and community empowerment as outcomes of the CSC intervention, not changes in health status. This is a legitimate but relatively unusual framing for MSC, which in most documented applications has been used to capture direct program beneficiary outcomes.
The implementation departed in important ways from the standard (vertical) process described in the MSC Guide, substituting a horizontal filtering approach carried out within a single stakeholder event. The structure was as follows: each participant wrote an individual story of change; small groups then shared their stories and each selected one; and the collective group was to select the single overall most significant story from among the small-group choices.
Following difficulties with this final step (see Section 3), the process was adjusted: each small group continued to select one story, but the final selection of the single overall MSC was shifted from the collective group to local staff and members of the Solutions Committee, who then forwarded the chosen story to the headquarters office.
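To make the two-stage flow concrete, the sketch below models the horizontal filtering as a simple data pipeline. This is an illustration only: all names are hypothetical, the selection functions are placeholders, and the paper itself describes the process only in prose.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Story:
    author: str
    text: str

def horizontal_filter(
    groups: list[list[Story]],
    group_choose: Callable[[list[Story]], Story],  # small-group dialogue
    final_choose: Callable[[list[Story]], Story],  # collective group (original design)
                                                   # or staff + Solutions Committee (adjusted)
) -> Story:
    """One event, two filtering stages: each small group selects one story,
    then a single overall MSC is chosen from the shortlist and forwarded
    to the headquarters office."""
    shortlist = [group_choose(stories) for stories in groups]
    return final_choose(shortlist)
```

The adjustment described above amounts to swapping the `final_choose` function: the small-group stage is unchanged, and only the final selection authority moves from the collective group to local staff and the Solutions Committee.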
MSC had been implemented at a single site at the time of writing, with plans to extend it to the remaining nine sites. The reporting period was 18 months (every three cycles), placing it at the low-frequency end of MSC applications — consistent with the Guide's acknowledgement that yearly collection is used in some organisations (VSO is cited as an example), but carrying the attendant risk that participants may struggle to recall specific changes over such a long interval.
3a. Resistance to selecting a single story (composite story problem)
The most significant challenge was that participants in the collective group stage were "wedded to their own story and did not want to claim any story as most significant." Rather than selecting a single story, they produced composite stories incorporating elements from all submissions. The authors acknowledge these had "some value and insight" but represented a departure from MSC methodology.
The MSC Guide addresses this scenario directly, offering several options when consensus cannot be reached: choose two stories to reflect the range of views; decide that none of the stories adequately represents what is valued; or choose one story but add a caveat recording the dissenting views. Notably, producing a composite story — merging all stories into one synthesised account — is not among these options, and for good reason: it erases the distinctive perspectives and voices that individual stories carry, and it undermines the transparency of the selection process. The resolution adopted (moving final selection authority to local staff and the Solutions Committee) is a pragmatic response but represents a further shift away from participant-driven selection, which the Guide identifies as one of MSC's key participatory features.
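One way to see why composite stories fall outside the method is to model the Guide's consensus-failure options as a closed set of selection outcomes. The data model below is a hypothetical illustration, not anything from the Guide itself; the point is that a merged composite has no representation in it.

```python
from dataclasses import dataclass

@dataclass
class SelectOne:
    story_id: str
    reasons: str              # why this story was chosen
    dissent_caveat: str = ""  # record of dissenting views, if any

@dataclass
class SelectTwo:
    story_ids: tuple[str, str]  # two stories, to reflect the range of views
    reasons: str

@dataclass
class NoneAdequate:
    reasons: str  # why no story represents what is valued

# The legitimate outcomes when consensus fails. Note there is deliberately
# no "CompositeStory" variant: merging stories is not a selection, and it
# discards the individual voice each story carries.
SelectionOutcome = SelectOne | SelectTwo | NoneAdequate
```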
3b. Stories were positive and observational rather than personal
The authors note that "participants expressed only positive stories and most stories were written from an observational perspective as opposed to a personal experience." Both of these are well-recognised problems in MSC practice. The MSC Guide explicitly notes that 90–95% of stories in standard applications tend to be about positive changes, and identifies facilitator signalling as the key factor: if only positive stories are implicitly welcomed, only positive stories will be provided. The Guide recommends creating a domain specifically for negative stories or framing a domain as "lessons learned," and being explicit with participants that negative changes are equally valuable.
The observational rather than personal framing is also discussed in the Guide, which emphasises that MSC stories should document who was involved, what happened, where and when — ideally as a first-person narrative. The Guide warns against field staff using MSC as a generic reporting channel, producing "bullet points and general discussions" rather than specific narrative accounts. The plans described by the authors ("in future efforts, we will probe participants to provide personal stories of change and clarify that there could be negative stories") are appropriate but had not been implemented as of the time of writing.
3c. Bias from listening to others' stories before writing
In some sub-groups, participants heard others' stories before writing their own, creating a risk that earlier stories shaped what was subsequently written. The solution adopted — having each participant write their own story before any verbal sharing — is sound practice and is consistent with the Guide's broader concern about maintaining authentic individual voices throughout the process.
3d. Limited rollout
MSC was only implemented in one of the ten sites at the time of writing, with the stated intention to apply it to the remaining sites. While the Guide does recommend piloting before full-scale rollout, the program had been running for five years across ten sites, making the limitation of evidence to a single MSC application noteworthy. It also means that the MSC results cannot be triangulated across sites, which is one of the primary analytical advantages of a systematic MSC rollout.
Despite the methodological limitations, the MSC process delivered several tangible benefits.
Evidence that quantitative tools could not generate. The score card data and attendance figures showed aggregate patterns (e.g., quality of care scores rising from ~62% to ~81%), but the MSC stories provided texture and meaning: why people had stopped attending the clinic, what specific changes reversed that, and how collective action felt from the inside. The youth story's description of people previously avoiding the clinic due to "insults, dirt, and the bad temper of staff," and now saying "it is a pleasure to go to the clinic," conveys something the score card cannot.
Stakeholder reflection and empowerment. The authors report that "those who participated in the MSC were more regular participants in activities afterwards." The MSC process appears to have generated a reflective loop — seeing their own changes narrated back to them strengthened participants' sense of agency. This aligns with the Guide's observation that MSC, when implemented successfully, shifts the focus of "whole teams of people" toward program impact.
Advocacy and external communication. MSC stories were used for advocacy at higher levels of the health system, shared at a regional pre-symposium meeting for Latin America and the Caribbean ahead of the Health Systems Global Symposium, and used to support national-level partnership development. This is consistent with the Guide's framing of MSC as a tool that not only monitors change but communicates it persuasively to stakeholders who were not present.
Organisational learning and spread. The stories were shared with other country programs to demonstrate what was possible and build buy-in for the CSC approach. This is a recognised secondary function of MSC — using the evidence generated by one implementation context to inform and motivate others.
Staff inspiration. The authors note that MSC "provided local staff with tremendous inspiration seeing the impact of the CSC from the participants' perspectives." The Guide identifies this as an important and often underappreciated benefit: monitoring systems that generate rich qualitative accounts of change tend to motivate staff in ways that indicator dashboards do not.
Several lessons emerge from this case, some of which confirm long-standing guidance in the MSC Guide and others that are more specific to this application context.
On filtering structure. The horizontal filtering approach (simultaneous cross-stakeholder dialogue at a single event) differs fundamentally from the vertical approach described in the Guide (stories moving up through organisational or project hierarchy across time). Both approaches have validity, but the purposes they serve differ. Vertical filtering is particularly valuable for organisational learning — it surfaces the values held at different levels of authority and creates a communication loop across the hierarchy. Horizontal filtering within a single-site stakeholder event is better suited to building shared understanding and collective reflection within the group. Future users should be clear about which function they are prioritising, and design accordingly. In this case, neither the reasons for selection nor the feedback loop (Step 6 of the Guide) appears to have been systematically implemented — both are important methodological elements.
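For contrast with the horizontal sketch earlier, here is a sketch of the vertical process the Guide describes, including the two elements flagged as missing in this case: recorded reasons for selection and a feedback loop (Step 6). All names are hypothetical and the structure is simplified.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Selection:
    story_id: str
    level: str    # e.g. "clinic", "regional office", "headquarters"
    reasons: str  # the documented "why" -- per the Guide, arguably the most
                  # analytically valuable output of the whole exercise

@dataclass
class FeedbackLog:
    entries: list[Selection] = field(default_factory=list)

def vertical_filter(
    unit_pools: list[list[str]],  # story-id pools from units at the bottom level
    levels: list[str],
    select: Callable[[str, list[str]], tuple[str, str]],  # -> (story_id, reasons)
    log: FeedbackLog,
) -> str:
    """Stories move up the hierarchy over time: each unit at a level picks one
    story and records its reasons; the choices pool together for the level
    above. The log is what gets fed back down to storytellers (Step 6)."""
    for level in levels:
        choices = []
        for pool in unit_pools:
            story_id, reasons = select(level, pool)
            log.entries.append(Selection(story_id, level, reasons))
            choices.append(story_id)
        unit_pools = [choices]  # all selections become the next level's pool
    return unit_pools[0][0]     # the single story that reached the top
```

The `log` is the part this implementation lacked: without recorded reasons at each selection point, the values driving selection stay invisible, and there is nothing to feed back to the communities whose stories were filtered out.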
On composite stories. The composite story problem encountered here is a recognisable failure mode. Future implementations should prepare facilitators in advance for the possibility that stakeholders will resist choosing between stories. The Guide's options — selecting two stories, adding caveats, or declaring that no single story is adequate — are preferable to synthesis, which obscures individual voices and makes it harder to understand what specific experience generated the change.
On signalling story type. Proactively signalling at the outset that personal stories are valued, that observational accounts are insufficient, and that negative or mixed change is equally worth capturing, is not optional guidance but a prerequisite for valid MSC data. The MSC Guide emphasises that the profile of stories collected directly reflects the signals given by facilitators and organisational culture.
On frequency and memory. An 18-month reporting period is long, and the risk of recall bias is real. The Guide notes that low-frequency reporting runs the risk of staff and participants forgetting how things actually were earlier. Coupling the MSC exercise with regular review of previous cycle documentation (action plans, score card trends) could help anchor recall.
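A trivial sketch of how that coupling might be scheduled, assuming the cadence reported in the paper (6-month CSC cycles, MSC every third cycle); the document names are invented for illustration.

```python
CYCLE_MONTHS = 6        # CSC cycles run roughly every six months
MSC_EVERY_N_CYCLES = 3  # MSC every third cycle, i.e. ~18 months

def is_msc_cycle(cycle: int) -> bool:
    return cycle % MSC_EVERY_N_CYCLES == 0

def recall_anchors(cycle: int) -> list[str]:
    """Documentation from the cycles since the last MSC event, reviewed with
    participants before story collection to anchor an 18-month recall window."""
    first = cycle - MSC_EVERY_N_CYCLES + 1
    return [f"cycle {c}: action plan, score card trend" for c in range(first, cycle + 1)]

for cycle in range(1, 10):
    if is_msc_cycle(cycle):
        print(f"MSC at cycle {cycle} (~month {cycle * CYCLE_MONTHS}):", recall_anchors(cycle))
```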
On domain design. The paper does not specify what domain question(s) were used to frame story collection. This is an important design element that the MSC Guide addresses at length. In the context of a CSC program, plausible domain questions might focus on changes in community-provider relationships, changes in participants' sense of agency, or changes in service access and quality. Making this explicit would strengthen methodological transparency and allow comparisons across sites and cycles.
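To illustrate what an explicit domain design might look like, here is a hypothetical configuration built from the framings suggested above; none of these questions appear in the paper itself.

```python
# Hypothetical MSC domain questions for a CSC context -- illustrative only.
DOMAINS = {
    "relationships": "In the last 18 months, what was the most significant change "
                     "in how the community and clinic staff relate to each other?",
    "agency": "...in your own ability to influence the quality of care you receive?",
    "access_quality": "...in your access to, or the quality of, clinic services?",
    # Following the Guide's advice on negative stories (see 3b above):
    "lessons_learned": "...that did not work, got worse, or taught a hard lesson?",
}
```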
On the meta-monitoring distinction. The use of MSC here to evaluate participants' experience of the CSC process — rather than health outcomes directly — is a legitimate application but one that needs to be clearly labelled. The stories generated are accounts of second-order change: how the CSC transformed community-provider relations, collective capacity, and social trust. These are not accounts of health improvement per se. Future reporting should make this level of analysis explicit to avoid sliding into claims about health outcomes that the MSC stories do not, by design, address.
On scale. Implementing MSC in only one of ten sites after five years of program operation limits the evidentiary value substantially. The Guide recommends piloting to test the technique in context, but then scaling systematically. The benefits documented here — staff inspiration, advocacy material, stakeholder empowerment — would be substantially amplified by consistent implementation across sites, and would allow comparative analysis of where change is occurring, what types of change are being valued, and whether the changes being selected as most significant vary by community context (e.g., urban versus rural versus Batey communities).
Overall Assessment
The paper provides a candid account of an MSC application that is recognisable, useful, and imperfect in ways that are well-understood in the MSC literature. The authors' willingness to document what went wrong (composite stories, observational framing, positive-only bias) and their plans for correction are methodologically honest. The core insight that MSC fills a qualitative gap that quantitative toolkit components cannot fill is well-founded. The main methodological weaknesses relative to the 2005 Guide are the absence of a documented domain question, the lack of a systematic feedback loop, the failure to capture and record the reasons for story selection (which the Guide identifies as arguably the most analytically valuable part of the exercise), and the limited single-site rollout. Addressing these in the next phase of implementation would substantially strengthen what is already a well-conceptualised use of MSC within a broader M&E system.
The Guide's section on domains of change makes three main points.
First, domains are genuinely optional, not a required element of MSC. Stories can be collected and analysed as a single undifferentiated group. This is particularly practical in smaller-scale applications where the volume of stories is manageable without categorisation.
Second, when domains are used, there is flexibility about when they are introduced. They do not have to be specified in advance. The VSO example illustrates this: field staff collected stories without domain guidance, and categorisation happened later at country office level. The South Australia example goes further — domains emerged from participants' own experiences rather than being imposed by the organisation. The point is that pre-specifying domains anchors story collection to organisational objectives, whereas leaving them open (or generating them post-collection) gives more weight to what participants themselves find significant.
Third, the section raises a broader question about whether domains should be restricted to changes caused by the intervention, or whether they should also be open to significant changes arising from other sources. Most practitioners have focused on intervention-attributable change, but the Guide suggests this may reflect an unnecessarily narrow worldview, and that tracking both types simultaneously — using some domains for programme-caused change and others for contextual change — can provide a more complete picture.
In short: the section is a caution against treating domains as obligatory infrastructure. Their value is real but conditional on context, scale, and purpose. Their absence is not a methodological weakness.
That corrects my earlier conclusion. The Morse et al. paper's failure to document a domain question is not necessarily a weakness — given the small scale of the MSC implementation (a single site, one group event per 18-month cycle), operating without pre-specified domains is entirely consistent with the Guide's own guidance.