Prompt to Claude AI: Please read this attached paper and summarise the process and results, with a focus on:
1. Why MSC was used
2. How it was used
3. Any challenges in how it was used and if and how these were overcome
4. What benefits its use delivered
5. Any possible lessons for future users or use of MSC.
Do all this in the light of knowledge provided by the 2005 MSC Guide, as provided in the project folder. View that document as a set of guidelines about a method that can, and perhaps should, continue to evolve in its design as it is applied to new and more varied circumstances
Full citation: Romero-Torres, A., Leroux, M.-P., Primeau, M.-D., Delisle, J., & Coulon, T. (2025). Toward Sustainable Project Management Practices: Lessons from the COVID-19 Pandemic Using the Most Significant Change Method. Sustainability, 17, 5999.
The authors' primary motivation was substantive rather than methodological: they wanted to understand how project management (PM) professionals in Quebec adapted their practices during the first COVID-19 lockdown (March–August 2020). Quantitative indicators — the standard currency of PM evaluation and benchmarking — could not capture the nuanced, contextually embedded experience of practitioners navigating an unprecedented disruption. The authors explicitly recognised that adaptive practice changes in communication, governance, decision-making, and resource management would be "overlooked by quantitative standardised analysis."
MSC was chosen because it foregrounds lived experience and perceptions, tolerates unanticipated change, and can surface both the content of change and the values practitioners attach to it. The authors also noted MSC's capacity to capture unexpected outcomes — both positive and negative — a property particularly relevant in a crisis context where outcomes were unplanned by definition.
There is a secondary motivation: the paper situates itself within sustainability and organisational resilience scholarship, and the authors use MSC partly to make a methodological contribution — demonstrating that MSC is applicable in management science, a field with essentially no prior documented MSC history.
Assessment against the Guide: The rationale is well aligned with the Guide's framing. The Guide describes MSC as particularly appropriate when programs have complex, unpredictable outcomes and when understanding why change is valued is as important as knowing what changed. A pandemic-driven transformation in professional practice is exactly the kind of context where MSC adds value that conventional evaluation cannot.
The application involved several adaptations from the canonical process described in the Guide, some of which are clearly disclosed by the authors.
Story collection: Stories were gathered via a self-administered online survey — a significant departure from the conversational, dialogical collection typical of MSC. The survey had three sections: a quantitative Likert-scale section identifying degree of change across PMBOK knowledge areas; an open-question section inviting stories of the most significant change in PM practice during confinement; and a socio-demographic section. Participants could describe up to three stories. In total, 47 of 113 respondents provided at least one story, yielding 114 stories.
Domains of change: Domains were predefined by the researchers. Rather than being derived from the participants' own framings, they were drawn from two external standards: the PMBOK Guide (for project management practices) and the PMI Standard for Organizational Project Management (for governance practices). This produced fifteen domains, of which five had zero stories (including quality management, knowledge management, and competencies management). Quantitative data from the Likert section were used to help identify which domains were most active, effectively using the survey architecture to pre-structure the story landscape.
Participants: Two distinct groups were sampled — strategic-level contributors (portfolio managers, project directors, PMOs) and operational-level contributors (project managers, agile coaches, scrum masters). This bi-level design was deliberate and analytically productive, enabling comparison of how differently positioned practitioners experienced the same disruptions.
Selection process: In the canonical Guide, stories are reviewed and iteratively selected by stakeholder panels through multiple rounds, with feedback sent back to story contributors. In this study, the selection panel comprised three researchers (not stakeholders), and the process was conducted in a single round. The researchers first independently coded and evaluated stories, then met to reach consensus. An inter-rater agreement of ~85% was reported, with discrepancies resolved through discussion.
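The paper reports the agreement figure without detailing how it was computed. A minimal sketch of one common approach is shown below: average pairwise percent agreement across three coders, followed by flagging disputed stories for consensus discussion. The coder labels, domain codes, and data are hypothetical illustrations, not the study's actual coding records.

```python
from itertools import combinations

# Hypothetical domain codes assigned by three coders to the same ten stories.
# Labels and assignments are illustrative only, not the study's data.
codes = {
    "coder_A": ["communication", "schedule", "communication", "resources",
                "decision", "stakeholders", "schedule", "communication",
                "resources", "decision"],
    "coder_B": ["communication", "schedule", "communication", "resources",
                "decision", "stakeholders", "risk", "communication",
                "resources", "decision"],
    "coder_C": ["communication", "schedule", "governance", "resources",
                "decision", "stakeholders", "schedule", "communication",
                "resources", "decision"],
}

def pairwise_agreement(a, b):
    """Share of stories on which two coders assigned the same domain."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Average agreement over all coder pairs: the simplest headline figure.
pairs = list(combinations(codes, 2))
avg = sum(pairwise_agreement(codes[p], codes[q]) for p, q in pairs) / len(pairs)
print(f"Average pairwise agreement: {avg:.0%}")  # ~87% for these made-up codes

# Stories with any disagreement go to group discussion for consensus.
disputed = [i for i in range(10) if len({codes[c][i] for c in codes}) > 1]
print("Stories needing consensus discussion:", disputed)
```

A chance-corrected statistic (Cohen's or Fleiss' kappa) would typically be reported alongside raw percent agreement; the paper does not say which was used, so this sketch shows only the simpler measure.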
Feedback loops: No feedback to story contributors is mentioned. The Guide identifies feedback loops as a fundamental element of MSC — one of only three steps it considers essential — making this the most significant structural omission.
The authors are candid about the constraints they faced and the adaptations they made. Three challenges stand out.
Challenge 1: Conducting research during an active pandemic. The population the authors wanted to reach — project management professionals — was under extreme time pressure during the very period being studied. Convening iterative stakeholder panels or conducting interviews was not practicable. The authors' response was to shift to an asynchronous self-administered online survey, which allowed broad reach without requiring synchronous participation. This preserved the core intent of story collection while sacrificing the dialogical richness of face-to-face data gathering.
Assessment: The Guide is explicit that MSC can and should be adapted to context. An online survey is a reasonable pragmatic response to the constraint. However, self-administered open-question responses tend to be shorter and less contextually detailed than stories elicited through conversation or structured facilitation. Some of the 114 stories are described as fragments rather than fully developed narratives, which limits what can be extracted from individual accounts.
Challenge 2: Stakeholder unavailability for iterative selection panels. The authors could not convene "a consistent and representative group of stakeholders for iterative deliberation." Their resolution was to use a researcher panel (three members) conducting a single-round selection with independent coding followed by group deliberation. The 85% initial agreement rate lent the process some inter-rater credibility.
Assessment: This is a substantive departure. The Guide's iterative selection process serves multiple purposes simultaneously: it identifies significant stories, but it also generates organisational learning, reveals what stakeholders value, and creates accountability for those judgments. Substituting researchers for stakeholders and collapsing multiple rounds into one means that all three of these secondary functions are lost. The authors acknowledge this limitation honestly. It is worth noting that this same adaptation — researcher panels replacing stakeholder panels — appears in several other papers in the corpus (notably Wells et al., P11), suggesting it may be a structural feature of MSC adoption in institutional research contexts rather than an exceptional workaround.
Challenge 3: Absence of feedback loops. The authors do not explicitly flag this as a challenge, but the survey architecture meant that participants received no feedback on which stories were selected and why. This is a straightforward omission rather than a managed trade-off.
Assessment: The Guide treats feedback as one of MSC's three fundamental steps, valuing it not only for learning but for maintaining participant engagement and trust. In this case, participants were PM professionals contributing their reflections to research; their role was closer to that of survey respondents than programme participants, so the absence of feedback may be less consequential to individual participants than it would be in a programme evaluation context. Nevertheless, it means that one of MSC's key mechanisms for generating mutual understanding across organisational levels was absent.
Despite the adaptations, the use of MSC delivered several concrete benefits — both substantive and methodological.
Substantive findings: The 114 stories produced a rich, multi-domain account of how project practitioners experienced pandemic disruption. Five substantively significant findings emerged:
Communication was the most-changed domain, with the shift to digital tools producing both benefits (efficiency, flexibility) and costs (loss of informal exchange, digital fatigue, reduced ability to read relational cues).
Decision-making revealed a structural tension: strategic-level actors moved toward centralised control, while operational-level contributors sought greater autonomy and argued they possessed the contextual knowledge needed for better decisions. MSC surfaced this governance misalignment in a way that survey scales could not.
Schedule management documented both pandemic-induced delays and the emergence of emergency management logics — expediting certain projects while deprioritising others.
Stakeholder engagement showed strong internal cohesion at the operational level but difficulty maintaining engagement with external stakeholders.
Resource management exposed challenges in re-allocating personnel and evaluating performance under conditions of constrained oversight.
Bi-level comparison: The deliberate stratification of participants by hierarchical level was analytically valuable. The contrast between operational adaptability and strategic rigidity — a finding the authors connect to governance theory — emerged clearly from the stories and would likely have been invisible to a single-level survey.
Methodological contribution: The paper demonstrates, as a proof of concept, that MSC can function in management science and organisational research. This is non-trivial: it opens a new domain of application for a method that has been predominantly confined to international development, health, and education. The study is described by the authors as, to their knowledge, the first use of MSC to explore change in project governance during a global crisis.
Novel domain framing: MSC's strength in surfacing the subjective valuation of change — not just documenting that practices shifted, but understanding why practitioners considered certain shifts significant — was particularly well suited to this context, where the goal was to distinguish reactive coping from the emergence of more durable resilience-oriented practices.
Several lessons emerge from this application, considered against the Guide's framework and in the context of the wider corpus.
On domain construction: The use of PMBOK knowledge areas and PMI governance domains as the classification framework for MSC stories is an interesting hybrid. On one hand, it grounded the analysis in standards familiar to the participant population and enabled structured comparison across domains. On the other, it is a strong form of pre-specification that almost certainly shaped which stories were elicited and how they were interpreted. The five zero-story domains (quality management, knowledge management, competencies management, accountability, and one other) may reflect genuine absence of change in those areas during the pandemic — or they may reflect the difficulty of generating stories within the confines of formal taxonomy categories. The Guide explicitly notes that predefined domains can constrain participant-led discovery, and recommends starting without domains where participant-led framing is a priority. Future users in management research contexts might consider a hybrid approach: allowing open-ended story collection in the first round and then mapping to frameworks post hoc.
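To make the hybrid suggestion concrete, the sketch below shows one minimal way open-ended stories could be mapped to a PMBOK-style framework after collection, with unmatched stories flagged for manual review rather than forced into a category. The keyword lists and example stories are illustrative assumptions, not part of the study's instrument; in a real application, human coding or a validated classifier would replace keyword matching.

```python
# Minimal post-hoc mapping of free-text change stories to framework domains.
# Keyword lists and stories are illustrative assumptions only.
DOMAIN_KEYWORDS = {
    "communication": ["meeting", "email", "video call", "informal exchange"],
    "schedule": ["deadline", "delay", "timeline", "milestone"],
    "resources": ["staff", "reallocation", "workload", "personnel"],
    "decision-making": ["approval", "autonomy", "escalation", "decision"],
}

def map_story(story: str) -> list[str]:
    """Return all domains whose keywords appear in the story; empty means unmapped."""
    text = story.lower()
    return [domain for domain, words in DOMAIN_KEYWORDS.items()
            if any(w in text for w in words)]

stories = [
    "Daily video call replaced our informal exchanges at the coffee machine.",
    "Every small decision now needed approval from the portfolio office.",
    "We repainted the office.",  # deliberately unmappable
]

for s in stories:
    domains = map_story(s)
    print(domains if domains else "UNMAPPED - route to manual review", "|", s)
```

The key design point is the explicit "unmapped" outcome: a zero-story cell produced this way records that no story fit the category, rather than silently discarding or misfiling stories that fall outside the predefined taxonomy.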
On researcher-as-panel: The substitution of a researcher panel for a stakeholder panel effectively converts MSC from a participatory tool into a qualitative data analysis tool with a story-based data source. This is a legitimate research design, but it changes what the method does. Future users in research contexts should be clear with themselves and their readers about this shift, and consider whether any mechanism can be introduced to preserve at least some stakeholder involvement in story evaluation.
On the absence of feedback: Even in a survey-based adaptation, feedback loops can be designed in. Sending a brief summary of selected stories back to participants — with a note about why they were selected — would have preserved one of MSC's most distinctive features at modest cost. Future users in similarly constrained contexts should consider this as a minimum intervention rather than treating feedback as an optional extra.
On scale and story quality: With 47 participants generating 114 stories in an online survey format, the per-story depth is necessarily constrained. The Guide warns that very brief stories with poorly documented explanations risk failing to identify significant change in a meaningful way — the process may simply confirm existing views. Future adaptations using survey collection should consider minimum story-length guidance or follow-up prompts to elicit richer explanations of why the change was significant to the respondent.
On opening new domains: This paper's most durable contribution to the MSC community may be demonstrating that the method travels well into management science. Crisis contexts — where change is rapid, unanticipated, and experienced differently across organisational levels — are particularly good candidates for MSC. Future practitioners designing resilience-oriented organisational research should consider MSC as an alternative to post-hoc interview and survey designs, particularly where the goal is to capture the value dimension of change rather than just its occurrence.
On transparency about adaptation: The paper is admirably transparent about what was changed from the canonical approach and why. This matters for the MSC literature. Papers that adapt MSC silently — without noting departures from the Guide's process — make it harder for readers to assess the credibility of findings or learn from the adaptation choices. This paper provides a model for honest methodological disclosure.
This paper (P9 in the corpus) sits at a productive frontier for MSC: a high-income, management science context where the method's participatory and dialogical strengths were partially sacrificed to research constraints, but where its fundamental capacity to surface valued change across hierarchical levels delivered findings that conventional methods would have missed. The study's most important contribution to the method's development is not any specific finding about COVID-era PM practices, but the demonstration that MSC can generate credible, analysable data in professional organisational research when applied with care and transparency about its adaptations.
Follow-on questions worth considering:
The bi-level (strategic vs. operational) design produced some of the richest findings — does this suggest a general principle that MSC applications gain analytical power when story collection is stratified by organisational level rather than treating "participants" as homogeneous?
The PMBOK-based domains produced five empty cells. Does zero-story absence in a predefined domain carry analytical information (nothing significant changed there) or methodological noise (the domain category didn't invite stories)? This is relevant for the HCS matrix's interpretation of absent data points.