Quality of Evidence and MSC stories


rick davies

Nov 26, 2020, 5:38:42 AM
to mostsignificantchang...@googlegroups.com, Tom Aston
Hi all

Two recent documents have prompted me to do some thinking on this subject.
If we view MSC stories as evidence of change (and of what people think about those changes), what attributes of quality should we look for?

Some suggestions that others might like to edit or add to, or even delete...

1. There is clear ownership of an MSC story, and of the reasons for its selection, by the storyteller. Without this there is no possibility of clarifying any elements of the story and its meaning, let alone of more detailed investigation/verification.

2. There was some protection against random/impulsive choice. The person who told the story was asked to identify a range of changes that had happened before being asked to identify the one that was most significant.

3. There was some protection against interpreter/observer error. If another person recorded the story, did they read back their version to the storyteller, to enable them to make any necessary corrections?

4. There has been no violation of ethical standards: Confidentiality has been offered and then respected. Care has been taken not only with the interests of the storyteller but also of those mentioned in a story.

5. Have any intended sources of bias been identified and explained? Sometimes it may be appropriate to ask about "most significant changes caused by ...xx..." or "most significant changes of ...x... type".

6. Have any unintended sources of bias been anticipated and responded to? For example, by also asking about "most significant negative changes" or "any other changes that are most significant"?

7. There is transparency of sources. If stories were solicited from a number of people, we know how these people were identified, and who was excluded and why. If respondents were self-selected, we know how they compare to those who did not self-select.

8. There is transparency of the selection process: if multiple stories were initially collected and the most significant of these were then selected, reported, and used elsewhere, the details of the selection process should be available, including (a) who was involved, (b) how choices were made, and (c) the reasons given for the final choice(s).

9. Fidelity: Has the written account of why a selection panel chose a story as most significant done the participants' discussion justice? Was it sufficiently detailed, as well as being truthful?

10. Have potential biases in the selection processes been considered? Do most of the finally selected most significant change stories come from people of one kind rather than another, e.g. men rather than women, or one ethnic or religious group rather than others?

11.    your thoughts here on...
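For anyone who wants to apply criteria like these systematically across a batch of stories, they could be treated as a simple checklist. The sketch below is purely illustrative: the short criterion labels and the met/not-met scoring scheme are my own assumptions, not part of the MSC technique itself.

```python
# Illustrative sketch only: a met/not-met checklist for quality-of-evidence
# criteria like the ten listed above. The labels and the binary scoring are
# assumptions for illustration, not a standard part of MSC.

CRITERIA = [
    "clear ownership of story and selection reasons",
    "protection against random/impulsive choice",
    "protection against interpreter/observer error",
    "no violation of ethical standards",
    "intended sources of bias identified and explained",
    "unintended sources of bias anticipated and responded to",
    "transparency of sources",
    "transparency of selection process",
    "fidelity of the written selection account",
    "selection-process biases considered",
]

def assess(story_checks: dict) -> tuple[int, list]:
    """Return (number of criteria met, list of criteria not met)."""
    unmet = [c for c in CRITERIA if not story_checks.get(c, False)]
    return len(CRITERIA) - len(unmet), unmet

# Example: a story judged to meet only the first three criteria.
checks = {c: (i < 3) for i, c in enumerate(CRITERIA)}
met, unmet = assess(checks)
print(f"{met}/{len(CRITERIA)} criteria met")  # prints "3/10 criteria met"
```

Even a minimal tally like this makes it easy to see which evidence criteria are routinely unmet across a collection of stories, without pretending the criteria themselves are anything more than prompts.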

Please note that in focusing here on "quality of evidence" I am not suggesting that the only use of MSC stories is to serve as forms of evidence. Often the process of dialogue is immensely important, and it is the clarification of values, of who values what and why, that matters most. And there are bound to be other purposes served as well.

best wishes, rick

Rick Davies (Dr), Monitoring and Evaluation Consultant, Cambridge, UK. Websites: http://www.mande.co.uk and http://richardjdavies.wordpress.com/ | Twitter: @MandE_NEWS | rick....@gmail.com | Skype: rickjdavies



Steve Powell

Nov 27, 2020, 9:19:03 AM
to rick davies, mostsignificantchang...@googlegroups.com, Tom Aston, Simone Ginzburg
Hi Rick, that's really interesting. I'd never thought about quality criteria for MSC before. 
All the criteria you mention are great ways to, as you say, avoid bias. But these are in a sense negative criteria; do you have any positive criteria? And what is bias if we don't have some central criterion of validity or accuracy, some sense of what an unbiased resulting story would be?
To be sure, MSC is a process for supporting an emergent product, so of course you can't talk directly about accuracy as if it were a numerical measurement. But surely you'd want to include some criteria about whether the emerging story is in fact, in some sense, a significant change, and in some sense the, or a, most significant change?
It seems like you're saying: we only have negative ("anti-bias") quality criteria because I just know that if you follow exactly this unbiased process, you will in fact come up with something that warrants the name "most significant change story".
In fact it's still a negative criterion, but you could at least add: "participants individually and collectively understood the instructions accurately, they understood what we meant by most significant, and they were motivated and enabled to join in a search for that: basically, the process was such that with a bit of good luck they would arrive at a story or stories which were, in some intersubjective sense, in fact the most significant change they had experienced, or at least a good picture or illustration of such a change".
Or is this going too far?
Best wishes,
Steve





--
Causal Map: Identify and visualise causal connections in speech and writing
_____________________________________________________________

independent social researcher
skype: stevepowell99
mobile: +44 75 1088 1300


rick davies

Nov 27, 2020, 9:28:12 AM
to Steve Powell, mostsignificantchang...@googlegroups.com, Tom Aston, Simone Ginzburg
Hi Steve

I think the short answer is: no.
It is quite contrary to the essence of how MSC works to try to set up an independent standard of what constitutes a "significant" or "most significant" story. Those are criteria that have to be discovered/identified by MSC process participants, within their particular contexts.

However, evidence criteria can and do relate to that process: its transparency, etc.

regards, rick

rick davies

Nov 27, 2020, 12:11:20 PM
to Tom Aston, Steve Powell, mostsignificantchang...@googlegroups.com, Simone Ginzburg
Hi Tom

I think there are at least two interesting issues here:

1. Many of the standards examined in that study of eighteen different standards were to do with evidence generated through an evaluation, or even a synthesis/meta-analysis of evaluations. In contrast, MSC stories are closer to items of information rather than the results of extensive analyses. That is one reason for thinking that the evidence criteria might need to differ.

2. I think there's a lot to be said for minimum standards, as distinct from more optimal or even ideal expectations. In evolution, survival is a minimal standard; so long as that standard is met, organisms (including people) have some freedom about what to do thereafter, i.e. go to war or build cathedrals, et cetera. I would be very reluctant to start proposing anything other than minimum standards for MSC processes and products, because so much of MSC is about discovering what we value.

Regarding the four criteria you have listed below, quite a few of these, if not all, are intuitively applicable to the use of MSC. I think they could be quite useful prompts for anyone looking at the result of an MSC process. But I'm not sure that I would want to give them the status of anything more than a prompt, for the reasons given above.

Regards, Rick


On Fri, Nov 27, 2020 at 3:03 PM Tom Aston <thomas...@gmail.com> wrote:
Hi Rick,

I think a lot of what you included seems like useful quality criteria, which I'd tend to agree with. However, Steve is probably right that there might be a more positive angle too. A lot of this is the kind of discussion you find in the book What Counts as Credible Evidence in Applied Research and Evaluation Practice? I don't agree with all of it, but I thought Scriven's and Schwandt's chapters were probably most appropriate here (e.g. credibility, relevance, and probative value for Schwandt).

Most of what I had proposed fits within a fairly narrow view of rigour (albeit quite consistent with Schwandt), and perhaps for MSC, you might want something a bit more expansive. 

Hallie Preskill and Jewlya Lynn, for instance, argue that we should redefine rigour for evaluation in complex adaptive settings. These were their four proposed criteria:


  1. Quality of the Thinking:  The extent to which the evaluation’s design and implementation engages in deep analysis that focuses on patterns, themes, and values (drawing on systems thinking); seeks alternative explanations and interpretations; is grounded in the research literature; and looks for outliers that offer different perspectives.
  2. Credibility and Legitimacy of the Claims: The extent to which the data is trustworthy, including the confidence in the findings; the transferability of findings to other contexts; the consistency and repeatability of the findings; and the extent to which the findings are shaped by respondents, rather than evaluator bias, motivation, or interests.
  3. Cultural Responsiveness and Context: The extent to which the evaluation questions, methods, and analysis respect and reflect the stakeholders’ values and context, their definitions of success, their experiences and perceptions, and their insights about what is happening.
  4. Quality and Value of the Learning Process: The extent to which the learning process engages the people who most need the information, in a way that allows for reflection, dialogue, testing assumptions, and asking new questions, directly contributing to making decisions that help improve the process and outcomes.
Some of those might take you in a slightly different direction, leaving more space for self-defined criteria, even if it might be quite reasonable to have some independently defined criteria.


Hope that helps, and hope others find the rubrics useful (within limits).

Best

Tom.

Steve Powell

Nov 27, 2020, 12:11:22 PM
to rick davies, mostsignificantchang...@googlegroups.com, Tom Aston, Simone Ginzburg
Interesting ... but then if you rely 100% on the process, how do you know the process is pointing in the right direction? When you translated the MSC handbook into different languages, I'm sure you had discussions about some of the words, phrases, and metaphors you use, to make sure everyone was on the same page, and on the same page as you. Surely the content of those discussions is in itself a kind of outline of what constitutes a correct MSC process, one which is positive in content and not only bias-avoiding?
I'm sure you've spent a lot more time thinking about this than any of the rest of us at least re MSC, but as Tom says this discussion is also interesting as it applies to other cases too beyond MSC.
cheers
steve 

rick davies

Nov 27, 2020, 12:24:07 PM
to Steve Powell, mostsignificantchang...@googlegroups.com, Tom Aston, Simone Ginzburg
Hi Steve

I think if you look at the ten criteria listed, they are not all about process – some of them are about the actual reported most significant changes.

I agree it probably would not be a good idea to focus only on process, without also attending to the nature of the product.

Regards, Rick