Houston, we have a risk analysis quality problem.


Apolonio Garcia

May 17, 2021, 4:28:25 PM
to SiRA-public
[Duplicate of my post on the public-SIRA Slack channel for non-paid members]

Back in 2018 I asked SIRAnauts the following question in a survey: "Do decision-makers or other stakeholders perceive a potential consequence associated with the use of faulty risk analysis methods?"  I got 17 responses, of which 65% of folks answered "No" or "Unsure". Not a great response rate, but the responses I got were consistent with my personal experience. 

Last week I sent out another informal survey to SIRA Paid and Google Group members, and a group of healthcare cybersecurity professionals, with a follow-up question: "Does your organization's risk analysis process include a formal peer review/quality control step?" This time I got 30 responses, and about 57% of folks answered "No", which again supports my hunch.
While these are very small sample sizes, I would think that, if anything, they are probably erring on the conservative side (more favorable), given that the populations I was sampling tend to be more "risk savvy".

So I would like to throw out a few questions for discussion/debate:
  • Does our industry/profession have a risk analysis quality problem? 
  • Do we need clearer / better-defined quality standards for (quantitative) risk analysis?
  • If we are the experts, and we don't have clear quality standards for risk analysis, how do we expect our stakeholders/customers/leaders to know good from bad (analysis)?

Apolonio 'Apps' Garcia
President/CEO, HealthGuard




Jeff Lowder

Jul 6, 2021, 2:46:20 PM
to SiRA-public
Hi Aps!

My answers to your questions:

Does our industry/profession have a risk analysis quality problem?

Yes! I think this is beyond reasonable doubt and part of the whole reason SIRA exists. Look at how many risk management frameworks and standards rely upon techniques which have already been empirically shown not to work.

Do we need clearer / better-defined quality standards for (quantitative) risk analysis?

In my opinion, the answer is either "no" or, at least, "it's not clear why." Adoption of FAIR would seem to go a long way towards solving the quality problem. 
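
To be clear, FAIR itself is a full taxonomy plus a calibrated-estimation process, which I won't reproduce here. But purely as an illustration of the kind of output any quantitative approach produces, here is a toy Monte Carlo sketch in Python. Every parameter below is invented for illustration, not a calibrated estimate:

# Toy sketch of the kind of output a FAIR-style quantitative analysis
# produces; not the FAIR standard itself. All parameters are invented.
import numpy as np

rng = np.random.default_rng(seed=1)
trials = 100_000

# Assumed loss event frequency: Poisson with mean 0.5 events/year.
event_counts = rng.poisson(lam=0.5, size=trials)

# Assumed per-event loss magnitude: lognormal with median ~$160k.
annual_loss = np.array(
    [rng.lognormal(mean=12.0, sigma=1.0, size=n).sum() for n in event_counts]
)

print(f"Mean annualized loss exposure: ${annual_loss.mean():,.0f}")
print(f"90th percentile annual loss:   ${np.percentile(annual_loss, 90):,.0f}")
print(f"P(annual loss > $1M):          {(annual_loss > 1e6).mean():.1%}")

The point is the output: a loss distribution you can interrogate, rather than a cell on a grid.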

If we are the experts, and we don't have clear quality standards for risk analysis, how do we expect our stakeholders/customers/leaders to know good from bad (analysis)?

Stakeholders/customers/leaders need to demand performance measurements of risk management processes. One crucial performance measurement is the Brier score. People should be demanding Brier scores from individual estimators and for risk forecasts as a whole.
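
For anyone who hasn't worked with it: the Brier score is just the mean squared difference between forecast probabilities and the 0/1 outcomes, so lower is better, and always saying "50%" scores 0.25. A minimal sketch in Python, with made-up numbers:

import numpy as np

# Brier score: mean squared error between forecast probabilities and
# the 0/1 outcomes. 0 is perfect; always forecasting 0.5 scores 0.25.
def brier_score(forecasts, outcomes):
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((forecasts - outcomes) ** 2))

# Made-up example: one estimator's probabilities vs. what actually happened.
p = [0.9, 0.7, 0.2, 0.1, 0.6]
y = [1, 1, 0, 0, 0]
print(brier_score(p, y))  # 0.102 -- lower is better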

Jeff


Gary Hinson

Jul 6, 2021, 7:00:25 PM
to sira-...@googlegroups.com

My answers, FWIW:

 

  1. “The industry/profession problem” (there are many!) is not simply the “quality” of RA.  We need a variety of risk analysis methods because the analytical requirements and situations vary.  Despite what proponents claim, FAIR (or whatever) is not a universal solution: other techniques may be a better fit-for-purpose, for instance where, say, mathematical accuracy and traceability are less important than speed and cost, or simply because the users are more comfortable with other approaches (including safety, financial, engineering and market/product RA methods in various organizational contexts).  Sometimes (e.g. where the risks are existential and hence sound RA is vital), multiple approaches make sense, and differences between approaches are themselves of interest/concern: are the differences simply due to the methods themselves, issues in the analytical processes e.g. errors in the data/analysis/presentation/understanding of results/assumptions … or are there in fact relevant factors that the methods treat/weight differently? 

 

  2. I don’t think we need better RA quality standards or metrics, but a better appreciation of the factors (quality parameters) that such standards or metrics might cover would be useful.  [I will mention that quality is a concept as tricky to define, analyse and measure as risk, so despite my having mentioned fitness-for-purpose, ‘quality standards’ or metrics open another can-o-worms.]

 

  3. “If we are the experts”?  Hey, what if we aren’t?  Maybe we should admit to ourselves that we are merely fallible humans, doing the best we can under trying circumstances 😊

 

As I said under point 1, differences, errors, discrepancies etc. are themselves of interest.  What are the factors that influence what we are doing and how we do it, the analytical results we generate, their utility and value etc.?  Of those factors, which are the most amenable to being improved in practice, and/or how much should we invest in such methodical improvements i.e. what are the net benefits less costs?  What about the factors that are hard or impossible to control?  How can/should we manage all of that in a systematic manner?

 

I realise this is getting a bit Zen but I challenge Jeff’s assertion that Brier score is a “crucial performance measurement” if that means utterly indispensable.  There are several factors here that could be measured in various ways: clarify what the measurement objectives are and we can come up with a shortlist of possible metrics to address them.  Alternatively, keep dancing around the objectives and avoiding the tricky questions for as long as you like and this will never be resolved.

 

Kind regards,

Gary

 


Gary Hinson

Ga...@isect.com

IsecT Limited

New Zealand

Information security

ISO/IEC 27001 standards

Security metrics

Security policies

 

 


Jack Whitsitt

Jul 6, 2021, 7:36:40 PM
to Apolonio Garcia, SiRA-public
This is just gatekeeping right now, IMO, and we shouldn't engage in it. 

It's useful, though, to unpack why there might be a risk analysis "quality" problem:

1. We don't actually understand much about the problem space (loss, triggers, controls, and the relationship between the three), even after all these years. We treat infosec like a tactical problem (more guns, more armor, better food, etc.) ... and it's not that at all. Risk analysis of a goofed-up problem model will systematically yield goofy results, and we have VERY goofed-up problem models. Happy to dive into this deeper in another thread.

2. Our control specifications and common practice frameworks are, at best, tangential to the *actual* problem space (risk architectures) for infosec. For example: everyone should get high (literally or metaphorically) and then think about the relationship between Data Loss Protection and TSA agents. DLP doesn't do what 99.9999% (my made-up number meaning "anecdotally, nearly everyone I've ever spoken to") think it does. This means a lot of risk analysis work is mapping one random set onto something that should be a more ordered matrix, and it's not the risk analysis's fault that the results come out wacky.

3. We're dealing with human beings who, by and large, will prioritize subjective feeling over quantified data in their default operating mode. We need organizational governance frameworks to overcome this for a number of reasons, not just risk analysis, or the right operating models within which to execute good risk analysis will be few and far between.

4. We have a bazillion (another data-driven word) of these goofy problem-space models competing with each other, and risk analysis needs some governance to organize these mental models or we can't do good risk analysis.

5. There are multiple, legitimate, competing risk appetites and risk stakes running around any organization that has something provided and something consumed. Creating a "risk analysis" to some standard that supports all of those different appetites and stakes is something that we are not yet very good at (we can't always get people to agree that orgs have multiple competing legitimate risk appetites).

6. Folks have not been taught, and have not institutionalized into their adorable decision-making souls, the ideas of uncertainty or calibration or even Lewis Carroll-style logic matrices (http://www.math.hawaii.edu/~hile/math100/logice.htm for funsies). If people consuming risk analyses can't generally think about risk, how can we create quality analysis that will be accepted? (A rough sketch of what a calibration check looks like follows this list.)

Etc. etc. and more etc.
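
Side note on point 6, for anyone who hasn't run one: a calibration check just buckets forecasts by their stated probability and compares each bucket's stated confidence to its observed hit rate. A rough sketch in Python, with all numbers invented:

from collections import defaultdict

# Group forecasts by stated probability and compare stated confidence to
# the observed hit rate; a calibrated estimator's "70%" forecasts should
# come true about 70% of the time. All numbers are invented.
forecasts = [0.5, 0.5, 0.5, 0.5, 0.7, 0.7, 0.7, 0.7, 0.9, 0.9]
outcomes  = [0, 1, 0, 1, 1, 1, 0, 1, 1, 1]

buckets = defaultdict(list)
for p, y in zip(forecasts, outcomes):
    buckets[p].append(y)

for p in sorted(buckets):
    hit_rate = sum(buckets[p]) / len(buckets[p])
    print(f"stated {p:.0%} -> observed {hit_rate:.0%} ({len(buckets[p])} forecasts)")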

Instead of gatekeeping, let's broaden our definition of the problem and re-decompose? :)



Jack Whitsitt

Jul 6, 2021, 7:42:05 PM
to Apolonio Garcia, SiRA-public
Or, TLDR: "We don't actually know what we should be doing yet or how to do it, so let's wait on gatekeeping quality" :)

kevin thompson

Jul 7, 2021, 2:48:28 PM
to SiRA-public
There are certainly quality problems. I think a fair number of organizations don't care if there are quality problems in their current RM approach. I think that organizations are not seeing value from RM and do not want to invest in improving it. 

I was having a private conversation with someone in SIRA recently, and I asked this person if quantitative risk management has failed. It has been ten years of SIRA, and have we really seen an increase in adoption of quantitative methods? We have research showing that there are big flaws with the heat-map-style "multiply column A by column B" methodologies; however, I think we have not been able to show that using a quantitative framework yields enough of an increase in information to make it worth the effort. We could say the same thing for having peer review of a purely qualitative approach: if we feel like what we are getting from the current process meets our needs, then why invest in making improvements? Some organizations just want a risk management process so they can check that off in their SOC 2. We've got one, so we're good.
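
To make the heat-map flaw concrete, here's a toy illustration (all numbers invented): two risks can land on exactly the same heat-map cell while their underlying expected annual losses differ several-fold.

# Toy illustration of the "multiply column A by column B" flaw: two risks
# with very different expected annual losses get identical heat-map scores.
# All numbers are invented.

def heatmap_score(likelihood_1to5, impact_1to5):
    return likelihood_1to5 * impact_1to5

# Risk A: ~2 events/year (likelihood 4), ~$60k per event (impact 2)
# Risk B: ~0.1 events/year (likelihood 2), ~$5M per event (impact 4)
risk_a = {"heatmap": heatmap_score(4, 2), "expected_annual_loss": 2.0 * 60_000}
risk_b = {"heatmap": heatmap_score(2, 4), "expected_annual_loss": 0.1 * 5_000_000}

print(risk_a)  # {'heatmap': 8, 'expected_annual_loss': 120000.0}
print(risk_b)  # {'heatmap': 8, 'expected_annual_loss': 500000.0}
# Same cell on the heat map, but Risk B's expected loss is ~4x Risk A's.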

There might be a quality problem, but that's ok because the company doesn't REALLY use their current risk framework output anyway.
There might be a quality problem, and we use the output of our risk framework, but the quality problems aren't severe enough to warrant investing in a new approach.

Phil

Jul 14, 2021, 3:42:19 AM
to Apolonio Garcia, SiRA-public
In the organisations I work with currently (healthcare and social care) I am struck by the variety of risk process quality. The same organisation will run robust risk processes in a clinical or primary care setting and revert to very weak processes in operational risk or technology risk.

I saw similar variation previously in my time in financial services between credit risk and market risk modelling versus operational risk.

I wonder if the answers would have been different across the organisations surveyed and what we can do to bridge the quality gap within organisations.

Phil

Jeff Lowder

Jul 14, 2021, 11:52:19 AM
to SiRA-public, Jack Whitsitt
Hi Jack,

My goal with this response is to seek understanding. I've read your email several times and I don't think I understand it. I think (or thought?) I understood what "gatekeeping" is, but I'm struggling to understand how to apply the concept of "gatekeeping" to risk management. Could you please unpack what you mean by "gatekeeping" in risk management and how Apps' email is an example of that? 

Sending this to the list, not just you privately, on the assumption that at least one other person is as confused as I am.

Regards,

Jeff
 

Apolonio Garcia

Jul 14, 2021, 12:08:27 PM
to Jeff Lowder, SiRA-public, Jack Whitsitt

Jeff Lowder

Jul 15, 2021, 2:47:44 AM
to kevin thompson, SiRA-public
I agree with almost everything Kevin wrote. 

At a former employer who shall remain nameless, I attempted to get the organization to drop its worse-than-useless risk assessment methodology and adopt quantitative methods. I failed. With the benefit of 20/20 hindsight, it is now clear to me that I ran into many obstacles -- many of the same obstacles Kevin describes at the end of his email: they weren't really using their risk assessment methodology for decision-making anyway and had no incentive to change the way they did things.

It was partially because of this experience that I decided I needed to add something to the IRMBOK Guide which addresses getting organizational buy-in to the risk management approach. So the second "Task" in the book, section 3.2 (Determine Risk Management Approach), addresses this and other "risk governance" topics. In order to support that task, it became clear to me that the book needed to include a variety of techniques, some of which will be discussed in the final version and some of which will not be. These topics include:

- Business Case Analysis (this technique will be in the final version of the IRMBOK Guide, and the example sub-section will include a fully worked example of a change management plan to pivot from qualitative to quantitative methods)
- Interviews (already in the IRMBOK Guide)
- Lessons Learned (will not be in the IRMBOK)
- Organizational Change Management (will not be in the IRMBOK Guide -- this is a huge topic and there are a ton of sources on this already)
- Reviews (will not be in the IRMBOK)
- Value-Focused Decision Making (already in the IRMBOK Guide)
- Workshops (already in the IRMBOK Guide)

This email thread leads me to believe it would be valuable for someone or some group (SIRA?) to conduct a poll, similar to what I proposed in my other thread, and gather empirical data about the rate of adoption of quantitative methods and, where such methods have been rejected, why they were rejected.

One last thought. Kevin asked a provocative question: "It has been ten years of SIRA and have we really seen an increase in adoption of quantitative methods?" That suggests that after 10 years of SIRA we might have expected to see a different outcome. Kevin can correct me if I'm putting words in his mouth, but I'm not sure that implied expectation of SIRA is entirely fair. SIRA has done amazing things in our short history and with very limited resources, but there's only so much SIRA can do. Maybe the publication of the IRMBOK Guide will help build some momentum? I hope so!

Jeff

Jeff Lowder

Jun 5, 2025, 9:00:33 PM
to kevin thompson, SiRA-public

Hi all,

I'd like to resurrect an email thread started by Apps on 05/17/2021. In my opinion, he raised a great question that remains just as relevant today as it was then: If we as experts don’t define what “good” risk analysis looks like, how can we expect decision-makers to distinguish good from bad?

 

At the time, I thought this thread was important enough to save. As I work to finalize the IRMBOK Guide, I revisited the full discussion to evaluate whether any additional topics raised in the thread should be addressed. In this email, I’d like to (a) describe the key issues identified, (b) explain their current treatment in the IRMBOK Guide, and (c) offer proposed next steps, including whether I think the topic belongs in scope for IRMBOK or SIRA.

 

1. Lack of Shared Quality Standards for Risk Analysis

(a) Description: Apps’ survey results suggest that formal peer review and QA processes are the exception, not the norm.

(b) Current IRMBOK Coverage: IRMBOK partially addresses this through Section 7.9 (Component Testing), which supports internal validation of probabilistic models. However, it lacks broader quality assurance criteria such as peer review processes or baseline documentation standards.

(c) Proposed Next Steps: I do not have a firm opinion on whether IRMBOK should address this or not. I’m open to suggestions.

 

2. Intra-Organizational Variability in Risk Process Quality

(a) Description: Phil Huggins observed that many organizations maintain high-quality risk processes in some domains (e.g., clinical or financial) but weak ones in others (e.g., IT or operational).

(b) Current IRMBOK Coverage: IRMBOK does not currently address this inconsistency.

(c) Proposed Next Steps: I also don’t have a firm view on whether this topic should be covered in the IRMBOK. I’m open to ideas.

 

3. Governance of Competing Risk Appetites

(a) Description: Jack Whitsitt raised the challenge of reconciling legitimate but conflicting risk appetites across internal stakeholders.

(b) Current IRMBOK Coverage: Section 3.4 of the IRMBOK covers risk governance in general but does not address this specific challenge.

(c) Proposed Next Steps: I’m tentatively of the view that this is outside the IRMBOK’s scope, but I remain open-minded and would welcome contrary views.

 

4. Misaligned Control Frameworks and Risk Causality

(a) Description: Jack noted that many commonly used controls (e.g., DLP) are poorly understood and ineffectively mapped to actual loss mechanisms.

(b) Current IRMBOK Coverage: The IRMBOK does not engage in epistemological critiques of control frameworks.

(c) Proposed Next Steps: I think this topic goes beyond the scope of what IRMBOK is designed to do.

 

5. Risk Management as Ritual Instead of Decision Support

(a) Description: Kevin Thompson described how some organizations treat RM as a checkbox exercise.

(b) Current IRMBOK Coverage: This concern is directly addressed throughout the IRMBOK. In particular, Chapter 4 (Risk Assessment)—especially Section 4.1 (Establish Decision Context)—emphasizes that risk assessments must be driven by decisions, not compliance rituals. This theme is also reinforced in the Preface and Chapter 1.

(c) Proposed Next Steps: I consider this topic fully addressed in the current Guide.

 

6. Limited Education in Reasoning and Logic

(a) Description: Jack highlighted that many consumers of risk analysis lack training in probabilistic reasoning and basic logic.

(b) Current IRMBOK Coverage: IRMBOK addresses this through Section 2.5 (Principles of Probabilistic Reasoning) and Section 7.8 (Calibration Training), which introduce foundational concepts in uncertainty, subjective probability, and reasoning under ambiguity.

(c) Proposed Next Steps: I believe this topic is adequately addressed in the current Guide.

 

7. Human Bias Toward Intuition Over Evidence

(a) Description: Jack also emphasized the difficulty of overcoming intuitive, affect-driven decision-making.

(b) Current IRMBOK Coverage: IRMBOK touches on this via calibration, but does not yet offer organizational or cultural strategies for dealing with it.

(c) Proposed Next Steps: I do not currently have an opinion on whether the IRMBOK should go further here. I’m open to suggestions.

 

8. Conflicting Conceptual Models Across the Enterprise

(a) Description: Jack noted that many organizations operate with multiple, inconsistent mental models of risk.

(b) Current IRMBOK Coverage: IRMBOK assumes a unified conceptual framework and doesn’t address the reconciliation of competing models.

(c) Proposed Next Steps: I don’t have a view yet on whether this should be addressed by the Guide. I welcome feedback from others.

 

9. Lack of Accountability for Flawed Risk Analysis

(a) Description: Apps’ survey showed that many stakeholders don’t perceive consequences from poor analysis.

(b) Current IRMBOK Coverage: IRMBOK does not currently address the concept of “risk-of-risk” or the downstream consequences of flawed RM outputs.

(c) Proposed Next Steps: I am open to including something on this topic, but I’m not sure what that would look like yet. I’d appreciate suggestions from the community.

 

10. Lack of Guidance on Organizational Change Management

(a) Description: From personal experience, I know how hard it is to replace entrenched RM practices—even when they’re clearly ineffective.

(b) Current IRMBOK Coverage: Section 3.2 addresses stakeholder engagement and includes Business Case Analysis as a technique, with an example that covers transitioning from qualitative to quantitative approaches. But the Guide intentionally excludes formal change management frameworks.

(c) Proposed Next Steps: I now believe IRMBOK should acknowledge the importance of change management and list it as a technique. That said, organizational change is not SIRA’s wheelhouse, and the most we can do is point readers to high-quality external resources.

 

Thanks again to everyone who contributed to this thread. Even though it’s from 2021, the issues raised are still deeply relevant. Also, if anyone would like a copy of the current version of the IRMBOK Guide, let me know and I can see to it that you get access. At my current pace, I expect to finish the IRMBOK in about 2.5 weeks.

 

Best regards,

Jeff Lowder

Editor, IRMBOK Guide

jeff....@societyinforisk.org
