presenting "quantifying the robustness of inferences" April 6 and 11


Ken Frank

Apr 3, 2022, 10:06:08 AM
to KonFound-it!

On April 6 (1pm-2:20pm Eastern) and April 11 (1pm-2:20pm Eastern) I will be presenting on Quantifying the Robustness of Inferences on my Zoom.

This is in the context of my course on the use of multiple regression, so we’ll be starting with the regression-based approach on slide 106. The lectures are open, but priority goes to the students. Join if you like.

Quantifying the Robustness of Inferences 


Statistical inferences are often challenged because of uncontrolled bias.  There may be bias due to uncontrolled confounding variables or non-random selection into a sample.  We will turn concerns about potential bias into questions about how much bias there must be to invalidate an inference. For example, challenges such as “But the inference of a treatment effect might not be valid because of pre-existing differences between the treatment groups” are transformed to questions such as “How much bias must there have been due to uncontrolled pre-existing differences to make the inference invalid?”
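The reframing above can be made concrete with a small calculation. The sketch below, in Python, follows the regression-based framing from Frank et al. (2013): the smallest estimate that would still be statistically significant defines a threshold, and the question becomes what fraction of the observed estimate must be bias to fall below it. The function names and the fixed critical value of 1.96 (a large-sample approximation) are illustrative assumptions; the workshop's app and modules implement the full method.

```python
def percent_bias_to_invalidate(estimate, std_err, t_crit=1.96):
    """Fraction of the estimated effect that must be due to bias
    to invalidate the inference (estimate assumed significant)."""
    threshold = std_err * t_crit  # smallest estimate still significant
    return 1 - threshold / abs(estimate)

def rir(estimate, std_err, n_obs, t_crit=1.96):
    """Robustness of Inference to Replacement: approximate number of
    observed cases that would have to be replaced with cases showing
    no effect to invalidate the inference (Frank & Min, 2007)."""
    return percent_bias_to_invalidate(estimate, std_err, t_crit) * n_obs

# Example: an estimate of 0.6 with standard error 0.2 in a sample of 300
print(round(percent_bias_to_invalidate(0.6, 0.2), 3))  # 0.347
print(round(rir(0.6, 0.2, 300)))                       # 104 cases
```

The appeal of this framing is the units of the answer: "about 35% of the estimate must be bias," or "about 104 cases would have to be replaced," is something a non-statistician can debate.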

By reframing challenges about bias in terms of specific quantities, this course will contribute to scientific discourse about uncertainty of causal inferences. Critically, while there are other approaches to quantifying the sensitivity of inferences (e.g., Robins and Rotnitzky, 1995; Rosenbaum and Rubin, 1983; Rosenbaum, 2000), the approaches presented in this workshop, based on correlations of omitted variables (Frank, 2000) and the replacement of cases (Frank and Min, 2007; Frank et al., 2013), have greater intuitive appeal. In this sense the techniques inform our understanding of bureaucracies of displacement because they allow a diverse set of voices to participate in conversations about causal inferences.


In part I, we quantify the robustness of causal inferences in terms of correlations associated with unobserved variables or in unsampled populations (e.g., Frank, 2000). In part II, we use Rubin’s causal model to interpret how much bias there must be to invalidate an inference in terms of replacing observed cases with counterfactual cases or cases from an unsampled population (e.g., Frank et al., 2013). This includes application to logistic regression (Frank et al., 2021).
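The correlation-based quantity from part I can also be sketched briefly. Following the impact threshold for a confounding variable (ITCV) in Frank (2000), an omitted confounder invalidates an inference when the product of its correlations with the predictor and the outcome is large enough to drive the observed correlation below the critical value for significance. The degrees-of-freedom argument and the 1.96 default below are simplifying assumptions for illustration:

```python
import math

def critical_r(df, t_crit=1.96):
    """Correlation just large enough to be statistically significant
    at the given degrees of freedom."""
    return t_crit / math.sqrt(df + t_crit**2)

def itcv(r_xy, df, t_crit=1.96):
    """Impact Threshold for a Confounding Variable: how strong the
    product of an omitted variable's correlations with predictor and
    outcome must be to reduce r_xy below significance (Frank, 2000)."""
    r_hash = critical_r(df, t_crit)
    return (abs(r_xy) - r_hash) / (1 - r_hash)

# Example: observed correlation of 0.30 with 100 degrees of freedom
print(round(itcv(0.30, 100), 3))  # 0.133
```

An ITCV of about 0.133 here means an omitted confounder correlated at roughly 0.37 with both predictor and outcome (0.37 × 0.37 ≈ 0.133) would be needed to invalidate the inference; the question for debate is whether such a confounder is plausible.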

Calculations will be presented using the open-source app, with links to Stata and R modules. The format will be a mixture of presentation, individual hands-on exploration, and group work.
