Statistical inferences are often challenged because of uncontrolled bias, whether due to uncontrolled confounding variables or to non-random selection into a sample. We turn concerns about potential bias into questions about how much bias there must be to invalidate an inference. For example, challenges such as "But the inference of a treatment effect might not be valid because of pre-existing differences between the treatment groups" are transformed into questions such as "How much bias must there have been due to uncontrolled pre-existing differences to make the inference invalid?" By reframing challenges about bias in terms of specific quantities, this workshop will contribute to scientific discourse about the uncertainty of causal inferences. Critically, while there are other approaches to quantifying the sensitivity of inferences, the approaches presented in this workshop, based on correlations of omitted variables (Frank, 2000) and the replacement of cases (Frank and Min, 2007; Frank et al., 2013), have great intuitive appeal. In this sense the techniques provide practicing statisticians with a language for communicating with a broad audience about the uncertainty of inferences.
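To make these quantities concrete, here is a minimal Python sketch, not the KonFound-it! software itself, that computes two of the indices described above from published summary statistics: the percent of an estimate that would have to be due to bias to invalidate the inference (replacement-of-cases framing; Frank et al., 2013) and the impact threshold for an omitted confounding variable (Frank, 2000). The degrees-of-freedom convention (n − covariates − 2), the two-tailed α = 0.05, and the example numbers are assumptions chosen for illustration only.

```python
from scipy.stats import t as t_dist


def percent_bias_to_invalidate(est_eff, std_err, n_obs, n_covariates, alpha=0.05):
    """Percent of the estimate that would have to be due to bias to
    invalidate the inference (Frank et al., 2013)."""
    df = n_obs - n_covariates - 2            # assumed df convention for illustration
    t_crit = t_dist.ppf(1 - alpha / 2, df)   # two-tailed critical t value
    threshold = t_crit * std_err             # smallest estimate that is still significant
    return 100 * (1 - threshold / abs(est_eff))


def impact_threshold_cv(est_eff, std_err, n_obs, n_covariates, alpha=0.05):
    """Impact threshold of a confounding variable (Frank, 2000): how strongly
    an omitted variable must correlate with both the predictor and the
    outcome to change the inference."""
    df = n_obs - n_covariates - 2
    t_obs = est_eff / std_err
    r_xy = t_obs / (t_obs ** 2 + df) ** 0.5      # observed correlation implied by t
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    r_crit = t_crit / (t_crit ** 2 + df) ** 0.5  # correlation at the significance threshold
    itcv = (abs(r_xy) - r_crit) / (1 - r_crit)   # required impact = r(cv,x) * r(cv,y)
    return itcv, itcv ** 0.5                     # impact, and each component correlation


# Hypothetical published result: estimate 2, standard error 0.4, n = 100, 3 covariates
print(percent_bias_to_invalidate(2, 0.4, 100, 3))  # about 60% of the estimate
print(impact_threshold_cv(2, 0.4, 100, 3))         # impact about 0.32; correlations about 0.57
```

In practice, the KonFound-it! spreadsheet listed below computes these indices directly; the sketch only spells out the underlying arithmetic under the stated assumptions.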
I will also be teaching these workshop materials within my graduate regression course, most likely on March 31 and April 1, from 1:00 to 2:20 pm Eastern, on Zoom: https://msu.zoom.us/j/783760435
Software
The KonFound-it! spreadsheet can now be found at: spreadsheet for calculating indices [KonFound-it!©]