Abstract
Social scientists seeking to inform policy or public action must carefully consider how to identify effects and express inferences because actions based on invalid inferences will not yield the intended results. Recognizing the complexities and uncertainties of social science, we seek to inform inevitable debates about causal inferences by quantifying the conditions necessary to change an inference. Specifically, we review existing sensitivity analyses within linear regression and potential outcomes frameworks. We then present the Impact Threshold for a Confounding Variable (ITCV) based on the linear model and the Robustness of Inference to Replacement (RIR) based on the potential outcomes framework. We extend each approach to fully account for both bias and the sampling variability represented by standard errors. We exhort social scientists wishing to inform policy and practice to quantify the robustness of their inferences only after utilizing the best available data and methods to draw an initial causal inference.
ASA workshop
Fri, August 5, 12:00 to 4:00pm, JW Marriott, Floor: Gold Level, Gold Salon 1
Session Submission Type: Course
Description
Registration is required for courses. For more details, visit the course web page. Registration fee: $5.
Statistical inferences are often challenged because of uncontrolled bias. There
may be bias due to uncontrolled confounding variables or non-random selection
into a sample. We will turn concerns about potential bias into questions about
how much bias there must be to invalidate an inference. For example, challenges
such as “But the inference of a treatment effect might not be valid because of
pre-existing differences between the treatment groups” are transformed into
questions such as “How much bias must there have been due to uncontrolled pre-existing
differences to make the inference invalid?”
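To make this concrete, consider a minimal sketch of the arithmetic behind the reframing (in R, with hypothetical numbers; the logic follows Frank et al., 2013): an inference is invalidated when bias pushes the estimate below the threshold for statistical significance, so the share of the estimate that must be due to bias is one minus the ratio of that threshold to the estimate.

    # Hypothetical example: how much bias would invalidate an inference?
    est <- 2.0    # estimated effect (hypothetical)
    se  <- 0.4    # standard error of the estimate (hypothetical)
    df  <- 96     # residual degrees of freedom (hypothetical)

    t_crit    <- qt(0.975, df)   # two-tailed critical t at alpha = .05
    threshold <- t_crit * se     # smallest estimate that remains significant

    # Share of the estimate that must be due to bias to invalidate it
    pct_bias <- 1 - threshold / est
    round(100 * pct_bias, 1)     # roughly 60% for these numbers

In the replacement-of-cases framing, this percentage corresponds to the share of observed cases that would have to be replaced with cases for which the effect is zero.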
By reframing challenges about bias in terms of specific quantities, this course
will contribute to scientific discourse about the uncertainty of causal inferences.
Critically, while there are other approaches to quantifying the sensitivity of
inferences (e.g., Robins and Rotnitzky, 1995; Rosenbaum and Rubin, 1983;
Rosenbaum, 2000), the approaches presented in this workshop, based on
correlations of omitted variables (Frank, 2000) and the replacement of cases
(Frank and Min, 2007; Frank et al., 2013), have greater intuitive appeal. In this
sense the techniques align well with issues of power and inequality because they
allow a diverse set of voices to participate in conversations about causal
inferences.
In part I, we use Rubin’s causal model to interpret how much bias there must be
to invalidate an inference in terms of replacing observed cases with
counterfactual cases or cases from an unsampled population (e.g., Frank et al.,
2013). In part II, we quantify the robustness of causal inferences in terms of
correlations associated with unobserved variables or in unsampled populations
(e.g., Frank, 2000). Calculations will be presented using the open-source app at
http://konfound-it.com, with links to Stata and R modules. The format will be a
mixture of presentation, individual hands-on exploration, and group work.
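For participants who want to preview the calculations, the konfound R package (one of the modules linked from http://konfound-it.com) computes both quantities from reported statistics. A minimal sketch, assuming the pkonfound() interface described in the package documentation (the input values are hypothetical):

    # install.packages("konfound")   # if not already installed
    library(konfound)

    # Part I quantity: Robustness of Inference to Replacement (RIR),
    # given an estimated effect, its standard error, the sample size,
    # and the number of covariates (all hypothetical values here)
    pkonfound(est_eff = 2, std_err = 0.4, n_obs = 100, n_covariates = 3)

    # Part II quantity: Impact Threshold for a Confounding Variable (ITCV),
    # i.e., how strongly an omitted variable would have to correlate with
    # both the predictor and the outcome to invalidate the inference
    pkonfound(est_eff = 2, std_err = 0.4, n_obs = 100, n_covariates = 3,
              index = "IT")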