Workshop "What Would it take to Change your Inference" August 4 noon-4pm at American Sociological Association in Los Angeles


Ken Frank

Jun 13, 2022, 8:50:37 AM
to KonFound-it!
If you happen to be going to the annual meeting of the American Sociological Association, or will be in Los Angeles on Aug 4, check out the workshop. Details are at the bottom of this note.

Also, we have a paper under review for the 50th anniversary of Social Science Research that builds on the Duncan lecture I gave in 2018.


Social scientists seeking to inform policy or public action must carefully consider how to identify effects and express inferences because actions based on invalid inferences will not yield the intended results.  Recognizing the complexities and uncertainties of social science, we seek to inform inevitable debates about causal inferences by quantifying the conditions necessary to change an inference.  Specifically, we review existing sensitivity analyses within linear regression and potential outcomes frameworks. We then present the Impact Threshold for a Confounding Variable (ITCV) based on the linear model and the Robustness of Inference to Replacement (RIR) based on the potential outcomes framework. We extend each approach to fully account for sampling variability represented by standard errors as well as bias. We exhort social scientists wishing to inform policy and practice to quantify the robustness of their inferences only after utilizing the best available data and methods to draw an initial causal inference. 

ASA workshop

0571 - What Would it take to Change your Inference? Quantifying the Discourse about Causal Inferences in Sociology

Fri, August 5, 12:00 to 4:00pm, JW Marriott, Floor: Gold Level, Gold Salon 1

Session Submission Type: Course


Registration is required for courses. For more details, visit the course web page. Registration fee: $5

Statistical inferences are often challenged because of uncontrolled bias. There may be bias due to uncontrolled confounding variables or non-random selection into a sample. We will turn concerns about potential bias into questions about how much bias there must be to invalidate an inference. For example, challenges such as “But the inference of a treatment effect might not be valid because of pre-existing differences between the treatment groups” are transformed to questions such as “How much bias must there have been due to uncontrolled pre-existing differences to make the inference invalid?”
By reframing challenges about bias in terms of specific quantities, this course will contribute to scientific discourse about the uncertainty of causal inferences. Critically, while there are other approaches to quantifying the sensitivity of inferences (e.g., Robins and Rotnitzky, 1995; Rosenbaum and Rubin, 1983; Rosenbaum, 2000), the approaches presented in this workshop, based on correlations of omitted variables (Frank, 2000) and the replacement of cases (Frank and Min, 2007; Frank et al., 2013), have greater intuitive appeal. In this sense the techniques align well with issues of power and inequality because they allow a diverse set of voices to participate in conversations about causal inferences.

In part I, we use Rubin’s causal model to interpret how much bias there must be to invalidate an inference in terms of replacing observed cases with counterfactual cases or cases from an unsampled population (e.g., Frank et al., 2013). In part II, we quantify the robustness of causal inferences in terms of correlations associated with unobserved variables or in unsampled populations (e.g., Frank, 2000). Calculations will be presented using the open-source app, with links to Stata and R modules. The format will be a mixture of presentation, individual hands-on exploration, and group work.
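As a rough illustration of the two quantities the workshop covers, the sketch below computes the percent bias to invalidate an inference and the equivalent number of cases to replace (RIR, Frank et al., 2013), and the impact threshold for a confounding variable (ITCV, Frank, 2000). The function names, the hypothetical input values, and the large-sample critical value of 1.96 are my assumptions for illustration; the app and the Stata/R modules mentioned above should be used for real analyses.

```python
import math

def rir(est_eff, std_err, n_obs, t_crit=1.96):
    """Robustness of Inference to Replacement (Frank et al., 2013).

    Returns the fraction of the estimate that would have to be due to
    bias to invalidate the inference, and the equivalent number of
    observed cases that would have to be replaced with zero-effect cases.
    """
    beta_threshold = std_err * t_crit        # smallest estimate still significant
    pct_bias = 1 - beta_threshold / est_eff  # share of the estimate due to bias
    return pct_bias, round(pct_bias * n_obs)

def itcv(r_xy, n_obs, n_cov, t_crit=1.96):
    """Impact Threshold for a Confounding Variable (Frank, 2000).

    A confounder's impact (the product of its correlations with the
    predictor and the outcome) must exceed this value to invalidate
    the inference based on the observed predictor-outcome correlation.
    """
    df = n_obs - n_cov - 2
    r_threshold = t_crit / math.sqrt(df + t_crit ** 2)
    return (r_xy - r_threshold) / (1 - r_threshold)

# Hypothetical example: estimate 0.5, SE 0.1, n = 100, 3 covariates
pct, n_replace = rir(0.5, 0.1, 100)
print(pct, n_replace)
print(itcv(0.3, 100, 3))
```

With these made-up inputs, about 61% of the estimate would have to be due to bias (roughly 61 replaced cases) to invalidate the inference, conveying the intuition behind the workshop's reframing of challenges about bias.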
