Falsification Testing for Causal Design Assumptions

Abstract

To justify the credibility of their causal designs, researchers increasingly report falsification analyses of the observable implications of their necessary causal assumptions. Traditional hypothesis testing procedures are improperly formulated for this purpose: failure to reject a null of "no violation" is taken as evidence for an assumption, so underpowered analyses favor the researcher. This work therefore contributes to the growing body of research promoting equivalence-based tests for the falsification of causal assumptions (e.g., Hartman and Hidalgo, 2018; Hartman, 2020; Bilinski and Hatfield, 2018). I first present an empirical application with an emphasis on falsification testing (Chapter 2), in which augmented synthetic control methods are used to estimate the causal effect of a Los Angeles police reform policy, supplemented with falsification analyses that both strengthen the credibility of the design decisions and contextualize the significance of the results. The remainder of this work proposes equivalence-based tests. Chapter 3 develops an equivalence test for conditional independence hypotheses, paying particular attention to the interpretability and specification of the required equivalence range; because many causal designs rest on the conditional ignorability assumption, balance testing is proposed as an application of this method. Chapter 4 takes a broader view of causal assumption evaluation: for a suite of common causal designs (selection on observables, mediation, difference-in-differences, regression discontinuity, synthetic control, and instrumental variables), I map the necessary causal assumptions to their testable observable implications, ultimately contrasting traditional falsification approaches with the proposed equivalence-based alternatives. To conclude, Chapter 5 discusses the implications and limitations of this work, with an emphasis on open questions in causal assumption evaluation.
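To make the contrast concrete, the following is a minimal sketch (not taken from the dissertation) of a traditional falsification test versus an equivalence-based two one-sided tests (TOST) procedure for covariate balance; the simulated data and the equivalence range `eps` are hypothetical choices for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(0.0, 1.0, 200)   # covariate values, treated units
control = rng.normal(0.05, 1.0, 200)  # covariate values, control units

# Traditional falsification: test H0: means are equal.
# Failing to reject is (improperly) read as evidence of balance.
t_stat, p_trad = stats.ttest_ind(treated, control)

# Equivalence (TOST): test H0: |difference| >= eps vs H1: |difference| < eps.
# Rejecting H0 is affirmative evidence that any imbalance is within eps.
eps = 0.1  # hypothetical equivalence range, in units of the covariate
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))
df = len(treated) + len(control) - 2
p_lower = stats.t.sf((diff + eps) / se, df)   # H0: diff <= -eps
p_upper = stats.t.cdf((diff - eps) / se, df)  # H0: diff >= +eps
p_tost = max(p_lower, p_upper)                # overall TOST p-value

print(f"traditional p = {p_trad:.3f}, TOST p = {p_tost:.3f}")
```

Note the reversed burden of proof: in the TOST formulation a small p-value supports balance within the stated range, so low power can no longer be mistaken for evidence in favor of the assumption.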
