Evaluating Policy and Quantifying Uncertainty with Few (or One) Treated Unit(s): An Introduction to Synthetic Control Methods and Falsification Analyses

Abstract

Amidst the push for data-driven decision making, policymakers increasingly rely on statisticians to evaluate program effectiveness before allocating additional resources to policy expansion. To estimate the effect of a policy, one must infer what would have happened to the treated unit had it not received treatment. This causal inference problem is further complicated by the hallmarks of many policy problems: observational data, few (or one) treated unit(s), site-selection bias, and an imperfect pool of naturally occurring controls. We introduce synthetic control methods, an important advancement that aims to alleviate these problems by estimating a synthetic control, a combination of control units constructed to mirror the treated unit in terms of pre-treatment characteristics. However, with so few treated units, researchers must carefully justify model-based decisions and quantify uncertainty in communicating final results to clients. Using a recent application in community policing, we implement the augmented synthetic control method and demonstrate how falsification tests can supplement model output to contextualize the substantive significance of results.
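To make the core idea concrete, the sketch below illustrates the classic synthetic control construction and an in-space placebo (falsification) check on simulated placeholder data. It is not the augmented synthetic control implementation discussed in the talk; the data, variable names (`Y0_pre`, `y1_pre`), and helper `fit_weights` are illustrative assumptions only.

```python
# Minimal sketch: find nonnegative weights summing to one so that a weighted
# average of control ("donor") units tracks the treated unit's pre-treatment path.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T_pre, J = 20, 8                       # pre-treatment periods, number of donor units
Y0_pre = rng.normal(size=(T_pre, J))   # simulated donor-pool outcomes, T_pre x J
y1_pre = Y0_pre[:, :3].mean(axis=1) + rng.normal(scale=0.1, size=T_pre)  # treated unit

def fit_weights(target, donors):
    """Convex-combination weights minimizing pre-treatment squared error."""
    k = donors.shape[1]
    loss = lambda w: np.sum((target - donors @ w) ** 2)
    res = minimize(loss, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints={"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
                   method="SLSQP")
    return res.x

weights = fit_weights(y1_pre, Y0_pre)
print("synthetic control weights:", np.round(weights, 3))

# In-space placebo (falsification) idea: refit the model pretending each control
# unit was treated. If the real treated unit's gap is not unusual relative to
# these placebo gaps, the estimated effect is harder to distinguish from noise.
placebo_rmse = []
for j in range(J):
    donors_j = np.delete(Y0_pre, j, axis=1)      # drop unit j from the donor pool
    w_j = fit_weights(Y0_pre[:, j], donors_j)
    placebo_rmse.append(np.sqrt(np.mean((Y0_pre[:, j] - donors_j @ w_j) ** 2)))
print("placebo pre-treatment RMSEs:", np.round(placebo_rmse, 3))
```

In practice, the augmented variant adds an outcome-model correction (e.g., ridge regression) when exact pre-treatment balance is unattainable; the convex-weight formulation above is only the starting point.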

Date
Location: virtual