Research


To justify the credibility of their causal designs, researchers are increasingly reporting the results of falsification analyses on the observable implications of their necessary causal assumptions. Traditional hypothesis testing procedures are improperly formulated for this purpose, so this work contributes to the growing body of research promoting equivalence-based tests for the falsification of causal assumptions (e.g., Hartman and Hidalgo, 2018; Hartman, 2020; Bilinski and Hatfield, 2018). To this end, I first present an empirical application with an emphasis on falsification testing (Chapter 2). In this chapter, augmented synthetic control methods are used to estimate the causal effect of a Los Angeles police reform policy, supplemented with falsification analyses that both strengthen the credibility of the design decisions and contextualize the significance of the results. The remainder of this work proposes equivalence-based tests. Chapter 3 develops an equivalence test for conditional independence hypotheses, paying particular attention to the interpretability and specification of the necessary equivalence range. Many causal designs require the conditional ignorability assumption, and balance testing is therefore proposed as an application of this method. Chapter 4 takes a broader view of causal assumption evaluation. For a suite of common causal designs (selection on observables, mediation, difference-in-differences, regression discontinuity, synthetic control, and instrumental variables), I map the necessary causal assumptions to their testable observable implications, contrasting traditional falsification approaches with the proposed equivalence-based alternatives. To conclude, Chapter 5 discusses the implications and limitations of this work with an emphasis on open questions in causal assumption evaluation.
2021
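As a rough illustration of the equivalence-testing logic described in the abstract above, here is a minimal two one-sided tests (TOST) style equivalence test for covariate balance. The simulated data, the equivalence range `epsilon`, and the choice of Python are illustrative assumptions; this is not the specific procedure developed in Chapter 3.

```python
# Minimal sketch: TOST-style equivalence test for covariate balance.
# The equivalence range `epsilon` and the simulated data are illustrative
# assumptions, not the dissertation's proposed procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treated = rng.normal(loc=0.05, scale=1.0, size=500)   # covariate, treated units
control = rng.normal(loc=0.00, scale=1.0, size=500)   # covariate, control units

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)
df = treated.size + control.size - 2

epsilon = 0.2  # substantive equivalence range for the mean difference (assumed)

# Two one-sided tests: the null is |difference| >= epsilon (imbalance),
# rejected only if both one-sided tests reject.
t_lower = (diff + epsilon) / se            # tests diff <= -epsilon
t_upper = (diff - epsilon) / se            # tests diff >= +epsilon
p_lower = 1 - stats.t.cdf(t_lower, df)
p_upper = stats.t.cdf(t_upper, df)
p_tost = max(p_lower, p_upper)

print(f"difference = {diff:.3f}, TOST p-value = {p_tost:.3f}")
```

Unlike a conventional balance test, a small p-value here is affirmative evidence that the covariate difference lies inside the stated range, rather than a mere failure to detect imbalance.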

In 2011, the Los Angeles Police Department (LAPD), in conjunction with other governmental and nonprofit groups, launched the Community Safety Partnership (CSP) in several communities long impacted by multi-generational gangs, violent crime, and a heavy-handed approach to crime suppression. Following a relationship-based policing model, officers were assigned to work collaboratively with community members to reduce crime and build trust. However, evaluating the causal impact of this policy intervention is difficult given the unique nature of the units and time period where CSP was implemented. In this paper, we use a novel data set based on the LAPD's reported crime incidents and calls-for-service to evaluate the effectiveness of this program via augmented synthetic control, a cutting-edge method for policy evaluation. We perform falsification analyses to evaluate the robustness of the results. In the public housing developments where it was first deployed, CSP reduced reported violent crime incidents, shots-fired and violent crime calls, and Part I reported crime incidents. We do not find evidence of crime displacement from CSP regions to neighboring control regions. These results are promising for policy-makers interested in policing reform.
2020
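For readers unfamiliar with synthetic control, the sketch below fits classic synthetic control weights by constrained least squares on simulated data. It is a simplified stand-in, not the augmented synthetic control estimator used in the paper (which additionally corrects remaining pre-treatment imbalance with an outcome model), and all names and data are illustrative.

```python
# Minimal sketch: synthetic control weights via constrained least squares.
# Simulated data; the paper's augmented estimator adds an outcome-model
# correction on top of this.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T0, J = 40, 12                       # pre-treatment periods, donor units
Y_donors = rng.normal(size=(T0, J))  # pre-period outcomes for donor regions
true_w = np.array([0.5, 0.3, 0.2] + [0.0] * (J - 3))
Y_treated = Y_donors @ true_w + rng.normal(scale=0.1, size=T0)

def loss(w):
    # Pre-treatment fit: squared error between treated unit and weighted donors.
    return np.sum((Y_treated - Y_donors @ w) ** 2)

w0 = np.full(J, 1.0 / J)
res = minimize(
    loss,
    w0,
    method="SLSQP",
    bounds=[(0.0, 1.0)] * J,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
)
print("estimated donor weights:", np.round(res.x, 2))
# Post-period effects would contrast the treated series with the weighted
# donor series; the augmented estimator then adjusts for any remaining
# pre-period imbalance using a fitted outcome model (e.g., ridge).
```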

Multiple imputation in business establishment surveys such as BRDIS, an annual business survey in which some companies are sampled every year or for multiple years, can improve estimates of total R&D and help researchers fit models for subpopulations with small sample sizes. Using a panel of BRDIS companies from 2008 to 2013 linked to LBD data, this paper draws on missing-data visualization and other exploratory analyses to develop a multiple imputation strategy suited to item nonresponse in R&D expenditures. Because survey design characteristics drive much of the item and unit nonresponse, multiple imputation of missing data in BRDIS changes the estimates of total R&D significantly and alters the conclusions reached by complete-case models of the determinants of R&D investment.
US Census Bureau Center for Economic Studies Paper No. CES-WP-17-13, 2017
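As a hedged illustration of the multiple imputation and pooling workflow the paper describes, the sketch below imputes a partially missing expenditure variable several times and combines the estimates with Rubin's rules. The variable names, missingness mechanism, and imputation model are assumptions for exposition, not the BRDIS/LBD models themselves.

```python
# Minimal sketch: multiple imputation of a missing expenditure variable and
# pooling with Rubin's rules. Simulated data; illustrative only.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
n = 1000
employment = rng.lognormal(mean=4.0, sigma=1.0, size=n)
rd_expend = 0.8 * np.log(employment) + rng.normal(scale=0.5, size=n)
rd_expend[rng.random(n) < 0.3] = np.nan    # 30% item nonresponse (assumed MAR)
X = np.column_stack([np.log(employment), rd_expend])

M = 20
estimates, variances = [], []
for m in range(M):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(X)
    rd_complete = completed[:, 1]
    estimates.append(rd_complete.mean())
    variances.append(rd_complete.var(ddof=1) / n)

# Rubin's rules: total variance = within + (1 + 1/M) * between.
q_bar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / M) * between
print(f"pooled mean R&D: {q_bar:.3f} (se {np.sqrt(total_var):.3f})")
```

The between-imputation component of the pooled variance is what a complete-case or single-imputation analysis omits, which is one reason the paper's imputed estimates diverge from complete-case results.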

Predictions of climate variables like precipitation and maximum/minimum temperatures play a crucial role in assessing the impact of decadal climate changes on regional water availability. This technical report describes a Graphical User Interface (GUI) called CMIViz, developed as part of the 2016 REU program at UMBC. CMIViz is an R tool for exploring and visualizing spatio-temporal climate data from the Missouri River Basin (MRB). The tool is built with the R package ‘Shiny’, which makes it accessible through a web browser. Because precipitation is more challenging to predict than maximum/minimum temperatures, CMIViz provides more visualization options for precipitation. Specifically, the tool enables easy intercomparison of data from the Global Climate Models (GCMs) MIROC5, HadCM3, and NCAR-CCSM4 in terms of bias relative to the observed data, root mean squared error (RMSE), and other measures of interest for daily precipitation. The tool has options to explore temporal trends and autocorrelation patterns at a given location, and spatial patterns at a given time point using contour plots, surface plots, and semivariograms. CMIViz also provides visualization of canonical correlation analysis (CCA) to help identify similarities between the models.
Technical Report HPCF-2016-12, 2016
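CMIViz itself is an R/Shiny application, but the kind of model-versus-observation comparison it visualizes can be sketched in a few lines. The sketch below computes per-model bias and RMSE for daily precipitation using simulated stand-in series rather than MRB data; the series and numbers are illustrative only.

```python
# Minimal sketch of a GCM-vs-observation comparison: per-model bias and RMSE
# for daily precipitation at one location. Simulated stand-in data.
import numpy as np

rng = np.random.default_rng(3)
days = 365
observed = rng.gamma(shape=0.4, scale=5.0, size=days)  # daily precipitation (mm)

gcm_output = {
    "MIROC5":     observed + rng.normal(0.5, 2.0, days),
    "HadCM3":     observed + rng.normal(-0.3, 3.0, days),
    "NCAR-CCSM4": observed + rng.normal(0.1, 1.5, days),
}

for model, series in gcm_output.items():
    bias = np.mean(series - observed)
    rmse = np.sqrt(np.mean((series - observed) ** 2))
    print(f"{model:>10s}: bias = {bias:+.2f} mm, RMSE = {rmse:.2f} mm")
```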