Doctor of Philosophy
This dissertation consists of three chapters on program evaluation, or the estimation of treatment effects.
The first chapter discusses bootstrap methods for inference on matching estimators, a popular approach to program evaluation. Abadie & Imbens (2008) showed that the standard non-parametric bootstrap fails to provide valid inference for matching estimators, and conjectured that a wild bootstrap could solve the problem. Otsu & Rai (2017) confirmed this conjecture, providing a wild bootstrap procedure that is valid in general. Their bootstrap builds in a bias-correction step that requires estimation of conditional mean functions; such bias correction is generally necessary for consistent matching estimation. However, this step also introduces a new source of estimation error, reducing the efficiency of the bootstrap. I show that even in a special case where bias correction of the estimator itself is unnecessary, estimation of the conditional mean functions remains a required element of any wild bootstrap for the matching estimator. Thus the Otsu & Rai bootstrap cannot be made more efficient even by leveraging much stronger assumptions. Simulations provide additional support for this conclusion.
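The wild bootstrap idea underlying this chapter can be illustrated generically. For a statistic that is (asymptotically) a sample mean of per-observation terms — as in the linear representation used by Otsu & Rai — each bootstrap draw multiplies the centered terms by independent Rademacher signs and re-averages. The sketch below is an illustration of that generic resampling step only, not the chapter's exact procedure; the function name and the reduction to abstract `terms` are my own simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def wild_bootstrap_draws(terms, n_boot=999, rng=rng):
    """Wild bootstrap for a statistic expressible as the sample mean of
    per-observation `terms` (e.g. the linearized, bias-corrected matching
    estimator). Each draw multiplies the centered terms by independent
    Rademacher (+1/-1) weights and re-averages; the empirical distribution
    of the draws approximates the sampling distribution of the statistic
    around its estimate."""
    terms = np.asarray(terms, dtype=float)
    centered = terms - terms.mean()
    n = len(terms)
    # Rademacher weights: +1 or -1, each with probability 1/2
    signs = rng.choice([-1.0, 1.0], size=(n_boot, n))
    return (signs * centered).mean(axis=1)
```

A confidence interval would then be formed from quantiles of the draws, centered at the original estimate. The chapter's point is that in the matching context, the `terms` themselves must be built from estimated conditional mean functions, so this estimation step cannot be skipped even when bias correction of the point estimate is unnecessary.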
The second chapter also deals with matching estimators. I consider the problem faced by a practitioner who wishes to estimate a treatment effect by matching - in particular, the problem of choosing from a large set of available matching procedures. I cast matching estimators as two-step procedures - a weight-generation step followed by a weighted difference in means - and derive weights that minimize mean-squared error (MSE) under certain conditions. Understanding why the optimal weights behave as they do yields insight into which matching procedures are likely to minimize MSE, enabling practitioners to use their economic intuition, knowledge of the empirical context, and knowledge of the sampling process to choose an appropriate procedure. I develop a simple 'augmented' matching procedure to illustrate, and simulations confirm that the guidance I offer is sound.
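The two-step casting described above can be made concrete with a standard example. The sketch below - an illustration of the general framing, not the chapter's optimal or 'augmented' weights - writes M-nearest-neighbor matching (with replacement) for the ATT as a weight-generation step, so that the estimator is simply the weighted sum of outcomes; the function name is my own.

```python
import numpy as np

def nn_match_weights(X, D, M=1):
    """Cast M-nearest-neighbor matching with replacement as weight
    generation: returns weights w such that the ATT matching estimator
    equals sum(w * Y). Each treated unit gets weight 1/n1; each control
    gets weight -K_i/(M*n1), where K_i counts how often control i is
    used as a match for a treated unit."""
    X = np.asarray(X, dtype=float)
    D = np.asarray(D, dtype=int)
    treated = np.where(D == 1)[0]
    controls = np.where(D == 0)[0]
    n1 = len(treated)
    w = np.zeros(len(D))
    w[treated] = 1.0 / n1
    for t in treated:
        # Euclidean distance from treated unit t to every control
        if X.ndim == 1:
            d = np.abs(X[controls] - X[t])
        else:
            d = np.linalg.norm(X[controls] - X[t], axis=1)
        # Each of the M nearest controls absorbs 1/(M*n1) of the weight
        for j in controls[np.argsort(d)[:M]]:
            w[j] -= 1.0 / (M * n1)
    return w

# The second step is then just a weighted difference in means:
# att_hat = np.sum(nn_match_weights(X, D) * Y)
```

Different matching procedures (caliper, kernel, propensity-score) change only the weight-generation step; the chapter's MSE analysis compares procedures through the weights they induce.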
In the final chapter, I apply my program evaluation expertise to a question in the economics of education - specifically, the effect of teacher gender on student test scores. Previous literature in this vein has focused on estimating average effects. Exploiting random assignment of students to teachers in a field experiment, I study heterogeneity in the impact of teacher gender on math and reading test outcomes for primary school students of differing ability. I find that assignment to a female teacher generally benefits male students, while it has no significant effect on female students. In addition, I find very little heterogeneity in the effect of teacher gender along the ability dimension, suggesting that average effect estimates from previous investigations do not mask significant heterogeneity. My results are consistent with differential teacher behavior based on gender stereotypes, and somewhat inconsistent with differential student behavior based on gender stereotypes.
Niklaus Carlton Julius
Julius, Niklaus Carlton, "Essays in program evaluation" (2020). Graduate Theses and Dissertations. 17880.