David Rhys Bernard, Paris School of Economics
Gharad Bryan, London School of Economics
Sylvain Chabé-Ferret, Toulouse School of Economics
Jonathan de Quidt, Queen Mary University of London and Institute for International Economic Studies
Jasmin Claire Fliegner, University of Manchester
Roland Rathelot, Institut Polytechnique de Paris (ENSAE)
January 12, 2023
The use of observational methods remains common in program evaluation. How much should we trust these studies, which lack clear identifying variation? We propose adjusting confidence intervals to incorporate the uncertainty due to observational bias. Using data from 44 development RCTs with imperfect compliance (ICRCTs), we estimate the parameters required to construct our confidence intervals. The results show that, after accounting for potential bias, observational studies have low effective power. Under our adjusted confidence intervals, a hypothetical observational study with infinite sample size has a minimum detectable effect size of over 0.3 standard deviations. We conclude that, given current evidence, observational studies are uninformative about many programs that in truth have important effects. There is a silver lining: collecting data from more ICRCTs may help to reduce uncertainty about bias and increase the effective power of observational program evaluation in the future.
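The core idea of a bias-adjusted confidence interval can be illustrated with a minimal sketch. This is not the paper's estimation procedure; it simply assumes observational bias behaves like an additional mean-zero normal error with a known standard deviation (`bias_sd`, a hypothetical parameter here), so the bias variance is added to the sampling variance when forming the interval:

```python
import math

def bias_adjusted_ci(estimate, se, bias_sd, z=1.96):
    """Illustrative sketch: widen a conventional confidence interval
    by adding an assumed bias variance.

    Treats observational bias as an independent, mean-zero normal term
    with standard deviation `bias_sd` (a hypothetical assumption, not
    necessarily the paper's model).
    """
    total_sd = math.sqrt(se**2 + bias_sd**2)
    return estimate - z * total_sd, estimate + z * total_sd

# Even with infinite sample size (se -> 0), the interval never
# shrinks below +/- z * bias_sd, so small true effects remain
# statistically indistinguishable from zero (numbers hypothetical):
lo, hi = bias_adjusted_ci(estimate=0.2, se=0.0, bias_sd=0.2)
```

The sketch makes the abstract's point concrete: uncertainty about bias puts a floor under the width of the interval that no amount of additional data from the observational study itself can remove; only better evidence about the bias distribution (e.g. from more ICRCTs) can shrink it.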
JEL classification codes:
Keywords: