Tuesday, January 29, 2013

Leaning against publication bias: about the experiments that do not work out

It is quite obvious that journals will only publish results that are significant in the statistical sense. If it turns out that X does not influence Y, in most circumstances there is little interest from editors and referees. Yet null results can be valuable, especially when the study was costly to run: you want to avoid having someone else waste resources trying the same thing. And nowhere is this more important than in the outrageously costly experimental literature. (Yes, I realize results could still be published as a working paper, but few people write working papers without the initial intention to publish.)

Francisco Campos, Aidan Coville, Ana Fernandes, Markus Goldstein and David McKenzie report on seven such failed experiments, where the trouble was not simply a lack of significant results but that the experiments never reached the evaluation phase. While no designer of experiments will ever cite this paper, everyone should read it. Why did these experiments fail? Delays, interference from politicians, and low participation. The authors argue that the designers held an idealized vision of the experiment that could not be translated into the field. In particular, one needs to cope with the local political economy and the incentives of field staff, and to be a bit pragmatic about program eligibility criteria. In the end, if the experiment is not properly randomized, one cannot draw conclusions from it.

I wonder how many millions have been wasted on failed experiments that nobody hears about and nobody can learn from. And that is without counting the millions spent on experiments whose results are impossible to generalize, and thus not useful even when "successful."
