The most important questions economists can address relate to economic development. How can we explain the immense differences in income across the world? Why is human capital so low in developing countries? What can policy do about it? Should policymakers care? But answering all these questions is severely hampered by the abysmal quality of the data. Macroeconomic data is spotty and unreliable, and microeconomic data is largely nonexistent. The answer to this problem was for researchers to generate their own data.
Thus were born randomization studies, whereby some region is subjected to an economic experiment. Randomly selected people or villages are given some sort of incentive, others are not, and the impact of the intervention is studied. This procedure has become extremely fashionable in the development economics community, where it is now a must to be working "in the field" gathering data. This approach has, however, come under increasing question, for several reasons.
First, randomization studies are terribly expensive. There is a growing sentiment that these resources could be better used, both for research and for development aid. In some cases, such experiments have even been shown to be detrimental. I wrote earlier about one in which the distribution of free anti-malarial bed nets killed a local industry.
Second, as Angus Deaton discusses, the data obtained in these randomization studies is not informative. The critical issue is that these experiments are not designed with any theory in mind, so they do not help us understand the underlying mechanisms. They are case studies, applicable only to the very situation in which they were run. This criticism is very similar to the Lucas critique. Unless you put structure on your data, there is nothing useful to learn from an elasticity in a linear regression over whichever set of variables happens to be available. Add to this that poorly performed randomization yields statistically weak results.
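The statistical point can be made with a toy Monte Carlo simulation (all numbers below are invented for illustration): when treatment is assigned at the village level with only a handful of villages, village-level shocks do not average out, and the difference-in-means estimate of the treatment effect becomes far noisier than under individual-level randomization.

```python
import random
import statistics

random.seed(0)

TAU = 1.0          # true treatment effect (assumed for illustration)
N_VILLAGES = 10    # few clusters: the source of the problem
PER_VILLAGE = 50
REPS = 300

def one_experiment(cluster_randomized):
    """Simulate one experiment; return the difference-in-means estimate of TAU."""
    village_effects = [random.gauss(0, 2) for _ in range(N_VILLAGES)]
    if cluster_randomized:
        # Whole villages are assigned to treatment or control.
        assignment = [v < N_VILLAGES // 2 for v in range(N_VILLAGES)]
        random.shuffle(assignment)
    treated_y, control_y = [], []
    for v in range(N_VILLAGES):
        for _ in range(PER_VILLAGE):
            if cluster_randomized:
                t = assignment[v]
            else:
                t = random.random() < 0.5   # individual-level randomization
            y = village_effects[v] + TAU * t + random.gauss(0, 1)
            (treated_y if t else control_y).append(y)
    return statistics.mean(treated_y) - statistics.mean(control_y)

# Spread of the estimator across repeated experiments, under each design.
sd_individual = statistics.stdev(one_experiment(False) for _ in range(REPS))
sd_cluster = statistics.stdev(one_experiment(True) for _ in range(REPS))
print(f"sd of estimate, individual randomization: {sd_individual:.2f}")
print(f"sd of estimate, village randomization:    {sd_cluster:.2f}")
```

With individual randomization, village effects largely cancel between the two groups; with village randomization and only ten villages, they dominate the estimate's variance, so the same budget buys far less statistical information.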
Third, I have yet to see a study that would indicate anything about the cost effectiveness of a policy or treatment. Studies are all focused on determining whether there is a significant impact in a statistical sense, sometimes in an economic sense, but they never discuss the cost of the policy. In fact, given the huge cost of these studies, one starts to wonder whether those researchers ever think about scarcity and budget constraints.
What you really want to learn from an experiment is what is generalizable, what can be applied to other situations that differ from the studied one. For this, development economics needs to refocus on theory and the use of theory in its empirical work. Theory can help us understand a surprising amount without needing much data. In fact, in an environment that is data poor, theory should be the priority, and any quantification should be performed with data-economizing techniques, such as calibration.
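To see how far a calibrated model can go on almost no data, consider a textbook Solow exercise (the parameter values below are standard illustrative choices, not estimates): calibrate the capital share from an observed labor income share, and theory alone then tells you how much of cross-country income gaps saving rates can explain.

```python
# Calibrated parameters (illustrative, not estimated from any dataset):
ALPHA = 1 / 3        # capital share, from a labor income share of roughly 2/3
DELTA = 0.05         # depreciation rate
N, G = 0.02, 0.02    # population and technology growth rates

def steady_state_income(s):
    """Solow steady-state output per effective worker:
    y* = (s / (n + g + delta)) ** (alpha / (1 - alpha))."""
    return (s / (N + G + DELTA)) ** (ALPHA / (1 - ALPHA))

# A fivefold difference in saving rates (25% vs 5%)...
ratio = steady_state_income(0.25) / steady_state_income(0.05)
print(f"implied steady-state income ratio: {ratio:.2f}")
```

With a capital share of one third, the exponent is 1/2, so a fivefold saving-rate gap implies an income ratio of only about 5^0.5 ≈ 2.24. One calibrated parameter and a steady-state formula already tell us that saving rates alone cannot account for observed income gaps of a factor of thirty or more, without running a single experiment.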
A few examples: