
Can we randomize development evaluation? A response to Jon Lomoy

7 May 2014

Diane Coyle, Professor of Economics, University of Manchester and Director of Enlightenment Economics replies to Jon Lomoy’s article.

Jon Lomoy highlights a real risk in development economics' hunt for "what works", in other words for interventions with an identifiable and measurable impact on development outcomes. Reflecting on his comments, I agree there is a risk that economists carrying out randomised controlled trials, or field experiments, simply transfer their old certainties to their new techniques, without due humility about the complexity of real situations.

In fact, the desire to demonstrate 'impact', along with the belief that trials or experiments offer the tools to do so, could distort assistance towards simpler interventions where cause and effect can be identified. But there is no way to be sure that these constitute the best use of resources and effort. In a context where a specific 'impact' has been identified, RCTs might well offer the best way to choose between means of delivery. However, we must be careful about how we generalise their results. While the conclusion that peer comparisons affect behaviour, whether in microcredit loan repayments in South Africa or electricity consumption in London, might have wide applicability, the incentive effects of a bag of lentils are going to be culture specific. This is an extreme example to make the point, but it illustrates the need for constant sensitivity to context in using any evaluation technique, including randomised controlled trials.

More fundamentally, though, trials or experiments cannot answer the wider questions about which development outcomes are the most important, nor can they unpick the uncertain and complicated chains of causality and feedback in any real-world setting. Interventions with demonstrable impact might be less important in contributing to social welfare than others whose impact is hard to quantify and isolate. And the techniques themselves offer no insight into ranking priorities.

Above all, economic and social development is not only a technocratic issue, but also a question of society, culture and politics. Economists alone cannot address all the problems – I wholeheartedly agree with Mr Lomoy on the need for more interdisciplinary work. Although it is a very welcome step for the discipline of economics to have embraced new empirical techniques, there is an obvious corresponding danger that economists' tendency to hubris will simply relocate itself and end in the insistence that this is the only way to evaluate development policies.

There is a parallel danger that what should be a political or democratic debate is disguised as a technical one. Politicians like to demand answers to problems, and where there is a demand, it will be met with a supply. Sometimes, though, it is not an answer but a decision – with appropriate accountability – that is required.

Having set out all the reasons for not becoming over-enthusiastic about the increasing use of RCT and field experiment techniques in development economics, I continue to believe they represent a huge step forward, and one that economists working in other areas of the discipline should embrace. It would be encouraging to see policymakers everywhere, not just in lower income countries, embrace the idea of trials or pilots to see “what works”. For all the need for caution, it is better than not knowing what works.

Useful links

OECD work on evaluation of development programmes
