By Satoko Okamoto, Visiting Scientist, Rural Research
We all know that the world is demanding hard evidence that development programs work. Across the globe, many initiatives have pushed development agencies to measure their impact, or lack thereof, against proper counterfactuals. Even the celebrity development guru Jeffrey Sachs, father of the Millennium Villages Project, could not escape scrutiny from the “randomistas.”
Every year at IRRAD, we run many programs in marginalized communities in Haryana. They range from small- and medium-scale infrastructure work, such as the construction of check dams and toilets, to human development programs such as capacity-building training for individuals and public officials. We know these programs work because we see fresh water in once dried-up wells and observe that toilets are properly used; we hear that the once-tardy fair-price shop owner now opens his shop on time, with its shelves replenished.
Yet simply showcasing these good things no longer seems an effective way of communicating in an increasingly competitive international development community. We feel we need to show that we assess the impact of our programs in a scientifically sound manner, if not through randomisation. And we do: for most of our projects, we establish treatment and control villages and collect twice the data so the two groups can be compared. It takes time; the process is laborious; and in the end, the results may turn out to be statistically insignificant. We sweat over the small things.
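For readers curious what such a comparison looks like in practice, here is a minimal, purely illustrative sketch in Python: it compares a made-up outcome measure between hypothetical treatment and control villages using a two-sample t-test. The numbers, variable names, and choice of test are assumptions for illustration only, not IRRAD's actual data or analysis.

    # Purely illustrative: compare a hypothetical outcome between treatment
    # and control villages with Welch's two-sample t-test.
    from scipy import stats

    # Made-up post-programme scores (e.g. a household water-access index).
    treatment_villages = [4.1, 3.8, 4.5, 3.9, 4.2, 4.0]
    control_villages = [3.2, 3.5, 3.1, 3.6, 3.4, 3.3]

    diff = (sum(treatment_villages) / len(treatment_villages)
            - sum(control_villages) / len(control_villages))
    t_stat, p_value = stats.ttest_ind(treatment_villages, control_villages,
                                      equal_var=False)

    print(f"difference in means: {diff:.2f}")
    # A p-value above 0.05 is what the text calls "statistically insignificant".
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")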
Last week I attended a conference on NGOs, and naturally the panellists talked about their organizations' impact. Much of it consisted of outputs, such as the number of schools and hospitals built; several NGOs presented outcomes, such as improved school attendance and the number of patients rehabilitated. No one used the words treatment or control. In any case, the panellists were using differing standards to present their impact, and the juxtaposition of these incomparable numbers was already confusing enough. Some speakers were marvellously articulate, but many others were indistinguishable from one another.
In the end, the story of impact that stayed with me was a simple one of an NGO making a difference in individuals' lives. At academic conferences, the jargon of the “randomistas” may entertain economists, but at a conference like this one, attended by development practitioners and aspiring social workers hungry for evidence that speaks to the heart, a simple, powerful story of impact on one individual's life seemed equally scientific.
For discussions of the application of rigorous impact evaluation to aid programs, please see:
“Randomized trials could help show whether aid works,” The Economist, 3 December 2011.
“Fiesta de los randomistas,” The Economist, 21 April 2011.
Garbitt, Anne. “A New Road to Measuring Development or Another Dead End?” Society for Participatory Research in Asia (PRIA), Global Partnership, Volume 1, Issue 4 (October–December 2011).