The Boston Review has a whole new set of articles on the movement of development economics towards randomized trials. The main article is Small Changes, Big Results: Behavioral Economics at Work in Poor Countries and the companion and criticism articles are here. They’re all worth reading, of course. I found them through Chris Blattman’s new post “Behavioral Economics and Randomized Trials: Trumpeted, Attacked, and Parried.”
I want to re-state a point I made in the comments there, because I think it’s worth re-wording to get it right. It’s this: I often see the new randomized trials in economics compared to clinical trials in the medical literature. There are many parallels to be sure, but the medical literature is huge, and really only one subset of it offers close parallels.
Within global health research there are a slew of large (and not so large), randomized (and other rigorous designs), controlled (placebo or not) trials that are done in “field” or “community” settings. The distinction is that clinical trials usually draw their study populations from a hospital or other clinical setting, so their results generalize to the broader population (external validity) only to the extent that the clinical population is representative of the whole population. Community trials, by contrast, are designed to draw from everyone in a given community.
Because these trials draw their subjects from whole communities — and they’re often cluster-randomized so that whole villages or clinic catchment areas are the unit that’s randomized, rather than individuals — they are typically larger, more expensive, more complicated and pose distinctive analytical and ethical problems. There’s also often room for nesting smaller studies within the big trials, because the big trials are already recruiting large numbers of people meeting certain criteria and there are always other questions that can be answered using a subset of that same population. [All this is fresh on my mind since I just finished a class called "Design and Conduct of Community Trials," which is taught by several Hopkins faculty who run very large field trials in Nepal, India, and Bangladesh.]
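To make the cluster-randomization idea concrete, here is a minimal Python sketch (the village names, cluster sizes, and ICC value are all made up for illustration): whole villages, not individuals, are assigned to arms, and the standard design-effect formula, DEFF = 1 + (m − 1) × ICC, shows one reason these trials need to be so much larger than individually randomized ones.

```python
import random

def cluster_randomize(clusters, seed=0):
    """Randomly split a list of cluster IDs into treatment and control arms."""
    rng = random.Random(seed)
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment arm, control arm)

def design_effect(avg_cluster_size, icc):
    """Inflation in required sample size from randomizing clusters rather
    than individuals: DEFF = 1 + (m - 1) * ICC, where m is the average
    cluster size and ICC is the intracluster correlation coefficient."""
    return 1 + (avg_cluster_size - 1) * icc

# Hypothetical example: 12 villages, randomized as whole units.
villages = ["village_%02d" % i for i in range(12)]
treatment, control = cluster_randomize(villages, seed=42)
print(len(treatment), len(control))  # 6 villages per arm

# With (say) 100 people per village and an ICC of 0.05, each subject
# carries less independent information, so the sample must grow ~6x:
print(design_effect(100, 0.05))  # 5.95
```

Because outcomes within a village are correlated, an individually powered sample size gets multiplied by the design effect, which is part of why community trials end up larger and more expensive than clinic-based ones.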
Blattman is right to argue for registration of experimental trials in economics research, as is done with medical studies. (For nerdy kicks, you can browse registered trials at ISRCTN.) But many of the problems he quotes Eran Bendavid describing in economics trials (“Our interventions and populations vary with every trial, often in obscure and undocumented ways”) can also be true of community trials in health.
Likewise, these trials — which can take years and hundreds of thousands of dollars to run — often yield a lot of knowledge about the process of how things are done. Essential elements include doing good preliminary studies (such as validating your instruments), having continuous qualitative feedback on how the study is going, and gathering extra data on “process” questions so you’ll know why something worked or not, and not just whether it did (a lot of this is addressed in Blattman’s “Impact Evaluation 2.0” talk). I think the best parallels for what that research should look like in practice will be found in the big community trials of health interventions in the developing world, rather than in clinical trials in US and European hospitals.