Public Randomization

A significant issue in conducting randomized controlled trials in a community is fairness. The idea behind RCTs is to mimic the medical model by scientifically ascertaining just how useful a treatment might be. In the case of development economics, this could be a subsidy or an extra year of education, for example. In order to eliminate (or at least reduce) the effect of confounding factors, the researcher randomizes over the population, picking a representative sample to receive the treatment, and compares their results to those of people who did not receive it.
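As a rough illustration of what that random assignment step looks like in practice, here is a minimal sketch. The function name, the list of households, and the 50/50 split are all hypothetical; real trials typically stratify by village or baseline characteristics and use dedicated tools, but the core idea is a simple random draw.

```python
import random

def assign_treatment(participants, treatment_fraction=0.5, seed=None):
    """Randomly split participants into treatment and control groups.

    A hypothetical helper for illustration only: shuffle the list,
    then cut it at the chosen fraction.
    """
    rng = random.Random(seed)  # seeded so the assignment is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * treatment_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]

# Example: 10 households, half randomly assigned to receive a subsidy
households = [f"household_{i}" for i in range(10)]
treatment, control = assign_treatment(households, seed=42)
```

Because the draw is random, any household's chance of ending up in the treatment group is the same, which is what lets the comparison between groups isolate the treatment's effect.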

While in theory this should give us the best answer as to how to combat poverty, or get children to school, or move whatever outcome we hope to affect, it's also problematic. The process of randomization necessarily leaves some people out, essentially denying them help that could be life-saving or life-transforming. It might also provide benefits that researchers view as small, but that are capable of creating divisions in a community, or perhaps jealousy, suspicion, or bitterness.

Different RCTs deal with this in different ways. Some do nothing. Some hope the control group doesn't notice. Some tell the control group that they will get the treatment after the analysis is done; others deliver it afterward without informing either group in advance. All of these solutions have their issues, which depend on the type of treatment. In some cases, control respondents might change their answers to certain questions to appear more sympathetic, or more deserving of the treatment. Or they might anticipate how the treatment is going to affect them in the future and have their answers reflect their hopes rather than their actual state.

As in all survey data, the mere act of asking the question affects the answer.

Last week, Kim Yi Dionne, a professor at TAMU, posted on her blog about making the randomization process public. While I don’t think it solves the problem of people changing their answers to what they think they should be (either to make the treatment look better or worse), it does deal with the bitterness and competition that can often arise out of randomly selected treatment groups.

I especially love the education component of it.

[A Malawian research supervisor] posed a question to the audience: if he wanted to know how the papayas in the village tasted, would he have to eat every papaya from every tree (pointing to the nearby papaya trees)? Some villagers laughed, many said “ayi” (no) aloud. He said, instead he would eat one or two from one tree, then take from another tree, but probably not take one from every tree in the village so that he could know more about the papayas in this village.*

Every mentor I have had for research in the developing world has been adamant that we share findings with the community whose participation made our work possible. But rarely do we take the opportunity to explain how we came to our conclusions, hoping the conclusions themselves will suffice.

I think it’s brilliant.