Is grantee / beneficiary feedback a substitute for RCTs?

The short answer is no. At first sight, it seems that randomized controlled trials (RCTs) and Constituent Voice (CV: a good way of gathering feedback from programme beneficiaries or grantees) could substitute for each other because they both seek to ascertain a programme’s effect. In fact they’re not interchangeable at all. An RCT is an experimental design, a way of isolating the variable of interest, whereas CV is a ‘ruler’ – a way of gathering information that might be used in an experiment or in other ways.

Let’s look at an example RCT. Suppose we want to know the effect of Tostan’s human rights education programme in West Africa (which works on many things but is most famous for significant reductions in what its founder Molly Melching calls female genital cutting). The most rigorous test would be as follows. First, measure what’s going on in a load of villages. Then, choose some villages to have Tostan’s involvement and others not: choose them at random. (It’s no good to have villages opt in because maybe only the most progressive villages will opt in, meaning that we won’t know if changes result from their progressiveness – ‘a selection effect’ – or from the programme itself.) Finally, after the programme, measure again what’s going on in each village, and compare the change in the villages that got the programme with the change in those that didn’t.
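The steps above – measure, randomly assign, measure again, compare the changes – can be sketched as a toy simulation. All the numbers here (village counts, baseline prevalence, the size of the trend and the programme effect) are made up for illustration; this is not real Tostan data:

```python
import random

random.seed(1)

# Toy baseline prevalence of the practice in 20 hypothetical villages
villages = {f"village_{i}": random.uniform(0.5, 0.9) for i in range(20)}

# Randomly assign half the villages to receive the programme
names = list(villages)
random.shuffle(names)
treated, control = set(names[:10]), set(names[10:])

# Simulate follow-up: a background decline everywhere, plus an
# extra programme effect only in treated villages (both invented)
def follow_up(name):
    change = -0.05          # secular trend affecting all villages
    if name in treated:
        change -= 0.10      # hypothetical programme effect
    return villages[name] + change

# Compare the average change in treated villages with the
# average change in control villages
def avg_change(group):
    return sum(follow_up(n) - villages[n] for n in group) / len(group)

effect_estimate = avg_change(treated) - avg_change(control)
print(round(effect_estimate, 3))  # recovers roughly the -0.10 programme effect
```

Because assignment is random, the background trend cancels out in the comparison and only the programme effect remains – which is exactly what the design is for.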

CV and RCTs can – and I’d argue should – sit alongside each other. The classic uses of CV are to understand what people want and what they think of what they’re getting. Those are obviously important – and I champion work on both – but answers to these questions may not accurately identify the ‘impact’, which a well-run RCT would do.

Take, for example, two microfinance ‘village bank’ programmes that targeted poor people in north-east Thailand. It’s quite possible that people in these villages wanted to be less poor, and liked the microcredit programme they received. So the programme would have come out well if measured using CV. It came out well on some other measures too. But it fared badly when analysed with a well-run RCT (and there are plenty of ways that RCTs can be run badly): people who got microloans did do better than those who didn’t, but RCTs showed that those differences were entirely due to selection effects and had nothing to do with the microloans themselves.

Distinguishing selection effects from programme effects is hard – routinely foxing even highly trained doctors and researchers – and can’t be done by the naked eye alone. It’s quite possible that ‘beneficiaries’ might think that a programme is helping because they (like everyone else) conflate selection effects with programme effects. We can’t rely on CV to identify impact.
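A toy simulation shows how easily a selection effect can masquerade as a programme effect. Here an invented trait (`drive`) determines both who opts in to a loan and later income; the loan itself does nothing, yet borrowers still look markedly better off. Every number is illustrative, not drawn from the Thai studies:

```python
import random

random.seed(7)

# 1,000 hypothetical people; 'drive' is an unobserved trait that
# affects BOTH who opts in to a microloan AND later income
people = [{"drive": random.random()} for _ in range(1000)]
for p in people:
    p["took_loan"] = p["drive"] > 0.6      # self-selection: driven people opt in
    p["income"] = 100 + 200 * p["drive"]   # income depends only on drive,
                                           # NOT on the loan (zero true effect)

borrowers = [p["income"] for p in people if p["took_loan"]]
others = [p["income"] for p in people if not p["took_loan"]]

gap = sum(borrowers) / len(borrowers) - sum(others) / len(others)
print(round(gap, 1))  # borrowers appear better off despite a zero loan effect
```

A naive comparison of borrowers and non-borrowers reports a large gap; only random assignment (or something that mimics it) would reveal that the loan contributed nothing.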

Well then, in a world of rigorous evaluations, why do we need CV?

First, why should we ask people what they want? Answer: because there are legion tales of donors plonking (say) a school in a community that really wanted a well. Rigorously evaluating the effect of the school totally misses that it wasn’t wanted, and the erosion of self-determination caused by non-consultative ‘donor plonking’. We can tell that consultation with ‘beneficiaries’ is complementary to rigorous research because they’re both used in evidence-based medicine (eg to establish what to research: see the article about the James Lind Alliance on p33).

Second, why should we ask people what they think of what they’re getting? Answer: again because they’ll tell us things that we didn’t know that could improve delivery. That staff are rude, often late. That the clinic should open half an hour earlier because that’s when the bus arrives. That the nurse giving the vaccines could be less scary.

Well-run RCTs are unparalleled in their ability to isolate a single factor and thereby identify the effect of that factor. But there are obviously instances where they’re inappropriate. They include: when controlling for that factor would be unethical or illegal; when you couldn’t get a sample big enough to yield statistically significant results; when the cost of conducting the study would outweigh the benefits; when the outcome is unmeasurable (such as the effectiveness of alternative ways of honouring the dead); when a cheaper method is available (perhaps you have decent historical data and just need to analyse it). They are also inappropriate when you want to find out something other than the effect of a particular factor, eg users’ opinions or perceptions.

So no, CV is not a proxy for RCTs. As so often, the answer is ‘both’.

This article was first published in Alliance Magazine (£). A PDF version is here.
