This was first published by Third Sector, in Caroline Fiennes’ regular column.
It sounds pretty good – a programme that aims to break the cycle of poverty and illiteracy by improving the educational opportunities of low-income families through early childhood education, adult literacy support and parenting education. It has served more than 30,000 families and has run in more than 800 locations nationally. Would you put money into it? Might your organisation take it on? It sounds highly plausible and clearly has attracted considerable funding.
But research has shown that this programme had no effect at all. The gains made by children who were served by it were indistinguishable from those of children who weren’t. The money might as well not have been spent.
Let’s try another example – a policy under which children who fall behind at school retake the year. Again, it sounds pretty sensible and is common practice in some countries. So should we do it?
Well, compared with this policy, the family literacy programme mentioned above looks like a great idea: whereas it achieved nothing, the schooling policy achieved less than nothing by making things worse. Children who retook a year typically ended up about four months further behind than if they hadn’t.
These examples, and many others like them, show that our intuition about programmes or organisations is no guide. It might lead us to waste our time and efforts or even to make things worse. We do better when our decisions – as donors, managers or trustees – are based on evidence.
Now suppose that for some medical condition there are two competing drugs. Drug A solves your problem and has the side effect of reducing your risk of heart attack by 29 per cent; drug B also solves your problem but increases your risk of heart attack by about half. What do you say? It’s not a hard choice.
In fact, drug A and drug B are the same drug. Again, this example is real: it’s for hormone-replacement therapy. One type of test (observational non-randomised cohort studies using markers for heart attacks) showed that it reduced heart attacks by 29 per cent, whereas another (randomised studies that monitor actual heart attacks) showed that it increased fatal heart attacks by half. What do you say now?
Only one answer can be accurate, so you want to know which test to believe. The research method matters – indeed, your life might depend on it.
With social programmes, too, the answer depends on the research method. When a reading programme in India was evaluated using various research methods, one implied that it worked brilliantly, a second that it worked only a bit and a third that it was worse than nothing. They can’t all be right. So we need to ensure we make decisions not on any old evidence, but on sound evidence.
We should be on our guard against bad research that leads us to waste time and money – and possibly lives. The National Audit Office’s study of evaluations of government programmes found that where the research technique was weakest, the claims made about the programme’s effectiveness were strongest.
Smart decision-makers rely on evidence that is actually reliable, and know what that looks like. Don’t die of ignorance.
Want to see a crashing example of bad research by a charity? Here it is.