‘Don’t just tell me what to do, come and help me do it!’ said an Indian government official to a researcher bearing results from studies into effective aid programmes. His response is salutary: there is now much work on increasing the use of evidence in public policy, so we need to understand what policymakers actually need and want, and what will help them be more evidence-driven. For foundations there is a clear message: it isn’t enough just to fund research. You have to make sure it reaches the relevant policymakers, and in a form that is useful to them.
Over ten years, Innovations for Poverty Action (IPA) has run more than 350 studies in 51 countries to find what works in alleviating poverty. We have had some success in influencing policies of governments, NGOs, foundations and others. Here’s what we have found.
The basic lesson is that there is often a disconnect between the people who produce evidence and those who use it. Though they may share a goal, the evidence ‘producers’ (researchers and academics) often work on a different timescale, and in a different technical language, from the ‘users’ (government officials and practitioners in NGOs, foundations, companies and elsewhere). Even within the same organization, the two groups may not be used to dealing with each other.
Getting evidence into policy requires much more than producing evidence and publishing it. Rather, ‘diffusion [of ideas] is essentially a social process through which people talking to people spread an innovation’, said Everett Rogers, who studied the process (and who coined the term ‘early adopter’). This involves behavioural change, and we have found that it’s at least as difficult as the research itself. Hence IPA works with both producers and users of evidence, facilitating, translating and supporting.
We use a structure articulated by Professor Richard Thaler of the University of Chicago, one of the founders of behavioural economics. He wrote in the New York Times of his visits to the UK Government’s Behavioural Insights Team, which he advises:
‘… I make the rounds of government. We usually meet with a minister and some senior staff. In these meetings, I have found myself proposing two guidelines so often that they have come to be team mantras:
1) You can’t make evidence-based policy decisions without evidence.
2) If you want to encourage some activity, make it easy.’
IPA follows those guidelines. In fact, the second starts before the first: we find it useful to engage policymakers and practitioners right at the start, making a three-stage process.
First, work out what questions policymakers want answered. We are keen to solve problems that somebody actually has, and which they have budget, energy and permission to solve. These may not be the questions that interest researchers or campaigners or the press, but they are the problems where evidence is likely to make a difference. This can be seen as market research, since policymakers are the customers for the evidence.
For example, IPA’s work in Ghana led to conversations with the government which showed that they were concerned about low educational attainment, and potentially interested in solutions from elsewhere that might work. We are sometimes a ‘match-maker’ between policymakers with questions and researchers interested in answering them. Key to building these relationships is having a permanent presence in-country (IPA has offices in 12 countries).
Second, design programmes that may work, and test them rigorously. IPA works with leading researchers from top institutions such as Harvard, Yale, MIT and the London School of Economics. We often design programmes using behavioural insights which recognize that people aren’t perfectly rational, emotionless, net-benefit-calculating machines. They’re complicated, busy and more liable to copy their neighbours than to read endless small print, and they make bad decisions when they’re stressed or tired.
For example, if Kenyan farmers used more fertilizer they could increase crop yields and hence income: it’s available, but they don’t buy it when they need it, at planting time. Standard solutions involve giving out fertilizer free or subsidizing it, which are obviously expensive options. A behavioural solution is to sell it to them when they have money – right after harvest, before the money all gets eaten or lost or spent. So Thaler’s ‘make it easy’ mantra applies here too.
From the beginning of the research process, we try to involve in the design the policymakers and others who have a stake in the answers. This helps ensure that the research really provides what the ‘customer’ needs. Long-term, in-country relationships with policymakers in government, NGOs, foundations and others are valuable here.
IPA produces evidence through randomized controlled trials (RCTs): for example, comparing the crop yields of Kenyan farmers who are offered fertilizer at harvest time with those of farmers who are offered it during the growing season. By choosing at random which farmers get which offer, we eliminate other differences between the groups, so we can be pretty sure that differences in crop yields result from the timing of the offer. RCTs are pretty easy to understand: they’re just a fair race between a programme and a control group, or between two variants of a programme. A well-run RCT is the best way there is of determining the effect of a programme. (Plenty of RCTs are run badly: the BBC recently ran one with just seven participants, which is far too small to demonstrate anything reliably.)
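The logic of random assignment can be illustrated with a short simulation. This is a sketch only: the number of farmers, the baseline yield, the effect size and the noise level below are all invented for illustration, not IPA data.

```python
import random
import statistics

random.seed(42)

def simulate_rct(n_farmers=1000, base_yield=100.0, offer_effect=8.0, noise=15.0):
    """Toy RCT: each farmer is assigned by coin flip to be offered
    fertilizer at harvest time (treatment) or in the growing season
    (control). All parameter values are hypothetical."""
    treatment, control = [], []
    for _ in range(n_farmers):
        # Each farmer's yield varies for many unobserved reasons.
        outcome = random.gauss(base_yield, noise)
        if random.random() < 0.5:
            # Random assignment means those reasons balance out on
            # average between the two groups, so the difference in
            # means estimates the effect of the offer itself.
            treatment.append(outcome + offer_effect)
        else:
            control.append(outcome)
    return statistics.mean(treatment) - statistics.mean(control)

estimated_effect = simulate_rct()
print(round(estimated_effect, 1))  # should land near the true effect of 8.0, give or take sampling noise
```

With a few hundred farmers in each arm, the estimated effect hovers near the true value; with only seven participants, as in the BBC example, the noise would swamp the signal.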
Providing reliable evidence greatly assists in engaging policymakers and practitioners. However, rigorous research is often more difficult, more time-consuming and more expensive than unreliable research. It’s not hard to give fertilizer at harvest time to the first 20 farmers you find and later ask them to recount how this changed their crop yields (or, worse, ask them hypothetically how it would change their behaviour and crops). But plenty of research shows that these answers are unreliable and riddled with errors: people aren’t very good at knowing how much they benefited from something or anticipating how they would react to something new.
Finally, make it easy for policymakers to find, understand and use the answers. To Thaler’s second point, we want policymakers to incorporate evidence, so we make it easy for them. We communicate in places and in language that policymakers use. Rather than just distributing copies of academic papers, we produce concise, plain-language, nicely designed summaries of each research project, and briefings about related research. We make sure they are findable through searches by country, research area or keyword. Our staff speak at meetings and conferences of policymakers, and our work is publicized in parts of the press that people read, such as The Economist, the Financial Times, here, and so on.
We have found that we need to support policymakers to understand how research applies to their contexts, since it can rarely be applied blindly. For instance, many children in Kenya have intestinal worms, which make them ill, so deworming reduces absenteeism in schools. But it won’t achieve much in Scotland, because there aren’t worms there. However, other, more general findings do apply in Scotland – for example, the finding that unless things are available when people have money, they won’t buy them, even things, like fertilizer, that it is in their interest to buy.
Even when findings are applicable, we are often asked to help with implementation. The opening quote was from an Indian state government official who had seen evidence about deworming. The government realized that the findings were relevant to them and wanted to trial it, but they faced many practical issues in doing so. We sent a deworming expert from our Deworm the World programme in Kenya to help them.
Notice the duality here: we need academics to run rigorous research, but we need different teams for communication and implementation support. Academics are generally trained and rewarded for publishing in specialized journals, not for reaching out to governments and practitioners.
The role for funders and practitioners:
Often foundations fund charities to produce research in the hope of influencing policy, but both foundations and charities effectively assume that policymakers will find it, understand it and apply it. This rarely happens. Like everyone else, policymakers are much more likely to value evidence or innovations from people they know and trust: that ‘makes it easy’ to find and believe the evidence. This experience is unique neither to IPA nor to less-developed countries. The Institute for Government, for instance, a UK think- and do-tank, recognizes that research alone will not achieve its mission of improving government effectiveness. So it devotes time, energy and resources to building personal relations with the people it needs to influence, and constantly interacts with them through events, blog posts, private meetings and joining government working groups.
This work requires dedicated time and people, who of course need funding and resources. The upside is that this investment leverages not only the research budget but also the significant government spending that the research influences.
For more information about subscribing to Alliance, please visit www.alliancemagazine.org/subscribe