Are we relying on unreliable research?

“Ask an important question and answer it reliably” is a fundamental tenet of clinical research. And you’d hope so: you’d hope that medics don’t waste time on questions that don’t matter or which have been answered already, and you’d hope that their research yields robust guidance on how to treat us*. Does research in our sector, aimed at understanding the effects of our interventions, adhere to that tenet?

We suspect not. It’s a problem because poor-quality research leads us to use our resources badly. The example of microloans to poor villagers in Northeastern Thailand illustrates why. In evaluations which compared the outcomes (such as the amounts that households save, the time they spend working or the amount they spend on education) of people who got loans with those of people who didn’t, the loans looked pretty good. But those evaluations didn’t take account of possible ‘selection bias’ in the people who took the loans: perhaps only the richer or better-networked people wanted them or were allowed to have them. A careful study which did correct for selection bias found that in fact the loans made no difference. The authors conclude that “‘naive’ estimates significantly overestimate impact.”

Such examples are rife. There is one in the current edition of Stanford Social Innovation Review, about a back-to-work programme, discussed here. Another example is from a reading programme in India. Five different evaluation methods produce five quite different estimates of its impact: at least four of them must be wrong and might lead us to misuse our money:

[Chart: IPA research methods – five evaluation methods applied to the same reading programme, five different estimates of impact (Innovations for Poverty Action)]

Spotting unreliable research requires assessing research against a quality standard. Though foundations fund masses of research – through charities’ M&E, sometimes conducted by the charities themselves and sometimes done independently – to my knowledge, only one has ever assessed the quality of the research it sees. It didn’t look pretty. The Paul Hamlyn Foundation looked at the research it received from grantees between Oct 2007 and Mar 2012: only 30% was ‘good’, and even that was using a rather generous quality scale. It even found ‘some instances of outcomes being reported with little or no evidence’.

Assessing the quality of research is bog-standard in medicine and increasingly common in the public sector. The Education Endowment Foundation already does it (in its toolkit) and the government’s other What Works Centres will too. The National Audit Office (NAO) recently published an analysis of the quality of almost 6,000 government evaluations, which contains a salutary nugget. Buried on page 25 is the finding, shown below, that the strongest claims about effectiveness are based on the weakest research. This (probably) isn’t because the researchers are wicked, but rather because you can infer almost anything from a survey of two people: most social interventions have quite small effects, and robust research won’t let you show anything bigger.

[NAO graph: strength of claims about effectiveness plotted against quality of the underlying evaluation; the strongest claims rest on the weakest evidence]
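To illustrate that last point with a toy simulation (invented numbers, not the NAO’s data): suppose an intervention’s true effect is genuinely small. With only a couple of people per group, the estimated effect bounces around wildly and will sometimes look enormous; a large, robust study keeps every estimate close to the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.1   # a genuinely small effect, in standard-deviation units

def one_study(sample_size: int) -> float:
    """One evaluation: difference between treated and control group means."""
    control = rng.normal(0.0, 1.0, sample_size)
    treated = rng.normal(true_effect, 1.0, sample_size)
    return treated.mean() - control.mean()

for n in (2, 20, 2000):
    estimates = [one_study(n) for _ in range(1_000)]
    print(f"n = {n:4d} per group: largest estimated effect = {max(estimates):.2f}")

# Tiny samples sometimes show 'effects' many times the true 0.1 purely by chance;
# a large study keeps every estimate close to the truth.
```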

Let’s put that finding the other way round. Charities competing for funding have an incentive to show that their impact is sizeable. The NAO’s finding implies that doing so is easier with bad research. So funders who rely on charities’ own research inadvertently encourage them to produce unreliable research.

Funders should look carefully – and independently – at the quality of the evidence that we fund and use, lest we be misled into supporting ineffective work. As mentioned, the methods and tools for assessing research quality (‘meta-research’) are established and proven in other disciplines. Giving Evidence is exploring work to assess the reliability of research produced by charities and used by funders. [Update: we are now doing such a study, in outdoor education. See here.] We are in discussion with relevant academics who would run the analysis. We would like to talk to funders who are interested in understanding the quality of the research that they commission and use, with a view to improving it. If you are a funder or impact investor and are interested in being involved, please get in touch.

This article was first published by the Association of Charitable Foundations and London Funders. Indian data are from Innovations for Poverty Action. 

*Update: Actually, research published at around the same time as this piece suggests that about 85% of medical research doesn’t achieve this 😦

