This article was first published by our friends at The Life You Can Save.
It’s hard to make evidence-based decisions if much of the evidence is missing, ropey, unclear, or unfindable. This has become Giving Evidence’s unofficial slogan as we aim for charitable giving to be based on sound evidence.
Charities produce masses of evidence about their effectiveness. Evidence is a key component of how organisations like The Life You Can Save, GiveWell, and foundations assess charities. But much of that research is missing (unpublished), ropey (uses poor research methods), unclear (so you can’t tell whether it’s ropey or not), or is hard to find because it’s only published on the website of an organisation you’ve never heard of. (There are virtually no central indexed repositories.)
This damages beneficiaries in two ways: first, donors and other operational charities can’t see what works and therefore what to fund or replicate, so may implement something avoidably suboptimal; and second, the research consumes resources which could perhaps be better spent on delivering something which does work. Hence Giving Evidence works to increase the quality, quantity, and availability of charities’ research.
Giving Evidence is just now starting to study missing (non-published) research by charities: our new project on this topic is, to our knowledge, the first ever study of whether and why charities’ research goes unpublished. We already know that much charity research is unpublished. When I was a charity CEO, we researched our impact: when the results were good, we published them; when they weren’t, we didn’t. I’d never heard of publication bias (of which this is an egregious example), but I had noticed that bad results make for bad meetings with funders…which led to us having to cut staff. In our defence, we weren’t being evil. We were just responding rationally to badly designed incentives. Fewer than half of the US foundations surveyed that conduct evaluations publish them. We also know that non-publication of research is a major problem in science and in medical research.
We suspect three reasons why charities don’t publish their research. First, incentives, as outlined. Second, they may think that nobody’s interested. By analogy, a campaign in the UK to get grant-makers to publish details of all their grants (which few do) found that many foundations were open to doing this but simply hadn’t realised that anybody would want them. And third, it’s unclear where to publish even if you want to. There are few repositories, journals, or standard ways of ‘tagging’ research online to make it findable. So charities may (rightly) think that the traffic to material published exclusively on their own websites won’t justify the work in sprucing up the research to publish it.
This first study focuses on UK charities supporting people with mental health issues: what research do they do, what do they not publish, and why not. The aim is to figure out what could be done to get more of it published – and by whom. We’re interested in, for example:
- How much research is unpublished? We’ll try to estimate the proportion of the research budget whose fruits are never publicly available.
- Is published research consistently more positive than non-published research? That would suggest the incentive problem to which I personally fell prey.
- Does the chance of publication depend on whether the research is done in-house versus by an outsider? Or depend on who pays for it? Possibly some funders prevent charities from publishing research.
On charities’ research being hard to find and unclear, Giving Evidence is working with charities in criminal justice. We’re creating a standardised, structured abstract to sit atop any research report by a charity, detailing, for example: what the intervention was; what kinds of people were served, and where; what the research method was (sample size, how participants were selected); what outcomes were measured; the results; and the unit cost. This borrows heavily from the checklists for reporting medical research, which are thought to have markedly improved the usefulness and quality of medical research. We’re also looking at creating, not a central repository as such, but open meta-data to allow charities to tag their research online, and a central search ‘bot’ (rather like this) through which donors, charities, practitioners, and policy-makers can rapidly find it.
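To make the idea concrete, here is a minimal sketch of what such a structured abstract might look like as machine-readable meta-data. This is purely illustrative: the field names and example values are our assumptions, not Giving Evidence’s actual schema.

```python
from dataclasses import dataclass, asdict
from typing import List, Optional

@dataclass
class StructuredAbstract:
    """One possible shape for a structured abstract on a charity's
    research report. All field names here are hypothetical."""
    intervention: str                  # what the charity actually did
    population: str                    # what kinds of people were served
    location: str                      # where they were served
    sample_size: int                   # how many people the research covered
    sampling_method: str               # how participants were selected
    outcomes_measured: List[str]       # which outcomes the research tracked
    results: str                       # headline findings
    unit_cost: Optional[float] = None  # cost per person served, if known

    def as_metadata(self) -> dict:
        """Flatten to a plain dict, e.g. for open meta-data tagging online
        so a search 'bot' could index it."""
        return asdict(self)

# Example with made-up data:
abstract = StructuredAbstract(
    intervention="Weekly peer-support groups",
    population="Adults with anxiety or depression",
    location="Manchester, UK",
    sample_size=120,
    sampling_method="All participants enrolled over one year",
    outcomes_measured=["Self-reported wellbeing score"],
    results="Mean wellbeing score rose; no control group",
    unit_cost=85.0,
)
print(abstract.as_metadata()["sample_size"])  # prints 120
```

Because every report would carry the same fields, a reader (or a search tool) could compare interventions, sample sizes, and unit costs across charities without reading each full report.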
And on charities’ research being ropey, we’re working with a foundation to assess the quality of the research its grantees produce – whether requested by that foundation, requested by other funders, or initiated by the charities themselves. The quality of charities’ research has itself barely been researched. We know of just one study, by a UK foundation, which found that about 70% of the research it received from grantees was what it called ‘good’, and that some appeared to be totally fabricated.
Medicine has made great strides by enabling front-line practitioners to make decisions based on sound evidence – since in their world, like ours, the best course of action isn’t always evident. Hence they devote considerable resources to figuring out how much research is ropey, and why, and to fixing it. They have whole teams devoted to improving research reporting, to make it clearer. Other teams look at ‘information infrastructure’ to ensure that evidence can be found rapidly; and many people study non-publication and selective publication of clinical research and work on rooting it out. Thus meta-research – research about research – is essential to improving decisions. Far from being just technical and dry, good meta-research can improve real beneficiaries’ lives.
It’s thought that fully 85% of all medical research is wasted because it goes ‘missing’, or is too ropey, unclear, or unfindable. We need to know whether and where charitable resources are being similarly wasted, and make haste to fix it. Giving Evidence’s meta-research and work on the information infrastructure are, we think, important steps.
We’ll report back later on what we find.
[Diagram: the issues and our work]