First, missing data. Philanthropic donors, operational charities and others often have to make decisions without data which should exist. Hence unearthing missing data is a theme in Giving Evidence's work:
- Massive emergency aid is now flowing to the Philippines following Typhoon Haiyan. Operational NGOs and government aid agencies can only make evidence-based decisions about what's needed and where to prioritise if they all share data about their activities and plans – in real time and in a machine-readable format. Generally they don't: so after the Asian tsunami, for example, some children got vaccinated three times and others (presumably) not at all. Owen Barder, who invented such a format, joined me and others in a letter to the Financial Times requesting that these data be shared in order to avoid such nonsense in the Philippines. Owen then appeared on BBC Radio 4's PM programme and on Newsnight. (A toy sketch of why a shared format matters follows this list.)
- International development in general could be better grounded in evidence of need and supply if donors disclosed what they're doing and where. Hence my rant in The Economist in support of Publish What You Fund, which finds that most donors don't.
- Many foundations run programmes which fail, but they don't tell anybody and hence that evidence is missing. This is self-created publication bias. Giving Evidence is currently working with a foundation to share the tale and lessons from one failed programme. We hope it'll be the first of many such 'confessions', and that these will be used by donors. (We're inspired by Engineers Without Borders' annual Failure Report. When I asked their CEO why they do it, he said: 'Well, engineers are attuned to failure. It's pretty bad if your bridge collapses!')
- We support registries in which social scientists register their trials before they start, so that we can all see if some don't get published (e.g., this one).
- We suspect that many charities withhold impact data which are unflattering (I inadvertently did this myself as a charity CEO, before I'd even heard of publication bias). The charity and philanthropy sectors have no mechanism for spotting or preventing it. Judging by the data that do get published, all charities are in the top quartile – a miracle! This is a major hole, and Giving Evidence has some remedies currently in the incubator.
- We support the AllTrials campaign to force pharmaceutical companies to disclose the results of all trials they run. Ludicrously, they currently don't have to. Estimates are that about half of all trials are unpublished: and you can guess which half. Ben Goldacre says that 'as a result, the effects of most prescribed medicines are essentially unknown'. Actually, I've spoken about AllTrials in several recent press pieces, but it gets chopped out 🙂 The irony has not gone unnoticed.
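To make the first point above concrete, here is a minimal, hypothetical sketch (in Python) of why a shared machine-readable format matters. The record fields, agency names and districts are all invented for illustration, and a real shared standard is far richer than this – but the principle is the same: once every agency publishes its activities in a common format, anyone can merge them and see the duplication and gaps.

```python
# A minimal, hypothetical sketch: three agencies publish their activities
# in a common machine-readable format, so overlaps and gaps become visible.
from collections import Counter

# Invented records for illustration -- field names, agencies and districts
# are assumptions, not any real standard.
ngo_a = [{"activity": "measles vaccination", "district": "Tacloban"},
         {"activity": "measles vaccination", "district": "Ormoc"}]
ngo_b = [{"activity": "measles vaccination", "district": "Tacloban"}]
gov_c = [{"activity": "measles vaccination", "district": "Tacloban"}]

# Merge everyone's published plans and count coverage per district.
coverage = Counter(rec["district"] for rec in ngo_a + ngo_b + gov_c
                   if rec["activity"] == "measles vaccination")

for district in ["Tacloban", "Ormoc", "Guiuan"]:
    n = coverage[district]
    if n > 1:
        print(f"{district}: {n} agencies vaccinating -- duplication")  # children vaccinated several times
    elif n == 0:
        print(f"{district}: no agency vaccinating -- gap")             # children missed entirely
    else:
        print(f"{district}: covered once")
```

Without a common format, each agency's plans sit in its own spreadsheets and the comparison above is impossible – which is how some children end up vaccinated three times and others not at all.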
Second, data of dismal quality. It's also hard to make evidence-driven decisions if the data quality is awful, as it often is in charities and philanthropy. The sole study we've ever seen of charity-sector data quality found that 15% of it was poor, only 30% was good, and some claims had no evidence at all. The technical term for this latter is 'fiction'. Part of the problem is the conflicting incentives (and the lack of social-science research skills) in the common situation where charities evaluate themselves: hence we've written about why most charities shouldn't evaluate their own work. We'll write more on this soon.
If you're a funder interested in publishing tales of failure, or in enabling work to sort out data quality and/or data-hiding in charities and philanthropy, do get in touch.
A few charities and donors do fess up. Here's what they say →