This webinar, given with the Campbell Collaboration and Porticus, is a great introduction to rigorous evidence in general, and specifically to the rigorous evidence around ‘what works’ in institutional responses to child abuse. It is part of our work on the latter, explained in more detail here, including producing an Evidence and Gap Map which shows what evidence exists, and a ‘Guidebook’ which summarises what it says.
We start from the beginning, explaining:
A foundation’s perspective on why rigorous evidence about ‘what works’ matters
What a fair trial is, and why fair trials matter
How to find all the fair trials on a particular topic: what a systematic review is
What an Evidence & Gap Map (EGM) is: how we structured our frame and how we coded studies
Issues that can affect the reliability of studies (i.e., introduce risks of bias), why those matter and how we handled them
The findings of our particular EGM
What we found when we matched up Porticus’ grantees against the evidence on the EGM
What evidence synthesis is and why it matters
How we synthesised and summarised the evidence on our EGM, and what that evidence says
The implications of this evidence-base for Porticus, for other funders / practitioners / policy-makers, and what we’re doing next.
Most charities should not evaluate their own impact. Funders should stop asking them to evaluate themselves. For one thing, asking somebody to mark their own homework was never likely to be a good idea.
This article explains the four very good reasons that most charities should not evaluate their own impact, and gives new data about how many of them are too small.
Most operational charities should not (be asked to) evaluate themselves because:
1. They have the wrong incentive. Their incentive is (obviously!) to make themselves look as great as possible – impact evaluations are used to compete for funding – so their incentive is to produce research that is flattering. That can mean rigging the research to make it flattering and/or burying findings that don’t flatter them. I say this having been a charity CEO myself and done both.
Non-profits respond to that incentive. For example, a rigorous study* offered over 1,400 microfinance institutions the chance to have their intervention rigorously evaluated. Some of the invitations included a (real) study by prominent authors indicating that microcredit is effective. Other invitations included information on (real) research – by the same authors using a very similar design – indicating that it is ineffective. A third set of invitations did not include research results. Guess what? The organisations whose invitations implied that the evaluation would find their intervention to be effective were twice as likely to respond and agree to be evaluated as those whose invitation implied the danger of finding their intervention to be ineffective. This suggests that the incentive creates a big selection bias even in which impact evaluations happen.
2. They lack the necessary skills in impact evaluation. Most operational charities are specialists in, say, supporting victims of domestic violence or delivering first aid training or distributing cash in refugee camps. These are completely different skills to doing causal research, and one would not expect expertise in these unrelated skills to be co-located. {NB, this article is about impact evaluation. Other types of evaluation, e.g., process evaluation, may be different. Also a few charities do have the skills to do impact evaluation – the IRC and GiveDirectly come to mind. But they are the exceptions. Most don’t.}
3. They often lack the funding to do evaluation research properly. One major problem is that a good experimental evaluation may involve gathering data about a control group which does not get the programme or which gets a different programme, and few operational charities have access to such a set of people.
A good guide is a mantra from evidence-based medicine: that research should “ask an important question and answer it reliably”. If there is not enough money (or sample size) to answer the question reliably, don’t try to answer it at all.
Caroline talking about exactly this to a group of major donors
4. They’re too small. Specifically, their programmes are too small: they do not have enough sample size for evaluations of just their programmes to produce statistically meaningful results, i.e., to distinguish the effects of the programme from those of other factors or random chance. As a result, the results of self-evaluations by operational charities are quite likely to be just wrong. For example, when the Institute for Fiscal Studies did a rigorous study of the effects of breakfast clubs, it needed 106 schools in the sample: that is way more than most operational charities providing breakfast clubs have.
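To make the sample-size point concrete, here is a minimal power calculation: a sketch in Python using statsmodels, where the 40% baseline reoffending rate and the hoped-for 5-percentage-point reduction are illustrative assumptions of ours, not figures from any particular study.

```python
# Rough sample-size calculation for comparing reoffending rates between
# a programme group and a comparison group. The rates below are
# illustrative assumptions, not figures from any particular study.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.40    # assumed reoffending rate without the programme
programme_rate = 0.35   # assumed rate with the programme (5-point reduction)

# Convert the two proportions into a standardised effect size (Cohen's h)
effect_size = proportion_effectsize(baseline_rate, programme_rate)

# People needed per group for 80% power at the conventional 5% significance level
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Needed per group: {n_per_group:.0f}")  # ~740 per group, ~1,500 in total
```

Under those assumptions, detecting the effect reliably takes roughly 1,500 people split across the two groups: far more participants than most small charities’ programmes ever have.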
Giving Evidence has done some proper analysis to corroborate this view that many operational charities’ programmes are too small to evaluate reliably. The UK Ministry of Justice runs a ‘Data Lab’, to which any organisation running a programme to reduce re-offending can apply to have that programme evaluated: the Justice Data Lab uses the MoJ’s data to compare the re-offending behaviour of participants in the programme with that of a similar (‘propensity score-matched’) set of non-participants. It’s glorious because, for one thing, it shows loads of charities’ programmes all evaluated in the same way, on the same metric (the 12-month reoffending rate), by the same independent researchers. It is the sole such dataset of which we are aware, anywhere in the world.
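For readers curious about what propensity-score matching involves, here is a minimal sketch in Python (NumPy and scikit-learn). The data is synthetic and the variable names are ours; the Justice Data Lab’s actual procedure is considerably more sophisticated.

```python
# Minimal sketch of propensity-score matching: estimate each person's
# probability of joining the programme from their characteristics, pair
# each participant with the non-participant whose score is closest, then
# compare reoffending rates across the matched groups. All data below is
# synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                            # covariates, e.g. age, prior convictions
joined = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # programme participation
reoffended = rng.binomial(1, 0.40 - 0.05 * joined)     # outcome (synthetic 5-point effect)

# 1. Estimate propensity scores: P(joining | characteristics)
scores = LogisticRegression().fit(X, joined).predict_proba(X)[:, 1]

# 2. Match each participant to the non-participant with the nearest score
t_idx, c_idx = np.where(joined == 1)[0], np.where(joined == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(scores[c_idx].reshape(-1, 1))
_, nearest = nn.kneighbors(scores[t_idx].reshape(-1, 1))
matched_controls = c_idx[nearest.ravel()]

# 3. Compare reoffending rates across the matched groups
print("participants:    ", reoffended[t_idx].mean().round(3))
print("matched controls:", reoffended[matched_controls].mean().round(3))
```

The matching step is what lets the Data Lab mimic a comparison group without running a randomised trial, though it can only adjust for characteristics that are recorded in the data.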
In the most recent data (all its analyses up to October 2020), the JDL had analysed 104 programmes run by charities (‘the voluntary and community sector’), of which fully 62 proved too small to produce conclusive results. In other words, 60% of the charity-run programmes were too small to evaluate reliably.
The analyses also show the case for reliable evaluation, rather than just guessing which charity-run programmes work or assuming that they all do:
a. Some charity-run programmes create harm: they increase reoffending, and
b. Charity-run programmes vary massively in how effective they are.
Hence most charities should not be PRODUCERS of research. But they should be USERS of rigorous, independent research – about where the problems are, why, what works to solve them, and who is doing what about them. We’ve written about this amply elsewhere.
* I particularly love this study because of how I came across it. It was mentioned by a bloke I got talking to in a playground while looking after my godson. The playground happens to be between MIT and Harvard, so draws an unusual crowd, but still. Who needs research infrastructure when you can just chat to random strangers in the park?…
What aids and impedes donors using evidence to make their giving more effective? This question motivated two researchers at the University of Birmingham to do a wide search of the academic and non-academic literature to find studies that provide answers. The findings are in a systematic review published last month. It’s remarkable. It finds that we – the human race – don’t really know yet what aids and impedes donors using evidence, because nobody has yet investigated properly.
This matters because some interventions run by charities are harmful. Some produce no benefit at all. And even interventions which do succeed vary hugely in how much good they do. So it can be literally vital that donors choose the right ones. Only sound evidence of effectiveness can reliably guide donors to them.
A note about Royal charity patronages re the passing of HM Queen Elizabeth II:
HM Queen had ~600 patronages. They were not all charities: many were parts of the military, cities, trade guilds etc.
Giving Evidence found that Her Majesty was patron of 198 UK registered charities. (The information published by the Royal Family about “charities and patronages” (their term) was incomplete, inconsistent and sometimes just wrong. So we had to construct that list: it took us about six person-weeks.) The charities of which HMQ was patron are listed here.
Giving Evidence’s data and analysis below show the types of charities of which the Royals are patrons. They are disproportionately large charities.
It is unclear how patronages are decided or how they are passed between Royals. It seems that the Royals decide amongst themselves: e.g., it was reported that Prince Andrew had returned some to the Queen to be redistributed. So presumably at least some of HMQ’s patronages – and possibly some of Prince/King Charles’ – will be redistributed (though it will be a bit difficult because the exits of Princes Philip, Harry & Andrew mean that there are fewer adult royals than previously.)
In short, we found that charities should not seek or retain Royal patronages expecting that they will help much.
74% of charities with Royal patrons did not get any public engagements with them last year. We could not find any evidence that Royal patrons increase a charity’s revenue (there were no other outcomes that we could analyse), nor that Royalty increases generosity more broadly. Giving Evidence takes no view on the value of the Royal family generally. The findings are summarised in this Twitter thread.
How are donors responding to the pandemic? What should they be doing? What will the long-term effects be on #philanthropy?
Giving Evidence’s Director Caroline Fiennes discussed all this with The Business Of Giving in this interview.
We also discussed Giving Evidence’s research about the effect of various ‘ways of giving’: a long-standing interest (see our article in the scientific journal Nature and our ground-breaking research).
Much attention is paid to what donors fund, but very little is paid to how they fund. Questions about how to fund include whether/when to give with restrictions, whether to give a few large grants vs. many smaller ones, what application and reporting processes to have, and how to make decisions about what to fund. The lack of attention to these ‘how to fund’ questions is despite the facts that (i) they evidently affect funders’ effectiveness, and (ii) they can be investigated empirically. We have written about this in the top scientific journal Nature, and with the University of Chicago.
Weirdly, the COVID-19 crisis creates a (rather morbid) opportunity to investigate empirically the effects of some funder behaviours.
Giving Evidence is starting a new project to do this empirical research. Continue reading →
Clearly, communities and charities are under great strain at the moment. A vast number of people in the UK have less than one week’s savings. Charities are doing all manner of work, and the crisis is expected to cost them at least £4 billion(!)
Many people believe that charities waste money on ‘administration’, and hence that the best charities spend little on administration. A strong form of this view, which one sometimes hears, is that the best charities are by definition those which spend little on administration, i.e., that you can tell how good a charity is just by looking at its admin costs.
It’s nonsense. The amount that charities spend on administration is (probably) totally unrelated to whether their intervention is any good. If I have an intervention which, to take a real example, is supposed to decrease the number of vulnerable teenagers who get pregnant, but in fact does the opposite and increases it, then it doesn’t matter how low the administrative costs are: the fact is that the intervention doesn’t work. As Michael Green, co-author of Philanthrocapitalism: How Giving Can Save The World, says: ‘A bad charity with low administration costs is still a bad charity’. Continue reading →
I know. You’ve never heard of the Flemish Red Cross. You realise that such a thing must exist, but you’d never hitherto thought about it, right?
Well, you should know about it because it’s amazing. Of all the operational charities I’ve encountered, it is easily the most sophisticated in terms of use and production of decent evidence – and seeing as I’ve been in this sector now for >18 years, I’ve seen a lot. A clue is that it has 12 post-doctoral researchers on payroll, most of whose output goes into the peer-reviewed academic literature. Continue reading →
Apparently ~3000 organisations have Royal patrons. About 200 have this week lost their relationship with Prince Andrew. Securing and maintaining a relationship with a Royal is work: is it worth it? It seems that nobody knows. Giving Evidence is going to investigate. {Update: the results of this analysis are here, published in summer 2020.}
This is a question about donor effectiveness: the patrons probably think that they are helping the charities, but donors are often rather less helpful than they think they are. It’s reasonable – and possible – to assess the effectiveness of donors, as we have said elsewhere. It is also a question about charity effectiveness: how should charities best allocate their scarce resources? We will specifically be looking at whether & how much & when Royal patrons – and with luck other celeb patrons – help charities. Continue reading →