Announcing a rating of UK foundations on their transparency, accountability and diversity

UK charitable foundation staff and trustees are very white and very male. They’re also often senior in years, and pretty posh. None of those characteristics is necessarily a problem in itself, but (a) the homogeneity creates a risk of lacking diversity of views, experiences and perspectives – and increasing diversity has been shown in many studies and settings to lead to better decisions and better results; and (b) foundation staff and boards may collectively have little experience of (and hence little understanding of) the problems they seek to solve, and few insights into the communities they seek to serve.

Funded by a group of UK grant-making foundations, Giving Evidence will rate UK foundations on their diversity. We will also look at their transparency – e.g., how easy it is to find out what they fund and when – and their accountability practices – e.g., whether they have a complaints process, whether they cite the basis on which they assess applications and make decisions, whether they publish any analyses of their own effectiveness. 


Why most ratings of charities are useless: the available information isn’t important and the important information isn’t available

A Which? Magazine-type reliable rating of a wide range of charities would indeed be helpful. Unfortunately it’s currently impossible.

Most months, somebody contacts me saying that they’re setting up some website / app to rate loads of charities – to help donors choose charities and/or to ‘track their impact’. I ask what research and information the thing uses to assess charities’ performance; they always turn out to be using basically the charity’s report and accounts. Those are no good for this purpose.

A charity’s accounts are about money: how much came in, where it went and how much is left. Sometimes they say where it all came from (charity accounts always delineate categories of income, such as donations vs. earned income, but it’s optional to specify who made the donations). That’s it. You can’t identify effectiveness (‘impact’*) by looking at the accounts: for example, here we show the relative effects of various charities’ work to reduce reoffending. Those data are great but they’re not in the accounts.

A charity’s annual report has relatively few requirements: stating who the trustees are and what the charity’s purposes are, and including the auditor’s report if the charity is above a particular size. Some charities say a lot about what they have done; others don’t. Some say why they chose to do what they do, and how and where; others don’t. {I’m talking about the UK. Other countries’ requirements are different, though most require even less public disclosure than we do, I think.}

Charities’ reports and accounts rarely say much about effectiveness. This is because most charities don’t know much about their effectiveness. That, in turn, is because establishing effect is hard and expensive, requires sample sizes that few of them have, and the incentives on them are all wrong (see here). Charities’ reports and accounts also rarely say much about need, and particularly not about the relative sizes of different needs nor how the intended beneficiaries prioritise those needs.

Charities’ accounts do say something about the proportion of costs spent on administration and on fundraising. It is a mistake to assume that high spending on these items means that an organisation is ineffective. Giving Evidence produced the first-ever empirical data supporting that statement, and anyway it’s obvious if you think about the false economies of employing cheap people or having cheap equipment. This BBC interviewer figured that out live on-air. Also:

  • If a programme doesn’t work, it doesn’t matter how much or how little you spend on admin. It doesn’t work. But you can’t tell that it doesn’t work by looking at the accounts.
  • FYI, the rules around which costs get classed as ‘administration’ are much vaguer than you might think, so charities probably vary quite widely in what they mean by that term.

And even if charities’ reports and accounts do explain the need that the charity serves and/or its effectiveness at doing so, they are most unlikely to say much which enables the charity to be compared to other charities. That of course is what rational donors want to know. This lack of comparative information is partly because charities can each choose what impact measures they use, when they use them, and they often have interventions which are ostensibly unique to them.

The charities also normally choose the research methods they use: even if two charities run the same programme and evaluate it with the same tool (say, the Goodman Strengths & Difficulties Questionnaire), they are likely to get quite different estimates of impact if one does a simple pre-post study, one does a randomised controlled trial, and one does a non-randomised controlled trial. (The fact that different methods produce different results is precisely why it is important to understand research methods and choose the right one.)
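To see why the choice of method matters so much, here is a minimal simulation. The numbers are entirely made up (they are not from the SDQ or any real evaluation): a programme with a genuine but modest effect, delivered to people whose scores were improving anyway. A pre-post study credits the programme with all of the improvement; a controlled comparison does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, purely for illustration: a programme with a true effect of
# +2 points, delivered to people whose scores improve by +3 points a year anyway.
n = 500
natural_change = 3.0   # improvement that would have happened regardless
true_effect = 2.0      # the programme's genuine effect

treated_before = rng.normal(20, 5, n)
treated_after = treated_before + natural_change + true_effect + rng.normal(0, 2, n)

control_before = rng.normal(20, 5, n)
control_after = control_before + natural_change + rng.normal(0, 2, n)

# Simple pre-post study: participants compared with their own earlier scores,
# so the improvement that would have happened anyway is counted as 'impact'.
pre_post_estimate = (treated_after - treated_before).mean()

# Controlled comparison (e.g. an RCT): the change among participants minus the
# change among an equivalent untreated group, so the background trend cancels out.
controlled_estimate = (treated_after - treated_before).mean() - (control_after - control_before).mean()

print(f"pre-post estimate:   {pre_post_estimate:.1f}")    # about 5: overstates the effect
print(f"controlled estimate: {controlled_estimate:.1f}")  # about 2: close to the true effect
```

Two ‘evaluations’ of exactly the same programme, giving answers more than double apart, simply because of the method chosen.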

Charities’ reports and accounts do not include this information because that is not what they are for. The accounts are regulatory filings: the regulator’s remit does not include effectiveness. Accounts are about money, and you cannot identify impact by looking at where money comes from nor where it goes. And the annual report is partly about that money and partly what might loosely be called marketing material. That is completely different to a rigorous, independent assessment of effect.

So, nobody should assess a charity’s effectiveness – or quality, or the extent to which people should support it – on just its reports and accounts. Even though those are the sole data that are readily available.

Let’s turn then to what a donor does need in order to assess a charity. Well, as mentioned, that includes data about:

  • The scale, nature and location of the need being served.
  • How the intended beneficiaries prioritise those needs.
  • The evidence underlying the proposed intervention/s
  • How the intended beneficiaries feel about those intervention/s. (If people believe that the chlorine you want them to add to their water – to purify it and reduce the incidence of diarrhoea, which can be fatal – is in fact a poison, they will not add it when you are not looking, and the intervention will fail.)
  • A robust and independent assessment of effectiveness of the intervention/s as delivered by that charity.
  • A comparison of the effectiveness – and ideally, of cost-effectiveness – of various organisations’ solutions to that need.

For many charities, those data simply don’t exist. And for the ones where they do exist, they are far from readily available: one needs to dig them out, normally from multiple sources. It is complex and expensive work. Such analysis exists for some sectors.

Charity Navigator, which rates probably the world’s largest set of charities (all in the US), uses financial filings and adds other information where possible.

Hence my view: the available information (reports and accounts) is not what you need to assess charities; and the information that you do need to assess charities is normally not available.

What to do?

For one thing, don’t produce whizzy graphics and platforms that re-cook irrelevant and unimportant data. That is, don’t try to assess charities using just their reports and accounts. Ever.

There are some decent independent analysts who use the kinds of data described above and who get to more reliable answers: they include GiveWell (minus the recommendations about deworming) and ImpactMatters (now part of Charity Navigator).

For everything else? There is sometimes a way round, and Giving Evidence is working on creating a solution that uses it. It will be wider than what currently exists, but still only cover the small set of charities for which the relevant information is available. Hopefully that will grow over time.

But in the meantime, let’s not effectively train donors to use some platform that will in fact mislead them.

*I happen to dislike the term ‘impact’ because I grew up in physics where an impact means a collision, normally between two objects which are inanimate, and which sometimes destroys one or both of them.


Webinar: intro to evidence, and the evidence about child abuse

This webinar, given with the Campbell Collaboration and Porticus, is a great introduction to rigorous evidence in general, and specifically the rigorous evidence around ‘what works’ in institutional responses to child abuse. It is part of our work on the latter, explained in more detail here, including producing an Evidence and Gap Map which shows what evidence exists, and a ‘Guidebook’ which summarises what it says.

We start from the beginning, explaining:

  • A foundation’s perspective on why rigorous evidence about ‘what works’ matters
  • What a fair trial is, and why fair trials matter
  • How to find all the fair trials on a particular topic: what a systematic review is
  • What an Evidence & Gap Map (EGM) is: how we structured our framework and how we coded studies
  • Issues that can affect the reliability of studies (i.e., introduce risks of bias), why those matter and how we handled them
  • The findings of our particular EGM
  • What we found when we matched up Porticus’ grantees against the evidence on the EGM
  • What evidence synthesis is and why it matters
  • How we synthesised and summarised the evidence on our EGM, and what that evidence says
  • The implications of this evidence-base for Porticus, for other funders / practitioners / policy-makers, and what we’re doing next.
  • Then we answer audience questions.

Loads more about this work is here.


Many (many!) charities are too small to measure their own impact

Most charities should not evaluate their own impact. Funders should stop asking them to evaluate themselves. For one thing, asking somebody to mark their own homework was never likely to be a good idea.

This article explains the four very good reasons that most charities should not evaluate their own impact, and gives new data about how many of them are too small.

Most operational charities should not (be asked to) evaluate themselves because:

1. They have the wrong incentive. Their incentive is (obviously!) to make themselves look as great as possible – impact evaluations are used to compete for funding – so their incentive is to produce research that is flattering. That can mean rigging the research to make it flattering and/or burying findings that don’t flatter them. I say this having been a charity CEO myself and having done both.

Non-profits respond to that incentive. For example, a rigorous study* offered over 1,400 microfinance institutions the chance to have their intervention rigorously evaluated. Some of the invitations included a (real) study by prominent authors indicating that microcredit is effective. Other invitations included (real) research – by the same authors, using a very similar design – indicating that microcredit is ineffective. A third set of invitations did not include research results. Guess what? The organisations whose invitations implied that the evaluation would find their intervention to be effective were twice as likely to respond and agree to be evaluated as those whose invitation implied a risk of finding their intervention to be ineffective. This suggests that the incentive creates a big selection bias even in which impact evaluations happen.
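To see how that plays out, here is a tiny, hypothetical simulation (the numbers are invented, not taken from the study): if programmes expecting a flattering result are twice as likely to agree to be evaluated, then the evaluations that actually happen paint a rosier picture than the sector as a whole.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: 1,000 programmes whose true effects average zero.
true_effects = rng.normal(loc=0.0, scale=1.0, size=1000)

# Assume programmes that expect a flattering result are twice as likely to agree
# to be evaluated (the 2:1 ratio echoes the study's response rates, but the
# mechanism and all other numbers here are invented for illustration).
prob_agree = np.where(true_effects > 0, 0.40, 0.20)
agreed = rng.random(1000) < prob_agree

print(f"average true effect, all programmes:       {true_effects.mean():+.2f}")
print(f"average true effect, evaluated programmes: {true_effects[agreed].mean():+.2f}")
# The second number is clearly higher: the evaluations that get done are a
# flattering, non-random sample of the programmes that exist.
```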

2. They lack the necessary skills in impact evaluation. Most operational charities are specialists in, say, supporting victims of domestic violence or delivering first aid training or distributing cash in refugee camps. These are completely different skills to doing causal research, and one would not expect expertise in these unrelated skills to be co-located. {NB, this article is about impact evaluation. Other types of evaluation, e.g., process evaluation, may be different. Also a few charities do have the skills to do impact evaluation – the IRC, Give Directly come to mind. But they are the exceptions. Most don’t.}

3. They often lack the funding to do evaluation research properly. One major problem is that a good experimental evaluation may involve gathering data about a control group which does not get the programme or which gets a different programme, and few operational charities have access to such a set of people.

A good guide is a mantra from evidence-based medicine: research should “ask an important question and answer it reliably”. If there is not enough money (or sample size) to answer the question reliably, don’t try to answer it at all.

Caroline talking about exactly this to a group of major donors

4. They’re too small. Specifically, their programmes are too small: they do not have enough sample size for evaluations of just their programmes to produce statistically meaningful results, i.e., to distinguish the effects of the programme from those of other factors or random chance. Consequently, self-evaluations by operational charities are quite likely to be just wrong. For example, when the Institute for Fiscal Studies did a rigorous study of the effects of breakfast clubs, it needed 106 schools in the sample: that is way more than most operational charities providing breakfast clubs have.
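For a rough sense of the numbers involved, here is a standard sample-size calculation for comparing two proportions. The reoffending rates below are invented purely for illustration; this is not the IFS’s or anyone else’s actual calculation.

```python
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants needed in each arm to detect a change from rate p1 to rate p2
    with a two-sided test at significance level alpha and the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2) + 1

# Illustrative, made-up numbers: detecting a drop in a 12-month reoffending rate
# from 40% to 35% needs roughly 1,500 people in each group...
print(n_per_group(0.40, 0.35))   # -> 1468 per arm

# ...whereas with only about 80 people per arm, you could only reliably detect
# an implausibly enormous effect (40% down to 20%).
print(n_per_group(0.40, 0.20))   # -> 79 per arm
```

A charity whose programme reaches a few dozen people a year simply cannot get a reliable answer on its own, however carefully it measures.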

Giving Evidence has done some proper analysis to corroborate this view that many operational charities’ programmes are too small to reliably evaluate. The UK Ministry of Justice runs a ‘Data Lab’, which any organisation running a programme to reduce re-offending can ask to evaluate that programme: the Justice Data Lab uses the MoJ’s data to compare the re-offending behaviour of participants in the programme with that of a similar (‘propensity score-matched’) set of non-participants. It’s glorious because, for one thing, it shows loads of charities’ programmes all evaluated in the same way, on the same metric (12-month reoffending rate) by the same independent researchers. It is the sole such dataset of which we are aware, anywhere in the world.
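For readers unfamiliar with propensity score matching, here is a bare-bones sketch of the general idea, on synthetic data with invented column names. It is not the Justice Data Lab’s actual model (the MoJ publishes its methodology); it just shows the shape of the approach: model each person’s propensity to have joined the programme, match participants to similar non-participants, then compare outcomes.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)

# Synthetic data: 'treated' means the person took part in the charity's programme.
n = 5000
df = pd.DataFrame({
    "age": rng.integers(18, 60, n),
    "prior_offences": rng.poisson(3, n),
    "treated": rng.random(n) < 0.1,
})
# Made-up outcome model: reoffending depends on age, prior offences and the programme.
logit = -1.0 + 0.15 * df["prior_offences"] - 0.01 * df["age"] - 0.3 * df["treated"]
df["reoffended"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# 1. Model each person's propensity to be in the programme, given their characteristics.
X = df[["age", "prior_offences"]]
df["propensity"] = LogisticRegression().fit(X, df["treated"]).predict_proba(X)[:, 1]

# 2. Match each participant to the non-participant with the closest propensity score.
treated = df[df["treated"]]
control = df[~df["treated"]]
nn = NearestNeighbors(n_neighbors=1).fit(control[["propensity"]])
_, idx = nn.kneighbors(treated[["propensity"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare 12-month reoffending between participants and their matched comparators.
print(f"participants:        {treated['reoffended'].mean():.1%}")
print(f"matched comparators: {matched_control['reoffended'].mean():.1%}")
```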

In the most recent data (covering all its analyses up to October 2020), the JDL had analysed 104 programmes run by charities (‘the voluntary and community sector’), of which fully 62 proved too small to produce conclusive results. That is, 60% of the charity-run programmes were too small to evaluate reliably.

The analyses also show the case for reliable evaluation, rather than just guessing which charity-run programmes work or assuming that they all do:

a. Some charity-run programmes create harm: they increase reoffending, and

b. Charity-run programmes vary massively in how effective they are.

Hence most charities should not be PRODUCERS of research. But they should be USERS of rigorous, independent research – about where the problems are, why, what works to solve them, and who is doing what about them. We’ve written about this amply elsewhere.

* I particularly love this study because of how I came across it. It was mentioned by a bloke I got talking to in a playground while looking after my godson. The playground happens to be between MIT and Harvard, so draws an unusual crowd, but still. Who needs research infrastructure when you can just chat to random strangers in the park?…

 

More about why charities shouldn’t evaluate themselves →


We don’t know how to get donors to use more evidence to improve their giving

This article was first published in Alliance Magazine.

What aids and impedes donors using evidence to make their giving more effective? This question motivated two researchers at the University of Birmingham to do a wide search of the academic and non-academic literature to find studies that provide answers. The findings are in a systematic review published last month. It’s remarkable. It finds that we – the human race – don’t really know yet what aids and impedes donors using evidence – because nobody has yet investigated properly.

This matters because some interventions run by charities are harmful. Some produce no benefit at all. And even interventions which do succeed vary hugely in how much good they do. So it can be literally vital that donors choose the right ones. Only sound evidence of effectiveness of giving can reliably guide donors to them.


Royal patronages of charities don’t seem to help charities much

Giving Evidence today publishes research about Royal patronages of charities: what are they, who gets them, and do they help? This fits within our work of providing robust evidence so that charities and donors can be as effective as possible.

In short, we found that charities should not seek or retain Royal patronages expecting that they will help much. 

74% of charities with Royal patrons did not get any public engagements with them last year. We could not find any evidence that Royal patrons increase a charity’s revenue (there were no other outcomes that we could analyse), nor that Royalty increases generosity more broadly. Giving Evidence takes no view on the value of the Royal family generally. The findings are summarised in this Twitter thread.


How is philanthropy responding to Covid19? How should it respond?

How are donors responding to the pandemic? What should they be doing? What will the long-term effects be on #philanthropy?

Giving Evidence’s Director Caroline Fiennes discussed all this with The Business Of Giving in this interview.

We also discussed Giving Evidence’s research about the effect of various ‘ways of giving’: a long-standing interest (see our article in the scientific journal Nature and our ground-breaking research).


Identifying the Effects of Various Ways of Giving: Using the ‘Opportunity’ of the Covid19 Crisis

New project!

Much attention is paid to what donors fund, but very little is paid to how they fund. Questions about how to fund include whether/when to give with restrictions, whether to give a few large grants vs. many smaller ones, what application and reporting processes to have, how to make the decisions about what to fund. The lack of attention to these ‘how to fund’ questions is despite the facts that (i) they evidently affect funders’ effectiveness, and (ii) they can be investigated empirically. We have written about this in the top scientific journal Nature, and with the University of Chicago.

Weirdly, the COVID19 crisis creates a (rather morbid) opportunity to investigate empirically the effects of some funder behaviours.

Giving Evidence is starting a new project to do this empirical research.


Giving during COVID-19

Clearly, communities and charities are under great strain at the moment. A vast number of people in the UK have less than one week’s savings. Charities are doing all manner of work, and the crisis is expected to cost them at least £4 billion(!).

Please give.


We tried to update our analysis of charities’ performance and their admin costs, and you won’t BELIEVE what happened next!

Many people believe that charities waste money on ‘administration’, and hence that the best charities spend little on administration. A strong form of this view is that the best charities are by definition those which spend little on administration, i.e., you can tell how good a charity is just by looking at their admin costs: one sometimes hears this view.

It’s nonsense. The amount that charities spend on administration is (probably) totally unrelated to whether their intervention is any good. If I have an intervention which, to take a real example, is supposed to decrease the number of vulnerable teenagers who get pregnant, but in fact does the opposite and increases it, then it doesn’t matter how low the administrative costs are: the fact is that the intervention doesn’t work. As Michael Green, co-author of Philanthrocapitalism: How Giving Can Save The World, says: ‘A bad charity with low administration costs is still a bad charity’.
