Reducing the Administrative Burden Placed on UK Charities by UK Donors and Funders

New project!

Giving Evidence is delighted to be starting a project to study funders’ application processes – to try to figure out how to reduce the costs that funders create for operational nonprofits. This is a hugely important topic, about which we have written publicly and on which we have been seeking to work for a while. We have now teamed up with the Law Family Commission on Civil Society, run by Pro Bono Economics, which exists to ‘unleash the full potential of civil society’ – and a considerable ‘leash’ (constraint) on civil society organisations is the cost they bear from charitable funders’ application and reporting processes.

What is the issue here?

Charities and civil society organisations (CSOs) spend masses of time (=money) applying to funders. If they do not get the funding, most of that cost is wasted: specifically, it reduces the amount of work and good that they can do with their available resources. So we can think of it in terms of the efficiency of the process (we mean ‘efficiency’ in the mechanical, engineering-type sense, i.e., the amount of output achieved for a given amount of input, vs the amount that is wasted.) In economics-speak, application costs raise the cost of capital for CSOs.
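To make the ‘cost of capital’ point concrete, here is a tiny worked sketch (in Python, using entirely invented numbers – these are illustrations, not findings from this project). It computes the expected value of submitting one application, and how much a charity spends on applications, on average, for each grant it actually wins.

    # Illustrative only: all figures are hypothetical, not data from our research.
    application_cost = 2_000   # staff cost of preparing one application (£)
    success_rate = 0.15        # chance that any one application is funded
    grant_size = 25_000        # value of the grant if successful (£)

    # Expected value of submitting one application
    expected_value = success_rate * grant_size - application_cost

    # Expected number of applications (and hence cost) per grant actually won
    applications_per_grant = 1 / success_rate
    cost_per_grant_won = applications_per_grant * application_cost
    effective_cost_of_capital = cost_per_grant_won / grant_size

    print(f"Expected value of one application: £{expected_value:,.0f}")
    print(f"Application cost per grant won:    £{cost_per_grant_won:,.0f}")
    print(f"Effective 'cost of capital':       {effective_cost_of_capital:.0%} of each grant")

On those invented numbers, the sector spends over half of each grant’s value writing the applications needed to win it – and most of that cost falls on the unsuccessful applicants, who are invisible to the funder.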

Application processes are created by funders. Some of the costs are borne by the funder (e.g., its staff time reading the forms), but other costs fall on non-profits: they are ‘externalised’ by the funder, and so are invisible to it, and rarely actively managed. {There are some honourable exceptions: BBC Children in Need is one.} Hence, in a bad case, a funder’s process can create so much work for other organisations that its costs exceed the amount being given – without the funder even realising. We have seen instances of this.

Most funders have their own application forms and processes. That increases work and wastage. And some funders invite way more applications than they need. So we seek to reduce this wastage. (There is some analysis here by Time to Spare of the scale of that wastage: it estimates that 46% of UK grants cost more than they’re worth.)

Johann Sebastian Bach:

Oh, that’s my application to the court. If I get it, I’ll have spent all of it on that very application.

From Bach and Sons, a play by Nina Raine[1]

But reducing this wastage is not trivial, for various reasons. First, some application processes are helpful to applicants, even if they do not get the funding. Second, the application process may serve some useful purpose for the funder, such that ditching it would be an error. And third, we are – always! – alive to the possibility of unintended consequences.

Now is a particularly good time to work on this topic because many funders changed their practices in response to the pandemic, becoming faster and leaner – so may be open to embedding new practices. For instance, 67 funders in London collectively created one-stop shop application processes with a slimmed-down application form, and funders in Jersey also collectively created a new collaborative process.

Our approach

We will treat this as a behaviour-change exercise. That is, we are not looking simply to document these costs, but rather to understand why and where they arise, the benefits to the various players as well as the costs, and to identify the likely effects of approaches which might reduce them.

We seek to really understand the likely effects of possible fixes, as well as their likely take-up. This seems to be an innovation in discussions and work on this issue.

What we will do

Clearly we will start with existing work on this issue and seek to augment it. So our workstreams are:

Understanding what the current behaviours are and why they arise. We will interview foundations – and, crucially, also some operational nonprofits. We seek to understand what funders are trying to achieve with these processes, e.g., reducing their own costs, identifying the strongest applications / organisations / approaches, limiting work for their teams, or limiting the number of applications to reduce costs to applicants. We will investigate how operational charities decide whether it is worth applying to a particular funder – the extent to which they take into account the costs of applying and the chances of success (the ‘expected value’ of their application). As far as possible, in these interviews, we will look at other aspects of funding practice, such as restrictions and grant duration.

For example, here is the Hewlett Foundation describing how it designed its processes to minimise burden / maximise effectiveness of its grantees. (Yet another reason that I have a massive crush on the Hewlett Foundation…)

Discussing possible fixes with foundations and experts. We will interview foundations and sector experts about potential fixes, whether new or already attempted. We are aware of, for example, the #FixTheForm initiative; a study a while ago by NPC called Turning the Tables; a study by the University of Bath; the feedback about funders being gathered by GrantAdvisor; Project Streamline in the US; and various attempts at shared applications and shared reporting. The goal is to gain insights into the dynamics that they have encountered, and their views on drawbacks and feasibility of various proposed fixes.

Economic modelling: Understanding the likely effects of potential fixes. What people say they will do and what they actually do are often different! As well as listening to foundations and charities, we will do some economic modelling, to identify the behavioural changes that can realistically be expected, the scale of savings that potential fixes might produce and to whom they would accrue – and also to uncover unintended consequences (including adverse effects) which might arise from changes to the system. This approach is different from much of the research in the charity sector: we hope that it will bring additional insight.
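As a flavour of what we mean by modelling – and no more than that – here is a deliberately toy sketch in Python. Every parameter is invented (the number of applications, the hours each one takes, the staff cost), and the ‘fix’ (a slimmed-down shared form) is just one hypothetical example; the point is only that the savings depend heavily on take-up.

    # Toy model of one hypothetical fix: a shared, slimmed-down application form.
    # All parameters are invented for illustration.
    N_APPLICATIONS = 100_000      # applications submitted per year, sector-wide
    HOURS_PER_APPLICATION = 20    # charity staff hours per application today
    HOURS_IF_STREAMLINED = 8      # hours per application if the fix is adopted
    STAFF_COST_PER_HOUR = 25      # £ per hour of charity staff time

    def annual_saving(take_up: float) -> float:
        """Sector-wide saving (£/year) if a given fraction of funders adopt the fix."""
        hours_saved = N_APPLICATIONS * take_up * (HOURS_PER_APPLICATION - HOURS_IF_STREAMLINED)
        return hours_saved * STAFF_COST_PER_HOUR

    for take_up in (0.1, 0.3, 0.6, 1.0):
        print(f"take-up {take_up:>4.0%}: saving of roughly £{annual_saving(take_up):,.0f} per year")

A real model would of course also count the costs of switching, the benefits funders currently get from their own forms, and any new costs the fix itself creates.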

If you have worked on this issue before – in any country – please get in touch! We would love to hear from you.


[1] This play is new and only on-stage and the script not yet published: I may have misremembered this quote a bit.


Letter in The Economist about anti-malarial bednets

Giving Evidence’s Director, Caroline Fiennes, has a letter in The Economist this week.

Giving Evidence’s existence is about directing philanthropic resources to effective & cost-effective work. So we were horrified by a letter in The Economist two weeks ago which appeared to claim (it was rather unclear) that anti-malarial bednets “fail” because they [all?] get used for fishing. It’s just not true. Masses of high-quality research evidence shows that bednets reduce the incidence of malaria and thus save lives. (Plus, as you will know if you have ever slept in a room with mosquitoes in it, they save much annoyance.)

Many philanthropic donors fund bednets – from people donating £2 right through to the Gates Foundation. It is not acceptable for them to be deterred from supporting that life-saving work by erroneous information. So Caroline wrote in The Economist to put the record straight – along with long-time ally Professor Paul Garner of the (relevant!) Liverpool School of Tropical Medicine, and Co-ordinating Editor of the Cochrane Infectious Diseases Group. (Paul invited Caroline to give a talk (which is here) at the Liverpool School, having seen her on BBC News talking about charities and evidence (here) in the wake of the charity Kids Company collapsing.)

Our letter says:

Alex Nicholls rightly warns against focusing on outputs rather than outcomes in philanthropic programmes (Letters, October 16th). But his example, that antimalarial bednet schemes “failed”, is incorrect. Contrary to some reporting, few bednets get used for fishing. A four-country study of over 25,000 bednets found fewer than 1% were being misused. A comprehensive analysis by Cochrane, an independent network of researchers, of 23 medical trials encompassing nearly 300,000 people showed that bednets reduced deaths by a third. A study at Oxford concluded that they averted around 663m cases of malaria in Africa between 2000 and 2015. These important outcomes, by charities and others, should be applauded.

Here are the studies that we cite:

There is a persistent trope about bednets getting used for fishing. Here are two relevant facts about that:

  • Fishing communities are only 1% of Africa’s communities at risk of malaria, according to Professor Pascaline Dupas, a development economist at Stanford. (https://web.stanford.edu/~pdupas/Dupas_letter_editor_NYT_malaria_nets.pdf).
  • Bednets don’t last forever – they get holes etc. So even if some are used for fishing, that doesn’t prove that that was INSTEAD of being used over beds. (None of which is a comment on the effect of bednets on fish stocks. We’re only talking here about whether bednets ‘fail’ at their primary goal of preventing malaria – which they don’t.)

To be clear, Giving Evidence has no professional or commercial interest in bednets. We don’t work on them – other than as an illustration of cost-effective work. We are just trying to keep people alive.

We don’t know why Alex Nicholls, an Oxford professor of social enterprise, wrote that letter. Nor what evaluation/s he was using. We have asked but he hasn’t replied. We also asked which specific bednet programme he was referring to: he hasn’t answered that either, but in fact the sole bednet programme cited in the article to which he referred is hypothetical, so it makes no sense for him to claim to know its results(!)

Sometimes the term ‘social enterprise’ is used to mean ‘pro-social or pro-environmental organisations that charge money’ so as to be financially viable. Bednets may sit badly with that philosophy because the evidence (see graph below) is that charging for bednets massively reduces usage and hence results – i.e., there’s a trade-off between earned income and impact. 

In fact, in every instance that we’ve examined, it turns out that charging for the product reduces uptake and results (including bednets, soap, solar lamps). ‘Impact investors’ beware. Caroline wrote in the Financial Times about that, here.

Making giving decisions based on sound evidence, rather than random anecdote, ensures that our resources are best used – and keeps people alive. This is Giving Evidence’s work. If you would like to talk to us about your giving, please get in touch.

(Source: JPAL, The Price Is Wrong, 2011 here.)


Announcing a rating of UK foundations on their transparency, accountability and diversity

UK charitable foundation staff and trustees are very white and very male. They’re also often senior in years, and pretty posh. None of those characteristics is necessarily a problem in itself, but (a) the homogeneity creates a risk of lacking diversity of views, experiences and perspectives – and increasing diversity has been shown in many studies and settings to lead to better decisions and better results; and (b) foundation staff and boards may collectively have little experience of – and hence little understanding of – the problems they seek to solve, and little insight into the communities they seek to serve.

Funded by a group of UK grant-making foundations, Giving Evidence is rating UK foundations on their diversity. We are also looking at their transparency – e.g., how easy it is to find out what they fund and when – and their accountability practices – e.g., whether they have a complaints process, whether they cite the basis on which they assess applications and make decisions, whether they publish any analyses of their own effectiveness. (Read an article in Alliance Magazine about this project.)


Why most ratings of charities are useless: the available information isn’t important and the important information isn’t available

A Which? Magazine-type reliable rating of a wide range of charities would indeed be helpful. Unfortunately it’s currently impossible.

Most months, somebody contacts me saying that they’re setting up some website / app to rate loads of charities – to help donors choose charities and/or to ‘track their impact’. I ask what research and information the thing uses to assess charities’ performance; they always turn out to be using basically the charity’s report and accounts. Those are no good for this purpose.

A charity’s accounts are about money: how much came in, where it went and how much is left. Sometimes they say where it all came from (charity accounts always delineate categories of income, such as donations vs. earned income, but it’s optional to specify who made the donations). That’s it. You can’t identify effectiveness (‘impact’*) by looking at the accounts: for example, here we show the relative effects of various charities’ work to reduce reoffending. Those data are great but they’re not in the accounts.

A charity’s annual report has relatively few requirements, beyond stating who the trustees are and what the charity’s purposes are, and including the auditor’s report if the charity is above a particular size. Some charities say a lot about what they have done; others don’t. Some say why they chose to do what they do, and how and where; others don’t. {I’m talking about the UK. Other countries’ requirements are different, though most require even less public disclosure than we do, I think.}

Charities’ reports and accounts rarely say much about effectiveness. This is because most charities don’t know much about their effectiveness. That is because establishing effect is hard and expensive, requires sample sizes that few of them have, and the incentives on them are all wrong (see here). Charities’ reports and accounts also rarely say much about need, and particularly not about the relative sizes of different needs nor how the intended beneficiaries prioritise those needs.

Charities’ accounts do say some stuff about the proportion of costs spent on administration and on fundraising. It is a mistake to assume that high spend on these costs means that an organisation is ineffective. Giving Evidence produced the first-ever empirical data which support that statement, and anyway it’s obvious if you think about the false economies of employing cheap people or having cheap equipment. This BBC interviewer figured that out live on-air. Also:

  • If a programme doesn’t work, it doesn’t matter how much or how little you spend on admin. It doesn’t work. But you can’t tell that it doesn’t work by looking at the accounts.
  • FYI, the rules around what costs get classed as ‘administration’ are much vaguer than you might think, so charities probably vary quite widely in what they mean by them.

And even if charities’ reports and accounts do explain the need that the charity serves and/or its effectiveness at doing so, they are most unlikely to say much which enables the charity to be compared to other charities. That of course is what rational donors want to know. This lack of comparative information is partly because charities can each choose which impact measures they use and when, and because they often have interventions which are ostensibly unique to them.

The charities also normally choose the research methods they use: even if two charities run the same programme and evaluate it with the same tool (say, Goodman’s Strengths & Difficulties Questionnaire), they are likely to get quite different estimates of impact if one does a simple pre-post study, one does a randomised controlled trial, and one does a non-randomised controlled trial. (The fact that different methods produce different results is precisely why it is important to understand research methods and choose the right one.)
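Here is a small simulation (ours, in Python, with made-up data – it is not based on any real charity’s figures) of why that happens: everyone in the simulation improves a bit anyway, regardless of the programme, so a simple pre-post comparison attributes that background improvement to the programme, while a randomised comparison recovers the true effect.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 2_000                 # participants per arm (hypothetical)
    true_effect = 2.0         # genuine improvement caused by the programme
    background_change = 3.0   # improvement everyone experiences anyway
                              # (maturation, other services, regression to the mean...)

    baseline = rng.normal(20, 5, size=2 * n)
    treated = np.zeros(2 * n, dtype=bool)
    treated[rng.choice(2 * n, size=n, replace=False)] = True   # random assignment

    follow_up = (baseline + background_change
                 + true_effect * treated
                 + rng.normal(0, 5, size=2 * n))

    pre_post_estimate = (follow_up[treated] - baseline[treated]).mean()
    rct_estimate = follow_up[treated].mean() - follow_up[~treated].mean()

    print(f"Pre-post estimate: {pre_post_estimate:.2f}  (true effect is {true_effect})")
    print(f"RCT estimate:      {rct_estimate:.2f}")

The pre-post estimate comes out at roughly the true effect plus the background change; the randomised comparison comes out close to the true effect.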

Charities’ reports and accounts do not include this information because that is not what they are for. The accounts are regulatory filings: the regulator’s remit does not include effectiveness. Accounts are about money, and you cannot identify impact by looking at where money comes from nor where it goes. And the annual report is partly about that money and partly what might loosely be called marketing material. That is completely different to a rigorous, independent assessment of effect.

So, nobody should assess a charity’s effectiveness – or quality, or the extent to which people should support it – on just its reports and accounts. Even though those are the sole data that are readily available.

Let’s turn then to what a donor does need in order to assess a charity. Well, as mentioned, that includes data about:

  • The scale, nature and location of the need being served.
  • How the intended beneficiaries prioritise those needs.
  • The evidence underlying the proposed intervention/s
  • How the intended beneficiaries feel about those intervention/s. (If people believe that the chlorine you want them to add to their water to purify it and reduce the incidence of diarrhoea, which can be fatal, is in fact a poison, they will not add it to their water when you are not looking, and the intervention will fail.)
  • A robust and independent assessment of effectiveness of the intervention/s as delivered by that charity.
  • A comparison of the effectiveness – and ideally, of cost-effectiveness – of various organisations’ solutions to that need.

For many charities, those data simply don’t exist. And for the ones where they do exist, they are far from readily available: one needs to dig them out, normally from multiple sources. It is complex and expensive work. Such data exist for some sectors.

Charity Navigator, which rates probably the world’s largest set of charities (all in the US), uses financial filings and adds other information where possible.

Hence my view: the available information (reports and accounts) is not what you need to assess charities; and the information that you do need to assess charities is normally not available.

What to do?

For one thing, don’t produce whizzy graphics and platforms that re-cook irrelevant and unimportant data. That is, don’t try to assess charities using just their reports and accounts. Ever.

There are some decent independent analysts who use the kind of data described above and who get to more reliable answers: they include GiveWell (minus the recommendations about deworming) and ImpactMatters (now part of Charity Navigator).

For everything else? There is sometimes a way round, and Giving Evidence is working on creating a solution that uses it. It will be wider than what currently exists, but still only cover the small set of charities for which the relevant information is available. Hopefully that will grow over time.

But in the meantime, let’s not effectively train donors to use some platform that will in fact mislead them.

*I happen to dislike the term ‘impact’ because I grew up in physics where an impact means a collision, normally between two objects which are inanimate, and which sometimes destroys one or both of them.


Webinar: intro to evidence, and the evidence about child abuse

This webinar, given with the Campbell Collaboration and Porticus, is a great introduction to rigorous evidence in general, and specifically the rigorous evidence around ‘what works’ in institutional responses to child abuse. It is part of our work on the latter, explained in more detail here, including producing an Evidence and Gap Map which shows what evidence exists, and a ‘Guidebook’ which summarises what it says.

We start from the beginning, explaining:

  • A foundation’s perspective on why rigorous evidence about ‘what works’ matters
  • What a fair trial is, and why it matters
  • How to find all the fair trials on a particular topic: what a systematic review is
  • What an Evidence & Gap Map (EGM) is: how we structured our frame and how to codify studies
  • Issues that can affect the reliability of studies (i.e., introduce risks of bias), why those matter and how we handled them
  • The findings of our particular EGM
  • What we found when we matched up Porticus’ grantees against evidence on the EGM
  • What evidence synthesis is and why it matters
  • How we synthesised and summarised the evidence on our EGM, and what that evidence says
  • The implications of this evidence-base for Porticus, for other funders / practitioners / policy-makers, and what we’re doing next.
  • Then we answered audience questions.

Loads more about this work is here.


Many (many!) charities are too small to measure their own impact

Most charities should not evaluate their own impact. Funders should stop asking them to evaluate themselves. For one thing, asking somebody to mark their own homework was never likely to be a good idea.

This article explains the four very good reasons that most charities should not evaluate their own impact, and gives new data about how many of them are too small.

Most operational charities should not (be asked to) evaluate themselves because:

1. They have the wrong incentive. Their incentive is (obviously!) to make themselves look as great as possible – impact evaluations are used to compete for funding – so they are incentivised to produce research that is flattering. That can mean rigging the research to make it flattering and/or burying findings that don’t flatter them. I say this having been a charity CEO myself and done both.

Non-profits respond to that incentive. For example, a rigorous study* offered over 1,400 microfinance institutions the chance to have their intervention rigorously evaluated. Some of the invitations included a (real) study by prominent authors indicating that microcredit is effective. Other invitations included information on (real) research – by the same authors using a very similar design – indicating that it is ineffective. A third set of invitations did not include research results. Guess what? The organisations whose invitations implied that the evaluation would find their intervention to be effective were twice as likely to respond and agree to be evaluated as those whose invitation implied the danger of finding their intervention to be ineffective. This suggests that the incentive creates a big selection bias even in which impact evaluations happen.

2. They lack the necessary skills in impact evaluation. Most operational charities are specialists in, say, supporting victims of domestic violence or delivering first aid training or distributing cash in refugee camps. These are completely different skills from doing causal research, and one would not expect expertise in these unrelated skills to be co-located. {NB, this article is about impact evaluation. Other types of evaluation, e.g., process evaluation, may be different. Also, a few charities do have the skills to do impact evaluation – the IRC and GiveDirectly come to mind. But they are the exceptions. Most don’t.}

3. They often lack the funding to do evaluation research properly. One major problem is that a good experimental evaluation may involve gathering data about a control group which does not get the programme or which gets a different programme, and few operational charities have access to such a set of people.

A good guide is a mantra from evidence-based medicine, that research should “ask an important question and answer it reliably”. If there is not enough money (or sample size) to answer the question reliably, don’t try to answer it at all.

Caroline talking about exactly this to a group of major donors

4. They’re too small. Specifically, their programmes are too small: they do not have a large enough sample for evaluations of just their programmes to produce statistically meaningful results, i.e., to distinguish the effects of the programme from those of other factors or random chance. That means that the results of self-evaluations by operational charities are quite likely to be just wrong. For example, when the Institute for Fiscal Studies did a rigorous study of the effects of breakfast clubs, it needed 106 schools in the sample: that is way more than most operational charities providing breakfast clubs have.
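For readers who want a feel for the numbers, here is a back-of-envelope sample-size calculation (in Python, using the standard two-arm formula; the effect sizes are ones we have picked for illustration, not figures from the IFS study):

    from scipy.stats import norm

    def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
        """Participants needed per arm to detect a standardised effect of a given size
        in a simple two-arm comparison (normal-approximation formula)."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return int(round(2 * ((z_alpha + z_beta) / effect_size) ** 2))

    # 'Small' and 'modest' standardised effects, typical of social programmes
    for d in (0.2, 0.3, 0.5):
        print(f"effect size d = {d}: about {n_per_group(d):,} participants per arm")

On these illustrative numbers, detecting a small effect needs several hundred participants in each arm – before any allowance for drop-out, or for the fact that studies like the IFS one work with whole schools, which pushes the requirement up further – which is more than many small programmes serve in total.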

Giving Evidence has done some proper analysis to corroborate this view that many operational charities’ programmes are too small to reliably evaluate. The UK Ministry of Justice runs a ‘Data Lab’, which any organisation running a programme to reduce re-offending can ask to evaluate that programme: the Justice Data Lab uses the MoJ’s data to compare the re-offending behaviour of participants in the programme with that of a similar (‘propensity score-matched’) set of non-participants. It’s glorious because, for one thing, it shows loads of charities’ programmes all evaluated in the same way, on the same metric (12-month reoffending rate) by the same independent researchers. It is the sole such dataset of which we are aware, anywhere in the world.
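For readers unfamiliar with the technique, here is a highly simplified sketch of propensity-score matching (in Python, on randomly generated toy data – this shows the general idea only, and is not the Justice Data Lab’s actual method or code): model each person’s probability of being a programme participant from their characteristics, pair each participant with the most similar non-participant, and then compare reoffending rates across the matched sample.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)

    # Toy data: age and previous offences influence both programme participation
    # and the chance of reoffending, so a raw comparison would be confounded.
    n = 5_000
    age = rng.normal(30, 8, n)
    prior_offences = rng.poisson(3, n)
    X = np.column_stack([age, prior_offences])

    p_participate = 1 / (1 + np.exp(-(-1.0 + 0.03 * (age - 30) - 0.15 * prior_offences)))
    participant = rng.random(n) < p_participate

    p_reoffend = 1 / (1 + np.exp(-(-0.5 + 0.2 * prior_offences
                                   - 0.02 * (age - 30) - 0.4 * participant)))
    reoffended = rng.random(n) < p_reoffend

    # 1. Estimate each person's propensity to participate.
    propensity = LogisticRegression().fit(X, participant).predict_proba(X)[:, 1]

    # 2. Match each participant to the non-participant with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(propensity[~participant].reshape(-1, 1))
    _, idx = nn.kneighbors(propensity[participant].reshape(-1, 1))
    matched_controls = reoffended[~participant][idx.ravel()]

    # 3. Compare reoffending rates in the matched sample.
    print(f"Reoffending, participants:        {reoffended[participant].mean():.1%}")
    print(f"Reoffending, matched comparison:  {matched_controls.mean():.1%}")

The JDL’s real analyses are much more careful than this (more matching variables, checks on match quality, and statistical tests), but the underlying logic is the same.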

In the most recent data (all its analyses up to October 2020), the JDL had analysed 104 programmes run by charities (‘the voluntary and community sector’), of which fully 62 proved too small to produce conclusive results: 60% of the charity-run programmes were too small to evaluate reliably.

The analyses also show the case for reliable evaluation, rather than just guessing which charity-run programmes work or assuming that they all do:

a. Some charity-run programmes create harm: they increase reoffending, and

b. Charity-run programmes vary massively in how effective they are:

Hence most charities should not be PRODUCERS of research. But they should be USERS of rigorous, independent research – about where the problems are, why, what works to solve them, and who is doing what about them. We’ve written about this amply elsewhere.

* I particularly love this study because of how I came across it. It was mentioned by a bloke I got talking to in a playground while looking after my godson. The playground happens to be between MIT and Harvard, so draws an unusual crowd, but still. Who needs research infrastructure when you can just chat to random strangers in the park?…

 

More about why charities shouldn’t evaluate themselves →


We don’t know how to get donors to use more evidence to improve their giving

This article first published in Alliance Magazine.

What aids and impedes donors using evidence to make their giving more effective? This question motivated two researchers at the University of Birmingham to do a wide search of the academic and non-academic literature to find studies that provide answers. The findings are in a systematic review published last month. It’s remarkable: it finds that we – the human race – don’t really know yet what aids and impedes donors using evidence, because nobody has yet investigated properly.

This matters because some interventions run by charities are harmful. Some produce no benefit at all. And even interventions which do succeed vary hugely in how much good they do. So it can be literally vital that donors choose the right ones. Only sound evidence of effectiveness of giving can reliably guide donors to them.


Royal patronages of charities don’t seem to help charities much

Giving Evidence today publishes research about Royal patronages of charities: what are they, who gets them, and do they help? This fits within our work of providing robust evidence so that charities and donors can be as effective as possible.

In short, we found that charities should not seek or retain Royal patronages expecting that they will help much. 

74% of charities with Royal patrons did not get any public engagements with them last year. We could not find any evidence that Royal patrons increase a charity’s revenue (there were no other outcomes that we could analyse), nor that Royalty increases generosity more broadly. Giving Evidence takes no view on the value of the Royal family generally. The findings are summarised in this Twitter thread.


How is philanthropy responding to Covid19? How should it respond?

How are donors responding to the pandemic? What should they be doing? What will the long-term effects be on #philanthropy?

Giving Evidence’s Director Caroline Fiennes discussed all this with The Business Of Giving in this interview.

We also discussed Giving Evidence’s research about the effect of various ‘ways of giving’ – a long-standing interest (see our article in the scientific journal Nature, and our ground-breaking research).


Identifying the Effects of Various Ways of Giving: Using the ‘Opportunity’ of the Covid19 Crisis

New project!

Much attention is paid to what donors fund, but very little is paid to how they fund. Questions about how to fund include whether/when to give with restrictions, whether to give a few large grants vs. many smaller ones, what application and reporting processes to have, how to make the decisions about what to fund. The lack of attention to these ‘how to fund’ questions is despite the facts that (i) they evidently affect funders’ effectiveness, and (ii) they can be investigated empirically. We have written about this in the top scientific journal Nature, and with the University of Chicago.

Weirdly, the COVID19 crisis creates a (rather morbid) opportunity to investigate empirically the effects of some funder behaviours.

Giving Evidence is starting a new project to do this empirical research.
