Surprising churn in the top UK foundations

How much churn is there amongst the largest UK grant-making foundations (by giving budget)? One might expect basically none, because huge foundations don’t get created very often, and foundations don’t compete for resources. Giving Evidence looks at these data each year for our work on the Foundation Practice Rating, and we find that there is a surprisingly high amount of churn. These are the data for the last few years.

Why is there so much churn? We haven’t investigated, so can’t say. Maybe it’s related to investment income – success there might enable larger giving budgets. Maybe it’s related to other income: for example, BBC Children in Need and Comic Relief, both in the top five in 2019, raise their grant budgets from the public, and such income can be expected to rise and fall.


Why the system for charities applying to foundations is so expensive, and what can be done about it

The system by which charities apply to charitable foundations for funding is dreadful. When I ran an operational charity, the team and I spent far too long on it – often simply re-writing and re-formatting the same information multiple times into multiple forms, many of them badly designed. Happily our organisation, Giving Evidence, was recently able to study why the system is as it is, and what might improve it. Download our report here. An appendix is here with further detail, including about the economic modelling. Several findings were pretty striking.


First, the analysis confirmed experience: the “cost of capital” for charities securing funds from foundations is very high. We found it to be at least 5.6% across all UK charities, and particularly high for small charities, costing them at least 17.5% of funds raised from foundations. (I say ‘at least’ because all the assumptions in our modelling were conservative.) Companies don’t pay anything like that when they raise capital (e.g., through loans or issuing bonds or equities). The total cost to UK charities is at least £900m every year – about 1.5 times the revenue of the National Trust.
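To make that ‘cost of capital’ arithmetic concrete, here is a minimal sketch with invented figures; it is not the report’s actual model (the real assumptions are in the appendix):

```python
# Illustrative sketch of a "fundraising cost of capital" calculation.
# All figures below are made-up examples, not the report's inputs.

def cost_of_capital(hours_per_application: float,
                    hourly_cost: float,
                    success_rate: float,
                    average_grant: float) -> float:
    """Fundraising cost as a fraction of funds actually raised.

    Expected cost per grant won = cost per application / success rate,
    because failed applications must also be paid for.
    """
    cost_per_application = hours_per_application * hourly_cost
    expected_cost_per_grant = cost_per_application / success_rate
    return expected_cost_per_grant / average_grant

# A small charity: long forms, low success rate, small grants.
print(f"{cost_of_capital(40, 25, 0.15, 40_000):.1%}")   # ~16.7%

# A large charity: specialist team, higher success rate, bigger grants.
print(f"{cost_of_capital(60, 35, 0.4, 150_000):.1%}")   # ~3.5%
```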


It’s a silly problem. Foundations are charities, so charities spending resources dealing with each other is friction, entirely of charities’ own creation. We should be able to solve this amongst ourselves. It’s not an external problem, like global poverty or encroaching authoritarianism, which is where our resources should go.


Sadly, the problem of talent and resources being wasted isn’t unique to charities:

Professor Sarah Gilbert, who devised the Oxford/AstraZeneca Covid-19 vaccine:
“Actually, raising funds had been my main activity for years… I trained for years to become really good at ‘doing science’… but… what I actually spend my time doing these days is, mostly, bringing in the money. This [system] can be counterproductive for the cause of scientific research itself…” (Vaxxers, by Professor Sarah Gilbert & Dr Cath Green)

Second, we found foundations more willing than charities to give up time to discuss this topic. That was a surprise, because the pain and costs are borne by charities.


Third, building on the above, not all our charity interviewees considered the cost of dealing with foundations to be a problem at all. Some – particularly in large charities – just felt that “the pain is part of the job”: it’s inevitable, so no point fretting about it. Perhaps they have that view because large charities can afford specialist teams to deal with foundations – people rarely speculate on abolishing their own job – and to some extent, large charities compete on their ability to navigate this maze.


Fourth, the problematic system arises mainly because each foundation designs its own process to suit itself. They typically make little use of other foundations’ existing processes: e.g., few new foundations simply copy another foundation’s application form, which would reduce the re-formatting. And few foundations design their process to minimise workload for charities. I have often seen this in my work advising foundations. Indeed, most new foundations create written application forms precisely because they see other foundations having them, and they assume that common practice is good practice. On many issues, my book of guidance for donors urges them: “don’t just copy”.

Fifth, we studied in detail the economic effects of various initiatives which aim to reduce the costs of this application system. For each initiative, we found non-mad circumstances in which it would help, and non-mad circumstances in which it would hinder. Hence reformers should be cautious. We analysed: shared application forms; shared application forms with pooled funds; and online systems for matching foundations with charities (somewhat like online dating systems). Such initiatives will help only if:

  1. They save more money than they cost, and
  2. Any other effects are tolerable.

These conditions are not always met. For instance, there are sensible circumstances in which a matching system will cost more to create, promote and run than it saves. The same is true of setting up pooled funds between multiple funders, the negotiation of which is famously complicated. On the second condition, sometimes a pooled fund will alter where funds go – some areas will gain and others will lose – and that change must be acceptable. It took sophisticated economic modelling to reveal these effects.
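As a rough illustration of the first condition, here is a toy calculation with invented numbers (nothing like the full modelling in the appendix): a shared system helps only if the time it saves applicants and funders, added up over its lifetime, exceeds what it costs to build and run.

```python
# Toy check of condition 1 for a hypothetical shared application form.
# Every number here is an assumption for illustration.

setup_cost = 250_000          # build and promote the shared system (£)
annual_running_cost = 80_000  # hosting, admin, governance (£/year)

applications_per_year = 4_000
hours_saved_per_application = 3   # less re-formatting per application
hourly_cost = 25                  # fundraiser time (£/hour)

years = 5
total_cost = setup_cost + annual_running_cost * years
total_saving = applications_per_year * hours_saved_per_application * hourly_cost * years

print(f"Cost over {years} years:   £{total_cost:,}")
print(f"Saving over {years} years: £{total_saving:,}")
print("Helps" if total_saving > total_cost else "Hinders")
# Condition 2 (are any shifts in where funds go acceptable?) is a judgement
# call that no spreadsheet can settle.
```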


Lastly, despite all the above, this looks like a solvable problem. It is solved elsewhere:

  • UK universities have long shared a single application form.
  • In the US, over 900 higher education institutions share an application process: Common App was created expressly to increase equality.
  • A system called Lightning is shared by various UK charities and public sector institutions to get funding to people in financial hardship.
  • And the UK government is piloting with four departments an online system for SMEs and charities to apply for government funding. It is designed to solve the twin problems of (i) discovering what funding streams exist and (ii) avoiding duplicated information requests.

The requests from fundraisers are relatively straightforward. The campaign #FixTheForm found big demand simply for more clarity on foundations’ eligibility criteria, and for application forms which can be saved part-way through and returned to later.

This problem arises from both an information problem – foundations do not see or know the costs they create – and an incentive problem – those costs do not fall on the foundations themselves. Some foundations care a lot, and are working to reduce them. The prize here is releasing substantial resources for improving society and the environment, and that is surely worth considerable work.

This work was enabled by the Law Family Commission on Civil Society which exists to “unleash the potential of civil society”. A considerable ‘leash’ on civil society is this kind of inefficiency and cost of raising funds.

An article in Civil Society (a magazine) about this research is here.



							

Getting evidence to influence public policy

Many researchers want their research to influence public policy. Many charitable donors also want to influence / improve public policy, and often fund the production of research and other activities to that end. Sometimes it works; other times it doesn’t. What raises the chances of success? And how can a donor or researcher predict which opportunities or approaches are likely to be fruitful? Giving Evidence was hired by a large foundation to find out. We worked with On Think Tanks, and are here sharing some of what we found and learnt. We hope that it is helpful to you!

The foundation funds the production of research and ‘evidence into policy’ (EIP) activities. It focuses on low-income countries. Most of the researchers whom it funds are based in high-income countries. Often those researchers form partnerships with public sector entities they seek to influence: those can be national government departments (e.g., department of education), central government functions (e.g., office of the president), other national public bodies (e.g., central bank), regional or local governments or municipalities. Those partnerships take many forms, varying in their resource-intensity, cost and duration.

Our research comprised:

  • Review of the literature about evidence-into-policy. This was not a systematic review, but rather we looked for documents, and insights within those documents, that are particularly relevant to the types of partnerships described. Our summary of the literature is here.
  • Interviews with both sides: with various people in research-producing organisations (universities, non-profit research houses, think tanks and others), and some of their counterparts in governments and operational organisations. Summary of insights from our interviews is here.

We also did a lot of categorising and thinking.

First, all evidence-into-policy efforts must have these three steps:

A. Decide the research question

B. Answer the question, i.e., produce research to answer it.

C. Support / enable implementation: e.g., help policymakers and decision-makers to find, understand and apply the implications of the research; disseminate the findings; support implementation.

We have found this categorisation useful in various ways, including:

  • Checking that there is activity at all three stages. For example, if somebody does A and B but not C, the research is unlikely to have much effect. Equally, there are sometimes initiatives to identify unanswered or important research questions (A) but no capacity to then answer them (i.e., no proceeding to B or C).
  • Research seems to be much more likely to influence policy or practice if policymakers or practitioners (‘the demand side’) are involved at A, i.e., in specifying the problem.
  • But in much academic research, there are few/no policymakers or practitioners involved at A: the research question is decided purely based on what is ‘academically relevant’ or the academics’ interests. In that model, the researchers’ first main contact with people who might use the research is at C: and the research might be into a question which is of no interest to anybody else. We have sometimes called this approach: “here’s that report that you didn’t ask for”. It is hardly surprising if this model does not create much influence.
  • Clearly, an organisation’s choice of what it does at each stage of ABC is a way of articulating its theory of change.

Some key findings
A first comment is that we see many organisations which run interventions at a small scale (and funders who fund them). We often advise funders to support more systemic work, which, if successful, will influence much larger, existing budgets and programs. We liken this to how a little tug can direct a massive tanker. Much good philanthropy is about tugs and tankers. This project was a welcome and important opportunity to think about the relative effectiveness of various types of tug. We found that:

  • Organizations have diverse approaches / models for evidence-into-policy (i.e., theories of change) and therefore many different forms of partnership. The organizations in the set vary considerably in their ABCs: for instance, at A, who is consulted and involved in determining the research questions, and whose priorities are involved? At B, who is involved in producing the research, e.g., which countries are they from? At C, what dissemination channels are used, and what engagement is there with potential users?
  • The most substantial partnerships that we found are between research-producers. Many of those (e.g., research networks) involve more frequent contacts than do partnerships between research-producers and policy-makers.
  • Evidence of outcomes (the benefits) is scarce and patchy. Clearly, we understand that working on change at scale and/or doing system-related work often cannot be formally measured, in the sense of rigorous, well-controlled studies which indicate causation. Yet there could be more routine collation of outcomes, i.e., changes in the world which can reasonably be argued to be related to the work.
  • It is hard to be precise about the costs (inputs) of the various organizations’ EIP work.
  • Unrestricted funding makes a big and positive difference. This is mainly because opportunities to influence policy often come up with quite short windows, so flexibility is key.
  • We found considerable interest in becoming better at EIP – across funders, researchers, synthesisers and distributors. This may be a sign of growing sophistication in the field: whereas 10 years ago the main focus was on producing research (rightly, as there was then so little), now it is more on influence, change and improving lives.

Much more detail is in the reports. We hope that they are useful to funders, research-producers, research-translators, and research-users!

Giving Evidence director Caroline Fiennes talked about these topics at the Global Evidence & Implementation Summit in Australia in 2018: video below. To watch, click on the photo and wait a second. You may need to log in – any email address is fine. Excuse the didgeridoo interruption!


The Evidence System (framed by Prof Jonathan Shepherd):


The curious relationship between the number of staff and number of trustees in foundations

UK charities, including foundations, are unusual organisations in that it is pretty common for them to have more trustees than staff. The trustees are non-executive directors; they are almost invariably unpaid and collectively comprise the board. It is rare in businesses and the public sector for organisations to have more non-executive directors than staff.

Giving Evidence examined 100 UK charitable grant-making foundations. Among other things, we looked at each foundation’s number of staff and its number of trustees. Amongst the 100 foundations we assessed, having more non-executives than executives is nearly twice as common as the converse:

  • 61 foundations have more trustees than staff
  • 33 foundations have more staff than trustees
  • 4 foundations have as many staff as trustees

As the graphs below show, the variation in the ratio is huge: from ten trustees and just one staff member, to nearly 200 staff per trustee. The latter is Wellcome, which is an outlier, but even the second-highest ratio has nearly 60 staff per trustee.
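For anyone wanting to replicate this kind of tally from annual-report data, here is a minimal sketch with a hypothetical dataset (not the FPR data):

```python
# Tally foundations by whether they have more trustees than staff,
# and compute the staff-per-trustee ratio. Hypothetical data only.

foundations = [
    {"name": "Foundation A", "staff": 1,   "trustees": 10},
    {"name": "Foundation B", "staff": 12,  "trustees": 8},
    {"name": "Foundation C", "staff": 5,   "trustees": 5},
    {"name": "Foundation D", "staff": 190, "trustees": 1},
]

more_trustees = sum(f["trustees"] > f["staff"] for f in foundations)
more_staff    = sum(f["staff"] > f["trustees"] for f in foundations)
equal         = sum(f["staff"] == f["trustees"] for f in foundations)
print(more_trustees, more_staff, equal)

# Rank by staff per trustee, highest first.
for f in sorted(foundations, key=lambda f: f["staff"] / f["trustees"], reverse=True):
    print(f'{f["name"]}: {f["staff"] / f["trustees"]:.1f} staff per trustee')
```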

________

We gathered these data for the purposes of the Foundation Practice Rating: that is outlined below, though these particular data and analyses do not relate to the FPR’s core purpose. However, there is no existing data-set on the number of staff and trustees per foundation: to get that, you have to gather the data yourself by reading annual reports etc. That is laborious. We did it for FPR, because we needed the data for scoring foundations. So, having an unusual data-set, we thought that we would analyse it a bit and publish it.

We hope this is a useful contribution to the field 🙂

The Foundation Practice Rating is an independent assessment of UK grant-making charitable foundations. It assesses foundations’ practices on diversity, accountability and transparency, and does so using only their public materials. It is funded by 10 UK foundations, and is repeated annually. The sample of 100 foundations comprises: the 10 foundations who fund it, the UK’s five largest foundations, and the rest are selected randomly. Foundations have no choice of whether they are included. The criteria are based on precedent elsewhere, and a public consultation. They do not assess what a foundation funds, nor its effectiveness as such. Each foundation is rated A (top), B, C, or D on each of the three areas, and is also given an overall rating. 2022 was the rating’s first year: the results were released in March 2022. More information is at http://www.foundationpracticerating.org.uk

These are the foundations with the conventional arrangement of more staff than non-executives:

and these are the foundations in our sample with more non-executives than staff. Notice that there are more of them than in the set above.

[Note that four foundations are marked here as having just one trustee each. In those cases, ‘the trustee’ is an institution (and often the foundations are old, with links to the Corporation of London). For example, The Mercers’ Charitable Foundation’s one trustee is the Mercers’ Company (a City of London livery company). The Drapers’ Charitable Fund’s one trustee is The Drapers’ Company (another City of London livery company). The Resolution Trust’s one trustee is The Resolution Trust (Trustee) Limited, a company with four directors. An example from outside our sample is Bridge House Estates Trust, whose one trustee is ‘the Mayor and Commonalty and Citizens of the City of London’.]


Having Too Few Personnel Compromises Foundations’ Performance on Key Issues

A clear finding from the Foundation Practice Rating Year One research – which assessed 100 UK-based charitable grant-making foundations – is that foundations with few trustees, or few staff, tend to perform poorly on diversity, accountability and transparency. This matters because UK charities told us that those three areas are important. Foundations with no staff perform particularly poorly on these issues.

The Foundation Practice Rating is an independent assessment of UK grant-making charitable foundations. It assesses foundations’ practices on diversity, accountability and transparency, and does so using only their public materials. It is funded by 10 UK foundations, and is repeated annually. The sample of 100 foundations comprises: the 10 foundations who fund it, the UK’s five largest foundations, and the rest are selected randomly. Foundations have no choice of whether they are included. The criteria are based on precedent elsewhere, and a public consultation. They do not assess what a foundation funds, nor its effectiveness as such. Each foundation is rated A (top), B, C, or D on each of the three areas, and is also given an overall rating. 2022 was the rating’s first year: the results were released in March 2022.

Nearly two-thirds of foundations with no staff scored the lowest grade of D. By contrast, of foundations with staff, only 10% scored a D. No foundation with more than 50 staff scored D.

There is a similar pattern for the number of trustees. No foundation with more than 10 trustees scored D, and Ds were much more common among foundations with few trustees. The graphs below show the distribution of scores by a foundation’s number of staff (left), and by its number of trustees (right).

Our hypothesis is that foundations with too few people don’t have enough person-power to do the work required to have, and to disclose, good practice in these areas. (The rating only uses publicly-available materials, so policies need to be disclosed in order to be included.) It takes people to create and publish policies, to disclose investment policies clearly, to gather and publish grantee feedback and the actions arising from it, to publish clear grant criteria, to create ways for people with disabilities to contact the foundation and apply, to maintain whistle-blower systems and complaints processes, and so on.

We appreciate some foundations’ concern about minimizing their internal costs (e.g., on staff) in order to maximize the amount available for grants. But the other factors captured in the rating also matter: charities told us this in the consultation, and the rating’s criteria reflect what they value in foundations and what they want from them. Charities (and other social-purpose organisations) are surely foundations’ main stakeholders (one might call them ‘clients’ or ‘users’), so it is important that foundations listen to them. The data above suggest that having too few staff in a foundation may be a false economy: it may save money, and so increase the amount available for grants, but at the expense of being able to operate well and transparently.

In some foundations, the work is done by the staff, and the trustees (who collectively comprise the board) provide oversight and direction. In other foundations – notably those with few or no staff – the trustees may do much of ‘the work’ themselves.

Having too few staff and trustees may also inhibit effectiveness: it seems quite possible that having more staff and trustees provides a larger network from which to source ideas (about anything) as well as potential grantees; and that having too few trustees makes for inadequately diverse experience involved in the board’s decision-making, resulting in sub-optimal decisions.

That said, it is clearly possible to score (pretty) well with few staff. One of the only three foundations which scored ‘A’ overall has only five staff; and five of the foundations which scored ‘B’ overall have no staff or only one staff member. Perhaps a foundation’s culture, its intention to be open, and its external orientation influence its practice and hence its score. Any foundation can decide to disclose the items that the rating seeks.

Giving budget per staff member

Looking at this through another lens, we examined the giving budget per staff member across the 100 foundations. This figure varied widely: from over £7m per staff member (Leverhulme Trust) to ~£30,000 per staff member (Franciscan Missionaries of the Divine Motherhood Charitable Trust). Clearly, for the 33 foundations which have no staff, we cannot calculate this number. Of course, we realise that giving models vary considerably and therefore that comparing foundations on this ratio is not a perfect like-for-like comparison.

The graph below shows the range of giving budget per staff member, and each foundation’s overall rating.

Working upwards from the bottom of the graph below, the foundations with no staff generally rated pretty poorly: of the 33 foundations with no staff, 21 scored D, nine scored C, and only three scored B.

Then let’s continuing upwards to foundations which do have staff. Amongst those foundations with small giving budget per staff member, performance is pretty good: none of the 43 foundations with the smallest giving budget per staff member scored a D. The first D that we encounter working upwards – i.e., the foundation with the smallest giving budget per staff member which scored a D is Yesamach Levav, which gives £930,000/staff member.

As giving per staff member increases from there, performance tends to weaken: of the ten foundations with the largest giving budgets per staff member (i.e., the top ten in the graph below), five scored D, four scored C, one scored B, and none scored A.

This again shows that having staff who are stretched across large budgets correlates with poor practice.

Overall rating of the FPR Year One sample of 100 foundations, by giving budget per staff member:


One donor’s fantastic work to encourage use of evidence, and production of more, to fight factory farming

This article appeared in Alliance Magazine’s special edition about food systems. It shows a powerful approach to using and producing evidence which donors could use in any sector.

Moving to a sustainable and fair food system is a giant challenge, and the organisations driving it are small compared to the problems. So it is crucial that they are as effective as possible. That requires basing their work on sound evidence: about precisely where each problem is, why it arises, the relative sizes of the various problems, and what approaches work to address them.

Giving Evidence has been working on this for some years, with a US-based foundation, Tiny Beam Fund. Strikingly, its settlor, Dr Carmen Lee, is an academic librarian so knows about making information findable! Much of the work that we have done together seems to be ground-breaking but could usefully be replicated in almost any sector. I’ll here explain our various endeavours.


Carmen’s focus is tackling the negative effects (especially in low-income countries) of industrial farm animal production (IFAP) – battery hens, pigs, cattle and so on. Many of these animals spend their entire lives in pens too small even to turn around, and are stuffed with antibiotics which get into the water system and are thus consumed by all of us, raising antibiotic resistance, ‘widely considered to be the next global pandemic‘.

Carmen says: ‘Tackling IFAP is poorly understood. It is not an established field with well-travelled paths. It is more like a mediaeval map with lots of space marked ‘here be dragons’. We don’t want to fund daredevils who just jump in to grapple with the monsters. Instead, it is hugely important for everyone to gain a deep, nuanced understanding of the complex contexts and problems. So we decided to fund the systematic acquisition of this understanding.

‘How to acquire this knowledge? Here’s where academic researchers enter the scene. Their training and skill set is well-suited to this task. Academics are also increasingly interested in studying industrial animal agriculture’s impacts in low-income countries for their own scholarly purposes.’

Identifying the ‘burning questions’

The unanswered questions are numerous, dwarfing the available research capacity to answer them. So it is essential to focus on the most important questions.

To do this, we borrowed and adapted a process developed in healthcare / medical research by the James Lind Alliance. It elicits from patients, their carers and primary physicians (the intended beneficiaries of research) the questions that they most want researched, and has them collectively prioritise those questions. Obviously, we couldn’t ask the pigs, but we involved anti-IFAP campaigners, observers and researchers.


We invited under-researched questions of many types: where the intensive animal farming is, how it has arisen, who is doing what about it where, what is effective in what circumstances, ‘basic research’ about animal preferences, how the funding flows, what laws are in place and how /whether/ where they are enforced, etc. That is, the research could be ‘just’ gathering data, and/or it could be evaluating potential fixes.

The resulting list of prioritised questions is published. The questions rated highest priority are about systems (e.g., policies of the World Bank, World Trade Organisation; and the effects of free-trade negotiations) and what works against various goals, e.g., changing consumer behaviour.

We call this The Burning Questions Initiative (BQI), and it repeats every two years. Tiny Beam Fund then offers fellowships and research planning grants for academic researchers to answer those ‘burning questions’.

We have talked with other funders to encourage them to also fund research into these priority questions.

Using existing research, as well as producing new research

As a librarian, Carmen suspected that research already exists which practitioners could use more, and use better, against IFAP. But it is not always easy for practitioners to find, interpret and apply academic research: it lives behind paywalls, is written in academic-speak, rarely includes enough about how to run the intervention, etc. And academics’ incentives are to publish and teach, not to engage deeply with practitioners.

Perhaps money would help

We dreamt up a fund to help operational nonprofits to make better use of academic research and researchers. Funds might be used to hire an academic for a day a week for six months to help find and interpret research relevant to strategic planning, or to run an academic advisory panel, or to buy access to academic journals. We ran a feasibility study – and about five minutes after reading our report, Tiny Beam launched such a fund. It is called the Fuelling Advocates Initiative (i.e., fuelling advocates to be better by using research to improve effectiveness). TBF has made various grants in this programme: one example allows an NGO which is setting up a regional office in Asia to use academic experts to help define its scope of work in Malaysia, Indonesia and Thailand.

The dragons

There are structural impediments to collaborations between practitioners and researchers. Carmen reports learning ‘that getting practitioners /advocacy NGOs and academic researchers to work together on factory farming issues is not simple, especially in LMICs. Researchers and scientists in these countries often shy from NGOs /practitioners involved with advocacy and activism. When NGOs ask researchers to help answer urgent questions, or when researchers hear that their research will support organisations concerned with factory farming, many of them say ‘no, sorry, I can’t help’ – even though the research would be purely scientific (e.g. collecting and analysing data about the use of antibiotics in livestock farms). One reason is that the researchers or their universities receive considerable funding from the animal agriculture industry. Other reasons include that academics find working with NGOs to be frustrating because NGOs don’t know how to communicate with academics (who operate in a different universe); and very different time-frames (for an academic, a year is not long, but NGOs think that’s ages). That said, some academics are willing to help, and some NGOs engage well with academics. One should be mindful of these issues, and should not be surprised to hit brick walls.’

As mentioned, almost any sector could use these methods to increase the quality and use of evidence, and thereby make practitioners more effective.


Reducing the Administrative Burden Placed on UK Charities by UK Donors and Funders

Giving Evidence is delighted to be studying funders’ application processes – to try to figure out how to reduce the costs that funders create for operational nonprofits. This is a hugely important topic, so we have written about it publicly and have been seeking for a while to work on it. We have now teamed up with the Law Family Commission on Civil Society, run by Pro Bono Economics, which exists to ‘unleash the full potential of civil society’: a considerable ‘leash’ (constraint) on civil society organisations is the cost they bear from charitable funders’ application and reporting processes.

What is the issue here?

Charities and civil society organisations (CSOs) spend masses of time (=money) applying to funders. If they do not get the funding, most of that cost is wasted: specifically, it reduces the amount of work and good that they can do with their available resources. So we can think of it in terms of the efficiency of the process (we mean ‘efficiency’ in the mechanical, engineering-type sense, i.e., the amount of output achieved for a given amount of input, vs the amount that is wasted.) In economics-speak, application costs raise the cost of capital for CSOs.

Application processes are created by funders. Some of the costs are borne by them (e.g., their staff time reading the forms) but other costs fall on non-profits: they are ‘externalised’ by the funders, and so are invisible to the funders, and rarely actively managed by them. {There are some honourable exceptions: BBC Children in Need is one.} Hence, in a bad case, it can happen that a funder’s process creates so much work for other organisations that its costs exceed the amount being given – without the funder even realising. We have seen instances of this.

Most funders have their own application forms and processes. That increases work and wastage. And some funders invite way more applications than they need. So we seek to reduce this wastage. (There is some analysis here, by Time to Spare, of the scale of that wastage: it thinks that 46% of UK grants cost more than they’re worth.)

Johann Sebastian Bach:

Oh, that’s my application to the court. If I get it, I’ll have spent all of it on that very application.

From Bach and Sons, a play by Nina Raine[1]

But reducing this wastage is not trivial, for various reasons. First, some application processes are helpful to applicants, even if they do not get the funding. Second, the application process may serve some useful purpose for the funder, such that ditching it would be an error. And third, we are – always! – alive to the possibility of unintended consequences.

Now is a particularly good time to work on this topic because many funders changed their practices in response to the pandemic, becoming faster and leaner – so they may be open to embedding new practices. For instance, 67 funders in London collectively created a one-stop-shop application process with a slimmed-down application form, and funders in Jersey also collectively created a new collaborative process.

Our approach

We treat this as a behaviour-change exercise. That is, we are not looking simply to document these costs, but rather to understand why and where they arise, and what benefits (as well as costs) they bring to the various players, and to identify the likely effects of approaches which might reduce them.

We seek to really understand the likely effects of possible fixes, as well as their likely take-up. This seems to be an innovation in discussions and work on this issue.

What we will do

Clearly we are learning from existing work on this issue and seeking to augment it. So our workstreams are:

Understanding what the current behaviours are and why they arise. We have interviewed foundations – and crucially also some operational nonprofits. We seek to understand what funders are trying to achieve with these processes, e.g., reducing their own costs, identifying the strongest applications / organisations / approaches, limiting work for their teams, or limiting applications so as to reduce costs to applicants. We have investigated how operational charities decide whether it is worth applying to a particular funder – the extent to which they take into account the costs of applying and the chances of success (the ‘expected value’ of their application; a back-of-the-envelope version is sketched below). As far as possible, in these interviews, we have looked at other aspects of funding practice, such as restrictions and grant duration.
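That ‘expected value’ calculation, with purely illustrative numbers:

```python
# Is this application worth making? A toy expected-value check.
# All figures are assumptions for illustration.

grant_size = 30_000        # £ if successful
success_probability = 0.2  # roughly 1 in 5 applications succeed
application_cost = 2_500   # staff time to research, write and format (£)

expected_value = success_probability * grant_size - application_cost
print(f"Expected net value: £{expected_value:,.0f}")   # £3,500 here
# If this number is negative, the charity is better off (in expectation)
# not applying at all.
```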

For example, here is the Hewlett Foundation describing how it designed its processes to minimise burden / maximise effectiveness of its grantees. (Yet another reason that I have a massive crush on the Hewlett Foundation…)

Discussing possible fixes with foundations and experts. We have interviewed foundations and sector experts about potential fixes, whether new or already attempted. We are aware of, for example, the #FixTheForm initiative; a study a while ago by NPC called Turning the Tables; a study by the University of Bath; the feedback about funders being gathered by GrantAdvisor; Project Streamline in the US; and various attempts at shared applications and shared reporting. The goal is to gain insights into the dynamics that they have encountered, and their views on drawbacks and feasibility of various proposed fixes.

Economic modelling: Understanding the likely effects of potential fixes. What people say they will do and what they actually do are often different! As well as listening to foundations and charities, we are doing some economic modelling, to identify the behavioural changes that can realistically be expected, the scale of savings that potential fixes might have, and to whom – and also to uncover unintended consequences (including adverse effects) which might arise from changes to the system. This approach is different from much of the research in the charity sector: we hope that it will bring additional insight.

If you have worked on this issue before – in any country – please get in touch! We would love to hear from you.


[1] This play is new and has so far only been staged; the script is not yet published, so I may have misremembered this quote a bit.


Letter in The Economist about anti-malarial bednets

Giving Evidence’s Director, Caroline Fiennes, has a letter in The Economist this week.

Giving Evidence’s existence is about directing philanthropic resources to effective & cost-effective work. So we were horrified by a letter in The Economist two weeks ago which appeared to claim (it was rather unclear) that anti-malarial bednets “fail” because they [all?] get used for fishing. It’s just not true. Masses of high-quality research evidence shows that bednets reduce the incidence of malaria and thus save lives. (Plus, as you will know if you have ever slept in a room with mosquitoes, they save much annoyance.)

Many philanthropic donors fund bednets – from people donating £2 right through to the Gates Foundation. It is not acceptable for them to be deterred from supporting that life-saving work by erroneous information. So Caroline wrote in The Economist to put the record straight – along with long-time ally Professor Paul Garner of the (relevant!) Liverpool School of Tropical Medicine, and Co-ordinating Editor of the Cochrane Infectious Diseases Group. (Paul invited Caroline to give a talk (which is here) at the Liverpool School, having seen her on BBC News talking about charities and evidence (here) in the wake of the charity Kids Company collapsing.)

Our letter says:

Alex Nicholls rightly warns against focusing on outputs rather than outcomes in philanthropic programmes (Letters, October 16th). But his example, that antimalarial bednet schemes “failed”, is incorrect. Contrary to some reporting, few bednets get used for fishing. A four-country study of over 25,000 bednets found fewer than 1% were being misused. A comprehensive analysis by Cochrane, an independent network of researchers, of 23 medical trials encompassing nearly 300,000 people showed that bednets reduced deaths by a third. A study at Oxford concluded that they averted around 663m cases of malaria in Africa between 2000 and 2015. These important outcomes, by charities and others, should be applauded.

Here are the studies that we cite:

There is a persistent trope about bednets getting used for fishing. Here are two relevant facts about that:

  • Fishing communities are only 1% of Africa’s communities at risk of malaria, according to Professor Pascaline Dupas, a development economist at Stanford. (https://web.stanford.edu/~pdupas/Dupas_letter_editor_NYT_malaria_nets.pdf).
  • Bednets don’t last forever – they get holes etc. So even if some are used for fishing, that doesn’t prove that that was INSTEAD of being used over beds. (None of which is a comment on the effect of bednets on fish stocks. We’re only talking here about whether bednets ‘fail’ at their primary goal of preventing malaria – which they don’t.)

To be clear, Giving Evidence has no professional or commercial interest in bednets. We don’t work in that area – we use bednets only as an illustration of cost-effective work. We are just trying to keep people alive.

We don’t know why Alex Nicholls, an Oxford professor of social enterprise, wrote that letter. Nor what evaluation/s he was using. We have asked but he hasn’t replied. We also asked which specific bednet programme he was referring to: he hasn’t answered that either, but in fact the sole bednet programme cited in the article to which he referred is hypothetical, so it makes no sense for him to claim to know its results(!)

Sometimes the term ‘social enterprise’ is used to mean ‘pro-social or pro-environmental organisations that charge money’ so as to be financially viable. Bednets may sit badly with that philosophy because the evidence (see graph below) is that charging for bednets massively reduces usage and hence results – i.e., there’s a trade-off between earned income and impact. 

In fact, in every instance that we’ve examined, it turns out that charging for the product reduces uptake and results (including bednets, soap, solar lamps). ‘Impact investors’ beware. Caroline wrote in the Financial Times about that, here.
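To make the trade-off concrete, here is a toy sketch with entirely invented figures (not the J-PAL data in the graph below): even a modest price can cut uptake so sharply that revenue rises while impact collapses.

```python
# Toy illustration of the earned-income vs impact trade-off for bednets.
# The uptake figures below are invented for illustration only.

population = 10_000
cases_averted_per_net_used = 0.5   # assumed protective effect per net in use

# Assumed uptake at each price (fraction of people who acquire and use a net).
scenarios = {0.00: 0.90, 0.50: 0.40, 3.00: 0.10}   # price in $ : uptake

for price, uptake in scenarios.items():
    nets_used = population * uptake
    revenue = price * nets_used
    impact = nets_used * cases_averted_per_net_used
    print(f"price ${price:.2f}: revenue ${revenue:,.0f}, cases averted {impact:,.0f}")
```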

Making giving decisions based on sound evidence, rather than random anecdote, ensures that our resources are best used – and keeps people alive. This is Giving Evidence’s work. If you would like to talk to us about your giving, please get in touch.

(Source: JPAL, The Price Is Wrong, 2011 here.)


Rating UK foundations on their transparency, accountability and diversity

UK charitable foundation staff and trustees are very white and very male. They’re also often senior in years, and pretty posh. None of those characteristics is necessarily a problem in itself, but (a) the homogeneity creates a risk of lacking diversity of views, experiences and perspectives. Increasing diversity has been shown by many studies, in many circumstances, to lead to better decisions and better results – including on climate. And (b) foundation staff and boards may collectively have little experience of (and hence little understanding of) the problems they seek to solve, and few insights into the communities they seek to serve.

Funded by a group of UK grant-making foundations, Giving Evidence has rated UK foundations on their diversity. We also looked at their transparency – e.g., how easy it is to find out what they fund and when – and their accountability practices – e.g., whether they have a complaints process, whether they cite the basis on which they assess applications and make decisions, whether they publish any analyses of their own effectiveness. (Read an article in Alliance Magazine announcing this project, and a second article with some early findings.)

The results from the first year (2022) and the second year (2023) are now public!

2022 results: summarised here, and the full report is here. Watch the launch event in which we discuss the results here.

2023 results: here. Again, the launch event in which we discuss the results is here.

We will run the analysis again during Autumn 2023 and will publish the Year Three results in Spring 2024.

The criteria for Year Two (research in 2022) are the same as in 2021: details here.


Why most ratings of charities are useless: the available information isn’t important and the important information isn’t available

A Which? Magazine-type reliable rating of a wide range of charities would indeed be helpful. Unfortunately it’s currently impossible.

Most months, somebody contacts me saying that they’re setting up some website / app to rate loads of charities – to help donors choose charities and/or to ‘track their impact’. I ask what research and information the thing uses to assess charities’ performance; they always turn out to be using basically the charity’s report and accounts. Those are no good for this purpose.

A charity’s accounts are about money: how much came in, where it went and how much is left. Sometimes they say where it all came from (charity accounts always delineate categories of income, such as donations vs. earned income, but it’s optional to specify who made the donations). That’s it. You can’t identify effectiveness (‘impact’*) by looking at the accounts: for example, here we show the relative effects of various charities’ work to reduce reoffending. Those data are great but they’re not in the accounts.

A charity’s annual report has relatively few requirements beyond stating who the trustees are and what the charity’s purposes are, and including the auditor’s report if the charity is above a particular size. Some charities say a lot about what they have done; others don’t. Some say why they chose to do what they do, and how and where; others don’t. {I’m talking about the UK. Other countries’ requirements are different, though most require even less public disclosure than we do, I think.}

Charities’ reports and accounts rarely say much about effectiveness. This is because most charities don’t know much about their effectiveness. That is because establishing effect is hard and expensive, requires sample sizes that few of them have, and because the incentives on them are all wrong (see here). Charities’ reports and accounts also rarely say much about need, and particularly not about the relative sizes of different needs nor how the intended beneficiaries prioritise those needs.

Charities’ accounts do say some stuff about the proportion of costs that is spent on administration and on fundraising. It is a mistake to assume that high spend on these costs means that an organisation is ineffective. Giving Evidence produced the first-ever empirical data which support that statement, and anyway it’s obvious if you think about the false economies of employing cheap people or having cheap equipment. This BBC interviewer figured that out live on-air. Also:

  • If a programme doesn’t work, it doesn’t matter how much or how little you spend on admin. It doesn’t work. But you can’t tell that it doesn’t work by looking at the accounts.
  • FYI, the rules around what costs get classed as ‘administration’ are much vaguer than you might think, so charities probably vary quite widely in what they mean by them.

And even if charities’ reports and accounts do explain the need that the charity serves and/or its effectiveness at doing so, they are most unlikely to say much which enables the charity to be compared to other charities. That of course is what rational donors want to know. This lack of comparative information is partly because charities can each choose what impact measures they use, when they use them, and they often have interventions which are ostensibly unique to them.

Charities also normally choose the research methods they use: even if two charities run the same programme and evaluate it with the same tool (say, the Goodman Strengths & Difficulties Questionnaire), they are likely to get quite different estimates of impact if one does a simple pre-post study, one does a randomised controlled trial, and one does a non-randomised controlled trial. (The fact that different methods produce different results is precisely why it is important to understand research methods and choose the right one.)
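To illustrate why the method matters, here is a minimal simulation (invented numbers, not real evaluation data). In it, everyone improves a little over time regardless of the programme, so a simple pre-post comparison overstates the effect relative to a comparison against a control group:

```python
# Toy illustration of why the evaluation method matters.
# Simulated scores only; this is not real SDQ or charity data.
import random

random.seed(1)
natural_improvement = 2.0   # everyone improves a bit over time, programme or not
true_effect = 1.0           # what the programme actually adds

def follow_up(baseline_score, treated):
    bonus = true_effect if treated else 0.0
    return baseline_score + natural_improvement + bonus + random.gauss(0, 1)

baseline = [random.gauss(12, 3) for _ in range(1000)]
treated_follow = [follow_up(b, True) for b in baseline[:500]]
control_follow = [follow_up(b, False) for b in baseline[500:]]

mean = lambda xs: sum(xs) / len(xs)

# Pre-post: compares the treated group with itself, so it also captures
# the improvement that would have happened anyway (roughly 3.0 here).
pre_post_estimate = mean(treated_follow) - mean(baseline[:500])

# Controlled: compares treated with an untreated group, so it isolates
# the programme's own contribution (roughly 1.0 here).
controlled_estimate = mean(treated_follow) - mean(control_follow)

print(round(pre_post_estimate, 1), round(controlled_estimate, 1))
```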

Charities’ reports and accounts do not include this information because that is not what they are for. The accounts are regulatory filings: the regulator’s remit does not include effectiveness. Accounts are about money, and you cannot identify impact by looking at where money comes from nor where it goes. And the annual report is partly about that money and partly what might loosely be called marketing material. That is completely different to a rigorous, independent assessment of effect.

So, nobody should assess a charity’s effectiveness – or quality, or the extent to which people should support it – on just its reports and accounts. Even though those are the sole data that are readily available.

Let’s turn then to what a donor does need in order to assess a charity. Well, as mentioned, that includes data about:

  • The scale, nature and location of the need being served.
  • How the intended beneficiaries prioritise those needs.
  • The evidence underlying the proposed intervention/s
  • How the intended beneficiaries feel about those intervention/s. (If people believe that the chlorine you want them to add to their water – to purify it and reduce the incidence of diarrhoea, which can be fatal – is in fact a poison, they will not add it when you are not looking, and the intervention will fail.)
  • A robust and independent assessment of effectiveness of the intervention/s as delivered by that charity.
  • A comparison of the effectiveness – and ideally, of cost-effectiveness – of various organisations’ solutions to that need.

For many charities, those data simply don’t exist. And for the ones where they do exist, they are far from readily available: one needs to dig them out, normally from multiple sources. It is complex and expensive work. Such analysis exists for some sectors.

Charity Navigator, which rates probably the world’s largest set of charities (all of them in the US), uses financial filings and adds other information where possible.

Hence my view: the available information (reports and accounts) is not what you need to assess charities; and the information that you do need to assess charities is normally not available.

What to do?

For one thing, don’t produce whizzy graphics and platforms that re-cook irrelevant and unimportant data. That is, don’t try to assess charities using just their reports and accounts. Ever.

There are some decent independent analysts who use the kinds of data described above and who get to more reliable answers: they include GiveWell (minus its recommendations about deworming) and ImpactMatters (now part of Charity Navigator).

For everything else? There is sometimes a way round, and Giving Evidence is working on creating a solution that uses it. It will be wider than what currently exists, but still only cover the small set of charities for which the relevant information is available. Hopefully that will grow over time.

But in the meantime, let’s not effectively train donors to use some platform that will in fact mislead them.

*I happen to dislike the term ‘impact’ because I grew up in physics, where an impact means a collision, normally between two inanimate objects, which sometimes destroys one or both of them.
