We tried to update our analysis of charities’ performance and their admin costs, and you won’t BELIEVE what happened next!

Many people believe that charities waste money on ‘administration’, and hence that the best charities spend little on it. A strong form of this view, which one sometimes hears, is that the best charities are by definition those which spend little on administration, i.e., that you can tell how good a charity is just by looking at its admin costs.

It’s nonsense. The amount that charities spend on administration is (probably) totally unrelated to whether their interventions are any good. If I have an intervention which, to take a real example, is supposed to decrease the number of vulnerable teenagers who get pregnant, but in fact does the opposite and increases it*, then it doesn’t matter how low the administrative costs are: the fact is that the intervention doesn’t work. As Michael Green, co-author of Philanthrocapitalism: How Giving Can Save The World, says: ‘A bad charity with low administration costs is still a bad charity’.

Giving Evidence’s ground-breaking analysis

The merits and demerits of assessing charities on their admin costs have been debated for years. Then, in 2013, Giving Evidence gathered and published some *data* (rather than just opinion). It was, to our knowledge, the first empirical analysis ever published of the correlation between administration costs and charities’ performance.

It indicated that high-performing charities spend more on administration than weaker ones do. In other words, the correlation runs in the opposite direction to the popular notion. (Warning: we could obtain only a small sample for our analysis.)

That was six years ago, so we have tried to repeat and update our analysis. (We are scientists!)

What happened when we tried to repeat it

We found that the method we had used last time had virtually collapsed, so we could no longer see anything from it 😦

This analysis requires getting: (i) some objective measure of how good a charity is, and (ii) the amount that it spends on administration. We then compare the two. The difficult part is the first.
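The comparison itself is simple arithmetic. Here is a minimal sketch of it in Python, using made-up illustrative figures – the charity names, recommendation flags, and admin percentages below are invented for the example and are not our real data:

```python
# Sketch of the comparison described above: group charities by whether an
# independent rater recommends them, then compare average admin spend.
# All figures below are hypothetical, for illustration only.
from statistics import mean

# (charity, recommended_by_rater, admin share of total costs)
charities = [
    ("Charity A", True,  0.115),
    ("Charity B", True,  0.120),
    ("Charity C", False, 0.108),
    ("Charity D", False, 0.100),
]

recommended = [admin for _, rec, admin in charities if rec]
not_recommended = [admin for _, rec, admin in charities if not rec]

print(f"Recommended:     mean admin share {mean(recommended):.1%}")
print(f"Not recommended: mean admin share {mean(not_recommended):.1%}")
```

The hard part, as the text says, is not this calculation but obtaining a credible, objective recommendation list and matching admin-cost data for enough whole organisations.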

Last time, we used the ratings by GiveWell as our measure of how effective charities are. GiveWell is a US non-profit run by former New York hedge fund analysts, which produces and publishes independent analysis of charities (in particular sectors) and recommends to donors some of the charities it has analysed. Last time, we compared the average admin spend of charities recommended by GiveWell with that of charities it had assessed but not recommended. The recommended ones spent on average 11.5% of their costs on administration, whereas the charities which GiveWell reviewed and didn’t recommend spent only 10.8% on average. We found the same pattern in its recommendations for the previous year; and it was even more pronounced if we used the four levels of ranking that GiveWell had in 2009: the gold-rated charities spent more on average on admin than the silver-rated ones did, the silver-rated more than the bronze-rated, and the bronze-rated more than the non-recommended ones.

GiveWell has since changed its method, and the sample size is now so small that we can’t use it to say anything reliably.

GiveWell now has:

  • Eight top charities. For five of them, it recommends only one programme, i.e., part of the organisation. This is a problem for our purposes because charities’ admin cost data are published for the whole organisation, so we don’t know the admin spend of the recommended part. We could therefore only include the charities where GiveWell recommends the whole organisation, and there are only three of those. (For what it’s worth, which is not much, their reported admin costs vary between 0.1% and 17%.)
  • Eight ‘stand-out’ charities that it recommends. Again some are parts of organisations. The number of charities where GiveWell recommends the whole organisation and we could get admin cost data was again just three. (For what it’s worth, their reported admin costs vary between 2.7% and 23%.)
  • By comparison, we looked at organisations that GiveWell considered but did not recommend. We included only those it has reviewed since 2015 (because otherwise GiveWell’s view could be out of date). We excluded those that GiveWell contacted but which didn’t respond or otherwise declined to engage with it. The number of charities where GiveWell considered and analysed the whole organisation but didn’t recommend it, and for which we could get admin cost data, was five. (For what it’s worth, their reported admin costs vary between 2.2% and 12%.)

So we would end up with just:

  • Recommended charities: six in total:
    • three ‘top’ charity recommendations
    • three ‘stand-out’ charity recommendations
  • Non-recommended charities: five.

Clearly that sample size is too small to be useful.

There may be other ways of getting quantitative, systematic data to shed light on the relationship between admin costs and performance – but using GiveWell’s recommendations is not one of them.

Nonetheless, there are masses of arguments and examples which show, or at least suggest, that money spent on ‘admin’ is not (always) wasted – the term ‘false economy’ comes to mind. We have rehearsed many of them before, and will continue to do so. Some are in Caroline Fiennes’ book, and others in the paper we published on admin costs in 2013.

_________

Notes:

[The reason that we considered as comparators only charities that had been considered by GiveWell, rather than, say, a wider set, is that GiveWell is quite particular about the charities it will even consider: it considers only charities that operate within one or more of the areas it deems priorities. In other words, there is a selection effect which defines the set of charities considered, and if we took a general set of charities, or even of international development charities, it would fall partially or wholly outside those areas and hence perhaps differ in relevant ways.

To be clear, we are interested only in examples where we can get admin cost data for the entity that GiveWell recommends, which means cases where it recommends the whole charity, not just a specific programme or programmes that it runs. As an aside, there is one charity which appears in all three lists – it runs one programme which is a GiveWell Top Recommendation, one which is a Stand Out recommendation, and one which was considered but not recommended. The admin cost data are for the whole organisation. It hardly seems useful to have a dataset in which one entity sits in all three of the buckets being compared.]

*In fact, in this example, since the intervention is harmful, if more money is ‘wasted’ on administration, then less will be spent on the intervention, so less harm will be done. So here, more admin spend would be better.

We used GiveWell’s recommendations as they were in October/November 2019, i.e., before any update in December 2019. The raw data are here.
