Why most ratings of charities are useless: the available information isn’t important and the important information isn’t available

A Which? Magazine-type reliable rating of a wide range of charities would indeed be helpful. Unfortunately it’s currently impossible.

Most months, somebody contacts me saying that they’re setting up some website / app to rate loads of charities – to help donors choose charities and/or to ‘track their impact’. I ask what research and information the thing uses to assess charities’ performance; they always turn out to be using basically the charity’s report and accounts. Those are no good for this purpose.

A charity’s accounts are about money: how much came in, where it went and how much is left. Sometimes they say where it all came from (charity accounts always delineate categories of income, such as donations vs. earned income, but it’s optional to specify who made the donations). That’s it. You can’t identify effectiveness (‘impact’*) by looking at the accounts: for example, here we show the relative effects of various charities’ work to reduce reoffending. Those data are great but they’re not in the accounts.

A charity’s annual report has relative few requirements, beyond stating who the trustees are, the charity’s purposes, the auditor’s report if the charity is above a particular size. Some charities say a lot about what they have done; others don’t. Some say why they chose to do what they do and how and where; other don’t. {I’m talking about the UK. Other countries’ requirements are different, though most require even less public disclosure than we do, I think.}

Charities’ reports and accounts rarely say much about effectiveness. This is because most charities don’t know much about their effectiveness. That is because establishing effect is hard and expensive and requires sample sizes that few of them have, and because the incentives on them are all wrong (see here). Charities’ reports and accounts also rarely say much about need, and particularly not about the relative sizes of different needs nor how the intended beneficiaries prioritise those needs.

Charities’ accounts do say some stuff about the proportion of costs spent on administration and on fundraising. It is a mistake to assume that high spend on these costs means that an organisation is ineffective. Giving Evidence produced the first-ever empirical data which support that statement, and anyway it’s obvious if you think about the false economies of employing cheap people or having cheap equipment. This BBC interviewer figured that out live on-air. Also:

  • If a programme doesn’t work, it doesn’t matter how much or how little you spend on admin. It doesn’t work. But you can’t tell that it doesn’t work by looking at the accounts.
  • FYI, the rules around what costs get classed as ‘administration’ are much vaguer than you might think, so charities probably vary quite widely in what they mean by them.

And even if charities’ reports and accounts do explain the need that the charity serves and/or its effectiveness at doing so, they are most unlikely to say much which enables the charity to be compared to other charities. That of course is what rational donors want to know. This lack of comparative information arises partly because charities can each choose which impact measures they use and when they use them, and partly because many run interventions which are ostensibly unique to them.

The charities also normally choose the research methods they use: even if two charities run the same programme and evaluate it with the same tool (say, Goodman’s Strengths & Difficulties Questionnaire), they are likely to get quite different estimates of impact if one does a simple pre-post study, one does a randomised controlled trial, and one does a non-randomised controlled trial. (The fact that different methods produce different results is precisely why it is important to understand research methods and choose the right one.)
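To see why the method matters so much, here is a toy simulation of the general statistical point: every number is invented for illustration, and it is a sketch of the problem rather than any particular charity’s data. A pre-post study bundles the programme’s true effect together with improvement that would have happened anyway, whereas a randomised control group experiences that background improvement too, so it cancels out:

```python
# Toy simulation: the same programme, measured with the same tool,
# "shows" different impact under different research designs.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000               # participants per group

true_effect = 2.0        # improvement genuinely caused by the programme
natural_change = 3.0     # improvement that would have happened anyway
                         # (maturation, regression to the mean, etc.)

# Treated group, scored before and after the programme.
pre = rng.normal(50, 10, n)
post = pre + natural_change + true_effect + rng.normal(0, 4, n)

# Pre-post study: compares the same people before and after, so it
# cannot separate the programme's effect from natural change.
print(f"pre-post estimate: {(post - pre).mean():.1f}")  # ~5.0

# RCT: a randomised control group picks up the natural change too,
# so subtracting it isolates the programme's own effect.
control_pre = rng.normal(50, 10, n)
control_post = control_pre + natural_change + rng.normal(0, 4, n)
print(f"RCT estimate:      {post.mean() - control_post.mean():.1f}")  # ~2.0
```

The pre-post design reports roughly five points of ‘impact’ when the programme genuinely causes about two: same programme, same measurement tool, very different answers, which is exactly the comparability problem described above.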

Charities’ reports and accounts do not include this information because that is not what they are for. The accounts are regulatory filings: the regulator’s remit does not include effectiveness. Accounts are about money, and you cannot identify impact by looking at where money comes from nor where it goes. And the annual report is partly about that money and partly what might loosely be called marketing material. That is completely different to a rigorous, independent assessment of effect.

So, nobody should assess a charity’s effectiveness – or quality, or the extent to which people should support it – on just its reports and accounts. Even though those are the sole data that are readily available.

Let’s turn then to what a donor does need in order to assess a charity. Well, as mentioned, that includes data about:

  • The scale, nature and location of the need being served.
  • How the intended beneficiaries prioritise those needs.
  • The evidence underlying the proposed intervention/s.
  • How the intended beneficiaries feel about those intervention/s. (If people believe that the chlorine you want them to add to their water to purify it and reduce the incidence of diarrhoea, which can be fatal, is in fact a poison, they will not add it when you are not looking, and the intervention will fail.)
  • A robust and independent assessment of effectiveness of the intervention/s as delivered by that charity.
  • A comparison of the effectiveness – and ideally, of cost-effectiveness – of various organisations’ solutions to that need.

For many charities, those data simply don’t exist. And for the ones where they do exist, they are far from readily available: one needs to dig them out, normally from multiple sources. It is complex work, and expensive. Such analysis does exist for some sectors.

Charity Navigator, which rates probably the world’s largest set of charities (all of them in the US), uses financial filings and adds other information where possible.

Hence my view: the available information (reports and accounts) is not what you need to assess charities; and the information that you do need to assess charities is normally not available.

What to do?

For one thing, don’t produce whizzy graphics and platforms that re-cook irrelevant and unimportant data. That is, don’t try to assess charities using just their reports and accounts. Ever.

There are some decent independent analysts who use the kind of information described above and get to more reliable answers: they include GiveWell (minus its recommendations about deworming) and ImpactMatters (now part of Charity Navigator).

For everything else? There is sometimes a way round, and Giving Evidence is working on creating a solution that uses it. It will be wider than what currently exists, but still only cover the small set of charities for which the relevant information is available. Hopefully that will grow over time.

But in the meantime, let’s not effectively train donors to use some platform that will in fact mislead them.

*I happen to dislike the term ‘impact’ because I grew up in physics where an impact means a collision, normally between two objects which are inanimate, and which sometimes destroys one or both of them.
