Wishlist

There is much more to do towards Giving Evidence’s goal of charitable giving based on sound evidence. There are various projects that we believe would help, and which we would love to do. They just need funding. If you are interested in enabling this work, please do contact us!

In no particular order:

Reducing charities’ wasted expenditure on bad evaluations: Assessing the quality of ‘impact evaluations’ produced by operational charities – in order to dissuade funders from forcing charities to waste resources producing bad ones.

[Photo caption: “Ask an important question and answer it reliably” – Caroline talking to a group of donors about this principle for good research.]

We have noticed that many charities are asked to produce “monitoring and evaluation”, mainly to give to funders. Firstly, monitoring is completely different to evaluation. Secondly, most operational charities should not do impact evaluations: they lack the skills, money, sample size and incentive to do them properly. We showed this here. Consequently, many charity-produced ‘impact evaluations’ are poor quality. Conversely, we notice that very few rigorous studies are produced by operational charities. (This is discussed in this talk.)

These ‘impact evaluations’ waste resources. And worse, they can mislead people, because bad evaluations often give the wrong answer: they imply that an intervention works when actually it doesn’t – so may lead to harm. (Real examples of that here.)

We also suspect that some of these ‘impact evaluations’ are unnecessary: they ask questions that are already adequately answered in the existing rigorous literature. For example, in our work on child abuse, we found a UK charity proposing a study to answer a question – in fact, the only question in that whole terrain – which has already been answered well (it has been studied in ~30 completed RCTs).

So we would like to stop these ‘impact evaluations’: specifically, the low-quality and/or unnecessary studies by operational charities. The principle of good research is that it should “ask an important question and answer it reliably”, and we suspect that many charity-produced ‘impact evaluations’ do neither.

We would like to do a study to test our assumptions, and, if they’re right, to highlight the problem to donors to get them to stop asking for these studies. This is part of our (unannounced) Campaign Against Crap Evaluation!

The study would involve taking a group of operational charities and their ‘impact evaluations’, and, for each:

a. assessing its reliability. We would use existing scales for this, e.g., the Maryland Scale, and

b. looking at whether the question that it asks is already answered in the rigorous literature, e.g., existing systematic reviews (a rough sketch of how we might record both steps is below).
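As a rough illustration only, here is a minimal sketch (in Python) of how the study’s findings might be recorded and summarised for each evaluation. The charities, Maryland Scale scores, ‘already answered’ flags and the quality threshold are all invented for illustration, not real findings.

```python
# A rough sketch of how the proposed study's findings might be recorded:
# for each charity-produced 'impact evaluation', (a) a rigour score and
# (b) whether its question is already answered by existing rigorous research.
# All entries below are invented purely to illustrate the structure.

from dataclasses import dataclass

@dataclass
class EvaluationAssessment:
    charity: str
    maryland_scale: int      # 1 (weakest design) to 5 (strongest), per the Maryland Scale
    already_answered: bool   # is the question already answered by e.g. a systematic review?

assessments = [
    EvaluationAssessment("Charity A", maryland_scale=1, already_answered=False),
    EvaluationAssessment("Charity B", maryland_scale=2, already_answered=True),
    EvaluationAssessment("Charity C", maryland_scale=4, already_answered=False),
]

# Illustrative threshold: treat designs below level 3 as unreliable.
low_quality = sum(a.maryland_scale < 3 for a in assessments)
redundant = sum(a.already_answered for a in assessments)

print(f"{low_quality}/{len(assessments)} evaluations score below level 3 on the Maryland Scale")
print(f"{redundant}/{len(assessments)} evaluations ask a question already answered elsewhere")
```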

If you would like to enable this important landmark study, please get in touch.

Pulling together everything already known about ‘how to fund’ in various circumstances

Bits are known about ‘how to fund’, both from a few rigorous experiments and from odd-bod experiences and stories. This study would bring all of that together to make it accessible and useful for any funder. It seems like a really basic building block for improving philanthropy across the board.

The existing data and examples are scattered. For example, we have encountered some around funding academic research (different from the example below), and one around funding medical research.

So this would be a systematic review, but with a creative search strategy to include grey literature and unlikely examples.

Reducing charities’ waste on funder processes: Figuring out how to get funders to reduce the costs they impose on non-profits / grantees – their externalised costs.

Charities normally have to apply to funders and also report to them. This creates costs. Those costs are created by funders but not felt by them. Hence few funders really manage them. The costs can be massive: sometimes a good chunk of the amount given, and on occasion the whole of what’s given or even more. We wrote about that here.

We seek to reduce these transaction costs. We first need to understand why funders run the application and reporting processes that they do; then model the costs of those various processes; third, devise some alternative processes and model their costs; and finally try to influence funders to reduce costs where possible. (Yes, we understand that some types of funding are more expensive than others – funding unconnected grass-roots organisations, for instance – and that some costs are unavoidable. But we also observe that many of these costs are avoidable.)
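To illustrate the ‘model the costs’ step, here is a minimal sketch with every figure invented: a hypothetical pot, number of applicants, hours per application and per report, and an hourly cost. It simply shows the point above – that the costs funders externalise onto charities can be a large fraction of the money given out.

```python
# A toy model of the costs a funder's processes impose on charities.
# All numbers are hypothetical, for illustration only.

def externalised_cost(applicants: int,
                      hours_per_application: float,
                      reports_per_grant: int,
                      hours_per_report: float,
                      grants_made: int,
                      hourly_cost: float) -> float:
    """Total cost borne by charities to apply to, and report to, one funder."""
    application_cost = applicants * hours_per_application * hourly_cost
    reporting_cost = grants_made * reports_per_grant * hours_per_report * hourly_cost
    return application_cost + reporting_cost

# Hypothetical funder: £500k given out as 20 grants, 400 applicants,
# 30 hours per application, 2 reports per grant at 10 hours each, £30/hour.
amount_given = 500_000
cost = externalised_cost(applicants=400, hours_per_application=30,
                         reports_per_grant=2, hours_per_report=10,
                         grants_made=20, hourly_cost=30)

print(f"Externalised cost: £{cost:,.0f} ({cost / amount_given:.0%} of the amount given)")
# -> Externalised cost: £372,000 (74% of the amount given)
```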

We currently have a project on application costs, though that is a small project and there is masses more to do, including implementing the good ideas that arise from it!

Optimising funders’ decisions about what to fund

We are keen to work with any funder interested in optimising their decision-making.

If you are a funder which gets applications – as many are – then selecting which applications to fund is central to your success. Getting good at those yes/no decisions is crucial. Yet we have seen very few such funders focus on how to make those decisions well.

For example, are those decisions best made by: the staff; the board; outside experts; an algorithm; random chance? (We have written about the merits of using random chance, and its increasing use in allocating scarce resources – though it is not yet used much in philanthropy.) There is a whole field of decision science to draw on for optimising this process.

Almost all funders we’ve encountered (and that is hundreds) use human judgement for these decisions. This is despite masses of evidence that it’s not very good: we make mistakes, and we’re inconsistent, e.g., the same person may have a different opinion based on the exact same data from one day to the next.

We only know of one funder who has investigated their own decision-making process. They fund academic research, where any output can be judged on how often and where it is cited. (These bibliometrics are imperfect, but at least they’re the same for all outputs.) They found:

  • That the scores given by staff to applicants were a better predictor of a project’s eventual success than were the scores given by external experts. Obviously that implies that the external experts are unhelpful and should be dropped, and
  • That, once they’ve weeded out the bad applications, most of the ones that they reject do get funded by somebody else, and that the difference in success of the things that they fund vs. those that they reject is… nothing. That implies that (a) after weeding out the bad ones, they might as well choose at random, and (b) they are not adding any value to their grantees beyond the money.

One option is to create an algorithm (Nobel Laureate Daniel Kahneman says in his book that one can normally produce a decent one in about a morning – and we did create one for a funder once, based on a reasonable amount of research and thought). Then run some applications through both the normal process and the algorithm. Fund the ones selected by the normal process, but note what the algorithm said about them. After ~12-24 months, look at the success of the funded projects, and see whether and how their actual success differs from the algorithm’s predictions. (Obviously, in the interests of not upsetting the apple cart by abandoning the normal process immediately, the data-set will exclude grants that the algorithm said to make but which the normal process rejected.) If the algorithm turns out to out-perform the normal process, then maybe the funder should move to it.
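To make that concrete, here is a minimal sketch in Python of what such an algorithm and comparison could look like. The attributes, weights, scores and outcomes are all invented for illustration – a real version would be built from the funder’s own data and the research mentioned above.

```python
# A minimal sketch of a simple, Kahneman-style scoring algorithm for grant
# applications, and of checking its predictions against actual outcomes
# ~12-24 months later. All attributes, weights and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Application:
    name: str
    team_track_record: int   # each attribute scored 1-5 by a reviewer
    clarity_of_plan: int
    evidence_base: int
    cost_realism: int

def algorithm_score(app: Application) -> float:
    """A 'decent algorithm in about a morning': an equal-weighted average of a few attributes."""
    return (app.team_track_record + app.clarity_of_plan
            + app.evidence_base + app.cost_realism) / 4

def pearson(xs, ys):
    """Plain Pearson correlation, so the sketch needs no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical funded applications (chosen by the normal process), with the
# algorithm's score noted at decision time and the project's eventual success
# measured later on some agreed 0-10 scale.
funded = [
    (Application("A", 4, 3, 5, 4), 7.0),
    (Application("B", 2, 4, 3, 3), 4.5),
    (Application("C", 5, 5, 4, 5), 8.0),
    (Application("D", 3, 2, 2, 4), 3.0),
]

algo_scores = [algorithm_score(app) for app, _ in funded]
outcomes = [outcome for _, outcome in funded]

print("Algorithm score vs eventual success, r =", round(pearson(algo_scores, outcomes), 2))
# If the algorithm's scores track eventual success better than the normal
# process's own scores do, that is a reason to consider moving to the algorithm.
```

The equal weighting is deliberate: the appeal of this kind of simple model is that even a crude, transparent formula can be compared fairly against human judgement before anyone changes how decisions are actually made.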