A welcome public row about donor effectiveness

Well done Malcolm Gladwell. On Wednesday this week, Harvard announced its biggest gift ever: $400m from the American hedge fund manager John Paulson for its school of engineering and applied sciences. Gladwell ridiculed it: ‘It came down to helping the poor or giving the world’s richest university $400 mil it doesn’t need. Wise choice John!’ Various other financial overlords sprang to Paulson’s defence: ‘My first thought was: “Wait a minute, pal, how much have you given?”’ said one; ‘Would they criticize him if he just sat on his wealth and “compounded it” like certain others?’ said another; and a third said ‘Who the f— can criticize a guy who donated $400 million to his alma mater?! … What’s to criticize? Extremely generous and he is to be applauded.’

Opportunity cost, that’s what – well, not to criticize, but to question – and effectiveness along with it. Charities vary wildly in how effective they are: with the same amount of resources, some achieve masses, some achieve little, some achieve nothing, and some make things worse. The choices which donors make – like the one Gladwell is calling out – are highly consequential.

Yet media coverage and public discourse around giving are almost entirely about inputs: cooing over the sums given. This is the first public ‘row’ I can remember about whether a particular gift is any good, and it’s about time. Donors are rarely (ever?) asked to explain their choice of organisation or what the donation might achieve, and never challenged on what that money could have achieved elsewhere.

This matters not only for the public good. A chunk of the donation will be tax relief, so the US taxpayer is chipping in – involuntarily – and might wish to know that the money is being well used. [One hedge fund manager defended Paulson with the common response that ‘it’s his money’. Assuming that Paulson pays tax, that just isn’t entirely true.]

So well done Malcolm Gladwell for questioning whether a particular gift is sensible. To see whether it is, let’s think about the opportunity cost. Antimalarial bednets are famously cheap and effective, costing about $3 to buy and saving a life for about $3,340 (a figure which includes distribution and the fact that not every single net saves a life). If Paulson’s $400m had gone to the Against Malaria Foundation, it could have saved nearly 120,000 lives. To see the opportunity cost, we need to subtract from that the ‘value’ of everything which Harvard will achieve with that gift – and that, of course, we don’t know. But we can see that the bar is pretty high: Harvard has to benefit the human race as much as saving those lives simply to ‘break even’. To be clear, I’m not arguing against funding universities or research – far from it: I have argued that research can be hugely valuable and that its uncertainties make private donors and foundations uniquely important for enabling it. But I am asking donors to be aware of the height of that bar and of what their money might achieve elsewhere. [A note on Harvard. Though $400m is a lot to you and me, Harvard could fund that more than 80 times over from its endowment of $32.7bn.]
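For the curious, the arithmetic behind that bar takes only a few lines. This is a back-of-envelope sketch using the figures quoted in this post; the cost-per-life number is an approximation, not a precise price:

```python
# Back-of-envelope opportunity-cost arithmetic using the figures in this post.
gift = 400_000_000          # Paulson's gift to Harvard, in USD
cost_per_life = 3_340       # approximate all-in cost per life saved via bednets
endowment = 32_700_000_000  # Harvard's endowment, in USD

lives_forgone = gift / cost_per_life
print(f"Lives the gift could have saved via bednets: {lives_forgone:,.0f}")
# → about 119,760, i.e. nearly 120,000

print(f"Times Harvard could fund the gift from its own endowment: {endowment / gift:.0f}")
# → about 82
```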

Other financiers seemed to say that any donation was beyond challenge: ‘I have a hard time imagining anyone being critical of a charitable gift’ and ‘he doesn’t have to give it away at all if he doesn’t want…. I can’t imagine criticizing that’. Well, what if it went to an organisation which is harmful? What if it had gone to Homeopaths Without Borders, dubbed ‘one of the worst charities in the world’ because it encourages people to use homeopathy, which we know often leads people to stop taking or seeking treatments which do work? Surely we should criticize that. And what if a donor supported an organisation which delivers bednets expensively when they could have found one that did it more cheaply? Surely criticizing gifts like those is perfectly right and proper.

The ethicist Peter Singer does just that. Last month he tweeted: ‘For the love of God, rich people, stop giving Ivy League colleges money’. He should know, since he works at one (Princeton), and he co-founded the effective altruism movement, which encourages people to give where they can make the most difference – where the opportunity cost is minimal.

Donors’ choices matter. We should welcome this debate about them, and thank Malcolm Gladwell for taking the flak in starting that debate.

__________

This article was first published in Alliance Magazine.

Vox had a much more critical article: “There is a special plaque in philanthropist hell for John Paulson…made a fortune betting against the subprime mortgage market in the mid to late ’00s, and he’s given big chunks of it away to the least worthy charitable endeavors he can find… [On Harvard] If you want to make the world a better place, your dollars are better spent literally anywhere else…Giving to Harvard is not an act of altruism. It’s a gigantic, immoral waste of money, and it’s long past time we started treating it as such.”

Jeff Sachs unearthed where that $400m came from: “taken from the good people of Düsseldorf through an infamous swindle… When the Securities and Exchange Commission got wind of this, it charged Goldman Sachs with financial fraud…Goldman settled by paying a fine of $550 million… The whole sordid experience reminds one of a Soviet-era story.”

 


Is grantee / beneficiary feedback a substitute for RCTs?

The short answer is no. At first sight, it seems that randomized controlled trials (RCTs) and Constituent Voice (CV: a good way of gathering feedback from programme beneficiaries or grantees) could substitute for each other because they both seek to ascertain a programme’s effect. In fact they’re not interchangeable at all. An RCT is an experimental design, a way of isolating the variable of interest, whereas CV is a ‘ruler’ – a way of gathering information that might be used in an experiment or in other ways.

Let’s look at an example RCT. Suppose we want to know the effect of Tostan’s human rights education programme in West Africa (which works on many things but is most famous for significant reductions in what its founder Molly Melching calls female genital cutting). The most rigorous test would be as follows. First, measure what’s going on in a load of villages. Then, choose some villages to have Tostan’s involvement and others not: choose them at random. (It’s no good to have villages opt in because maybe only the most progressive villages will opt in, meaning that we won’t know if changes result from their progressiveness – ‘a selection effect’ – or from the programme itself.) Finally, after the programme, measure again what’s going on in each village, and compare the change in the villages that got the programme with the change in those that didn’t.
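The design just described can be sketched as a toy simulation. To be clear, this is a hypothetical illustration – the effect size, sample size and outcome scale are all invented, and it is not a model of Tostan’s actual results:

```python
import random

random.seed(0)

# Step 1: measure a baseline outcome in 200 hypothetical villages.
villages = [{"before": random.gauss(50, 5)} for _ in range(200)]

# Step 2: assign the programme at random, avoiding selection effects.
random.shuffle(villages)
treated, control = villages[:100], villages[100:]

# Step 3: simulate follow-up. The true effect (4.0) is invented for illustration.
TRUE_EFFECT = 4.0
for v in treated:
    v["after"] = v["before"] + random.gauss(TRUE_EFFECT, 2)
for v in control:
    v["after"] = v["before"] + random.gauss(0, 2)

# Step 4: compare the average change in treated villages with the average
# change in control villages.
def mean_change(group):
    return sum(v["after"] - v["before"] for v in group) / len(group)

estimate = mean_change(treated) - mean_change(control)
print(f"Estimated programme effect: {estimate:.1f}")  # should land close to 4.0
```

Because assignment was random, the difference in changes recovers the true effect; an opt-in design would not.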

CV and RCTs can – and I’d argue should – sit alongside each other. The classic uses of CV are to understand what people want and what they think of what they’re getting. Those are obviously important – and I champion work on both – but answers to these questions may not accurately identify the ‘impact’, which a well-run RCT would do.

Take, for example, two microfinance ‘village bank’ programmes that targeted poor people in north-east Thailand. It’s quite possible that people in these villages wanted to be less poor, and liked the microcredit programme they received. So the programme would have come out well if measured using CV. It came out well on some other measures too. But it fared badly when analysed with a well-run RCT (and there are plenty of ways that RCTs can be run badly): people who got microloans did do better than those who didn’t, but the rigorous analysis showed that those differences were entirely due to selection effects and had nothing to do with the microloans themselves.

Distinguishing selection effects from programme effects is hard – routinely foxing even highly trained doctors and researchers – and can’t be done by the naked eye alone. It’s quite possible that ‘beneficiaries’ might think that a programme is helping because they (like everyone else) conflate selection effects with programme effects. We can’t rely on CV to identify impact.
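That conflation is easy to demonstrate. Here is a hypothetical sketch, with entirely invented numbers, of how a programme with zero true effect can still look good to the naked eye when the better-off are likelier to opt in:

```python
import random

random.seed(1)

# 10,000 hypothetical people with varying baseline wellbeing.
people = [{"wellbeing": random.gauss(50, 10)} for _ in range(10_000)]

# The programme does NOTHING, but the probability of opting in rises with
# baseline wellbeing: this is the selection effect.
for p in people:
    p["joined"] = random.random() < p["wellbeing"] / 100

joined = [p["wellbeing"] for p in people if p["joined"]]
stayed = [p["wellbeing"] for p in people if not p["joined"]]
naive_gap = sum(joined) / len(joined) - sum(stayed) / len(stayed)

print(f"Apparent 'effect' from selection alone: {naive_gap:.1f}")
# Participants look better off even though the programme had zero effect.
```

A naive participant/non-participant comparison would attribute that gap to the programme, which is exactly the mistake that feedback alone cannot catch.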

Well then, in a world of rigorous evaluations, why do we need CV?

First, why should we ask people what they want? Answer: because there are legion tales of donors plonking (say) a school in a community that really wanted a well. Rigorously evaluating the effect of the school totally misses that it wasn’t wanted, and the erosion of self-determination caused by non-consultative ‘donor plonking’. We can tell that consultation with ‘beneficiaries’ is complementary to rigorous research because they’re both used in evidence-based medicine (eg to establish what to research: see the article about the James Lind Alliance on p33).

Second, why should we ask people what they think of what they’re getting? Answer: again because they’ll tell us things that we didn’t know that could improve delivery. That staff are rude, often late. That the clinic should open half an hour earlier because that’s when the bus arrives. That the nurse giving the vaccines could be less scary.

Well-run RCTs are unparalleled in their ability to isolate a single factor and thereby identify the effect of that factor. But there are obviously instances where they’re inappropriate. They include: when controlling for that factor would be unethical or illegal; when you couldn’t get a sample big enough to yield statistically significant results; when the cost of conducting the study would outweigh the benefits; when the outcome is unmeasurable (such as measuring the effectiveness of alternative ways of honouring the dead); when a cheaper method is available (perhaps you have decent historical data and just need to analyse it). They are also inappropriate when you want to find out something other than the effect of a particular factor, eg users’ opinions or perceptions.

So no, CV is not a proxy for RCTs. As so often, the answer is ‘both’.

This article was first published in Alliance Magazine (£). A PDF version is here.


Do gongs from HM Queen make any difference?

It’s June, which brings the Queen’s official birthday, and perhaps this year you – like many charity sector people before you – will get lucky and appear in the Birthday Honours list. If so, arise, Sir or Dame Reader, for I have an important task for you.

This auspicious occasion presents an opportunity to find out whether Her Majesty’s gongs actually make any difference. We currently don’t know, despite all the sound and fury about them.


Why I support AllTrials & suggest that you do too

This article was first published by The Life You Can Save.

Alessandro Liberati was suffering from multiple myeloma and trying to decide whether to go through the trauma – for the second time – of a bone marrow transplant.

“There were four [clinical] trials that might have answered my questions, but I was forced to make my decision without knowing the results because, although the trials had been completed, they had not been published,” he said.

Alessandro’s predicament isn’t unique. Millions of patients like Liberati and their doctors are avoidably in the dark. Amazingly, fully half of all clinical trials are unpublished.

“As a result, the effects of most medicines are effectively unknown,” says Dr. Ben Goldacre, who has studied the problem of why clinical trials often go unpublished.


Helping mainstream donors to give better

If you want to give to, say, cancer and want to find a good charity in that, how can you currently find out which org is any good? Essentially you can’t: charity ‘due diligence’ is way too hard for almost any non-professional donor.
It matters because most £s are given by ‘normal people’, for whom philanthropy isn’t a job, and those people are the majority of donors. The pattern is the same in most developed countries. Those donors really don’t have much option but to give randomly or based on hearsay.
We’ve thought long about fixing this, and are now moving to action. Our ‘strategy’ is to borrow other people’s homework: create & market a website which compiles the recommendations of (charities funded by) sensible grant-makers, & of independent analysts.
A brief paper outlining the concept is here. It’s very early days but you’ll get the drift. We’re very interested in your views: please send them to admin [at] giving-evidence [dot] com stating your location, experience and day rate.
We’re looking for a freelancer with experience of market research as part of new product development (NPD) to help in these early stages. Ideally they’d have done some NPD and be familiar with human-centred design / rapid prototyping. They can be anywhere in the UK. If that’s you, please get in touch.

The key barriers to strategic philanthropy are practical

This was published by Stanford Social Innovation Review in a series about strategic philanthropy.

Encouraging more strategic philanthropy is a behavior change exercise. Paul Brest and I are fellow travellers and co-conspirators in that mission. But his article implies that he and I see different barriers to achieving that change. (We may of course both be right.) Brest lays out the objections to strategic philanthropy and refutes them—and does so excellently. By contrast, the barriers which I see and encounter are primarily practical. 

To change donor behavior, we can usefully learn from the patron saint of “nudging,” University of Chicago Professor Richard Thaler, who first deployed behavioral insights in economics. He has developed two ‘mantras’ while overseeing ‘nudge units’ in various governments globally:

  • “You can’t make evidence-based policy decisions without evidence.”
  • “If you want to encourage some activity, make it easy.”

Strategic philanthropy comes out badly on both mantras: we have barely any evidence about either how to do it or the location or extent of most of the problems it might tackle; and (not unrelatedly) strategic philanthropy is not easy to do.


Behavioural insights are rocket-fuel for charities

Few people can claim that their work has been used routinely to inform or improve fundraising, reproductive health, the governance of African countries or road safety, or to help people to get jobs or quit smoking; but the US economist Richard Thaler can. He has the rare distinction of having revolutionised a major discipline, and in his new book, Misbehaving: the Making of Behavioral Economics, he recounts how he did it.

Thaler realised that much of what economics says about how people behave conflicts with how we actually behave. Predictions which collide with observation are bad news in science. He suspected that economics would make better predictions if it absorbed insights from experimental psychology. This resulted in the new discipline of behavioural economics, which has since become mainstream.

Behavioural insights become rocket fuel when they are applied to social and development problems, and to public policy. They are useful to charities in at least three ways.


Charities should get good at research uptake

Every school child knows that vitamin C prevents scurvy. But how long was it from when James Lind, a Scottish naval surgeon, made that important discovery in 1747 until the British Navy started providing fruit juice to sailors? At that time, scurvy was killing more sailors than military action, so the answer is surprising. It was 38 years.

‘Research uptake’, as this has become known, is hard. Luckily it’s becoming a discipline in its own right, which looks at both of its strands: uptake by governments into policy, and uptake by front-line practitioners. Charities and charitable funders produce research and insights which we aim to have ‘taken up’ in both strands.

The scurvy story shows that it’s not enough ‘just’ to be right – even if the insight is vitally important to national security and cheap to implement. This year’s BBC Reith Lecturer, the doctor Atul Gawande, talked about how his Indian grandmother died of malaria well after chloroquine was discovered to be a prophylaxis. The news must travel to where it’s needed.


Systematic review of evidence to inform funding practice: outdoor learning

“What is known about what works and what doesn’t? What can we learn from the existing literature and experience of other organisations about what works and what doesn’t – and for whom and in what circumstances – which can help us make better funding decisions?”

These questions are the genesis of a study of outdoor education for 8-25 year olds commissioned by the Blagrave Trust, a family foundation which supports disadvantaged young people in southern England. Giving Evidence is working on the study in partnership with the EPPI Centre (Evidence for Policy and Practice Information and Co-ordinating Centre) at UCL, which routinely does systematic reviews of literature in particular areas to inform decision-makers.


Don’t wish for a giving culture like the US

In February, Mark Zuckerberg, the Facebook founder, made the largest-ever single gift to a US hospital – $75m (£49m) to a San Francisco institution. We often hear that the charitable sector in the UK should emulate the giving culture in the US. Well, we should be careful what we wish for: it’s far from clear that this would be of any help at all.

Most obviously, the UK and the US are very different countries. Perhaps the comparison arises only because we more or less share a latitude and more or less share a language. Although US giving per capita looks higher, it’s counting something completely different.

