This article was first published by The Life You Can Save.
Alessandro Liberato was suffering from multiple myeloma and trying to decide whether to go through the trauma – for the second time – of a bone marrow transplant.
“There were four [clinical] trials that might have answered my questions, but I was forced to make my decision without knowing the results because, although the trials had been completed, they had not been published,” he said.
Liberato’s predicament isn’t unique, though. Millions of patients like him and their doctors are avoidably in the dark. Amazingly, fully half of all clinical trials go unpublished.
“As a result, the effects of most medicines are effectively unknown,” says Dr. Ben Goldacre, who has studied the problem of why clinical trials often go unpublished. Continue reading
If you want to give to, say, cancer, how can you currently find out which charity in that field is any good? Essentially you can’t: charity ‘due diligence’ is far too hard for almost any non-professional donor.
It matters because most £s given come from ‘normal people’ (for whom philanthropy isn’t a job), and the pattern is the same in most developed countries. Those donors really don’t have much option but to give randomly or based on hearsay.
We’ve thought long about fixing this, and are now moving to action. Our ‘strategy’ is to borrow other people’s homework: create and market a website which compiles the recommendations of sensible grant-makers (that is, the charities they fund) and of independent analysts.
A brief paper outlining the concept is here. It’s very early days but you’ll get the drift. We’re very interested in your views: please send them to admin [at] giving-evidence [dot] com stating your location, experience and day rate.
We’re looking for a freelancer with experience of market research as part of new product development (NPD) to help in these early stages. Ideally they’d have done some NPD and be familiar with human-centred design / rapid prototyping. They can be anywhere in the UK. If that’s you, please get in touch.
This was published by Stanford Social Innovation Review in a series about strategic philanthropy.
Encouraging more strategic philanthropy is a behavior change exercise. Paul Brest and I are fellow travellers and co-conspirators in that mission. But his article implies that he and I see different barriers to achieving that change. (We may of course both be right.) Brest lays out the objections to strategic philanthropy and refutes them—and does so excellently. By contrast, the barriers which I see and encounter are primarily practical.
To change donor behavior, we can usefully learn from the patron saint of “nudging,” University of Chicago Professor Richard Thaler, who pioneered the use of behavioral insights in economics. He has developed two ‘mantras’ while advising ‘nudge units’ in various governments globally:
- “You can’t make evidence-based policy decisions without evidence.”
- “If you want to encourage some activity, make it easy.”
Strategic philanthropy comes out badly on both mantras: we have barely any evidence about either how to do it or the location or extent of most of the problems it might tackle; and (not unrelatedly) strategic philanthropy is not easy to do. Continue reading
Few people can claim that their work has been used routinely to inform or improve fundraising, reproductive health, the governance of African countries or road safety, or to help people to get jobs or quit smoking; but the US economist Richard Thaler can. He has the rare distinction of having revolutionised a major discipline, and in his new book, Misbehaving: the Making of Behavioral Economics, he recounts how he did it.
Thaler realised that much of what economics says about how people behave conflicts with how we actually behave. Predictions which collide with observation are bad news in science. He suspected that economics would make better predictions if it absorbed insights from experimental psychology. This resulted in the new discipline of behavioural economics, which has since become mainstream.
Behavioural insights become rocket fuel when they are applied to social and development problems, and to public policy. They are useful to charities in at least three ways. Continue reading
Every schoolchild knows that vitamin C prevents scurvy. But how long was it from when James Lind, a Scottish naval surgeon, made that important discovery in 1747 until the British Navy started providing fruit juice to sailors? At that time, scurvy was killing more sailors than military action, so the answer is surprising. It was 38 years.
‘Research uptake’, as this has become known, is hard. Luckily it’s becoming a discipline in its own right, with two strands: uptake by governments in policy, and uptake by front-line practitioners. Charities and charitable funders produce research and insights which we aim to have ‘taken up’ in both strands.
The scurvy story shows how it’s not enough ‘just’ to be right – even if the insight is vitally important to national security and cheap to implement. This year’s BBC Reith Lecturer, the doctor Atul Gawande, talked about how his Indian grandmother died of malaria well after chloroquine was discovered to be a prophylaxis. The news must travel to where it’s needed. Continue reading
“What is known about what works and what doesn’t? What can we learn from the existing literature and experience of other organisations about what works and what doesn’t – and for whom and in what circumstances – which can help us make better funding decisions?”
These questions are the genesis of a study of outdoor education for 8- to 25-year-olds commissioned by the Blagrave Trust, a family foundation which supports disadvantaged young people in southern England. Giving Evidence is working on the study in partnership with the EPPI Centre (Evidence for Policy and Practice Information and Co-ordinating Centre) at UCL, which routinely does systematic reviews of literature in particular areas to inform decision-makers. Continue reading
In February, Mark Zuckerberg, the Facebook founder, made the largest-ever single gift to a US hospital – $75m (£49m) to a San Francisco institution. We often hear that the charitable sector in the UK should emulate the giving culture in the US. Well, we should be careful what we wish for: it’s far from clear that this would be of any help at all.
Most obviously, the UK and the US are very different countries. Perhaps the comparison arises only because we more or less share a latitude and more or less share a language. Although US giving per capita looks higher, the figures are counting something completely different.
Dreadful practices at the well-known charity Kids Company – around services, governance and evaluation – are exposed in The Spectator and by Genevieve Maitland Hudson here.
Evaluations of a charity’s work are the main tool by which public donors and taxpayers can know if a charity is doing a good job. But these are normally conducted and/or funded by the charity itself, which creates two major problems: first, charities have an obvious incentive to present themselves favourably; and second, most charities lack the research skills to run evaluations well.
This matters. The ‘answer’ from an evaluation depends markedly on how well the research was done. A National Audit Office study found that positive claims about government programmes often come from ropey evaluations whereas robust evaluations only allow for modest claims. And good evaluations usually cost more than bad ones. So it’s hardly surprising that one of the few reviews of the reliability of charities’ evaluations – by the Paul Hamlyn Foundation – found that 70% are not ‘good’. Continue reading
This article was first published in Third Sector.
There has been a huge rise in interest recently in the impact charities have, so it’s remarkable that only now are we seeing rigorous evidence emerging about whether donors actually care. It’s a mixed picture.
A paper published last year reported on an experiment with a US charity, Freedom From Hunger. It divided its donor list into two random groups. Those in one group received a conventional solicitation with an emotional appeal and a personal story of a beneficiary, with a final paragraph suggesting that FFH had helped that beneficiary. Those in the other group received a letter identical in all respects – except that the final paragraph stated (truthfully) that “rigorous scientific methodologies” had shown the positive impact of FFH’s work.
Donations were barely affected. The mention or omission of scientific rigour had no effect at all on whether someone donated, and only a tiny effect on the total amount raised. People who had supported that charity infrequently were not swayed. However, people who had previously given ‘a lot’ (more than $100) were prompted by the material on effectiveness to increase their gifts by an average of $12.98 more than those in the control group. On the downside, people who had previously made frequent gifts of less than $100 became less likely to give and also shrank their average gifts by $0.81; all told, the net effect was about nil. On the upside, though, the finding about larger donors implies that more serious donors will give more if they are presented with decent evidence of effectiveness. Continue reading
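Whether these two opposing effects net to nil depends entirely on the mix of donors on the list: a sizeable lift among a few big givers can be cancelled out by a small drop across many frequent small givers. A minimal sketch of that arithmetic, using the per-donor effects reported above but with purely hypothetical group sizes:

```python
# Sketch of how the FFH experiment's two opposing effects can net to
# roughly zero. The per-donor effects ($12.98 and -$0.81) are from the
# study as reported; the group sizes below are purely hypothetical.
large_prior_donors = 500      # hypothetical count of donors who gave > $100
small_frequent_donors = 8000  # hypothetical count of frequent < $100 donors

lift_per_large_donor = 12.98  # average increase among big givers (reported)
drop_per_small_donor = -0.81  # average decrease among small givers (reported)

net_effect = (large_prior_donors * lift_per_large_donor
              + small_frequent_donors * drop_per_small_donor)
print(f"net change in total donations: ${net_effect:,.2f}")
```

With this mix, a $6,490 gain among big givers is almost exactly offset by a $6,480 loss among small ones; a list weighted more heavily towards large prior donors would show a clear net gain instead.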