Influential charities you’ve never heard of: Lucy Faithfull Foundation

This Advent calendar, of influential charities you’ve never heard of, appeared in Spears Magazine.

In the run-up to Christmas, Spear’s highlighted four charities which it recommends supporting. You’ve probably never heard of them, and that’s deliberate. This week’s is the Lucy Faithfull Foundation, which works to prevent child abuse.

Just like investment opportunities, the best charities aren’t necessarily the ones which make the most noise or which come to find you. So in the tradition of Advent calendars, each week this Advent, philanthropy expert Caroline Fiennes will be showcasing one of the best charities you’ve never heard of.

1. The Lucy Faithfull Foundation

As the former Director General of the BBC can tell you, child abuse is bad news.

To reduce it, many organisations work with children, their parents and schools. They may educate children about ‘stranger danger’, and educate parents and teachers about signs to watch for in children and adults. These charities work with ‘the good guys’.

The Lucy Faithfull Foundation also aims to prevent child sexual abuse but, unusually, it also works with the offenders and potential offenders – the bad guys – as well as with victims and non-offending family members.

Working with child abusers is controversial, to say the least, and the foundation sometimes gets a hostile reaction from local press and public. But it’s often essential to getting the abuse to stop. A video by the foundation quotes ‘Steve’, a convicted sex offender who spells out the problem: ‘When I started offending, I really wanted to ask for help. But it was easier to offend than to ask for help’.

The Lucy Faithfull Foundation provides that help, uniquely. It has decent evidence that this work prevents people like ‘Steve’ from stealing other childhoods.

Its confidential helpline has seen nearly 50 per cent growth in calls since the Jimmy Savile story broke. Some callers have had what the foundation diplomatically calls ‘inappropriate thoughts or behaviours regarding children’. Previously, those people had dismissed such thoughts, but in light of the Jimmy Savile story, they’ve felt prompted to get advice about preventing them.

Victims too have sought the foundation’s support as they’ve started to see that they’re not alone. For example, the NSPCC helpline referred a lady who’d suffered abuse at a hospital where Jimmy Savile worked. Her call to the Lucy Faithfull Foundation was the first time she had felt able to tell her story fully. Another lady got in touch seeking advice on protecting her children from the stepfather who’d abused her as a child. Clearly the growth in calls requires more resources so the foundation needs additional donations just now.

Difficult problems like child abuse rarely yield to easy solutions, or even to the most obvious ones. But the aim is to solve them somehow, and the Lucy Faithfull Foundation makes good progress on this debilitating problem.

Donations
Contact 01527 591 922 or email bordesley@lucyfaithfull.org.uk or give here

The Lucy Faithfull Foundation is a Registered Charity No. 1013025

Week two in the Advent Calendar—>

Posted in Great charities, Uncategorized

Why I’m delighted to join the advisory panel of Charity Navigator

Charity Navigator is the world’s largest charity ‘ratings agency’, providing online ratings of 6,000 US-based charities which are used by over 3 million donors each year. It’s also the sole organisation slagged off in my book about how donors can best find and support great charities. I’ve joined because it’s moving away from its previous dangerous model and towards something really rather clever.

Bad cop

Until 2011, Charity Navigator rated charities solely on their financial data. This often meant rating them based on the percentage of their total costs which goes on ‘administration’. While these data are attractive in being readily available, they’re often misleading.

Contrary to the popular notion that charities’ admin is waste and should be minimised, admin actually includes lots of activities which make charities better at serving beneficiaries. For instance, it includes deeply researching the problem they’re trying to solve, co-ordinating with other organisations and figuring out how to improve. Since charities are addressing problems which are hard – the legendary investor Warren Buffett calls them ‘problems which have already resisted great intellects and often great money’ – that money is well spent.

And hence it turns out that high-performing charities spend more on their admin than do poorer-performing charities. More, not less. Giving Evidence and Professor Dean Karlan, an economist at Yale University, compared the admin spend of charities ranked highly by the rigorous analysts at GiveWell, and found it to be consistently higher than the admin spend of charities ranked lower. Scrimping on admin is a false economy.

Thus I and other intelligent observers oppose anything which nudges donors towards charities with low admin costs – on the basis that it demonstrably hurts beneficiaries.

Good cop

Laudably, Charity Navigator is reinventing its analytical machine. It’s currently using a totally different algorithm to analyse charities providing children’s and family services: the new ratings will launch in December 2012, to be followed by similar analysis of 33 other cause areas, collectively covering 10,000 charities and with them 70% of all US charitable expenditure.

The new algorithm includes (as far as we can see) many factors which indicate that the charity is good at serving beneficiaries. It’s based on three pillars:

  1. Accountability – Does the organisation have ethical practices, good governance and transparency? Is it accountable to its constituents?
  2. Financial health – Is the nonprofit sustainable? Does it have robust financial strength to survive in good times and bad? Is the overhead not at the extreme end of the continuum? Charity Navigator will still consider a charity’s admin costs, but only in the sense (which I discuss in my book, and agree with) that exceptionally high admin costs are often a good signal that there’s a problem somewhere.
  3. Results – This is the really clever part. It includes seeing whether the charity can clearly articulate how its activities are supposed to achieve its organisational goals. Though this sounds rather obvious, these ‘theories of change’ can be pretty complicated and remarkably few charities actually have them. To rate highly, a charity will also need to cite some evidence for its ‘theory of change’ – this increases our confidence that the charity will succeed – and to have a sensible process for measuring its achievements. Excellently, Charity Navigator will give additional marks to charities which actively listen to what beneficiaries think of the charity’s work: because a charity (usually) doesn’t get paid by beneficiaries, there’s no natural mechanism by which the charity hears their feedback. Charities see no price signals, for example, as commercial companies do. Furthermore, charities have no direct incentive to listen to beneficiaries – there’s no financial penalty for not doing so – and as a result, many don’t listen. It’s therefore just fantastic that Charity Navigator is using its unique muscle with donors to incentivise charities to do so.

Isn’t this the same as the analysis by GiveWell?

Yes and no. GiveWell, a small US group, analyses charities in great detail, often publishing 20 or 30 pages on each. Its analysis is among the most detailed I’ve seen, as I’ve said before. And that’s obviously great – if, but only if, they’ve looked at the charity or cause in which you’re interested. But it’s hard to scale that model. Charity Navigator aims to have ratings for 10,000 charities in pretty short order. It’s therefore broad whereas GiveWell is deep.

I’m delighted and proud to be involved with Charity Navigator now that it’s developing a mechanism for analysing a very broad range of charities based on what really matters – their continued ability to serve beneficiaries.

On beneficiaries, shouldn’t they be allowed to speak at charities’ public meetings? Yes —>

Posted in Admin costs, Great charities, Impact & evaluation

Why Fewer Is More in Charitable Giving

This article was first published by Freakonomics and is co-authored with Phil Buchanan

As any 10-year-old can tell you, multiplication is commutative: 2 x $70 is the same as 70 x $2.

But not in charitable giving, it turns out. Making two donations of $70 is a good deal more valuable to charity than making 70 donations of $2.

The reason lies in the fixed transaction costs. Many charities (unavoidably) get charged a fee for each deposit into their bank account. So two large donations create only two dollops of that fee, whereas 70 smaller donations attract 70 dollops. That fee might be $0.25 per transaction. So if the $140 is given in two donations, less than 1 percent of the total gets lost in transit between the donor and the charity; if the $140 is given in 70 donations, 12.5 percent gets lost in transit; and if $140 were given in 140 donations of $1, fully 25 percent would fail to reach the charity. Of course, if you gave 560 donations of only $0.25 each, nothing would reach the charity at all.
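The arithmetic can be sketched in a few lines of Python (the $0.25 per-deposit fee is this article’s illustrative figure, not any particular bank’s):

```python
def fraction_lost(total, n_donations, fee_per_transaction=0.25):
    """Fraction of a gift eaten by a fixed per-deposit fee."""
    fees = n_donations * fee_per_transaction
    return min(fees / total, 1.0)  # the charity can't lose more than the whole gift

# $140 given as ever-smaller donations
for n in (2, 70, 140, 560):
    print(f"{n:>3} donations: {fraction_lost(140, n):.1%} lost to fees")
```

The fee is fixed per transaction, so the fraction lost grows linearly with the number of donations until it swallows the gift entirely.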

The pattern persists even if you’re giving a lot. To get money from philanthropic foundations, charities typically have to apply and then later report on what they then do with the money. This work creates another type of transaction cost. Research by the U.S. Center for Effective Philanthropy shows that these transaction costs are much higher if the foundation makes several small grants than if it makes a few large ones of the same total value:

Grant size | Median time spent applying & reporting | Median amount raised per hour | Hours of work in raising & managing $100,000
$10,000 | 12 hours | $833/hour | 120 (three weeks)
$100,000 | 27 hours | $3,704/hour | 27 (less than four days)
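The figures follow from simple division; a quick sketch using the CEP medians (the ten-grants-of-$10,000 framing for the final column is my reading of the table):

```python
# grant size -> median hours a charity spends applying for & reporting on it
median_hours = {10_000: 12, 100_000: 27}

for size, hours in median_hours.items():
    per_hour = size / hours                    # amount raised per hour of paperwork
    hours_for_100k = (100_000 / size) * hours  # e.g. ten $10,000 grants vs one $100,000 grant
    print(f"${size:,}: ${per_hour:,.0f}/hour, {hours_for_100k:.0f} hours to raise $100,000")
```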

So when you’re choosing charities to support this Christmas, divide your total giving between fewer charities, whatever the scale of your giving.

Phil Buchanan is CEO of the Center for Effective Philanthropy, on one of whose boards Caroline Fiennes serves.

How your giving can learn from crypto-genius Alan Turing—>

Posted in Donor behaviour & giving stats, Effective giving, Uncategorized

The Truth, The Whole Truth

This article was first published by the Alliance for Useful Evidence.

Thomas Edison failed more than 1,000 times before he eventually found a successful design for a lightbulb. When asked about it, he said: 
“I have not failed 1,000 times.  I have successfully discovered 1,000 ways to not make a lightbulb.” 

Useful evidence concerns not only innovations or interventions which work, but also those which don’t. Indeed, perhaps we learn more from those which don’t work than from those that do.

So it’s lamentable and dangerous that evidence about failures in ‘social purpose organisations’ (a good phrase encompassing charities, philanthropy organisations, social enterprises and international development agencies) is almost entirely suppressed. Many social purpose programmes fail. This is visible to the naked eye since the problems they are ostensibly solving manifestly persist despite them. In any case, in a sector which trumpets its penchant for innovating (meaning, doing things for which there’s no evidence yet), simple probability implies that many won’t work.

The important lessons from these failings are invisible because the organisations have no incentive to share them – and indeed normally a disincentive since publicising failings will hamper fundraising. As a result, if the literature is to be believed, practically everybody is in the top quartile. A miracle!

So hats off to the few brave souls who do publicly confess their failings and thereby enable the rest of us to learn.

Oxfam GB is the most recent joiner. It randomly selected 26 of its 362 completed programmes to include in a meta-review* of its work. The ‘review of reviews’ it recently published discusses a bunch of programmes which it believed worked and several which it thinks didn’t.

Engineers Without Borders in Canada began publishing “Failure Reports” in 2008, and has now published four. They run to 30 pages and, as you might expect of engineers’ reports, are rather forensic about the reasons for the failures.

The incentives are easier on philanthropic foundations, which don’t have to raise money from outsiders. So it’s disappointing that so few of them publish about their failures, or even candid accounts of what they’ve learned. Two stand out as exceptions. The giant Hewlett Foundation is so aware of the tendency for failures to be concealed that it has an annual award for the staff member who made its ‘worst grant’. Failure is hard to learn from if it’s undiscussable.

The stand-out leader is surely the Shell Foundation, which as it approached its 10th anniversary in 2010 wondered whether it was doing a good job. It couldn’t find out, as it said in this rather excellent understatement:

‘This report was triggered by a simple question: “Has our performance to date in achieving scale been good, average or poor when compared with our peers?” Given the lack of other published information around performance – including both success and failure – from peer organisations, this proved to be a very difficult question to answer. That is surprising given the billions of dollars managed by foundations.’

Surprising indeed. So the Shell Foundation went it alone, publishing a warts-and-all account of its own performance, including citing the amount of money it felt it had effectively wasted. To my extensive knowledge, the report is unparalleled in the philanthropy world. Crucially, the Shell Foundation had tracked its performance as it went along, so was able to change its strategy and improve – moving through three quite different operating models during that decade. It thus swapped an 80% failure rate (yes, really) for an 80% hit-rate.

In improving its performance, Shell Foundation found that data about the locations and causes of its failures were the most useful evidence of them all.

*not a meta-analysis in the statistical sense.

Why are charitable foundations stupidly discouraging charities from merging? –>

Posted in Impact & evaluation, Uncategorized

Publicising charities’ admin spend would be a disaster

This first appeared in The Guardian, and is co-authored with Kurt Hoffman, Director of the Institute for Philanthropy

Joe Saxton suggested last month that charities must do more to explain their finances but it’s charities’ results that matter.

The public don’t know what charities do with their money. On this, we agree with Joe Saxton’s article last month. But his solution, that we publicise more prominently the proportion of charities’ budgets spent on “charitable activities”, would be disastrous.

We say this not because we are, in his rather good term, “apologists for opaqueness” about charities’ activities. In fact, the converse: we campaign vigorously for more openness but about charities’ results, which are what matters, rather than about badly labelled and misleading segmentations of their cost structures.

What’s in there?

The first problem with the Charity Commission’s segregation of a charity’s costs into those which pertain to its charitable activities and those which don’t is that it’s meaningless. In a well-run charity, all expenditure pertains to charitable activities. Where else would it go? Something unrelated to the charity’s goals and legal mandate? Shoes and handbags? That’s not a matter for a donor to rumble: rather it would be fraud, for the regulator and police to sort out. Expenditure on raising more money to finance more work, or on governance to improve its work, is absolutely appropriate because it can help beneficiaries.

And so we see the flaw in the conventional notion that “admin costs” or fundraising costs are separate from the charity’s “real work” – that they’re waste and should be minimised. Of course, waste should be minimised, but let’s not conflate waste with admin. Admin includes all manner of useful things such as deeply researching the problem they’re trying to solve, co-ordinating with other organisations and figuring out how to improve. Since charities are addressing problems which are difficult to resolve, what the investor Warren Buffett calls “problems which have already resisted great intellects and often great money”, then preventing charities from properly understanding those problems is unlikely to help.

And hence it turns out that high-performing charities spend more on their administration than do poorer-performing charities. More, not less – perfectly counter to the common “wisdom” that a higher percentage spend on supposed charitable activities is desirable. Professor Dean Karlan, an economist at Yale University, compared the admin spend and fundraising spend of charities ranked highly by the analysts at GiveWell, and found it to be higher than the admin spend of charities ranked lower.

The US Center on Nonprofits and Philanthropy concluded the same in 2004 after surveying a quarter of a million nonprofit organisations: “No organisation in our study was an extravagant spender on fundraising and administration. Yet contrary to the popular idea that spending less in these areas is a virtue, our cases suggest that nonprofits that spend too little on infrastructure have more limited effectiveness than those that spend more reasonably.”

Dan Pallotta in his 2008 book Uncharitable: How Restraints on Nonprofits Undermine Their Potential spends an entire 47-page chapter on the same topic and uses 137 references to refute the notion that low admin correlates to high performance.

So Saxton’s proposal that the Charity Commission put a big cross sign next to charities with a high spend outside of their “charitable activities” would probably guide donors towards low-performing charities.

We also note in passing that the accounting rules governing the segregation are pretty vague, so there’s no reason to believe the percentages used by different charities are even consistent and comparable. Saxton’s proposed tick/cross system would therefore be unreliable even on its own terms.

A more subtle problem is that it penalises charities which are cost-effective. Suppose Charity A hires doctors to work in less developed countries and buys equipment. It incurs some admin costs to secure and co-ordinate those resources. Suppose Charity B does similar work but gets doctors to volunteer and gets companies to donate the equipment. Pretty much the sole costs in Charity B are admin, but that’s because the other costs aren’t there – its model is cheaper. But if you only look at the admin costs as a percentage of the whole (small) cost base, Charity B would look awful precisely because it’s cheap. The admin percentage analysis fails because it doesn’t ask: “a percentage of what?”
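A toy example makes the trap concrete (all figures hypothetical):

```python
# Hypothetical cost structures for the two charities described above.
charity_a = {"doctors": 90_000, "equipment": 30_000, "admin": 10_000}
charity_b = {"admin": 10_000}  # doctors volunteer, equipment is donated

for name, costs in [("A", charity_a), ("B", charity_b)]:
    total = sum(costs.values())
    share = costs["admin"] / total
    print(f"Charity {name}: total cost ${total:,}, admin = {share:.0%}")
# Same programme delivered; B is far cheaper overall, yet its admin
# percentage looks terrible because the denominator has shrunk.
```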

These are just a few of the problems which arise from analysing charities based on the percentage they spend on admin or charitable activities. Hence we oppose suggestions which promote that because they demonstrably harm beneficiaries.

So what should a donor look at?

We absolutely agree with Saxton that donors need to know what their donations achieve, and that, as an industry, we should guide donors towards the most effective charities.

Which metrics that should involve is a much harder question. Why? Because social science is (necessarily) hard: charities work in a wide range of fields which makes their results incomparable, their results appear over a range of timescales, and they rightly take risks meaning that their work may sometimes fail. There is no single algorithm of human happiness which we can apply to all charitable activity.

As advocates for effectiveness, we both work to get charities to report more clearly on the impact of their work. And we know that choosing between charities is a fundamentally comparative exercise – often, many charities working on an issue will each achieve some results but some charities will achieve more than others. The issue isn’t finding whether there are results, but whether those results exceed other charities’ results. Hence we very much support work by Bond to get international development agencies to share metrics and by the Inspiring Impact group to get other charities to do likewise. Such approaches have been hugely valuable in medicine, for example.

We also support US-based Charity Navigator, the world’s largest charity-ranking agency, which is moving away from assessing charities just on their admin percentages towards measures of transparency and effectiveness, including beneficiaries’ views of charities’ performance.

Einstein would have agreed with us on the danger of using admin costs simply because they’re relatively easily available. Not only did he observe that “not everything that counts can be counted; and not everything that can be counted counts”, but also that “everything should be as simple as possible – but no simpler”.

Evaluating who’s making the world a better place simply isn’t simple.

Posted in Admin costs, Fundraising, Uncategorized

What philanthropy can learn from Alan Turing

This article was first published in Spears Wealth Management

Philanthropists can learn a lot from the quiet mathematician who helped win World War II and whose centenary is celebrated this year. Alan Turing and the geniuses at Bletchley Park weren’t doing the type of work you’d associate with winning a war at all. They did maths, built groundbreaking computers and analysed data and patterns — yet they took two years off the conflict.

Too often philanthropists forget how much important work is done like this, in a backroom rather than on the frontline. They shouldn’t neglect the maths — and indeed the science — behind philanthropy, which finds out what actually works.

Systematic reviews, done by people analysing statistics away from the frontline, save lives: possibly even your own. The Cochrane Collaboration does such reviews and has stopped airlines selling devices which ostensibly prevented malaria but were ineffective. It also prevented people in South India after the 2004 tsunami from receiving ‘brief debriefing’, a single-session counselling service designed to prevent trauma which is sometimes used (successfully) after bank raids, but was shown to be at best pointless after a natural disaster.

The Cochrane Collaboration has done over 5,000 reviews through over 20,000 researchers in a hundred countries. At only £18 million a year, its cost is virtually trivial compared to global health spending, yet (with luck) it influences every doctor, nurse and procedure in many countries. 

Many smart donors such as Bill Gates therefore support this type of analytical work which, though less visible and immediate, is ultimately more influential than more classic frontline work. The upside of this work is its reach, but its downsides include time and complexity — results can take ages to appear, and it’s tough to say which donor or charity ‘caused’ them.

There remain many, many opportunities for analytical work with this kind of reach, and it’s almost always under-resourced.

Let’s take two from education. Phonics is used to teach children to read, despite having pretty much zero reliable evidence of whether it actually works. And donors and others enthusiastically promote One Laptop Per Child, yet we still don’t really know whether (or when or where) that works either.

Turing would be on the case.

Philanthro-scientists are fighting each other. Here’s why that’s good news —>

Posted in Effective giving, Uncategorized

Why I’m delighted to join a Board of the US Center* for Effective Philanthropy

The primary constraint on the effectiveness of philanthropy is that, “The problems of philanthropy are not experienced as problems by the philanthropists”, as Katherine Fulton of the Monitor Institute rather brilliantly pointed out. Those ‘problems of philanthropy’ include what donors support – they sometimes choose programmes which are actively harmful and other times programmes which do less good than others – and also how they support them. The latter category gets much less attention than does the former, which is remarkable. Many charities feel pretty badly treated by grant-making foundations, which the data show waste huge amounts of their time and money. But because those are the hands that feed them, the charities never let on, so foundations never hear the feedback and hence don’t learn. This situation – and some of the foundations involved – is many centuries old.

So the idea of collecting anonymous feedback from charities about grant-making foundations and sharing that with the foundations in order that they can learn is a very good one. This is what the US Center* for Effective Philanthropy has been doing for some years. Its surveys are often the sole way that the learning can be heard and used. CEP has done its ‘Grantee Perception Reports’ for dozens of US foundations, various community foundations and a handful of non-US foundations, including, in the UK, the Pears Foundation, Paul Hamlyn Foundation and Friends Provident Foundation.

Philanthropy lacks the price signals through which players in many industries can tell what their customers/’beneficiaries’ think of them. The Center for Effective Philanthropy has constructed an important feedback loop within philanthropy which should help to improve it – and in so doing, improve what we all ultimately care about: beneficiaries’ lives. I’m delighted to have recently joined its Advisory Board, to support its growth and help more foundations become responsive and effective.

*It’s just called the Center for Effective Philanthropy. The ‘US’ is there so you don’t think I can’t spell…

This idea would surely make charities more accountable to us all —>

Posted in Effective giving, Great charities, Impact & evaluation, Uncategorized

How philanthropic money makes major change: Moving the tanker

This article was written with Jeff Mosenkis and first published by Alliance Magazine.

‘We are a tiny, tiny little organization,’ says Bill Gates about the largest foundation that the world has ever seen. He’s right: the Gates Foundation’s annual grantmaking is only a tiny fraction of governments’ budgets. But smart philanthropic money can act like a tug, guiding tankers much bigger than itself, such as companies or governments. It does that by identifying what works.

For example, in India in 2005, a third of children couldn’t read even a short paragraph, according to one study. As school enrolment grew, even more students were falling behind. Innovations for Poverty Action (IPA) – which uses randomized control trials to evaluate programmes which counter extreme poverty – worked with an Indian NGO to investigate how helpful it is to have assistants drawn from the community teach basic skills to the lowest-performing students. The trial found that the assistants significantly increased basic competency for the lowest achievers, and cost only $2.25 per student.

The government of Ghana had a similar problem: it spends £450 million a year on basic education but only 20 per cent of pupils reach national proficiency levels in English. Based on the success of the community assistants programme in India, IPA partnered with the Ghanaian government to design a programme for Ghana. With philanthropic money from the Children’s Investment Fund Foundation, it was tested against several alternative variations in 400 schools. To everyone’s surprise, the original programme produced the best results.

IPA frequently finds that programmes assumed to work don’t work, or don’t work as expected, even if they have been running for a long time. The Ghanaian government’s positive experience has strengthened its commitment to testing policies and rigorously evaluating their impact before implementing them. Other countries where policymakers seem to be open to the findings of rigorous research include Kenya, Zambia, Mexico and even Liberia as it recovers from conflict.

Politics rewards the bold, so politicians often shy away from running experiments for fear of being seen as lacking conviction. Yet, as Richard Thaler, American co-author of the best-selling book Nudge, is fond of saying: ‘Governments can’t make evidence-based policy decisions unless they have some evidence.’ Philanthropic money is uniquely well placed to provide it.

OK, I’m in! But what does decent evidence actually look like? Like this –>

Posted in Effective giving, Impact & evaluation, Uncategorized

Development Controversies Are A Sign of Sophistication

This article, written with Professor Dean Karlan of Yale University, appeared in Stanford Social Innovation Review.

Public debate about two prominent poverty-alleviation programs shows that over the past 15 years international development has become much more scientific.

The international development world is currently hosting rows about whether two poverty-alleviation programs actually work.

The Millennium Villages Project, founded by economist Jeffrey Sachs and supported by Angelina Jolie and others, aims to help nearly 500,000 people out of extreme poverty. A paper published in June in The Lancet, a leading health journal, was scrutinized and roundly criticized for the logic and analysis it used to argue that observed changes were due to the Millennium Villages rather than to changes already taking place in society.

The second row concerns treating children in less developed countries for intestinal worms, which are endemic in many countries. Because the worms share a child’s food, they are thought to contribute to malnutrition, reduced physical and cognitive development, and lethargy. Deworming children has been found by randomized control trials to reduce absenteeism from school, and hence is recommended by the World Health Organization and the Copenhagen Consensus Center, a think tank that publicizes the best uses of development money. But a systematic review and meta-analysis published last month by the respected Cochrane Collaboration, focusing on non-educational outcomes, found that “deworming children seems like a good idea, but the evidence for it just doesn’t stack up.”

The striking shift here is not in the details or merits of the specific programs, but in that these rows happen at all. They are precisely how science is supposed to work. For instance, Maxwell published his theory of electromagnetism, which turned out to be inconsistent with the maths of radiation from a black body, and from that tension arose the much broader quantum theory. Andrew Wiles published his “proof” of Fermat’s last theorem in 1993, somebody spotted an error, and Wiles revised and strengthened the proof as a result. In Einstein’s maths for his general theory of relativity, the Russian mathematician Alexander Friedmann found a term being divided by zero (“a complicated form of zero,” a physicist once said), which suggested—contrary to the prevailing view—that the universe is expanding, as subsequently confirmed by observation and from which cosmologists have estimated the universe’s age.

International development has become much more scientific in the last 15 years: evaluating ideas through randomized control trials; publishing enough detail about a program’s methods and results that it can be replicated elsewhere; subjecting analysis to peer review; and publishing in respected journals. The organizations whose data are being contested should be proud that their data are capable of such contest. They contrast starkly with much activity in charities, philanthropy, and even social policy where performance data are often too scarce, too private, too vague, and/or otherwise too flaky to be meaningfully debated.

Science—knowledge—progresses through vigorous public debate about rigorous data. This process has shown that many things that everyone “just knew” to be true are actually false—from the “fact” that the Sun goes round the Earth, to the “fact” that severe brain injuries should be treated with steroids, a practice common until 2005 when randomized control trials showed it to be fatal. Similarly, many things which we “just know” to be true about international development are being shown by this careful, empirical, scientific approach to be false: providing more textbooks to Indian schools rarely improves learning; microcredit does not singlehandedly lift millions out of poverty; anti-malarial bednets should not be sold but rather given away for free; cooking stoves that use less wood as fuel do not always reduce respiratory disease, despite producing less smoke to inhale.

The current rows are therefore a sign that international development is moving beyond “just knowing because I saw it with my own eyes” into properly understanding what works. We need more and better data to enable more quality debates on many subjects in development—debates that get settled, not by personalities or popularity or politics, but by the evidence.

How can innovative organisations produce evidence of their impact? Like this–>


Has the worm turned on deworming?

The world-renowned Cochrane Collaboration has recently published a systematic review of the evidence about mass programmes to treat children in less developed countries for intestinal worms. It found that “deworming children seems like a good idea, but the evidence for it just doesn’t stack up”. Giving Evidence’s blog and book have vaunted deworming, using it to show how some programmes are much cheaper than equally sensible-sounding alternatives which have also been tested by rigorous randomised control trials (RCTs) conducted by academics and peer-reviewed by others.

“When the facts change, I change my mind. What do you do sir?” – John Maynard Keynes

Deworming is based on the logic that worms share a child’s food, and so reduce the child’s nutrition and hence their physical and cognitive development, and, by sapping their energy, also their ability to attend school. Some RCTs have indeed found this, but by looking at all RCTs of mass deworming programmes, the Cochrane study found:

a) some studies which found that mass deworming had no effect on a child’s weight, cognitive development or school attendance, and

b) that some large studies haven’t been published. They include one which looked at a million children in India. Obviously nobody can conclude much from an inaccessible study, but withholding a study is normally a sign that it’s found either no effect or an unwelcome effect. This ‘publication bias’ is precisely why it’s valuable for somebody independent and knowledgeable to review all the data systematically, which is why Cochrane was set up.

Cochrane’s study – which concerns only mass treatments for soil-transmitted helminths (worms) and is its third on that subject – has been greeted with criticism from the authors of some of the studies which found it effective, and Cochrane has defended itself. Here’s the BMJ on it.

The striking feature of this row is not the detail or merits of deworming, but that these rows can happen at all. They are precisely how science is supposed to work. Many charities don’t make their impact data public, or publish only small samples, or don’t use control groups in their studies, or have a million other flaws in their data such that these debates couldn’t even happen.

Systematic reviews save lives: for example, premature babies were sometimes given steroids and sometimes they weren’t, because the evidence on whether steroids prevented ‘complications’ (which Ben Goldacre says is ‘a medical term meaning death’) seemed ambiguous. A systematic review of the evidence showed that steroids do save lives, and hence they are now normally given. The information to save lives was all there, but only a systematic review revealed the real insight. Cochrane’s systematic reviews only consider randomised control trials – because they are a uniquely robust way to identify the impact of an intervention – and amongst them, only those which have sensibly large sample sizes and which are publicly available. That is, it is only because the development people are so sophisticated in rigorously collecting data and publishing it that this analysis can happen.

The organisations whose data are being contested should be proud that their data are capable of being contested in a way that will ensure that the true story can emerge. With Professor Dean Karlan of Yale University, I wrote an op-ed on this topic.

What else can philanthropy learn from science and medicine? Masses–>
