Publishing the whole truth

This article first appeared in Third Sector.

The C&A Foundation – linked to the department store that closed in the UK but is flourishing elsewhere – joins a small clutch of non-profits this month that publicise the lows as well as the highs in their work.

It ran a programme in 18 garment factories in Asia designed to improve both working conditions and productivity. Some aspects worked in some factories, some aspects worked in others. Rather than taking the conventional option of reporting only the glories, in a kind of self-promoting publication bias, the C&A Foundation is publishing it all: data from each factory, correlation coefficients, statistical significance tests and operational challenges all feature in the report, which is called Frankly Speaking. (Disclosure: I am a co-author.)

Likewise, Engineers Without Borders, a Canadian NGO, has been publishing an annual failure report since 2008. In each one, volunteer engineers recount tales of an error they made in the field. Yet, despite the praise for EWB and the obvious value of hearing the whole truth, EWB remains an anomaly. To my knowledge, it’s the only operating charity that publishes so candidly. When I asked its chief executive why it discloses when others don’t, he said: “Well, if your bridge falls down, it’s pretty obvious.” Indeed. By contrast, plenty of social programmes appear to work but are shown by sophisticated analysis not to work. The crime-prevention programme Scared Straight and some micro-finance programmes are examples of this.

The C&A Foundation encountered something similar – but the opposite way round. When Giving Evidence got involved, it looked as though working conditions in the factories hadn’t improved much, but the inclusion of later data in the analysis showed that they had.

Medical doctor John Ioannidis, now of Stanford University in California, uses complex statistical tests to unearth often shocking stories within clinical trial data and says his work echoes a theme from the ancient literature of his native Greece that “you need to pursue the truth, no matter what the truth might be”.

Dogwood Initiative is an environmental campaign on Canada’s west coast. It was inspired by EWB to publish its own failure report and found an important issue in its internal culture. “Dogwood Initiative could change our relationship with failure,” says the report. “It involves piloting an idea, measuring results, figuring out what works and what failed, adapting and rebooting.” Giving colleagues that right to admit failure can take time. Dogwood Initiative’s first annual failure report took so long to agree internally that it ended up covering the next year too.

The World Bank also holds internal “failure fairs” and finds it needs rules to ensure the discussions are safe – each speaker makes a case but can’t blame anyone except themselves and can’t name anyone else.

The funding environment and competition in the charity sector undoubtedly discourage confessions of weakness, but if we don’t make them, we won’t learn from our own work and we’ll bias the literature from which our peers – and funders – can also learn. EWB’s Admitting Failure website lets anybody upload stories.

Go on, I dare you.

Go see your MP

This article was first published in Third Sector.

Why is so little policy based on sound evidence? Many voluntary organisations, academics and others spend time producing research in order to influence the government. There are some successes – but much policy appears to disregard the evidence.

Mark Henderson is head of communications at the UK’s largest charitable funder, the Wellcome Trust, and author of The Geek Manifesto, which calls for a more scientific approach to policy and politics. He says there’s little political price to be paid when MPs ignore the evidence. He also says that, in their constituencies, most MPs know the business people – who, after all, will ensure that people of influence have the benefit of their views – but rarely know the scientists. They probably don’t know the charity sector people, either.

I am an advocate of evidence and I’m often in meetings about the importance of getting evidence into policy with organisations such as the Alliance for Useful Evidence, the Institute for Government, the Hewlett Foundation or the National Institute for Health and Care Excellence. If Henderson is right, we’re all stuck in an echo chamber and missing a trick.

So I went to see my MP. And with haste: she is Justine Greening, a Cabinet minister – for international development, as it happens – and the recent reshuffle was looming.

“Hello, I’d like to talk about how keen I am that government policy be based on robust evidence,” I said – an unusual opening in an MP’s surgery, to say the least; but it led to a spirited conversation. Before my visit, I had, by way of a focus group, asked on Facebook what I should raise with a secretary of state. The doctors all rattled on about distinguishing between good and bad evidence, and everybody voiced weariness of politicians cherry-picking data that suited them. Having removed the names, I printed out the responses and presented them.

Most revealing were two interconnected things. First, when I told Greening that the Department for International Development was generally very sophisticated in its use of evidence, she seemed amazed. “Most of my constituents think foreign aid is a waste of money,” she said. Actually, plenty of her constituents don’t think that – I socialise with them and they say so – but, clearly, those views had never reached her.

Second, not a single person I know had ever been to see their MP. Many of us battle attitudes voiced in what we might call “the uncharitable press” and bemoan MPs who pander to it. They hear and heed calls to continue the Work Programme, to cut the Third Sector Research Centre and so on. If we’ve never told them our contrasting views, we’ve nobody to blame but ourselves.

So go and see your MP – it’s your democratic privilege and weirdly empowering.

Does it make any difference? I don’t know. Perhaps we should gather evidence on this. Ben Goldacre, the broadcaster and science campaigner, and the innovation charity Nesta created an online tool, RandomiseMe, which enables anybody to run a randomised controlled trial. We could all participate in such a trial: half of us go to see our MPs and half don’t, and we watch for subsequent differences in their voting behaviour. If we want MPs to act on evidence, we should go and tell them that that’s what we want.
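That design is simple enough to sketch. Below is a minimal, hypothetical simulation in Python of such a trial: randomly assign half of a group to the “visit your MP” arm, then compare the two arms afterwards. Everything here – the participant names, the outcome scores – is invented for illustration, and this is not RandomiseMe’s actual mechanism:

```python
import random

def assign_arms(participants, seed=42):
    """Randomly split participants into a 'visit' arm and a 'control' arm."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def average(xs):
    return sum(xs) / len(xs)

# Hypothetical participants, each linked to an MP.
participants = [f"constituent_{i}" for i in range(100)]
visit_arm, control_arm = assign_arms(participants)

# After the follow-up period, suppose each MP gets a score for how often
# they cited evidence in debates or votes. Placeholder values here:
scores = {p: random.Random(p).random() for p in participants}

# The estimated effect is the difference in average outcome between arms.
effect = average([scores[p] for p in visit_arm]) - average([scores[p] for p in control_arm])
print(f"Estimated effect of visiting: {effect:+.3f}")
```

In a real trial the outcome scores would come from observed voting records, and the analysis would use a significance test rather than a raw difference in means.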

What IS good evidence? Not this –>

Does the charity sector have a publication bias problem?

This article was first published in Third Sector.

It’s hard to make evidence-based decisions if much of the evidence is missing or ropey. So it’s disastrous that the results of many medical clinical trials are missing, preventing doctors from using them.

It’s thought that fully half of all clinical trials remain unpublished. It’s not difficult to guess which half. Apparently, trials published by pharmaceutical companies are four times more likely to show that the drugs have a positive effect than identical trials conducted independently. So why is that?

Well, trials themselves don’t lie. Magically, however, the negative ones don’t get published. This publication bias costs lives, yet it is perfectly legal. Dr Ben Goldacre, author, broadcaster, campaigner and debunker of bad science, says that the near-fatal effects of the drug trial at Northwick Park hospital a few years ago – when all the men in the trial ended up in A&E with multiple organ failure – could have been predicted from results that were known but not published.

So we should all applaud the AllTrials campaign, initiated by Goldacre and seed-funded by Simon Singh, to ensure that the results of all trials are published. Goldacre and Singh take scientific integrity seriously: both were sued for libel for highlighting bogus claims, refused to recant, endured horrible, long legal battles – and won.

Does the charity sector have the AllTrials problem? Do we withhold some of our monitoring and evaluation research and publish only unrepresentatively positive and misleading material? I suspect so. I did it myself when I was a charity chief executive: graphs that go up made for good meetings, so we published them; graphs that go down made for terrible meetings, so we didn’t. I don’t believe we were alone.

Monitoring and evaluation is research. It’s not always framed as that – it’s often seen as compliance with the requirements of funders – but it’s there to investigate possible causal links. Does intervention X lead to outcome Y, and under what circumstances? Do breakfast clubs improve children’s learning? Does teaching prisoners to read help them get jobs when they are released? These are research questions.

Ideally, that research would not be private but would be published to constitute evidence-based practice, just as clinical trials guide doctors. Any other charity could use it to decide whether the intervention might work in its context and whether it should replicate it.

But we can’t make evidence-based decisions if the literature is incomplete or biased; and, as ever, it’s our beneficiaries who miss out.

Research withholding and publication bias are commonly studied in medicine to establish whether and where there’s a problem, so that work can be targeted to fix it. But to my knowledge, neither has ever been studied in our sector. Not once. One study (itself unpublished) in Canada that was looking at something else found – shockingly – that the proportion of charities’ research that was published was 2 per cent.

Investigating research withholding and publication bias is neither difficult nor expensive. It’s time we knew whether we too need a fix.

Charity data is often missing or ropey, as here –>

Moneyball Philanthropy? Not Always

This article, by Ehren Reed of the Skoll Foundation and Caroline Fiennes, was first published in Forbes.

Some charities are better than others, so we should find the good ones. On that we can all agree. We should support the charities that will be most effective in addressing the world’s pressing problems. And understanding that effectiveness requires measurement. But a reliance on quantitative analysis, which is helpful in understanding some charities, could prevent us from finding ones that are doing important, system-changing work.

The charities which are easiest to measure are those whose work is proximate to the beneficiary. They distribute mosquito nets to families in sub-Saharan Africa or deliver wheelchairs to disabled children. Their theory of change – the link between their work and the intended benefit – is simple. The intervention is well understood, the outcome is predictable, and most of the variables are clear. From a funder’s perspective, the risk is low.

These interventions are like machines, and advertise themselves as such. Three dollars in = one bed net out. Five pounds in = one case of diarrhoea avoided. Cause and effect are clear. They operate within a system in which the relevant factors are known.

Working within complex systems

But a lot of important work done by charities is quite different. It involves trying to change legislation to outlaw discrimination; it’s research to uncover the human genome; it’s changing societal attitudes on same sex couples. These efforts aim to change the system. Here, success depends on factors which are unseen, often unknowable, and mainly beyond the charity’s control. The causal chains are long and uncertain.

Working on hard problems within highly complex systems, a charity’s results can take ages to materialize. Even then, the results may not be predictable, attributable, or even measurable.

Yet this is probably the most consequential work that a donor can enable. In 2006, the Institute for Philanthropy surveyed a thousand experts on UK philanthropy’s greatest achievements. The resulting list is dominated by system-changing work: campaigns which ended the slave trade, created the welfare state, and ensured universal education. This type of work generates effects that are much broader and more profound than delivering services to a limited group of recipients.

There’s frequently a trade-off. The more certainty a donor wants about results, the smaller they will be. If she’ll accept more uncertainty, by operating further away from beneficiaries and engaging more with the system around them, the ultimate effect may be greater. In other words, if donors limit their risk, they may simultaneously limit their return.

Furthermore, philanthropy is uniquely able to fund these kinds of system-changing efforts, since governments and companies are inherently more risk averse and less likely to support them.

Moneyball philanthropy?

It has become trendy to liken effective philanthropy to Moneyball, the strategy pioneered by the Oakland Athletics baseball team, which involved choosing players based on statistical analysis instead of experts’ intuition.

But the analogy doesn’t hold. In baseball, the playing field is bounded, the rules are clear, the results are immediately evident, and the variables are visible and knowable. The Moneyball approach worked because the system was reasonably simple. Certainly the same approach can help to analyse charities whose work is based on simple models of cause and effect. The charities recommended by analysts like GiveWell, for example, can all show what they achieve for £10, though none of them has much effect beyond their immediate beneficiaries.

But the Moneyball approach is hopeless for assessing charities trying to change the system. Take the work of Global Witness in exposing the economic networks behind conflict, corruption and environmental destruction around the world. The ultimate value of their work, in terms of lives saved and natural resources protected, is literally incalculable.

Unintended danger

The Moneyball approach – like much of the current debate in philanthropic sectors on how to define and measure impact – is dangerous because it leads donors to seek out only the most easily provable results. It pushes them towards interventions within the current system and beguiles them into thinking that the best charities must be able to produce simple cost-benefit figures.

As we’ve seen, this approach would have precluded some of philanthropy’s greatest successes. Many of us owe our liberty, our freedom of speech, and our education to such philanthropy. We’d be crazy to sacrifice these kinds of achievements in pursuit of an immediate “return on investment.”

Giving slow, not fast

Two factors complicate effective philanthropy. The first is that it involves making decisions under considerable uncertainty. Because donors have finite resources, they must decide between competing activities. Yet as we’ve seen, many determining factors are in principle unknowable when working within complex systems: basic medical research may be stellar or may find nothing; a campaign to ban handguns will rely on political will which may or may not materialise.

The second complicating factor is that human brains love shortcuts, and are much better at making decisions which don’t require much thought. Indeed, as Daniel Kahneman explains in Thinking, Fast and Slow, we often fail to notice that many decisions require proper thought and instead make them on the fly, leading to predictable errors. Charities whose work is based on simple theories of change don’t require much thought. Those whose work is more complex require that we take proper time to make good decisions.

In the end, this shouldn’t be surprising. Philanthropy is about making the world a better place. And making the world a better place is going to be a lot more difficult than winning a baseball game. Let’s not let an idea like Moneyball distract us from the challenge.

How donors change the system to increase evidence in government policies –>

Don’t Die of Ignorance

This was first published by Third Sector, in Caroline Fiennes’ regular column.

It sounds pretty good – a programme that aims to break the cycle of poverty and illiteracy by improving the educational opportunities of low-income families through early childhood education, adult literacy support and parenting education. It has served more than 30,000 families and has run in more than 800 locations nationally. Would you put money into it? Might your organisation take it on? It sounds highly plausible and clearly has attracted considerable funding.

But research has shown that this programme had no effect at all. The gains made by children who were served by it were indistinguishable from those of children who weren’t. The money might as well not have been spent.

Let’s try another example – a policy that children who fall behind at school retake the year. Again, it sounds pretty sensible and is common practice in some countries. So should we do it?

Well, compared with this policy, the parenting early intervention programme mentioned above looks like a great idea: whereas it achieved nothing, the schooling policy achieved less than nothing by making things worse. Children who retook a year typically ended up about four months further behind than if they hadn’t.

These examples, and many others like them, show that our intuition about programmes or organisations is no guide. It might lead us to waste our time and efforts or even to make things worse. We do better when our decisions – as donors, managers or trustees – are based on evidence.

Now suppose that for some medical condition there are two competing drugs. Drug A solves your problem and has the side effect of reducing your risk of heart attack by 29 per cent; drug B also solves your problem but increases your risk of heart attack by about half. What do you say? It’s not a hard choice.

In fact, drug A and drug B are the same drug. Again, this example is real: it’s for hormone-replacement therapy. One type of test (observational non-randomised cohort studies using markers for heart attacks) showed that it reduced heart attacks by 29 per cent, whereas another (randomised studies that monitor actual heart attacks) showed that it increased fatal heart attacks by half. What do you say now?

Only one answer can be accurate, so you want to know which test to believe. The research method matters – indeed, your life might depend on it.

With social programmes, too, the answer depends on the research method. When a reading programme in India was evaluated using various research methods, one implied that it worked brilliantly, a second that it worked only a bit and a third that it was worse than nothing. They can’t all be right. So we need to ensure we make decisions not on any old evidence, but on sound evidence.

We should be on our guard against bad research that leads us to waste time and money – and possibly lives. The National Audit Office’s study of evaluations of government programmes found that where the research technique was weakest, the claims made about the programme’s effectiveness were strongest.

Smart decision-makers rely on evidence that is actually reliable, and know what that looks like. Don’t die of ignorance.

Want to see a crashing example of bad research by a charity? Here –>

Making charities’ research more findable and useful

Quite possibly, some NGO has discovered a great way to, say, prevent re-offending or improve literacy, but nobody else knows about it, so their genius innovation doesn’t spread. Surely this is unacceptable.

The problem seems to be that, although NGOs conduct masses of research (including their ‘monitoring and evaluation’, which is research even though it’s often not framed as such), a lot of it isn’t findable and/or isn’t clear. A lot of NGO-generated research never gets published: it’s obviously hard to know how much, but one study we came across recently (and are trying to get published!) implied that a measly 2% gets published. Other research is published, but only in places that nobody would know to look, such as on a small organisation’s own website. And some that is published isn’t at all clear about what the intervention actually was, or what research was done, or what the results were. For example, this systematic review of programmes to teach cookery skills found that ‘very few reports described the actual content and format of classes’.

Giving Evidence is therefore delighted to announce a project to explore getting NGOs to publish more of their research in a way that’s findable, with enough detail about the research that somebody could see whether the intervention was effective, and enough clarity about the intervention that they could replicate it if they wanted. The concept is for NGOs to publish, in a database or through a meta-data standard, the following for each piece of research they undertake: a description of the intervention; their research question; the research method and how it was used (e.g., sample size and how the sample was chosen); and the results.
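As a sketch of what one record in such a database might contain, here is a hypothetical shape in Python. The field names are our illustration of the concept, not a settled standard – a real schema would emerge from the consultation:

```python
from dataclasses import dataclass, asdict

@dataclass
class ResearchRecord:
    """One piece of NGO research, described fully enough to assess and replicate.

    Field names are illustrative only; a real meta-data standard would be
    agreed through consultation with the sector.
    """
    organisation: str
    intervention: str        # what was actually done, in enough detail to replicate
    research_question: str   # the causal question being investigated
    method: str              # e.g. "randomised controlled trial", "pre/post survey"
    sample_size: int
    sampling: str            # how the sample was chosen
    results: str             # what was found, including null or negative results
    published_at: str = ""   # URL, so the research is findable

# A wholly invented example record:
record = ResearchRecord(
    organisation="Example Reading Charity",
    intervention="Weekly one-to-one reading sessions for prisoners",
    research_question="Does the programme improve literacy scores?",
    method="Pre/post comparison with a waiting-list control group",
    sample_size=120,
    sampling="All participants enrolling in one year, plus the waiting list",
    results="Illustrative only: literacy scores rose more than in controls",
)
print(asdict(record)["research_question"])
```

Publishing records in a shape like this would let anybody search by intervention type or method, and judge the strength of the evidence before deciding whether to replicate.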

We’re taking as a case study the UK criminal justice sector. The project is essentially a big consultation on the concept and we invite your input. More detail on the project is here, together with more detail on the concept we’re exploring, and its likely benefits. Do get in touch if you are interested.

This project is a side-effect of our work on learning lessons from how medicine uses evidence, which is here.

It’s part of our general theme: it’s hard to make evidence-based decisions if lots of the evidence is missing, or unfindable, or unclear, or garbage – discussed more here.

This project will enable work to improve the quality of research by NGOs. Why does research quality matter? –>

Giving Evidence’s mission and work

Many thanks to the Social Progress Imperative!

More videos on our insights and approach are here.

Assessing Funders’ Performance: Five Easy Tools

This article was first published in the Stanford Social Innovation Review

Measuring impact is so tough that many funders give up, but there are some insightful and actionable tools for funders that aren’t daunting.

When I was a charity CEO, we approached a family foundation. There was no formal application process. Instead, we had to write various emails, and I had to attend various meetings (not unusually, the foundation wanted to see only the CEO, the highest paid staff member). A physicist by background, I kept a tally of the time all this took and the implied cost. Eventually we got a grant, of £5,000. This required that we (I) attend more meetings—for “grantee networking,” meeting the family, and so on. We noted the cost of those too. Towards the grant’s end, the foundation asked us to compile a short report on what we’d done with the grant. By now, the tally stood at £4,500. I felt like saying: “What grant? Honestly, you spent it all yourselves.”

One hears worse. A physicist at Columbia University has calculated that some grants leave him worse off. And I’ve heard of a heritage funder requiring that applications have input from consultants; this made the cost of applying £100,000, though the eventual grant was just £50,000.

Clearly it’s important for any organism to learn, adapt, and improve. Much of the discussion about how funders should do that, and the tools available to them, revolves around “measuring impact.” But measuring impact is complicated—perhaps even impossible. I wonder whether, in our quest for the perfect measure of performance, we overlook some simpler but nonetheless useful measures, such as whether a funder is essentially spending a grant on itself. As Voltaire warned, the perfect is the enemy of the good.

Let’s look at why getting a perfect measure is so hard, and then at some simpler “good” tools.

Funders: Don’t measure your impact …

A funder’s impact is the change in the world that happened that would not have happened otherwise. Making a perfect estimate of impact is difficult for two reasons.

First, most funders support work that is too diverse to aggregate its effect. Hence, articulating or identifying “the change that has happened” can be impossible.

Second, there normally isn’t an “otherwise” that we can compare with reality. Constructing an “otherwise,” or counterfactual, would be very difficult; it would require comparing the achievements of grantees with those of non-grantees. Ensuring that the groups were equivalent would require that the funder choose between eligible organizations at random, which few would be willing to do. And to establish that the funder, rather than other factors (such as changes in legislation or technology), caused the change in the world, both groups would need to contain very many organizations. And again, the heterogeneity of the work may prevent comparisons of the two groups’ results anyway.

Many funders give up. A recent study found that, though 91 percent of funders think that measuring their impact can help them improve, one in five measures nothing pertaining to their impact at all.

… rather, understand your performance.

Compared to this complexity, seeing how a funder can save time and money for applicants and grantees looks like child’s play. In fact, it may be an even better thing to examine, because it shows pretty clearly what the funder might change. BBC Children in Need (a large UK grantmaker) realized that getting four applications for every grant was too many (it imposed undue cost), so it clarified its guidelines to deter applicants unlikely to succeed.

Giving Evidence has found several such tools in our work with donors (collated in a white paper released this week); each is relatively easy and gives a valuable insight into a funder’s performance. We make no claim that these tools provide the perfect answer, but we’ve seen that they are all good and helpful for ambitious donors wanting to improve:

  • Monitoring the “success rate”—the proportion of grants that do well, that do all right, and that fail. Though clearly the definition of success varies between grants, presumably funders craft each one with some purpose; this tool simply asks how many grants succeed on their own terms. Shell Foundation found that only about 20 percent of its grants were succeeding. This pretty clearly indicated that it needed to change its strategy, which it did, eventually doubling and then tripling that success rate. It’s unashamedly a basic measure, but then it’s hard to argue that a funder is doing well if barely any of its grants succeed.
  • Tracking whether “the patient is getting better”—whether that means biodiversity is increasing around the lake or malaria is decreasing in prevalence. This of course indicates nothing about cause. But sometimes funders find that their target problem has gone away, or moved, or morphed, and they should morph with it.
  • Measuring the costs that funder application and reporting processes create for nonprofits. The prize here is huge: it’s estimated that avoidable costs from application and reporting processes in the UK alone are about £400 million a year.
  • Hearing what your grantees think. Grantees can’t risk offending organizations that they may need in future, so funders need to ask. Listening to beneficiaries and constituents benefits medicine, public services, and philanthropy.
  • Clarifying what you’re learning, and telling others. Engineers Without Borders finds that its annual Failure Report—a series of confessions from engineers in the field—is invaluable for internal learning and accountability. Funders pride themselves on taking risks, and many programs just don’t work out; there shouldn’t be shame in learning.
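The first of these tools needs nothing more than a consistent judgement per grant. A minimal sketch in Python of the success-rate calculation – the grant records below are invented purely to illustrate it:

```python
from collections import Counter

# Each grant, judged on its own terms: "succeeded", "mixed", or "failed".
# These records are hypothetical examples, not real grant data.
grants = [
    {"grantee": "A", "outcome": "succeeded"},
    {"grantee": "B", "outcome": "failed"},
    {"grantee": "C", "outcome": "mixed"},
    {"grantee": "D", "outcome": "succeeded"},
    {"grantee": "E", "outcome": "failed"},
]

def success_rates(grants):
    """Return the proportion of grants in each outcome category."""
    counts = Counter(g["outcome"] for g in grants)
    total = len(grants)
    return {outcome: n / total for outcome, n in counts.items()}

rates = success_rates(grants)
print(rates)  # proportions of succeeded / mixed / failed grants
```

Tracked year on year, a figure like this shows whether a change of strategy – as at Shell Foundation – is actually moving the success rate.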

We hope that these tools are useful and that funders use them, and we welcome any discussion.

Download the White Paper here

Easy ways for philanthropic donors to see if they’re doing well

This article was first published by the Social Impact Analysts Association.

Some skiers are better than others. Some singers are better than others. The same for teaching, nursing and curling. So it seems reasonable to suppose that some people are better at supporting charities than others.

But how do you tell? Curlers can easily see if they beat their opponents, and surgeons see if patients live or die, but success in philanthropy is less evident. Whereas businesses get feedback immediately and constantly – unpopular or over-priced products don’t sell – donors don’t. They can’t rely on charities telling them, since charities daren’t bite the hand that feeds them. Steve Jobs cited the difficulty of knowing if you’re giving well or badly as deterring him from giving much at all.

Happily, it is possible – and not terribly hard. Giving Evidence, a consultancy and campaign which helps donors to give well by using sound evidence, has found various tools which help almost any donor to understand their performance. They’re collated in a new white paper, and are simple: they may even seem rather obvious, but have proven useful to individuals, companies, and foundations who give. They are:

  • Monitoring the ‘success rate’: the proportion of your gifts which do well, which do alright, and which fail. Though clearly the definition of success varies between grants, presumably each one is made with some purpose: this tool simply asks how many succeed on their own terms. It’s unashamedly a basic measure, but then it’s hard to argue that a funder is succeeding if barely any of its grants succeed. We’re not saying that every grant should succeed: many funders sensibly support experimental or exploratory work and, like venture capitalists, should expect some failures – though they should have some system for noticing which grants those are, to enable learning from the patterns. The Shell Foundation (attached to the energy company) used this measure to triple its success rate.
  • Tracking whether ‘the patient is getting better’: whether biodiversity is increasing around the lake, or whether malaria is becoming less prevalent. This of course indicates nothing about why anything is changing nor the donor’s contribution. Nonetheless, it’s imperative to know if the problem is worsening – in which case, we might re-double our efforts or invite other funders in – or if it’s gone away. Often data from public or commercial sources shows progress on a funder’s goals.
  • Measuring the costs created for charities (and others) by the funder’s application and reporting processes. These can be huge: as a charity CEO myself, I had some grants where the donor’s processes consumed 90% of the grant given. It can be even worse: a physicist at Columbia University calculates that some grants leave his lab worse off, and we’ve heard stories of application processes which cost twice the amount eventually given. Grantees may make great progress despite a meddlesome funder. The avoidable costs from application and reporting processes in the UK alone are estimated at about £400m every single year. BBC Children in Need has examined its process and found ways to make savings, and other large donors can too.
  • Hearing what your grantees think. When I ran a charity, I often saw ways that donors could be more helpful but never told them because the stakes are too high: charities can’t risk offending people whose help they may need in future. So the learning is lost. Yet listening to grantees and beneficiaries has brought great benefits in medicine and social services – and to many philanthropic donors.
  • Lastly, clarifying what you’re learning, and telling others. Funders do publish, but mainly about their successes. ‘Publication bias’ in medicine, in which positive findings are disproportionately likely to be shared, means that ‘the true effects of loads of prescribed medicines are essentially unknown’, according to epidemiologist Dr Ben Goldacre. Philanthropy can avoid the same fate. We’re currently working with a foundation to clarify and publish the ‘whole truth’ about how an innovative programme fared. Tales of failure and challenge, however inglorious, teach us a great deal.

Perhaps ‘measuring impact’ is too hard and too off-putting, and we should all instead talk about ‘understanding performance’. The tools in this white paper help with that. Giving Evidence is working with donors on several of them, and will happily talk to anybody about them.


Posted in Analysing giving, Donor behaviour & giving stats, Effective giving

Philanthropy in transition

Caroline Fiennes was one of 11 leaders interviewed by The Guardian for the Philanthropy in Transition series. 

A new generation of donors wants impact and engagement

Out of the boom came a new breed of donors for whom good intentions are not enough and evidence is key

How do you think philanthropy is changing, and what’s driving those changes?

The most obvious changes in the past 15 years are the arrival of many new donors, new ways of giving, and much higher profile.

It started with the boom: money from eBay, Microsoft, Google et al. They brought tools common in business but not used in philanthropy: high engagement, a focus on results, and financial instruments beyond grants, such as loans and quasi-equity investments. We often think of these donors as flashy, and while that’s true of some, there are major European and Asian donors who keep out of sight.

Growth is driven by self-made wealth. The UK’s rich list shows that 50 years ago most wealth was inherited; now it is self-made. This has brought an urgency about getting things done, which has spurred interest in new ways of engaging.

And it’s not just the rich. People giving modest amounts also want to be effective. The donations influenced by GiveWell’s independent analysis of charities’ performance have risen about 700% in just four years. Giving What We Can, which began as a student movement, encourages people to pledge part of their income to non-profits: many members ‘earn to give‘ by taking high-paid jobs to maximise the amount they can donate.

All this has brought a focus on effectiveness … though, ironically, we have no idea whether it’s achieving anything.

In the past few decades, awareness has grown that good intentions are not enough. Donors increasingly ask whether their giving works as well as it could.

What’s the potential impact of these changes?

We don’t know, because funders don’t make comparative assessments of how various models of giving perform. For example, are your grants more likely to succeed if you are hands-on with them or not? People have lots of opinions about this, but there’s no actual data.

Yet it’s not hard to find out. The Shell Foundation made many grants and graded each as having ‘succeeded’, ‘done OK’ or ‘failed’. Hardly any succeeded. So the foundation changed its model: away from making many small, reactive grants, towards making fewer and being more engaged. The success rate picked up. The foundation intensified the change, which increased the success rate further.

We need lots of funders to do this analysis and to publish it along with details of their model. And it is not rocket science.

Of course, this doesn’t ‘measure the full impact’ of the funder’s work, but funders often get hung up on that. It’s extremely hard to measure a funder’s impact accurately, because it works mainly through grantees and may span diverse types of work that can’t be aggregated. At some level that doesn’t matter, because the aggregate impact of grantees is different from the impact of the funder: the grantees may do great work despite a really annoying and wasteful funder. To understand the funder’s effect, we need to look at the funder’s own processes. The Shell Foundation’s analysis assessed its processes for making decisions and providing support.

However, the aim is to find the best model for particular circumstances: it’s unlikely that any one model will outperform the others in all circumstances.

What one thing could foundations do better to increase their sustainable impact?

Funders could vastly increase their impact by basing their decisions on sound evidence. That covers their decisions about both what to fund and how to fund.

On what to fund, that means:

• When deciding on programmes or focus areas, look at where need is and where supply is. There’s currently a chronic mismatch: for instance, in global health, about 90% of funding goes to just 10% of the disease burden.

• When deciding which programmes to fund, look for existing independent and rigorous evidence, rather than just what the applicant provides. Many interventions have been studied independently: health in the UK is quite well-studied, as are many areas in international development; evidence on crime in the UK is just starting to accumulate. The Children’s Investment Fund Foundation, which is more rigorous than most, puts more weight on a proper literature review than on the information in the application form.

• Know the difference between reliable evidence and unreliable evidence. For example, a charity claiming impact might show that people it helps get jobs more quickly than those it doesn’t help. But that comparison is no good: it may indicate solely that people who chose to ask for its help are unusually motivated. (This real example is discussed here.)

• If no decent quality evidence exists, consider funding researchers to produce more.

Understanding how to fund means:

• Measuring your ‘success rate’ as described above, seeing how it varies if you change your practice. Publish what you find so that others can learn.

• Seeing if you can find free money! Measure the costs borne by charities (and/or social enterprises and others) in applying to you and in reporting to you. I once received a grant that was entirely consumed by the funder’s processes. Such stories are quite common. Streamlining these processes could easily release £400m every year.

• Asking your grantees for their views of your processes. The US Center for Effective Philanthropy does this through its grantee perception reports, as do others, such as Keystone Accountability.

And lastly, publish tales of things which don’t really work. That evidence is hugely insightful, and though 92% of funders believe that ‘charities should be encouraged to report failures or negative results’, no funders publish theirs. Giving Evidence is working with a corporate foundation to publish soon the first in a series of ‘honesty reports’, based loosely on Engineers Without Borders’ annual Failure Reports.


Posted in Donor behaviour & giving stats, Effective giving