Giving Evidence’s mission and work

Many thanks to the Social Progress Imperative!

More videos on our insights and approach are here.

Posted in Uncategorized

Assessing Funders’ Performance: Five Easy Tools

This article was first published in the Stanford Social Innovation Review

Measuring impact is so tough that many funders give up, but there are some insightful and actionable tools for funders that aren’t daunting.

When I was a charity CEO, we approached a family foundation. There was no formal application process. Instead, we had to write various emails, and I had to attend various meetings (not unusually, the foundation wanted to see only the CEO, the highest paid staff member). A physicist by background, I kept a tally of the time all this took and the implied cost. Eventually we got a grant, of £5,000. This required that we (I) attend more meetings—for “grantee networking,” meeting the family, and so on. We noted the cost of those too. Towards the grant’s end, the foundation asked us to compile a short report on what we’d done with the grant. By now, the tally stood at £4,500. I felt like saying: “What grant? Honestly, you spent it all yourselves.”

One hears worse. A physicist at Columbia University has calculated that some grants leave him worse off. And I’ve heard of a heritage funder requiring that applications have input from consultants; this made the cost of applying £100,000, though the eventual grant was just £50,000.

Clearly it’s important for any organization to learn, adapt, and improve. Much of the discussion about how funders should do that, and the tools available to them, revolve around “measuring impact.” But measuring impact is complicated—perhaps even impossible. I wonder whether, in our quest for the perfect measure of performance, we overlook some simpler but nonetheless useful measures, such as whether a funder is essentially spending a grant on itself. As Voltaire warned, the perfect is the enemy of the good.

Let’s look at why getting a perfect measure is so hard, and then at some simpler “good” tools.

Funders: Don’t measure your impact …

A funder’s impact is the change in the world that happened that would not have happened otherwise. Making a perfect estimate of impact is difficult for two reasons.

First, most funders support work that is too diverse to aggregate its effect. Hence, articulating or identifying “the change that has happened” can be impossible.

Second, there normally isn’t an “otherwise” that we can compare with reality. Constructing an “otherwise,” or counterfactual, would be very difficult; it would require comparing achievements of grantees with non-grantees. Ensuring that the groups were equivalent would require that the funder choose between eligible organizations at random, which few would be willing to do. And to establish that the funder rather than other factors (such as changes in legislation or technology) caused the change in the world, both groups would need very many organizations. And again, the heterogeneity of work may prevent comparisons of the two groups’ results anyway.

Many funders give up. A recent study found that, though 91 percent of funders think that measuring their impact can help them improve, one in five measures nothing pertaining to its impact at all.

… rather, understand your performance.

Compared to this complexity, seeing how a funder can save time and money for applicants and grantees looks like child’s play. In fact, it may be an even better thing to examine, because it shows pretty clearly what the funder might change. BBC Children in Need (a large UK grantmaker) realized that getting four applications for every grant was too many (it imposed undue cost), so it clarified its guidelines to deter applicants unlikely to succeed.

Giving Evidence has found several such tools in our work with donors (collated in a white paper released this week); each is relatively easy and gives a valuable insight into a funder’s performance. We make no claim that these tools provide the perfect answer, but we’ve seen that they are all good and helpful for ambitious donors wanting to improve:

  • Monitoring the “success rate”—the proportion of grants that do well, that do all right, and that fail (see the sketch after this list). Though clearly the definition of success varies between grants, presumably funders craft each one with some purpose; this tool simply asks how many grants succeed on their own terms. Shell Foundation found that only about 20 percent of its grants were succeeding. This pretty clearly indicated that it needed to change its strategy, which it did, eventually doubling and then tripling that success rate. It’s unashamedly a basic measure, but then it’s hard to argue that a funder is doing well if barely any of its grants succeed.
  • Tracking whether “the patient is getting better”—whether that means biodiversity is increasing around the lake or malaria is decreasing in prevalence. This of course indicates nothing about cause. But sometimes funders find that their target problem has gone away, or moved, or morphed, and they should morph with it.
  • Measuring the costs that funder application and reporting processes create for nonprofits. The prize here is huge: It’s estimated that avoidable costs from application and reporting processes in the UK alone are about £400 million a year.
  • Hearing what your grantees think. Grantees can’t risk offending organizations that they may need in future, so funders need to ask. Listening to beneficiaries and constituents benefits medicine, public services, and philanthropy.
  • Clarifying what you’re learning, and telling others. Engineers Without Borders finds that its annual Failure Report—a series of confessions from engineers in the field—is invaluable for internal learning and accountability. Funders pride themselves on taking risks, and many programs just don’t work out; there shouldn’t be shame in learning.
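As an illustration of the first tool, here is a minimal sketch of how a funder might tally its success rate from a list of graded grants. The three-way grading follows the description above; the portfolio itself is invented for illustration.

```python
from collections import Counter

# Hypothetical portfolio: each grant is graded against its own stated purpose,
# using the three-way grading described above. The data are made up.
grants = [
    {"name": "Grant A", "grade": "succeeded"},
    {"name": "Grant B", "grade": "failed"},
    {"name": "Grant C", "grade": "did all right"},
    {"name": "Grant D", "grade": "failed"},
    {"name": "Grant E", "grade": "succeeded"},
]

def success_rate(portfolio):
    """Return the share of grants in each grade, plus the headline success rate."""
    counts = Counter(grant["grade"] for grant in portfolio)
    total = len(portfolio)
    shares = {grade: count / total for grade, count in counts.items()}
    return shares, shares.get("succeeded", 0.0)

shares, headline = success_rate(grants)
print(shares)                                   # e.g. {'succeeded': 0.4, 'failed': 0.4, ...}
print(f"Headline success rate: {headline:.0%}")
```

The value is not in the arithmetic but in the discipline: grading every grant against its own purpose and looking honestly at the distribution, as Shell Foundation did.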

We hope that these tools are useful and that funders use them, and we welcome any discussion.

Download the White Paper here

Posted in Uncategorized

Easy ways for philanthropic donors to see if they’re doing well

This article was first published by the Social Impact Analysts Association.

Some skiers are better than others. Some singers are better than others. The same for teaching, nursing and curling. So it seems reasonable to suppose that some people are better at supporting charities than others.

But how do you tell? Curlers can easily see if they beat their opponents, and surgeons see if patients live or die, but success in philanthropy is less evident. Whereas businesses get feedback immediately and constantly – unpopular or over-priced products don’t sell – donors don’t. They can’t rely on charities telling them, since charities daren’t bite the hand that feeds them. Steve Jobs cited the difficulty of knowing if you’re giving well or badly as deterring him from giving much at all.

Happily, it is possible – and not terribly hard. Giving Evidence, a consultancy and campaign which helps donors to give well by using sound evidence, has found various tools which help almost any donor to understand their performance. They’re collated in a new white paper, and are simple: they may even seem rather obvious, but have proven useful to individuals, companies, and foundations who give. They are:

  • Monitoring the ‘success rate’: the proportion of your gifts which do well, which do all right, and which fail. Though clearly the definition of success varies between grants, presumably each one is made with some purpose: this tool simply asks how many succeed on their own terms. It’s unashamedly a basic measure, but then it’s hard to argue that a funder is succeeding if barely any of its grants succeed. We’re not saying that every grant should succeed: many funders sensibly support experimental or exploratory work and, like venture capitalists, should expect some failures – but they should have some system for noticing which grants fail, to enable learning from the patterns. The Shell Foundation (attached to the energy company) used this measure to triple its success rate.
  • Tracking whether ‘the patient is getting better’: whether biodiversity is increasing around the lake, or whether malaria is becoming less prevalent. This of course indicates nothing about why anything is changing, nor about the donor’s contribution. Nonetheless, it’s imperative to know if the problem is worsening – in which case, we might redouble our efforts or invite other funders in – or if it’s gone away. Often data from public or commercial sources show progress on a funder’s goals.
  • Measuring the costs created for charities (and others) by the funder’s application and reporting processes. These can be huge: as a charity CEO myself, I had some grants where the donor’s processes consumed 90% of the grant given. It can be even worse: a physicist at Columbia University calculates that some grants leave his lab worse off, and we’ve heard stories of application processes which cost twice the amount eventually given. Grantees may make great progress despite a meddlesome funder. The avoidable costs from application and reporting processes in the UK alone are estimated at about £400m every single year. BBC Children in Need has examined its process and found ways to make savings, and other large donors can too.
  • Hearing what your grantees think. When I ran a charity, I often saw ways that donors could be more helpful but never told them because the stakes were too high: charities can’t risk offending people whose help they may need in future. So the learning is lost. Yet listening to grantees and beneficiaries has brought great benefits in medicine and social services – and to many philanthropic donors.
  • Lastly, clarifying what you’re learning, and telling others. Funders do publish, but mainly about their successes. ‘Publication bias’ in medicine – in which positive stories are disproportionately likely to be shared – means that ‘the true effects of loads of prescribed medicines are essentially unknown’, according to epidemiologist Dr Ben Goldacre. Philanthropy can avoid the same fate. We’re currently working with a foundation to clarify and publish the ‘whole truth’ about how an innovative programme fared. Tales of failure and challenges, however inglorious, teach us a great deal.

Perhaps ‘measuring impact’ is too hard and too off-putting, and we should all instead talk about ‘understanding performance’. The tools in this white paper help with that. Giving Evidence is working with donors on several of them, and will happily talk to anybody about them.

It’s hard to make data-driven decisions if loads of the data are missing or garbage–>

Posted in Analysing giving, Donor behaviour & giving stats, Effective giving

Philanthropy in transition

Caroline Fiennes was one of 11 leaders interviewed by The Guardian for the Philanthropy in Transition series. 

A new generation of donors wants impact and engagement

Out of the dot.com boom came a new breed of donors for whom good intentions are not enough and evidence is key

How do you think philanthropy is changing, and what’s driving those changes?

The most obvious changes in the past 15 years are the arrival of many new donors, new ways of giving, and much higher profile.

It started with the dot.com boom: money from eBay, Microsoft, Google et al. They brought tools common in business but not used in philanthropy: high engagement, a focus on results, and financial instruments beyond grants, such as loans and quasi-equity investments. We often think of them as flashy, and while that’s true of some, there are major European and Asian donors who keep out of sight.

Growth is driven by self-made wealth. The UK’s rich list shows that 50 years ago most wealth was inherited; now it is self-made. This has brought an urgency about getting things done, which has spurred interest in new ways of engaging.

And it’s not just the rich. People giving modest amounts also want to be effective. The donations influenced by GiveWell’s independent analysis of charities’ performance have risen about 700% in just four years. Giving What We Can, which began as a student movement, encourages people to pledge part of their income to non-profits: many members ‘earn to give’ by taking high-paid jobs to maximise the amount they can donate.

All this has brought a focus on effectiveness … though, ironically, we have no idea whether it’s achieving anything.

In the past few decades, awareness has grown that good intentions are not enough. Donors also wonder if their work is optimised.

What’s the potential impact of these changes?

We don’t know! Funders don’t make comparative assessments of how various models of giving perform. For example, are your grants more likely to succeed if you are hands-on with them or not? People have lots of opinions about this but there’s no actual data.

Yet it’s not hard to find out. Shell Foundation made many grants, and graded each one as ‘succeeded’, ‘did OK’ or ‘failed’. Hardly any succeeded. So the foundation changed its model: away from making many, small, reactive grants, to making fewer and being more engaged. The success rate picked up. The foundation intensified the change, which increased the success rate further.

We need lots of funders to do this analysis and to publish it along with details of their model. And it is not rocket science.

Of course, it doesn’t ‘measure the full impact’ of the funder’s work, but funders often get hung up on that. It’s extremely hard to measure impact accurately, because it normally comes through grantees and may include diverse types of work which can’t be aggregated. At some level that doesn’t matter, because the aggregate impact of grantees is different to the impact of the funder: the grantees may do great work despite a really annoying and wasteful funder. To understand the funder’s effect, we need to look at the funder’s own processes. Shell Foundation’s analysis assessed processes for making decisions and providing support.

However, the aim is to find the best model for particular circumstances: it’s unlikely that any one model will outperform the others in all circumstances.

What one thing could foundations do better to increase their sustainable impact?

Funders could vastly increase their impact by basing their decisions on sound evidence. That covers their decisions about both what to fund and how to fund.

On what to fund, that means:

• When deciding on programmes or focus areas, look at where need is and where supply is. There’s currently a chronic mismatch: for instance, in global health, about 90% of funding goes to just 10% of the disease burden.

• When deciding which programmes to fund, look for existing independent and rigorous evidence, rather than just what the applicant provides. Many interventions have been studied independently: health in the UK is quite well-studied, as are many areas in international development; crime in the UK is just starting to be. The Children’s Investment Fund Foundation – more rigorous than most – puts more weight on a proper literature review than on the information in the application form.

• Know the difference between reliable evidence and unreliable evidence. For example, a charity claiming impact might show that people it helps get jobs more quickly than those it doesn’t help. But that comparison is no good. It may solely indicate that people who chose to ask for its help are unusually motivated. (This real example is discussed here.)

• If no decent quality evidence exists, consider funding researchers to produce more.

Understanding how to fund means:

• Measuring your ‘success rate’, as described above, and seeing how it varies if you change your practice. Publish what you find so that others can learn.

• Seeing if you can find free money! Measure the costs that are borne by charities (and/or social enterprises and others) in applying to you and in reporting to you. I personally once got a grant which was almost entirely consumed by the funder’s own processes. Such stories are quite common. Streamlining these processes could easily release £400m every year.

• Asking your grantees for their views of your processes. The US Center for Effective Philanthropy does this through its grantee perception reports, and others do too, such as Keystone Accountability.

And lastly, publish tales of things which don’t really work. That evidence is hugely insightful, and though 92% of funders believe that ‘charities should be encouraged to report failures or negative results’, no funders publish theirs. Giving Evidence is working with a corporate foundation to publish soon the first in a series of ‘honesty reports’, loosely based on Engineers Without Borders’ annual Failure Reports.

Why many charities shouldn’t evaluate their own work –>

Posted in Donor behaviour & giving stats, Effective giving

Are we relying on unreliable research?

“Ask an important question and answer it reliably” is a fundamental tenet of clinical research. And you’d hope so: you’d hope that medics don’t waste time on questions that don’t matter or which have been answered already, and you’d hope that their research yields robust guidance on how to treat us. Does research in our sector aimed at understanding the effects of our interventions adhere to that tenet?

We suspect not. It’s a problem because poor quality research leads us to use our resources badly. The example of microloans to poor villagers in Northeastern Thailand illustrates why. In evaluations which compared the outcomes (such as the amounts that households save, the time they spend working or the amount they spend on education) of people who got loans with those of people who didn’t, the loans looked pretty good. But those evaluations didn’t take account of possible ‘selection bias’ in the people who took the loans: perhaps only the richer people or better networked people wanted them or were allowed to have them. A careful study which did correct for selection bias found that in fact the loans made no difference. The authors conclude that “‘naive’ estimates significantly overestimate impact.”

Such examples are rife. There is one in the current edition of Stanford Social Innovation Review, about a back-to-work programme, discussed here. Another example is from a reading programme in India. Five different evaluation methods produce five quite different estimates of its impact: at least four of them must be wrong and might lead us to misuse our money:

[Figure: Innovations for Poverty Action chart comparing five evaluation methods applied to the Indian reading programme, each yielding a different impact estimate]

Spotting unreliable research requires assessing research against a quality standard. Though foundations fund masses of research – through charities’ M&E, sometimes conducted by the charities themselves and sometimes done independently – to my knowledge, only one has ever assessed the quality of the research it sees. It didn’t look pretty. The Paul Hamlyn Foundation looked at the research it received from grantees between Oct 2007 and Mar 2012: only 30% was ‘good’, and even that was using a rather generous quality scale. It even found ‘some instances of outcomes being reported with little or no evidence’.

Assessing the quality of research is bog-standard in medicine and increasingly common in the public sector. The Education Endowment Foundation already does it (in its toolkit) and the government’s other What Works Centres will too. The National Audit Office (NAO) recently published analysis of the quality of almost 6,000 government evaluations, which contains a salutary nugget. Buried on page 25 is the finding, shown below, that the strongest claims about effectiveness are based on the weakest research. This (probably) isn’t because the researchers are wicked, but rather because you can infer almost anything from a survey of two people: most social interventions have quite small effects, and robust research won’t let you show anything bigger.

[Figure: NAO chart plotting the strength of claims about effectiveness against the quality of the underlying evaluations]

Let’s put that finding the other way round. Charities competing for funding have an incentive to show that their impact is sizeable. The NAO’s finding implies that that is easier if they do bad research. So funders who rely on charities’ own research inadvertently encourage them to produce unreliable research.
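To see why weak research makes big claims easy, here is a small simulation of that point. It is purely illustrative: the ‘true’ effect size and noise level are assumed, not drawn from the NAO data. With a tiny sample, noise alone frequently produces a large apparent effect; a bigger, more robust study converges on the modest true effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1   # assumed: a modest real improvement, in standard-deviation units
NOISE_SD = 1.0      # assumed spread of individual outcomes

def estimated_effect(sample_size):
    """One simulated study: mean outcome of a treated group minus that of a comparison group."""
    treated = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(sample_size)]
    control = [random.gauss(0.0, NOISE_SD) for _ in range(sample_size)]
    return statistics.mean(treated) - statistics.mean(control)

for n in (2, 20, 2000):
    estimates = [estimated_effect(n) for _ in range(1_000)]
    big_claims = sum(abs(e) > 5 * TRUE_EFFECT for e in estimates) / len(estimates)
    print(f"sample size {n:4d}: share of studies showing an effect at least 5x the true one: {big_claims:.0%}")
```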

Funders should look carefully – and independently – at the quality of the evidence they fund and use, lest they be misled into funding ineffective work. As mentioned, the methods and tools for assessing research quality (‘meta-research’) are established and proven in other disciplines. Giving Evidence is exploring doing some work to assess the reliability of research produced by charities and used by funders. We are in discussion with relevant academics who would run the analysis and hope to get the work funded by academic sources. We would like to talk to funders who are interested in understanding the quality of the research that they commission and use, with a view to improving it. If you are a funder or impact investor and are interested in being involved, please get in touch.

This article was first published by the Association of Charitable Foundations and London Funders. Indian data are from Innovations for Poverty Action. 

What is decent quality evidence? –>

Why most charities shouldn’t evaluate their own work–>

Posted in Uncategorized

Assessing impact needs a reliable comparison group

This letter discusses an article in Stanford Social Innovation Review and was first published there.

“Dressed to Thrive” [in Stanford Social Innovation Review, Winter 2013] describes the work of Fitted For Work (FFW) in helping women into work. By way of demonstrating FFW’s effectiveness, it reports that “75 percent of women who received wardrobe support and interview coaching from FFW find employment within three months… In comparison…about 48 percent of women who rely on Australian federal job agencies find work within a three-month period.”

But the comparison isn’t valid, and doesn’t demonstrate anything about FFW’s effect. This is because women who get FFW’s support differ from those who don’t in (at least) two respects. First, they found out about FFW and chose to approach it for help. It’s quite possible that the women who do this are better networked and more motivated than those who don’t. That would be a ‘selection bias’ in the women whom FFW serves. And second, of course, the women who come to FFW get FFW’s support. The comparison doesn’t show how much of the difference is due to the selection effect versus how much is due to FFW’s support.

The purpose of any social intervention is to improve on what would have happened anyway. So it’s important that we use reliable comparisons. That means isolating the effect of the intervention from other effects such as selection bias.
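A toy simulation makes the point (the numbers and the ‘motivation’ variable are invented, not FFW’s data): a programme with zero real effect can look impressive when its self-selected participants are compared with everyone else, while a comparison built by random assignment shows the truth.

```python
import random

random.seed(1)

# Hypothetical population: each person's 'motivation' drives both whether they
# seek out the programme and how likely they are to find work anyway.
people = [{"motivation": random.random()} for _ in range(100_000)]

def finds_work(person, got_programme):
    programme_effect = 0.0  # assume the programme itself does nothing at all
    p = 0.3 + 0.4 * person["motivation"] + programme_effect * got_programme
    return random.random() < p

# Naive comparison: people who chose to join vs. everyone who did not.
joined = [p for p in people if p["motivation"] > 0.7]      # self-selection
did_not = [p for p in people if p["motivation"] <= 0.7]
naive_treated = sum(finds_work(p, True) for p in joined) / len(joined)
naive_control = sum(finds_work(p, False) for p in did_not) / len(did_not)

# Reliable comparison: random assignment removes the selection effect.
random.shuffle(people)
half = len(people) // 2
rct_treated = sum(finds_work(p, True) for p in people[:half]) / half
rct_control = sum(finds_work(p, False) for p in people[half:]) / half

print(f"Naive comparison:  {naive_treated:.0%} vs {naive_control:.0%}  (looks like a big effect)")
print(f"Randomised trial:  {rct_treated:.0%} vs {rct_control:.0%}  (no real effect)")
```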

This isn’t just theory. Microloans to poor villagers in Northeast Thailand appeared to be having a positive effect when analysed using readily-available comparators. But these analyses didn’t deal with selection bias in the people who took the loans. A careful study which did correct for selection bias, and looked at how those people would have fared anyway, found that the loans had little impact. They had no effect at all on the amounts that households save, the time they spend working or the amount they spend on education.

Without such careful research, we risk wasting precious resources on programmes which don’t actually work. Worse, selection effects are sometimes so strong that a programme can appear to work even if it’s actually producing harm. [Example below.] The importance of using reliable comparisons is clear from the unusually ardent title of a medical journal editorial last year about medical trials: it was called ‘Blood on Our Hands: See The Evil In Inappropriate Comparators’.

None of which is to say that FFW’s program doesn’t work. Rather that these data don’t show whether it works or not. We all need to be rigorous in assessing social programmes, lest we waste our resources helping people only a little when we could be helping a lot.

[Added later]: Medicine has many examples of practices being withdrawn after a proper comparison shows them to be harmful. Here’s Ben Goldacre on just one:

“We used to think that hormone-replacement therapy reduced the risk of heart attacks by around half, for example, because this was the finding of a small trial, and a large observational study. That research had limitations. The small trial looked only at “surrogate outcomes”, blood markers that are associated with heart attack, rather than real-world attacks; the observational study was hampered by the fact that women who got prescriptions for HRT from their doctors were healthier to start with. But at the time, this research represented our best guess, and that’s often all you have to work with.

When a large randomised trial looking at the real-world outcome of heart attacks was conducted, it turned out that HRT increased the risk by 29%.”

This high-profile innovation relies on an unreliable comparison too–>

How to get a reliable comparison –>

Posted in Impact & evaluation

Why policy change takes more than just funding research

This article, written with Annie Duflo, was first published in Alliance Magazine. A pdf version is here.

‘Don’t just tell me what to do, come and help me do it!’ said an Indian government official to a researcher bearing results from studies into effective aid programmes. His response is salutary: there is much work now on increasing the use of evidence in public policy, so we need to understand what policymakers actually need and want, and what will help them be more evidence-driven. For foundations there is a clear message: it isn’t enough just to fund research. You have to make sure it reaches the relevant policymakers, and in a form that is useful to them.

Over ten years, Innovations for Poverty Action (IPA) has run more than 350 studies in 51 countries to find what works in alleviating poverty. We have had some success in influencing policies of governments, NGOs, foundations and others. Here’s what we have found.

The basic lesson is that there is often a disconnect between the people who produce evidence and those who use it. Though they may share a goal, the evidence ‘producers’ (researchers and academics) often work on a different timescale and in different technical language from the ‘users’ (government officials and practitioners in NGOs, foundations, companies and elsewhere). Even within the same organization, they may not be used to dealing with each other.

Getting evidence into policy requires much more than producing evidence and publishing it. Rather, ‘diffusion [of ideas] is essentially a social process through which people talking to people spread an innovation’, said Everett Rogers, who studied the process (and who coined the term ‘early adopter’). This involves behavioural change, and we have found that it’s at least as difficult as the research itself. Hence IPA works with both producers and users of evidence, facilitating, translating and supporting.

We use a structure articulated by Professor Richard Thaler of the University of Chicago, who developed behavioural economics. He wrote in the New York Times of his visits to the UK Government’s Behavioural Insights Team, which he advises:
‘. . . I make the rounds of government. We usually meet with a minister and some senior staff. In these meetings, I have found myself proposing two guidelines so often that they have come to be team mantras:

1) You can’t make evidence-based policy decisions without evidence.

2) If you want to encourage some activity, make it easy.’
IPA follows those guidelines. In fact, the second starts before the first: we find it useful to engage policymakers and practitioners right at the start, making a three-stage process.

First, work out what questions policymakers want answered. We are keen to solve problems that somebody actually has, and which they have budget, energy and permission to solve. These may not be the questions that interest researchers or campaigners or the press, but they are the problems where evidence is likely to make a difference. This can be seen as market research, since policymakers are the customers for the evidence.

For example, IPA’s work in Ghana led to conversations with the government which showed that they were concerned about low educational attainment, and potentially interested in solutions from elsewhere that might work. We are sometimes a ‘match-maker’ between policymakers with questions and researchers interested in answering them. Key to building these relationships is having a permanent presence in-country (IPA has offices in 12 countries).

Second, design programmes that may work, and test them rigorously. IPA works with leading researchers from top institutions such as Harvard, Yale, MIT and the London School of Economics. We often design programmes using behavioural insights which recognize that people aren’t perfectly rational, emotionless, net-benefit-calculating machines. They’re complicated, busy and more liable to copy their neighbours than to read endless small print, and they make bad decisions when they’re stressed or tired.

For example, if Kenyan farmers used more fertilizer they could increase crop yields and hence income: it’s available, but they don’t buy it when they need it, at planting time. Standard solutions involve giving out fertilizer free or subsidizing it, which are obviously expensive options. A behavioural solution is to sell it to them when they have money – right after harvest, before the money all gets eaten or lost or spent. So Thaler’s ‘make it easy’ mantra applies here too.

From the beginning of the research process, we try to involve in the design the policymakers and others who have a stake in the answers. This helps ensure that the research really provides what the ‘customer’ needs. Long-term, in-country relationships with policymakers in government, NGOs, foundations and others are valuable here.

IPA produces evidence through randomized controlled trials (RCTs): for example, comparing the crop yields of Kenyan farmers who are offered fertilizer at harvest time with those of farmers who are offered it during the growing season. By choosing at random which farmers get which offer, we eliminate other differences between the groups, so we can be pretty sure that differences in crop yields result from the timing of the offer. RCTs are pretty easy to understand: they’re just a fair race between a programme and a control group, or two variants of a programme. A well-run RCT is the best way there is of determining the effect of a programme. (Plenty of RCTs are run badly: the BBC recently ran one with just seven participants, which is far too small to demonstrate anything reliably.)
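In code, the mechanics of such a trial are simple. The sketch below uses invented farmers, yields and effect sizes (it is not IPA’s data or analysis); the essential step is the random assignment, which is what lets the difference in average yields be attributed to the timing of the offer.

```python
import random
import statistics

random.seed(42)

# Hypothetical trial: 2,000 farmers, randomly split between the two offers.
farmers = [f"farmer_{i}" for i in range(2_000)]
random.shuffle(farmers)                        # random assignment removes other differences
harvest_offer, season_offer = farmers[:1_000], farmers[1_000:]

def observed_yield(offered_at_harvest):
    """Invented yield in tonnes/hectare; the +0.2 'effect' of the harvest-time offer is made up."""
    base = random.gauss(1.5, 0.4)
    return base + (0.2 if offered_at_harvest else 0.0)

harvest_yields = [observed_yield(True) for _ in harvest_offer]
season_yields = [observed_yield(False) for _ in season_offer]

difference = statistics.mean(harvest_yields) - statistics.mean(season_yields)
print(f"Average yield difference (harvest-time offer minus growing-season offer): {difference:.2f} t/ha")
```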

Providing reliable evidence greatly assists in engaging policymakers and practitioners. However, rigorous research is often more difficult, more time-consuming and more expensive than unreliable research. It’s not hard to give fertilizer at harvest time to the first 20 farmers you find and later ask them to recount how this changed their crop yields (or, worse, ask them hypothetically how it would change their behaviour and crops). But plenty of research shows that these answers are unreliable and riddled with errors: people aren’t very good at knowing how much they benefited from something or anticipating how they would react to something new.

Finally, make it easy for policymakers to find, understand and use the answers. To Thaler’s second point, we want policymakers to incorporate evidence, so we make it easy for them. We communicate in places and in language that policymakers use. Rather than just distributing copies of academic papers, we produce concise, plain-language, nicely designed summaries of each research project, and briefings about related research. We make sure they are findable through searches by country, research area or keyword. Our staff speak at meetings and conferences of policymakers, and our work is publicized in parts of the press that people read, such as The Economist, the Financial Times, here, and so on.

We have found that we need to support policymakers to understand how research applies to their contexts, since it can rarely be applied blindly. For instance, many children in Kenya have intestinal worms, which make them ill. So deworming reduces absenteeism in schools. But it won’t achieve much in Scotland, because there aren’t worms there. However, other, more general findings do apply in Scotland – for example, the finding that unless things are available when they have money, people won’t buy them – even things it might be in their interest to buy like fertilizer.

Even when findings are applicable, we are often asked to help with implementation. The opening quote was from an Indian state government official who had seen evidence about deworming. The government realized that the findings were relevant to them and wanted to trial it, but they faced many practical issues in doing so. We sent a deworming expert from our Deworm the World programme in Kenya to help them.

Notice the duality here: we need academics to run rigorous research, but we need different teams for communication and implementation support. Academics are generally trained and rewarded for publishing in specialized journals, not for reaching out to governments and practitioners.

The role for funders and practitioners:

Often foundations fund charities to produce research in the hope of influencing policy, but both foundations and charities effectively assume that policymakers will find it, understand it and apply it. This rarely happens. Like everyone else, policymakers are much more likely to value evidence or innovations from people they know and trust. It ‘makes it easy’ to find and believe the evidence. This experience isn’t unique to IPA nor to less-developed countries. The Institute for Government, for instance, a UK think- and do-tank, recognizes that research alone will not achieve its mission of improving government effectiveness. So it devotes time, energy and resources to building personal relations with the people it needs to influence and constantly interacts with them through events, blog posts, private meetings and joining government working groups.

This work requires dedicated time and people. They of course need funding and resourcing. The upside is that they leverage not only the research budget but also the significant government spending that it influences.

For more information about subscribing to Alliance, please visit www.alliancemagazine.org/subscribe

Posted in Effective giving

It’s hard to make evidence-driven decisions if loads of data are missing, or garbage

First, missing data. Philanthropic donors, operational charities and others often have to deal with this. Hence unearthing the missing data is a theme in Giving Evidence’s work: 

  • Massive emergency aid is now flowing to the Philippines following Typhoon Haiyan. Operational NGOs and government aid agencies can only make evidence-based decisions about what’s needed and where to prioritise if they all share data about their activities and plans – in real time and in a machine-readable format. Generally they don’t: so after the Asian tsunami, for example, some children got vaccinated three times and others (presumably) not at all. Owen Barder, who invented such a format, myself and others had a letter in the Financial Times requesting that these data be shared in order to avoid such nonsense in the Philippines. Owen was then on BBC Radio 4’s PM programme and Newsnight.
  • International development in general could be better based on evidence of need and supply if donors disclosed what they’re doing and where. Hence my rant in The Economist in support of Publish What You Fund, which finds that most donors don’t.
  • Many foundations run programmes which fail, but they don’t tell anybody and hence that evidence is missing. This is self-created publication bias. Giving Evidence is currently working with a foundation to share the tale and lessons from one failed programme. We hope it’ll be the first of many such ‘confessions’, and that these will be used by donors. (We’re inspired by Engineers Without Borders’ annual Failure Report. When I asked their CEO why they do it, he said ‘Well, engineers are attuned to failure. It’s pretty bad if your bridge collapses’!)
  • We support registries where social scientists register their trials before they start, in order that we can all see if some don’t get published (e.g., this one). 
  • We suspect that many charities withhold impact data which are unflattering (I inadvertently did it myself as a charity CEO, before I’d even heard of publication bias). The charity/philanthropy sectors have no mechanism for spotting or avoiding it. Probably all charities are in the top quartile – a miracle! This is a major hole, and Giving Evidence has some remedies currently in the incubator.
  • We support the AllTrials campaign to force pharmaceutical companies to disclose results of all trials they run. Ludicrously they currently don’t have to. Estimates are that about half of all trials are unpublished: and you can guess which half. Ben Goldacre says that ‘as a result, the effects of most prescribed medicines are essentially unknown’. Actually I’ve spoken about AllTrials in several recent press pieces but it gets chopped out :-) Irony not unnoticed.

Second, data which are of dismal quality. It’s also hard to make evidence-driven decisions if the data quality is awful, which it often is in charities and philanthropy. The sole study we’ve ever seen of charity-sector data quality found that 15% was poor, only 30% was good, and some claims had no evidence at all. The technical term for the latter is ‘fiction’. Part of the problem is the conflicting incentives (and lack of social science research skills) in the common situation where charities evaluate themselves: hence we’ve written about why most charities shouldn’t evaluate their work. We’ll write more on this soon.

If you’re a funder and interested in publishing tales of failure, or enabling work to sort out data-quality and/or data-hiding in charities/philanthropy, get in touch.

A few charities and donors do fess up. Here’s what they say—>

Posted in Impact & evaluation

Shameful story of Rockefeller and Einstein

This was first published by the Huffington Post.

100 years old this year, The Rockefeller Foundation likes to tell the tale of its founders’ responsiveness and foresight:

‘When a young Albert Einstein sent a request for $500 to John D. Rockefeller’s top lieutenant, Rockefeller instructed his deputy, “Let’s give him $1,000. He may be onto something.” It was bold and daring, intrepid and risk-taking.’

Time is important, as Einstein of all people taught us, so it’s relevant to know when that story took place. The answer is astonishing: it was 1924. The ‘young’ Einstein was 45 years old. He’d won a Nobel Prize the year before. [The cheque, below, even says 'Professor' on it.] The request came 19 years after his special theory of relativity, which shows among other things that E=mc2. It was 19 years after he laid the foundations of quantum theory (by explaining the photo-electric effect). And also 19 years after he’d explained the bouncing around of gas atoms that you probably saw down a microscope at school. (1905 was a big year for Einstein: one of the most significant for any scientist, ever.) It was seven years after publication of his general theory of relativity about the nature of space-time: arguably the greatest achievement of the human mind, and five years after observational confirmation of a major prediction of general relativity, hailed by The Times of London as a ‘Revolution in Science – New Theory of the Universe’.

So what’s with the ‘may be onto something’?

The story is striking for two reasons. First, that despite all those achievements, a Nobel winner was scrabbling around for tiny amounts of money, and having to approach donors himself. In today’s money, Einstein’s request was for just $6,500 – about £4,300 – not even enough to rent a decent office in London. And as for ‘risk-taking’, Rockefeller was probably the richest person in history, worth in today’s money 10 times what Bill Gates is worth. 

And second, Rockefeller didn’t even trust Einstein with that money, but divided his gift into four installments. The Nobel winner, re-framer of space and time, could apparently not be trusted with more than $1,600 at a time.

Hardly ‘bold and daring, intrepid and risk-taking’, then. Instead, a depressing tale of a lionised genius forced to beg. No wonder Einstein commented: “Only two things are infinite, the universe and human stupidity. And I’m not sure about the former.”

Why charities shouldn’t evaluate their work—>

Posted in Analysing giving, Donor behaviour & giving stats

Do matched giving schemes work?

This article was first published by Philanthropy Impact magazine.

Many fundraisers tell us that donors give more if a match is available, that is, somebody else will also give if, and only if, they give. Fundraisers’ confidence is based largely on anecdote and imprecise comparisons. Happily there is now a growing – if still small – body of solid evidence about whether matches really work, and whether they are really a fundraiser’s best friend.

Size doesn’t matter

An early experiment found that matching does increase giving – at least in the US. In 2005 Dean Karlan and John List, economists at Yale and Chicago respectively, ran an experiment in which over 50,000 donors to a US civil liberties NGO were randomly assigned to receive one of several versions of a fundraising letter. One group received a letter without mention of a match. The other groups’ letters all (truthfully) offered different matches: some donors were offered a straightforward match of $1 for each $1 given; other donors were offered a larger match ($2 for every $1 given); and other donors were offered an even larger match ($3 for every $1 given).

The match offers worked. Karlan and List found that offering a match increased the probability that each recipient would give by 19%, and that the average gift increased by 22%. Pretty impressive gains, but then the surprises start.

The level of the match does not matter. Donors were no more likely to give, nor to give more, if offered a 2:1 or 3:1 match than if offered a 1:1 match. [This is pretty interesting in relation to debates about tax breaks for giving. Gift Aid is essentially a match provided by the tax-payer, and people often claim that the level of tax relief affects the level and number of gifts.]

 

Matching can make it worse

In Germany, matching seems to reduce a fundraiser’s success. Steffen Huck and Imran Rasul of University College London sent various different types of letters to over 22,500 patrons of the Bavarian State Opera House in Munich asking for donations. The ‘vanilla’ letter gathered an average donation of €74.30, but recipients of a letter which, again truthfully, said that a major donor would increase any donation by €20 gave only €69.20. [Technically this is leverage, not a match, but the ideas are very similar.] Huck and Rasul also found that increasing the level of a match does not help, though their results were even more stark than those of the American researchers: donors offered a 50% match (that is, 50¢ for every € given) gave on average €101, whereas donors offered a 100% match gave only €92.30. The response rates for the two groups were identical, at 4.2%.

Better options

So if a major donor is interested in encouraging other donors, what beyond matching might a charity ask them to do?

Unsexy but effective, a charity might use the funds simply to ask potential supporters again. This is described in a different experiment, also with the Bavarian State Opera House. In that experiment, Huck and Rasul found that no donor who was asked only once gave again of their own accord several weeks later; but when donors were asked a second time (within a six-week period), 1.6% of them gave again. Of the 22,500 recipients, that is 360 people.

The charity could also use the money to enclose a pre-filled bank transfer form. Huck and Rasul tested this and found that including a pre-filled form more than doubled the number of donors who gave.

The charity could also create a newsletter, in which all donors are listed against the level of their donation. In an experiment in the US (this trick might not work elsewhere), Dean Karlan and Margaret McConnell found that including in the asking letter the possibility of having the gift recognised in this way increased the probability of recipients giving by 2.7 percentage points.

Big gains seem to come from offering a match only for fairly large donations. In one of the opera house experiments, some patrons were offered a match only if their gift was above €50. They then gave on average €97.90. This is somewhat irrational if you think about it, because patrons whose letter offered no match gave on average €74.30, which implies that most patrons would not have needed to increase their gift to qualify for the match.

However, the lesson is not that fundraisers should simply suggest large amounts. A new study by Huck, Rasul and Maja Adena of the WZB Berlin Social Science Center found that suggesting donations of €100 and €200 does increase the average size of donation, but reduces the number of them, making the net effect virtually nil.

Socialise with the big and famous

The biggest gains of all seem to be from donors wanting to emulate leading donors. One device increased opera patrons’ gifts by over 75%. The letter asking donors for support simply stated that an anonymous donor had already committed €60,000. This is somewhat surprising: that commitment was more than 400 times the average donation, so most donors are not simply copying the major donor. The researchers think that a large gift is a quality signal: the donor must have done his or her homework before making such a commitment.

Another experiment by Karlan and List supports this. A poverty-reduction charity sent donation request letters to two groups of donors, stating that a donor would match donations. One group received a letter which described an anonymous donor, whereas the other group’s letters identified the donor as the Gates Foundation. Citing the Gates Foundation generated more and larger donations, presumably because it is more salient and memorable to donors and hence a stronger quality signal. The implication for fundraisers is to try to get your match from somebody known and credible, and willing to be identified.

Method in the madness 

All of these results come from experiments in which the NGOs or opera houses were sending fundraising requests to their existing donor databases. The researchers simply worked with the organisations to create various types of letter, to randomly determine which donors got which type of letter, and then to track the results. That is, they are all randomised controlled trials, all with decent sample sizes. They were all inexpensive to implement, and yet they provide evidence which is neither subjective nor anecdotal, but reliable. Good fundraisers track their response rates, so testing fundraising activities in this way is fertile ground for generating many more rigorous and useful insights.
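A minimal sketch of how such a test might be set up and read, using invented donors and an invented response model rather than the researchers’ data: randomly assign each donor on the list to a letter version, then compare response rate and average gift per version.

```python
import random
import statistics

random.seed(7)

# Invented donor list and letter versions; the response behaviour below is made up,
# loosely echoing the finding that any match helps a little but its size doesn't.
donors = [f"donor_{i}" for i in range(20_000)]
letters = ["no match", "1:1 match", "2:1 match", "3:1 match"]

assignment = {donor: random.choice(letters) for donor in donors}  # random assignment

def simulated_gift(letter):
    """Return a gift amount (arbitrary units), or 0.0 if the donor does not respond."""
    response_rate = 0.024 if letter != "no match" else 0.020
    if random.random() < response_rate:
        return max(5.0, random.gauss(60, 20))
    return 0.0

gifts = {donor: simulated_gift(assignment[donor]) for donor in donors}

for letter in letters:
    arm = [gifts[d] for d in donors if assignment[d] == letter]
    givers = [g for g in arm if g > 0]
    rate = len(givers) / len(arm)
    average = statistics.mean(givers) if givers else 0.0
    print(f"{letter:10}: response rate {rate:.1%}, average gift {average:.1f}")
```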

What other types of evidence are any good? —>

Posted in Analysing giving, Fundraising, Impact & evaluation