What constitutes good evidence?

A lovely interview about what constitutes good evidence, which donors this is relevant to, whether requiring evidence impedes innovation or encourages donors to focus on short-term outcomes, and more. (It gets into English after about two minutes.)

This is the Forbes article which I reference.


Getting people to give better

New initiative aims to get donors to give better

Many people look at getting people to give more. Giving Evidence and the Social Enterprise Initiative at the University of Chicago Booth School of Business are starting work looking at getting people to give better. First, we’re developing a ‘white paper’, to be published early in 2015, to collate what is known about effective giving, what isn’t yet known, and what would be useful for researchers to find out. [The University of Chicago Booth School of Business was recently ranked by The Economist as the best business school in the world.]

The way donors give is important, so perhaps persuading them to give better would have the same social effect as getting them to give more. For instance: the cost of raising capital for charities is about 20-40 per cent, against only about 3-5 per cent for companies, and charities turn away some donors who are fiddly to deal with.  Plus money doesn’t always go where it’s most needed: for example, about 90 per cent of global health spending goes on 10 per cent of the disease burden. And making many small gifts is demonstrably more wasteful than making a few large ones.

Perhaps it’s easier to get somebody to give better than to get them to give more.

We aim to identify questions which non-profits, funders and other practitioners want answered about making giving better, and to encourage researchers to address them. Those questions include the following:

  • How do various donors (including foundations, corporates, individuals) define a ‘successful gift’?
  • Is success affected by (eg):
    • being hands-on?
    • donors working together (eg in giving circles)?
    • gift size?
    • how and whether the grant is tracked?
    • whether the donor gives, lends or invests?
  • What does it cost to raise and manage grants of different sizes?
  • How and when can one influence the cause that a person supports?
  • How do donors choose causes, charities or grantees, and how influence-able is that?
  • How do donors choose processes (eg for sourcing grantees, selecting which to support)?

However one defines success for a grant, it would be useful to know (wouldn’t it?) whether and when and how the chance of success is affected by how the donor gives.

Our purpose is to identify questions which non-profits, funders and other practitioners would like to have answered, which would help make giving better, and to encourage researchers to address them.

In terms of scope, we’re looking at all giving: ‘retail’ individuals, endowed foundations, fund-raising foundations, private family foundations, companies – the lot.

Do get involved!

Please send relevant material to jo [dot] beaver [at] giving-evidence [dot] com

Feedback from readers suggests that an example might help. We’re interested in what makes for successful giving. So if a particular donor or funder has data on the success rate of its grants (i.e., the number which ‘succeed’, on whatever measure of success that donor uses) and how that success rate varies with (things like) grant size, grantee size, how the donor came across that organisation (e.g., open application process, in the pub, network), how hands-on the grant was, duration, whether it was co-funded… we’re VERY interested in that.

We’re not at this stage looking to do primary research (e.g., working with funders through their historical grants, assigning ‘success scores’ to them and cross-tabbing those scores with things like grant size), though we may get to that in future.
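
To make that kind of analysis concrete, here is a minimal sketch, in Python with pandas, of the sort of cross-tabulation a funder could run on its own grant records. The column names, size bands and example grants are all hypothetical; the point is only the shape of the analysis, i.e. success rate broken down by characteristics of the grant.

```python
# A minimal sketch of cross-tabulating grant 'success' against grant characteristics.
# The column names, size bands and example records are hypothetical: a funder would
# use its own grant data and its own definition of 'success'.
import pandas as pd

grants = pd.DataFrame([
    {"grant_size_gbp": 10_000,  "found_via": "open application", "hands_on": False, "succeeded": True},
    {"grant_size_gbp": 250_000, "found_via": "network",          "hands_on": True,  "succeeded": True},
    {"grant_size_gbp": 5_000,   "found_via": "open application", "hands_on": False, "succeeded": False},
    {"grant_size_gbp": 75_000,  "found_via": "network",          "hands_on": True,  "succeeded": False},
])

# Bucket grant size, then look at the success rate within each band.
grants["size_band"] = pd.cut(grants["grant_size_gbp"],
                             bins=[0, 20_000, 100_000, 1_000_000],
                             labels=["small", "medium", "large"])
print(pd.crosstab(grants["size_band"], grants["succeeded"], normalize="index"))

# The same idea works for any other characteristic, e.g. how the grantee was found.
print(grants.groupby("found_via")["succeeded"].mean())
```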

To be clear, this project isn’t (just) about getting donors to choose high-impact charities. Not because that isn’t important, but because many others are looking at that. It’s about all the other choices which major donors/foundations make, which can have just as much impact, and indeed can annihilate the impact of their grant. A simple example of the effect of how one gives (as opposed to where one gives) is that funders sometimes create so much work for grantees that the grantee would be better off without that grant/relationship at all. In that case the choice of charity doesn’t matter much!

So questions we’re looking at include: should donors give individually or in groups? Should they proactively search out grantees or let grantees find them? How engaged should they be? How many focus areas should they have? That is, which giving behaviours (of those types) drive the success of grants – in whatever way the donor defines success.
There’s no shortage of opinions on these topics, but we’re looking for data.
The Shell Foundation published data on the percentage of its grants which succeeded when it was, variously, spray-and-pray, somewhat focused, and latterly very focused. That’s what we’re after: some empirical basis for ascertaining what makes for effective philanthropy. Obviously the ‘right’ answer may vary between circumstances, just as the ‘right’ medical treatment depends on the patient’s condition, and those variations are interesting too.

Caroline Fiennes: best philanthropy advisor

Newsflash! Giving Evidence’s Caroline Fiennes has been nominated as a ‘best philanthropy advisor’ by Spear’s Wealth Management magazine, here.

The profile of Caroline (here) says:

“Caroline Fiennes’ work in philanthropy focuses on making giving as effective as possible by basing it on sound evidence. A physicist in a previous career, Fiennes became interested in the fact that some charities are better than others and wanted to figure out which ones are most effective in order to guide donors to them. This is also true of ways of giving.

The founder of Giving Evidence feels there is ‘often a big mismatch between where the money goes and where it’s needed’, and advises clients on using the available evidence to choose issues and organisations to focus on and support in the most effective ways.

Caroline works a lot on the quality of research available to donors, because charities produce a lot of information, but much of it is of too low a quality to be reliable. This has led to some of her clients giving funds to help produce better evidence.

Caroline and Giving Evidence are working on creating a mechanism for anybody to rate a charity with which they’ve had contact, a little like TripAdvisor or Toptable. This ‘opening up of reputations’ would greatly help donors to make much more informed decisions.”

Stay in touch to hear more as this project progresses!

Why is rating charities important?—>


Listening to those we seek to help

This article first appeared in Third Sector.

[Image: From a school wall in Zimbabwe, tweeted by Melinda Gates]

Unlike in business – where companies must heed their customers because they’re the sole source of funds – charities don’t normally get funds from beneficiaries and hence feel no financial pressure to listen to them. A recent report by Médecins Sans Frontières shows the result, recounting the apparent abandonment of war-torn areas and emergency situations by most aid agencies, which seem instead to follow funders’ wishes to operate in safer countries.

It’s tough for beneficiaries to tell an NGO, government or funder directly what they want or what they think of what they’re getting. It’s harder still for would-be beneficiaries. There are few feedback loops in our sector, though there is an outbreak of intriguing work to create more.

GlobalGiving UK is “an eBay for development”, directing donors to grass-roots organisations. To enable the organisations to hear and heed the constituencies they’re ostensibly serving, it developed a simple tool that requires a charity to recruit about 20 volunteer “scribes” who go out and interview local people. The charity chooses the questions, which GlobalGiving recommends be very open. Typically, the scribes ask for a story about a need in the community and one about an organisation being helpful (or not). They do not ask “what is your opinion of organisation x?”

There are two clever bits: first, the interviewer is not part of the charity, which probably makes the stories more honest; second, GlobalGiving – using a system designed by a man with a neuroscience PhD – analyses the stories for patterns: for instance, the contexts in which the charity is mentioned and the sentiments expressed about it. Maybe the charity doesn’t get mentioned much, implying that it’s not achieving much. The frequency with which various problems are mentioned can show the charity where it might target its work.
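
As a rough illustration of that kind of pattern-spotting (this is not GlobalGiving’s actual system, whose details aren’t published here), the sketch below counts how often organisations are mentioned in collected stories and whether the surrounding words lean positive or negative. The organisation names, word lists and stories are all invented.

```python
# A rough illustration of the pattern-spotting described above, NOT GlobalGiving's
# actual system. The organisation names, word lists and stories are all invented.
from collections import Counter

POSITIVE = {"helped", "supportive", "improved", "grateful"}
NEGATIVE = {"ignored", "worse", "slow", "corrupt"}

stories = [
    "The literacy charity helped my daughter and the volunteers were supportive.",
    "We reported the broken pump months ago but the council was slow and we were ignored.",
    "The literacy charity ran fewer classes this year and things got worse.",
]
organisations = ["literacy charity", "council"]

mentions = Counter()
net_sentiment = Counter()

for story in stories:
    words = set(story.lower().strip(".").split())
    for org in organisations:
        if org in story.lower():
            mentions[org] += 1
            net_sentiment[org] += len(words & POSITIVE) - len(words & NEGATIVE)

# Few mentions, or mostly negative ones, give the charity something concrete to ask about.
for org in organisations:
    print(f"{org}: {mentions[org]} mentions, net sentiment {net_sentiment[org]:+d}")
```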

Stories collected this way over the summer by going house-to-house in Lambeth, south London, featured cancer, the passport fiasco and alleged corruption in local government. Charities were often cited unprompted: one advertising executive was so struck by Greenpeace protesters risking jail for their altruism that he’d started volunteering locally and taken his children to protest at Shell’s sponsorship of Lego.

The Department for International Development is piloting ways of getting feedback from beneficiaries into its programmes. The Association of Charitable Foundations recently provided training to foundations on listening to beneficiaries, and the White House hosted a summit last year about improving feedback to US government programmes. And a group of foundations, led by the Hewlett Foundation, has announced a fund to improve philanthropy by “listening to, learning from and acting on what we hear from the people we seek to help”.

We should all get good at doing that.


Publishing the whole truth

This article first appeared in Third Sector.

The C&A Foundation – linked to the department store that closed in the UK but is flourishing elsewhere – this month joins a small clutch of non-profits that publicise the lows as well as the highs in their work.

It ran a programme in 18 garment factories in Asia designed to improve both working conditions and productivity. Some aspects worked in some factories, some aspects worked in others. Rather than taking the conventional option of reporting only the glories, in a kind of self-promoting publication bias, the C&A Foundation is publishing it all: data from each factory, correlation coefficients, statistical significance tests and operational challenges all feature in the report, which is called Frankly Speaking. (Disclosure: I am a co-author.)

Likewise, Engineers Without Borders, a Canadian NGO, has been publishing an annual failure report since 2008. In each one, volunteer engineers recount tales of an error they made in the field. Yet, despite the praise for EWB and the obvious value of hearing the whole truth, EWB remains an anomaly. To my knowledge, it’s the only operating charity that publishes so candidly. When I asked its chief executive why it discloses when others don’t, he said: “Well, if your bridge falls down, it’s pretty obvious.” Indeed. By contrast, plenty of social programmes appear to work but are shown by sophisticated analysis not to work. The crime-prevention programme Scared Straight and some micro-finance programmes are examples of this.

The C&A Foundation encountered something similar – but the opposite way round. When Giving Evidence got involved, it looked as though working conditions in the factories hadn’t improved much, but the inclusion of later data in the analysis showed that they had.

Medical doctor John Ioannidis, now of Stanford University in California, uses complex statistical tests to unearth often shocking stories within clinical trial data and says his work echoes a theme from the ancient literature of his native Greece that “you need to pursue the truth, no matter what the truth might be”.

Dogwood Initiative is an environmental campaign on Canada’s west coast. It was inspired by EWB to publish its own failure report and found an important issue in its internal culture. “Dogwood Initiative could change our relationship with failure,” says the report. “It involves piloting an idea, measuring results, figuring out what works and what failed, adapting and rebooting.” Giving colleagues that right to admit failure can take time. Dogwood Initiative’s first annual failure report took so long to agree internally that it ended up covering the next year too.

The World Bank also holds internal “failure fairs” and finds it needs rules to ensure the discussions are safe – each speaker makes a case but can’t blame anyone except themselves and can’t name anyone else.

The funding and competition in the charity sector undoubtedly discourage confessions of weakness, but if we don’t do it, we won’t learn from ourselves and we’ll bias the literature from which our peers – and funders – can also learn. EWB’s Admitting Failure website lets anybody upload stories.

Go on, I dare you.


Go see your MP

This article was first published in Third Sector.

Why is so little policy based on sound evidence? Many voluntary organisations, academics and others spend time producing research in order to influence the government. There are some successes – but much policy appears to disregard the evidence.

Mark Henderson is head of communications at the UK’s largest charitable funder, the Wellcome Trust, and author of The Geek Manifesto, which calls for a more scientific approach to policy and politics. He says there’s little political price to be paid when MPs ignore the evidence. He also says that, in their constituencies, most MPs know the business people – who, after all, will ensure that people of influence have the benefit of their views – but rarely know the scientists. They probably don’t know the charity sector people, either.

I am an advocate of evidence and I’m often in meetings about the importance of getting evidence into policy with organisations such as the Alliance for Useful Evidence, the Institute for Government, the Hewlett Foundation or the National Institute for Health and Care Excellence. If Henderson is right, we’re all stuck in an echo chamber and missing a trick.

So I went to see my MP. And with haste: she is Justine Greening, a Cabinet minister – for international development, as it happens – and the recent reshuffle was looming.

“Hello, I’d like to talk about how keen I am that government policy be based on robust evidence,” I said – an unusual opening in an MP’s surgery, to say the least; but it led to a spirited conversation. Before my visit, I had, by way of a focus group, asked on Facebook what I should raise with a secretary of state. The doctors all rattled on about distinguishing between good and bad evidence, and everybody cited weariness of politicians cherry-picking data that suited them. Having removed the names, I printed out the responses and presented them.

Most revealing were two interconnected things. First, when I told Greening that the Department for International Development was generally very sophisticated in its use of evidence, she seemed amazed. “Most of my constituents think foreign aid is a waste of money,” she said. Actually, plenty of her constituents don’t think that – I socialise with them and they say so – but, clearly, those views had never reached her.

Second, not a single person I know had ever been to see their MP. Many of us battle attitudes voiced in what we might call “the uncharitable press” and bemoan MPs who pander to it. They hear and heed calls to continue the Work Programme, to cut the Third Sector Research Centre and so on. If we’ve never told them our contrasting views, we’ve nobody to blame but ourselves.

So go and see your MP – it’s your democratic privilege and weirdly empowering.

Does it make any difference? I don’t know. Perhaps we should gather evidence on this. Ben Goldacre, the broadcaster and science campaigner, and the innovation charity Nesta created an online tool, RandomiseMe, which enables anybody to run a randomised, controlled trial. We could all participate in such a trial: half of us go to see our MPs and half don’t, and we watch for subsequent differences in their voting behaviour. If we want MPs to act on evidence, we should go and tell them that that’s what we want.
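
As a hedged sketch of what such a trial could look like (this is not the RandomiseMe tool itself, and all the numbers are invented), the outline below randomly assigns volunteers to visit their MP or not, then compares a simple outcome between the two arms.

```python
# A sketch of the trial described above, not the RandomiseMe tool itself.
# The volunteers and the outcomes are simulated purely for illustration; a real
# trial would use the MPs' actual subsequent voting behaviour as the outcome.
import random

random.seed(42)

volunteers = list(range(200))            # 200 hypothetical volunteers
random.shuffle(volunteers)
visit_group, control_group = volunteers[:100], volunteers[100:]   # random assignment

def simulated_outcome(visited_mp: bool) -> bool:
    # Invented probabilities, standing in for "the MP later voted for an evidence-based measure".
    return random.random() < (0.35 if visited_mp else 0.30)

visit_rate = sum(simulated_outcome(True) for _ in visit_group) / len(visit_group)
control_rate = sum(simulated_outcome(False) for _ in control_group) / len(control_group)

print(f"MPs visited: {visit_rate:.0%} favourable; not visited: {control_rate:.0%} favourable")
```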

What IS good evidence? Not this–>


Does the charity sector have a publication bias problem?

This article was first published in Third Sector.

It’s hard to make evidence-based decisions if much of the evidence is missing or ropey. So it’s disastrous that the results of many medical clinical trials are missing, preventing doctors from using them.

It’s thought that fully half of all clinical trials remain unpublished. It’s not difficult to guess which half. Apparently, trials published by pharmaceutical companies are four times more likely to show that the drugs have a positive effect than identical trials conducted independently. So why is that?

Well, trials themselves don’t lie. Magically, however, the negative ones don’t get published. This publication bias costs lives, yet is perfectly legal. Dr Ben Goldacre, author, broadcaster, campaigner and debunker of bad science, says that the near-fatal effects of the drug trial in Northwick Park hospital a few years ago – when all the men in the trial ended up in A&E with multiple organ failure – could have been predicted from results that were known but not published.
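
To see why withholding the negative half matters, here is an illustrative simulation (not the actual data behind the figures above): even when a treatment truly does nothing, individual trials scatter around zero, and if only the favourable-looking ones are published, the published literature suggests a benefit that isn’t there.

```python
# An illustrative simulation, not the actual data behind the figures above.
# The treatment truly has no effect, but if only favourable-looking trials are
# published, the published literature suggests a benefit that isn't there.
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.0
trial_results = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(1000)]   # 1,000 small trials

published = [r for r in trial_results if r > 0.5]   # only 'positive' results see the light of day

print("mean effect across ALL trials:      ", round(statistics.mean(trial_results), 2))
print("mean effect across PUBLISHED trials:", round(statistics.mean(published), 2))
```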

So we should all applaud the AllTrials campaign, initiated by Goldacre and seed-funded by Simon Singh, to ensure that the results of all trials are published. Goldacre and Singh take scientific integrity seriously: both were accused of libel for highlighting bogus claims, refused to recant, endured horrible, long legal trials – and won.

Does the charity sector have the AllTrials problem? Do we withhold some of our monitoring and evaluation research and publish only unrepresentatively positive and misleading material? I suspect so. I did it myself when I was a charity chief executive: graphs that go up made for good meetings, so we published them; graphs that go down made for terrible meetings, so we didn’t. I don’t believe we were alone.

Monitoring and evaluation is research. It’s not always framed as that – it’s often seen as compliance with the requirements of funders – but it’s there to investigate possible causal links. Does intervention X lead to outcome Y, and if so, when? Do breakfast clubs improve children’s learning? Does teaching prisoners to read help them get jobs when they are released? These are research questions.

Ideally, that research would not be private but would be published to constitute evidence-based practice, just as clinical trials guide doctors. Any other charity could use it to decide whether the intervention might work in its context and whether it should replicate it.

But we can’t make evidence-based decisions if the literature is incomplete or biased; and, as ever, it’s our beneficiaries who miss out.

Research-withholding and publication bias are commonly studied in medicine to establish whether and where there’s a problem, so that work can be targeted to fix it. But to my knowledge, neither has ever been studied in our sector. Not once. One study (itself unpublished) in Canada that was looking at something else found – shockingly – that only 2 per cent of the research carried out by charities was published.

Investigating the withholding of research and publication bias is neither difficult nor expensive. It’s time we knew whether we too need a fix.

Charity data is often missing and ropey, such as here –>


Moneyball Philanthropy? Not Always

This article, by Ehren Reed of the Skoll Foundation and Caroline Fiennes, was first published in Forbes.

Some charities are better than others, so we should find the good ones. On that we can all agree. We should support the charities that will be most effective in addressing the world’s pressing problems. And understanding that effectiveness requires measurement. But a reliance on quantitative analysis, which is helpful in understanding some charities, could prevent us from finding ones that are doing important, system-changing work.

The charities which are easiest to measure are those whose work is proximate to the beneficiary. They distribute mosquito nets to families in sub-Saharan Africa or deliver wheelchairs to disabled children. Their theory of change – the link between their work and the intended benefit – is simple. The intervention is well understood, the outcome is predictable, and most of the variables are clear. From a funder’s perspective, the risk is low.

These interventions are like machines, and advertise themselves as such. Three dollars in = one bed net out. Five pounds in = one case of diarrhoea avoided. Cause and effect are clear. They operate within a system in which the relevant factors are known.

Working within complex systems

But a lot of important work done by charities is quite different. It involves trying to change legislation to outlaw discrimination; it’s research to uncover the human genome; it’s changing societal attitudes on same sex couples. These efforts aim to change the system. Here, success depends on factors which are unseen, often unknowable, and mainly beyond the charity’s control. The causal chains are long and uncertain.

Working on hard problems within highly complex systems, a charity’s results can take ages to materialize. Even then, the results may not be predictable, attributable, or even measurable.

Yet this is probably the most consequential work that a donor can enable. In 2006, the Institute for Philanthropy surveyed a thousand experts on UK philanthropy’s greatest achievements. The resulting list is dominated by system-changing work: campaigns which ended the slave trade, created the welfare state, and ensured universal education. This type of work generates effects that are much broader and more profound than delivering services to a limited group of recipients.

There’s frequently a trade-off. The more certainty a donor wants about results, the smaller they will be. If she’ll accept more uncertainty, by operating further away from beneficiaries and engaging more with the system around them, the ultimate effect may be greater. In other words, if donors limit their risk, they may simultaneously limit their return.

Furthermore, philanthropy is uniquely able to fund these kinds of system-changing efforts, since governments and companies are inherently more risk averse and less likely to support them.

Moneyball philanthropy?

It has become trendy to liken effective philanthropy to Moneyball, the strategy pioneered by the Oakland Athletics baseball team, which involved choosing players based on statistical analysis instead of experts’ intuition.

But the analogy doesn’t hold. In baseball, the playing field is bounded, the rules are clear, the results are immediately evident, and the variables are visible and knowable. The Moneyball approach worked because the system was reasonably simple. Certainly the same approach can help to analyse charities whose work is based on simple models of cause and effect. The charities recommended by analysts like GiveWell, for example, can all show what they achieve for £10, though none of them has much effect beyond their immediate beneficiaries.

But the Moneyball approach is hopeless for assessing charities trying to change the system. Take the work of Global Witness in exposing the economic networks behind conflict, corruption and environmental destruction around the world. The ultimate value of their work, in terms of lives saved and natural resources protected, is literally incalculable.

Unintended danger

The Moneyball approach – like much of the current debate in philanthropic sectors on how to define and measure impact – is dangerous because it leads donors to seek out only the most easily provable results. It pushes them towards interventions within the current system and beguiles them into thinking that the best charities must be able to produce simple cost-benefit figures.

As we’ve seen, this approach would have precluded some of philanthropy’s greatest successes. Many of us owe our liberty, our freedom of speech, and our education to such philanthropy. We’d be crazy to sacrifice these kinds of achievements in pursuit of an immediate “return on investment.”

Giving slow, not fast

Two factors complicate effective philanthropy. The first is that it involves making decisions under considerable uncertainty. Because donors have finite resources, they must decide between competing activities. Yet as we’ve seen, many determining factors are in principle unknowable when working within complex systems: basic medical research may be stellar or may find nothing; a campaign to ban handguns will rely on political will which may or may not materialise.

The second complicating factor is that human brains love shortcuts, and are much better at making decisions which don’t require much thought. Indeed, as Daniel Kahneman explains in Thinking, Fast and Slow, we often fail to notice that many decisions require proper thought and instead make them on the fly, leading to predictable errors. Charities whose work is based on simple theories of change don’t require much thought. Those whose work is more complex require that we take proper time to make good decisions.

In the end, this shouldn’t be surprising. Philanthropy is about making the world a better place. And making the world a better place is going to be a lot more difficult than winning a baseball game. Let’s not let an idea like Moneyball distract us from the challenge.

How donors change the system to increase evidence in government policies–>


Don’t Die of Ignorance

This was first published by Third Sector, in Caroline Fiennes’ regular column.

It sounds pretty good – a programme that aims to break the cycle of poverty and illiteracy by improving the educational opportunities of low-income families through early childhood education, adult literacy support and parenting education. It has served more than 30,000 families and has run in more than 800 locations nationally. Would you put money into it? Might your organisation take it on? It sounds highly plausible and clearly has attracted considerable funding.

But research has shown that this programme had no effect at all. The gains made by children who were served by it were indistinguishable from those of children who weren’t. The money might as well not have been spent.

Let’s try another example – a policy that children who fall behind at school retake the year. Again, it sounds pretty sensible and is common practice in some countries. So should we do it?

Well, compared with this policy, the parenting early intervention programme mentioned above looks like a great idea: whereas it achieved nothing, the schooling policy achieved less than nothing by making things worse. Children who retook a year typically ended up about four months further behind than if they hadn’t.

These examples, and many others like them, show that our intuition about programmes or organisations is no guide. It might lead us to waste our time and efforts or even to make things worse. We do better when our decisions – as donors, managers or trustees – are based on evidence.

Now suppose that for some medical condition there are two competing drugs. Drug A solves your problem and has the side effect of reducing your risk of heart attack by 29 per cent; drug B also solves your problem but increases your risk of heart attack by about half. What do you say? It’s not a hard choice.

In fact, drug A and drug B are the same drug. Again, this example is real: it’s for hormone-replacement therapy. One type of test (observational non-randomised cohort studies using markers for heart attacks) showed that it reduced heart attacks by 29 per cent, whereas another (randomised studies that monitor actual heart attacks) showed that it increased fatal heart attacks by half. What do you say now?
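
One way to see how the two methods can disagree is confounding: if generally healthier people are more likely to take the treatment, a naive observational comparison can make a genuinely harmful drug look protective. The simulation below is purely illustrative – all the numbers are made up – but it shows the mechanism.

```python
# An illustrative simulation: all numbers are made up. The treatment itself RAISES
# heart-attack risk by 50%, but in the observational data healthier people are more
# likely to take it, so a naive comparison makes it look protective. Randomisation
# breaks that link and reveals the harm.
import random

random.seed(1)

def heart_attack(healthy: bool, treated: bool) -> bool:
    baseline = 0.05 if healthy else 0.20           # healthier people are at lower risk anyway
    risk = baseline * (1.5 if treated else 1.0)    # the treatment genuinely increases risk
    return random.random() < risk

people = [{"healthy": random.random() < 0.5} for _ in range(100_000)]

for p in people:
    # Observational world: healthy people are far more likely to choose the treatment.
    p["obs_treated"] = random.random() < (0.8 if p["healthy"] else 0.2)
    p["obs_mi"] = heart_attack(p["healthy"], p["obs_treated"])
    # Randomised world: a coin flip decides treatment, independent of health.
    p["rct_treated"] = random.random() < 0.5
    p["rct_mi"] = heart_attack(p["healthy"], p["rct_treated"])

def rate(treated_key: str, outcome_key: str, treated: bool) -> float:
    group = [p for p in people if p[treated_key] == treated]
    return sum(p[outcome_key] for p in group) / len(group)

print("Observational study: treated %.3f vs untreated %.3f"
      % (rate("obs_treated", "obs_mi", True), rate("obs_treated", "obs_mi", False)))
print("Randomised trial:    treated %.3f vs untreated %.3f"
      % (rate("rct_treated", "rct_mi", True), rate("rct_treated", "rct_mi", False)))
```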

Only one answer can be accurate, so you want to know which test to believe. The research method matters – indeed, your life might depend on it.

With social programmes, too, the answer depends on the research method. When a reading programme in India was evaluated using various research methods, one implied that it worked brilliantly, a second that it worked only a bit and a third that it was worse than nothing. They can’t all be right. So we need to ensure we make decisions not on any old evidence, but on sound evidence.

We should be on our guard against bad research that leads us to waste time and money – and possibly lives. The National Audit Office’s study of evaluations of government programmes found that where the research technique was weakest, the claims made about the programme’s effectiveness were strongest.

Smart decision-makers rely on evidence that is actually reliable, and know what that looks like. Don’t die of ignorance.

Want to see a crashing example of bad research by a charity? Here–>


Making charities’ research more findable and useful

Quite possibly, some NGO has discovered a great way to, say, prevent re-offending or improve literacy, but nobody else knows about it, so their genius innovation doesn’t spread. Surely this is unacceptable.

Giving Evidence has been exploring whether this risk could be reduced if research by charities (including ‘monitoring and evaluation’) were easier to find and clearer. We started with a suspicion that (i) some charity research is published but only in places that few people would know to look, such as on a small organisation’s website, and (ii) some of it could be clearer about what the intervention actually was, or what research they did, or what the results were.

We started in UK criminal justice, and consulted many experts, funders and practitioners on two proposals: (i) creating a repository to hold charities’ research, and (ii) creating a little checklist of items for charities’ research to detail: the intervention; the research question; the research method and how it was used (e.g., if 20 people were interviewed, how were those 20 chosen?); and the findings.

The response was very positive. People really welcomed the checklist: medics have been using reporting checklists for years with good success (e.g., CONSORT for reporting clinical trials) and are happy to lend us their expertise, and some great additions to our four items were suggested. On the repository, the consensus was to use open meta-data for tagging research, rather than building a database. Various dogs didn’t bark: nobody said that it had already been done and failed, or that anybody else was already doing it. Full details and results of the ‘consultation’ are here.
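
As a purely hypothetical illustration (no schema or field names have been fixed, and the example study is invented), recording the four checklist items plus some open meta-data for one piece of charity research might look something like this:

```python
# A purely hypothetical illustration of recording the four checklist items, plus open
# meta-data for findability, for one piece of charity research. No schema has been
# fixed; the field names and the example study are invented.
research_record = {
    "intervention": "Weekly one-to-one literacy mentoring for prisoners",
    "research_question": "Does the mentoring improve reading age after six months?",
    "research_method": {
        "design": "before-and-after measurement, no comparison group",
        "sampling": "20 participants interviewed, chosen as the first 20 to enrol",
    },
    "findings": "Average reading age rose by 14 months; a quarter of participants dropped out.",
    # Open meta-data so others can find and filter the study:
    "tags": ["criminal justice", "literacy", "UK"],
    "publisher": "Example Small Charity",
    "year": 2014,
}
```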

Giving Evidence is now proceeding to pilot both the checklist and the open meta-data. We hope to start the pilot in early 2015. We have an ‘anchor funder’ and are currently talking with other funders.

We suspect that findability and clarity of charities’ research could usefully be improved in many sectors. We happened to start in UK criminal justice, but suspect that the checklist and meta-data ‘solutions’ may be helpful elsewhere too. We’ll share results from the criminal justice pilot, and are happy to explore these issues in other sectors.

As ever, do get in touch if you are interested.

This project is a side-effect of our work on learning lessons from how medicine uses evidence, which is here.

It’s part of our general theme that it’s hard to make evidence-based decisions if lots of the evidence is missing, or unfindable, or unclear, or garbage – discussed more here.

This project will enable work to improve the quality of research by NGOs. Why does research quality matter? –>
